Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board David Hutchison Lancaster University, UK Takeo Kanade Carnegie Mellon University, Pittsburgh, PA, USA Josef Kittler University of Surrey, Guildford, UK Jon M. Kleinberg Cornell University, Ithaca, NY, USA Alfred Kobsa University of California, Irvine, CA, USA Friedemann Mattern ETH Zurich, Switzerland John C. Mitchell Stanford University, CA, USA Moni Naor Weizmann Institute of Science, Rehovot, Israel Oscar Nierstrasz University of Bern, Switzerland C. Pandu Rangan Indian Institute of Technology, Madras, India Bernhard Steffen TU Dortmund University, Germany Madhu Sudan Microsoft Research, Cambridge, MA, USA Demetri Terzopoulos University of California, Los Angeles, CA, USA Doug Tygar University of California, Berkeley, CA, USA Gerhard Weikum Max-Planck Institute of Computer Science, Saarbruecken, Germany
5930
Dennis Dams Ulrich Hannemann Martin Steffen (Eds.)
Concurrency, Compositionality, and Correctness Essays in Honor of Willem-Paul de Roever
Volume Editors

Dennis Dams, Bell Laboratories, 600 Mountain Ave., Murray Hill, NJ 07974, USA. E-mail: [email protected]
Ulrich Hannemann, University of Bremen, Computer Science Department, P.O. Box 330440, 28334 Bremen, Germany. E-mail: [email protected]
Martin Steffen, University of Oslo, Faculty of Mathematics and Natural Sciences, Department of Computer Science, P.O. Box 1080 Blindern, 0316 Oslo, Norway. E-mail: msteffen@ifi.uio.no

The illustration appearing on the cover of this book is the work of Daniel Rozenberg (DADARA).
Library of Congress Control Number: 2009943840
CR Subject Classification (1998): F.3, F.1, F.4, D.2.4, D.2-3, I.2.2-4, D.1, C.2
LNCS Sublibrary: SL 1 – Theoretical Computer Science and General Issues
ISSN 0302-9743
ISBN-10 3-642-11511-X Springer Berlin Heidelberg New York
ISBN-13 978-3-642-11511-0 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. springer.com © Springer-Verlag Berlin Heidelberg 2010 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 12830763 06/3180 543210
Willem-Paul and Corinne
Preface
Why would you read this preface? As we start thinking what to write here, we wonder who is going to read these words. From our perspective – that of writers addressing an audience of readers – you are most likely Willem-Paul de Roever. Willem: our main motivation in putting together this Festschrift is to honor you on the occasion of your retirement. In terms of scientific ancestry, you are a father to two of us, and a grandfather to the third¹, and you have had a profound impact on our formation as computer scientists. At the personal level, we know you as a kind-hearted, generous person. We are grateful to know you in these ways, and hope to have encounters with you for many years to come.

Another likely possibility is that you are Corinne or Jojanneke, wife or daughter of Willem; the two strong pillars on which so much in his life is founded. You share the honor, respect, and love that went into the writing, as will be acknowledged by those contributing authors that know you – which is almost all of them. Also, we would like to thank you for your help in sending us photographs for inclusion in this book, and for your encouragement.

The next option is that you are one of the contributing authors. In this case you may wonder why it took us so long to get this work published. After all, wasn't it "almost done" already at the retirement event in July 2008? The answer is twofold: we gave everyone ample time to revise their submissions in line with the recommendations by the referees; and we ourselves took ample time to put everything together. Our hope is that this will be visible in the quality of the final result.

Then maybe you are a colleague who knows Willem-Paul well from meetings or projects. When we selected possible contributors for this Festschrift, one of our intentions was to include those colleagues that have had a collaboration with Willem-Paul that was significant in its duration, its results, or both. A few had to decline. There may also be some colleagues that are missing from the author list although they shouldn't be. If you happen to be one of them, we accept full responsibility for this mistake – which it is; nothing else. Our apologies then!

But in the case that you had little to do with the creation of this volume, and that you maybe don't even know Prof. Dr. Willem-Paul de Roever in person, let us add that he retired, in 2008, from the Christian Albrechts University at Kiel, Germany. A theoretical computer scientist, he has made contributions to programming language semantics, in particular on the topics of program correctness, concurrent programming, and compositional and fully abstract
¹ On occasions like this, it is also common to list the scientific ancestors of the honoree. Willem-Paul was a student of Jaco de Bakker, who in turn was a student of Adriaan J. van Wijngaarden, student of Cornelis B. Biezeno. All are Dutchmen.
semantics, which inspired the Festschrift's title. Elsewhere in this volume, you can find a list of his main publications, as well as his students. Most prominent amongst his publications are two comprehensive textbooks, one on data refinement² and one on concurrency verification³. Their content was born out of his lectures and research, mostly during his time with the Christian Albrechts University, and they are thorough reference books on their respective topics.

We are glad to present contributions by colleagues from all stages of his scientific career, from the CWI in Amsterdam to the University of Kiel. If you read through this Festschrift, you will find that some authors answered Willem-Paul de Roever's quest for precision by getting down to the very essence of the field – e.g., Allen Emerson about "meanings of model checking", Leslie Lamport on "computer science and state machines", or Dines Bjørner and Asger Eir on "ontology and mereology of domains". On the other hand, you will also find contributions from numerous facets of the field – from game theory to compiler correctness and from fair scheduling to encryption algorithms.

When we started contacting prospective authors, we received enthusiastic responses. We realized once more what a talent Willem-Paul has for building strong, international teams of scientists who realize the potential of such collaborations and whose involvement goes beyond their thinking minds – to their hearts.

With the photo gallery included at the back of the book, we have taken up a passion of Willem-Paul de Roever, as exercised in his textbooks, to include photos of fellow scientists and so give the reader a more personal view of the people behind the research results. Besides portraits, some scenes from the retirement event are included as well.

Acknowledgements

We would like to thank the authors, the people at Springer, Kirsten Kriegel, Rudolf Berghammer, and all those who took photographs at the farewell colloquium and offered us a wide collection of pictures, which made it hard for us to choose from them. Our colleagues are thanked for their help with reviewing contributions to this Festschrift – sometimes on short notice and on a tight schedule: Jan Bredereke, Ralf Huuck, Oliver Kutz, Sebastian Kinder, and Karsten Sohr.

October 2009
Dennis Dams
Ulrich Hannemann
Martin Steffen

² together with Kai Engelhardt
³ together with Frank de Boer, Ulrich Hannemann, Jozef Hooman, Yassine Lakhnech, Mannes Poel, and Job Zwiers
Table of Contents

Concurrency, Compositionality, and Correctness

A Bibliography of Willem-Paul de Roever . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Playing Savitch and Cooking Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Peter van Emde Boas

Compositionality: Ontology and Mereology of Domains . . . . . . . . . . . . . . . 22
Dines Bjørner and Asger Eir

Computer Science and State Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Leslie Lamport

A Small Step for Mankind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Cornelis Huizing, Ron Koymans, and Ruurd Kuiper

On Trojan Horses of Thompson-Goerigk-Type, Their Generation, Intrusion, Detection and Prevention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Hans Langmaack

Explicit Fair Scheduling for Dynamic Control . . . . . . . . . . . . . . . . . . . . . . . . 96
Ernst-Rüdiger Olderog and Andreas Podelski

Synchronous Message Passing: On the Relation between Bisimulation and Refusal Equivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Manfred Broy

Reasoning about Recursive Processes in Shared-Variable Concurrency . . . 127
Frank S. de Boer

Formal Semantics of a VDM Extension for Distributed Embedded Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Jozef Hooman and Marcel Verhoef

A Proof System for a PGAS Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Shivali Agarwal and R.K. Shyamasundar

Concurrent Objects à la Carte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Dave Clarke, Einar Broch Johnsen, and Olaf Owe

On the Power of Play-Out for Scenario-Based Programs . . . . . . . . . . . . . . 207
David Harel, Amir Kantor, and Shahar Maoz

Proving the Refuted: Symbolic Model Checkers as Proof Generators . . . . 221
Ittai Balaban, Amir Pnueli, and Lenore D. Zuck

Meanings of Model Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
E. Allen Emerson

Smaller Abstractions for ∀CTL without Next . . . . . . . . . . . . . . . . . . . . . . . . 250
Kai Engelhardt and Ralf Huuck

Timing Verification of GasP Asynchronous Circuits: Predicted Delay Variations Observed by Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Prasad Joshi, Peter A. Beerel, Marly Roncken, and Ivan Sutherland

Integrated and Automated Abstract Interpretation, Verification and Testing of C/C++ Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Jan Peleska

Automated Proofs for Asymmetric Encryption . . . . . . . . . . . . . . . . . . . . . . . 300
Joudicaël Courant, Marion Daubignard, Cristian Ene, Pascal Lafourcade, and Yassine Lakhnech

Counterexample Guided Path Reduction for Static Program Analysis . . . 322
Ansgar Fehnker, Ralf Huuck, and Sean Seefried

Gallery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
A Bibliography of Willem-Paul de Roever
Books

1. with Kai Engelhardt. Data Refinement: Model-Oriented Proof Methods and their Comparison. Number 47 in Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, 1998. With the assistance of Jos Coenen, Karl-Heinz Buth, Paul Gardiner, Yassine Lakhnech, and Frank Stomp.
2. with Frank S. de Boer, Ulrich Hannemann, Jozef Hooman, Yassine Lakhnech, Mannes Poel, and Job Zwiers. Concurrency Verification: Introduction to Compositional and Noncompositional Proof Methods, volume 54 of Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, November 2001.
Refereed Publications 3. with Jacobus W. de Bakker. A calculus for recursive program schemes. In Automata, Languages and Programming (ICALP), pages 167–196. North-Holland, 1972. 4. Willem-Paul de Roever. A formalization of various parameter mechanisms as products of relations within a calculus of recursive program schemes. S´eminaires IRIA, Th´eorie des algorithmes etc., 1972. 5. Willem-Paul de Roever. Semantics for recursive polyadic program schemata. In Proceeding of Mathematical Foundations of Computer Science 1973, 1973. Reprinted as report IW 6, 1973, Mathematisch Centrum. 6. Willem-Paul de Roever. A correctness proof of the Schorr–Waite marking algorithm for binary trees. Mathematical Centre Syllabus 21, 1975. 7. Willem-Paul de Roever. Recursive Program Schemes: Semantics and Proof Theory. Dissertation, Free University, Amsterdam, 1974. Also published as Mathematical Centre Tract 70, as a revised edition. 8. Willem-Paul de Roever. Recursion and parameter mechanisms: An axiomatic approach. In J. Loeckx, editor, Second Colloquium on Automata, Languages and Programming (ICALP) (Saarbr¨ ucken, Germany), volume 14 of Lecture Notes in Computer Science. Springer-Verlag, 1974. 9. Willem-Paul de Roever. Call–by–name versus call–by–value: A proof–theoretic comparison. In A. Blikle, editor, Third Mathematical Foundations of Computer Science (Jadwisin, Poland), volume 28 of Lecture Notes in Computer Science, pages 451–463. Springer-Verlag, 1975. 10. Willem-Paul de Roever. First–order reduction of call–by–name to call–by–value. In J. Beˇcv´ aˇr, editor, Fourth Mathematical Foundations of Computer Science (Mari´ anske´e L¨ aznˇe), volume 32 of Lecture Notes in Computer Science, pages 377– 398. Springer-Verlag, 1975. 11. Willem-Paul de Roever. Correctness proofs for search and marking algorithms of dyadic data structures. In Mathematical Centre Syllabus 25, Colloquium Structuur van Programmeertalen, 1976, 1976. 12. Willem-Paul de Roever. Dijkstra’s predicate transformer, non–determinism, recursion and termination. In Antoni Mazurkievicz, editor, Fifth Mathematical Foundations of Computer Science (Gda´ nsk, Poland), volume 45 of Lecture Notes in Computer Science, pages 472–481. Springer-Verlag, 1976. 13. Willem-Paul de Roever. On backtracking and greatest fixedpoints. In A. Salomaa and M. Steinby, editors, Fourth Colloquium on Automata, Languages and Programming (ICALP) (Turku, Finland), volume 52 of Lecture Notes in Computer Science, pages 412–429. Springer-Verlag, 1977. 14. with Nissim Francez and C.A.R. Hoare. Semantics of non-determinism, concurrency, and communication. In J. Winkowski, editor, Seventh Mathematical Foundations of Computer Science (Zakopane, Poland), volume 64 of Lecture Notes in Computer Science, pages 191–200. Springer-Verlag, 1978. 15. with Nissim Francez, C.A.R Hoare, and D. Lehmann. Semantics of nondeterminism, concurrency, and communication. Journal of Computer and System Sciences, 19(3):290–308, December 1979. 16. with Standley Lee and Susan L. Gerhart. The evolution of list copying algorithms or the need for structured program verification. In Sixth Annual Symposium on Principles of Programming Languages (POPL) (San Antonio, TX), pages 53–67. ACM, January 1979.
17. with Krzysztof R. Apt and Nissim Francez. A proof system for communicating sequential processes. ACM Transactions on Programming Languages and Systems, 2:359–385, 1980. 18. with Marly Roncken and Rob Gerth. A proof system for Brinch Hansen’s distributed processes (extended abstract). In Proceedings of the GI ’81, M¨ unchen, Informatik–Fachberichte, pages 88–95. Springer-Verlag, 1981. 19. Willem-Paul de Roever. A formalism for reasoning about fair termination. In Dexter Kozen, editor, Logic of Programs, volume 131 of Lecture Notes in Computer Science, pages 113–121. Springer-Verlag, 1981. 20. with Orna Grumberg, Nissim Francez, and Johann A. Makowski. A proof rule for fair termination of guarded commands. In J.W. de Bakker and H. van Vliet, editors, Proceedings of Symposium on Algorithmic Languages. North-Holland, 1981. In revised form in: Information and Control 66, no 1/2, pp. 83–102, 1985. 21. with Ruurd Kuiper. Fairness assumption for CSP in a temporal logic framework. In D. Bjørner, editor, Proc. of the IFIP Working Conference on Formal Description of Programming Concepts II, Garmisch-Partenkirchen, June 1–4, 1982. NorthHolland, 1982. 22. with Rob Gerth and Marly Roncken. Procedures and concurrency: A proof theoretical study. In Mariangiola Dezani-Ciancaglini and Ugo Montanari, editors, Proceedings of the 5th International Symposium on Programming 1981, volume 137 of Lecture Notes in Computer Science, pages 132–163. Springer-Verlag, 1982. 23. with Amir Pnueli. Rendez-vous with ADA – a proof theoretic view. In Proc. of ADA - TEC ’82 Conference on ADA, 1982. 24. with Rob Gerth and Marly Roncken. A study in distributed systems and Dutch patriotism. In Proceedings of the 2nd Conference on Foundations of Software Technology and Distributed Systems, dec 1982. 25. with Ron Koymans and Jan Vytopil. A formal system for a telecommunication language: A case study in proofs about real-time programming and asynchronous message passing. In Proceedings of the 2nd Conference on Principles of Distributed Computing, 1983. 26. with Job Zwiers and Arie de Bruin. A proof system for partial correctness of dynamic networks of processes. In Edmund M. Clarke and Dexter Kozen, editors, Prceedings of the Logics of Programs Workshop, Carnegie Mellon University, Pittsburgh, PA, USA, June 6-8, 1983, volume 164 of Lecture Notes in Computer Science, pages 513–527. Springer-Verlag, 1984. 27. with Rob Gerth. A proof system for concurrent ADA programs. Science of Computer Programming, 4:159–204, 1984. 28. Willem-Paul de Roever. The quest for compositionality — a survey of assertionbased proof methods for concurrent programs, part 1: Concurrency based on shared variables. In E. J. Neuhold, editor, Proceedings of the IFIP Working Conference on “The Role of Abstract Models in Computer Science”. North-Holland, 1985. 29. with Ron Koymans, R.K. Shyamasundar, Rob Gerth, and S. Arun-Kumar. Compositional semantics for real-time distributed computing. In Rohit Parikh, editor, Logics of Programs, volume 193 of Lecture Notes in Computer Science, pages 167– 189. Springer-Verlag, 1985. 30. with Jozef Hooman. The quest goes on: a survey of proofsystems for partial correctness of CSP. In with Jaco W. de Bakker and Grzegorz Rozenberg, editors, Current Trends in Concurrency: Overviews and Tutorials, volume 224 of Lecture Notes in Computer Science, pages 343–395. Springer-Verlag, 1986.
31. with Job Zwiers and Peter van Emde Boas. Compositionality and concurrent networks: Soundness and completeness of a proof system. In W. Brauer, editor, Twelfth Colloquium on Automata, Languages and Programming (ICALP) (Nafplion, Greece), volume 194 of Lecture Notes in Computer Science, pages 509–519, Nafplion, Greece, 1985. Springer-Verlag. 32. Willem-Paul de Roever. The cooperation test: A syntax-directed verification method. In Logic and Models of Concurrent Systems, NATO Summerschool, Marktoberdorf, pages 213–257. NATO Advanced Study Institute, 1984. 33. with Ron Koymans. Examples of a real-time temporal logic specification. In B. T. Denvir, W. T. Harwood, M. I. Jackson, and M. J. Wray, editors, The Analysis of Concurrent Systems 1983, volume 207 of Lecture Notes in Computer Science, pages 231–252. Springer-Verlag, 1985. 34. with Nick W.P. van Diepen. Program derivation through transformations: The evolution of list-copying algorithms. Science of Computer Programming, 6:213– 272, 1986. 35. Willem-Paul de Roever. Process constructors and interpretations — response. In IFIP Congress, pages 515–518, 1986. 36. Willem-Paul de Roever. Questions to Robin Milner — A responder’s commentary. Information Processing, pages 515–518, 1986. 37. with Rob Gerth. Proving monitors revisited: A first step towards verifying objectoriented systems. Fundamentae Informaticae, IX:371–400, 1986. 38. with Cees Huizing and Rob Gerth. Full abstraction of a real-time denotational semantics for an OCCAM-like language. In Fourteenth Annual Symposium on Principles of Programming Languages (POPL) (Munich, Germany), pages 223– 237. ACM, January 1987. 39. with Frank Stomp. Designing distributed algorithms by means of formal sequentially phased reasoning. In J.-C. Bermond and M. Raynal, editors, Proceedings of the 3rd International Workshop on Distributed Algorithms, Nice, volume 392 of Lecture Notes in Computer Science, pages 242–253. Springer-Verlag, 1989. 40. with Frank. A. Stomp and Rob T. Gerth. The µ-calculus as an assertion-language for fairness arguments. Information and Computation, 82(3):278–322, September 1989. 41. with Frank Stomp. A correctness proof of a distributed minimum weight spanning tree algorithm. In Proceedings of the 7th ICDCS, 1987. 42. with Cees Huizing and Rob Gerth. Modeling statecharts behaviour in a fully abstract way. In M. Dauchet and M. Nivat, editors, Trees in Algebra and Programming (CAAP ’88), volume 299 of Lecture Notes in Computer Science, pages 271–294. Springer-Verlag, 1988. 43. with Ron Koymans, R.K. Shyamasundar, Rob Gerth, and S. Arun-Kumar. Compositional semantics for real-time distributed computing. Information and Computation, 79(3):210–256, 1988. 44. with Job Zwiers. Compositionality and modularity in process specification and design: A state based approach. In B. Banieqbal, H. Barringer, and A. Pnueli, editors, Temporal Logics in Specification, volume 398 of Lecture Notes in Computer Science, pages 351–374. Springer-Verlag, 1987. 45. with Jozef Hooman. Design and verification in real-time distributed computing: An introduction to compositional methods. In Protocol Specification, Testing and Verification, IX, pages 37–56. North-Holland, 1990. 46. with Job Zwiers. Predicates are predicate transformers: Towards a unified theory of concurrency. In Proc. of 8th Conference on Principles of Distributed Computing, pages 265–279, 1989.
47. with Howard Barringer, Costas Courcoubetis, Dov Gabbay, Rob Gerth, Bengt Jonsson, Amir Pnueli, George M. Reed, Joseph Sifakis, Jan Vytopil, and Pierre Wolper. ESPRIT – Basic Research Action 3096 “SPEC”: Formal methods and tools for the development of distributed and real-time systems. Bulletin of the EATCS, 40, February 1990. 48. with Jozef Hooman and S. Ramesh. A compositional axiomatisation of safety and liveness properties of Statecharts. In Semantics for Concurrency, Workshops in Computing, pages 242–261. Leicester, Springer-Verlag, 1990. 49. with Cees Huizing. Introduction to the design choices in the semantics of Statecharts. Information Processing Letters, 37:205–213, 1991. 50. Willem-Paul de Roever. Foundations of computer science: Leaving the ivory tower. EATCS Bulletin, 44:455–492, 1991. 51. with Jos Coenen and Job Zwiers. Assertional data reification proofs: Survey and perspective. In J.M. Morris and R.C. Shaw, editors, Proceedings of the 4th Refinement Workshop, Workshops in Computing, pages 91–114. Springer, 1991. 52. with Jozef Hooman. An introduction to compositional methods for concurrency and their application to real-time. In D. Hogrefe, editor, Formale Beschreibungstechniken f¨ ur verteilte Systeme, GI-Fachgespr¨ ach. Springer-Verlag, 1992. Also in the Proceedings in Engineering Sciences of the Indian Academy of Sciences, vol. 17, part I, pp. 29-74. 53. with Jozef Hooman and S. Ramesh. A compositional axiomatization of Statecharts. Theoretical Computer Science, 101(2):289–335, 1992. 54. with Antonio Cau and Ron Kuiper. Formalising Dijkstra’s development strategy within Stark’s formalism. In Jones, Shaw, and Denvir, editors, Proc. 5th Refinement Workshop. Workshops in Computing Series, Springer-Verlag, 1992. 55. with Job Zwiers and Jos Coenen. A note on compositional refinement. In Proceedings of the 5th Refinement Workshop, Workshops in Computing. Springer-Verlag, 1992. 56. with Jozef Hooman. The application of compositional proof methods to realtime. In Preprints Proceedings Symposium on Artificial intelligence in Real-Time Control, pages 134–144. IEEE, 1992. 57. with Antonio Cau. Using relative refinement for fault tolerance. In Proceedings of FME’93 symposium: Industrial Strength Formal Methods, 1993. 58. with Kai Engelhardt. Generalizing Abadi & Lamport’s method to solve a problem posed by A. Pnueli. In J. C. P. Woodcock and P. G. Larsen, editors, IndustrialStrength Formal Methods (FME ’93), volume 670 of Lecture Notes in Computer Science. Springer-Verlag, 1993. 59. with Antonio Cau. Specifying fault tolerance within Stark’s formalism. In Proc. 23rd Symposium on Fault-Tolerant Computing, IEEE Computer Society Press, pages 392–401, 1993. 60. with Carsta Petersohn, Cees Huizing, and Jan Peleska. Formal semantics for Ward & Mellor’s transformation schemas and their comparison with Statecharts. In D. Till, editor, 6th Refinement Workshop, Workshops in Computing, pages 14–41. BCS-FACS, Springer-Verlag, 1994. 61. with Frank Stomp. A principle for sequentially phased reasoning about distributed algorithms. Formal Aspects of Computing, 6(6):716–737, 1994. 62. with Carsta Petersohn, Cees Huizing, and Jan Peleska. Formal semantics for Ward & Mellor’s transformation schemas and the specification of fault-tolerant systems. In Proceedings of the First European Dependable Computing Conference (EDCC1), volume 852 of Lecture Notes in Computer Science. Springer-Verlag, 1994.
63. with Kai Engelhardt. Towards a practitioners’ approach to Abadi and Lamport’s method. Formal Aspects of Computing, 7(5):550–566, 1995. 64. with Frank S. de Boer and H. Tej. Compositionality in real-time shared variable concurrency (extended abstract). In Proceedings of the 1995 Nordic Workshop on Programming Theory, G¨ oteborg, 1995. 65. with Job Zwiers, Ulrich Hannemann, and Yassine Lakhnech. Synthesizing different development paradigms: Combining top-down with bottom-up reasoning about distributed systems. In Pazhamaneri S. Thiagarajan, editor, Proceedings of FSTTCS ’95, volume 1026 of Lecture Notes in Computer Science, pages 80–95. SpringerVerlag, 1995. 66. with Kai Engelhardt. Simulation of specification statements in Hoare logic. In Wojciech Penczek and Andrzej Szalas, editors, 21st Mathematical Foundations of Computer Science (Cracow, Poland), volume 1113 of Lecture Notes in Computer Science, pages 324–335. Springer-Verlag, 1996. 67. with Frank S. de Boer, H. Tej, and M. van Hulst. Compositionality in real-time shared variable concurrency. In Bengt Jonsson, editor, Proceedings of FTRTFT’96, volume 1135 of Lecture Notes in Computer Science, pages 420–439. SpringerVerlag, 1996. 68. with Job Zwiers, Ulrich Hannemann, Yassine Lakhnech, and Frank Stomp. Modular completeness: Integrating the reuse of specified software in top-down program development. In M.-C. Glaudel and J. Woodcock, editors, Industrial Benefit and Advances in Formal Methods (FME’ 96), volume 1051 of Lecture Notes in Computer Science, pages 595–608. Springer-Verlag, 1996. 69. with Antonio Cau. A dense-time temporal logic with nice compositionality properties. In Franz Pichler and Roberto Moreno-D´ıaz, editors, EUROCAST, volume 1333 of Lecture Notes in Computer Science, pages 123–145. Springer-Verlag, 1997. 70. with Quiwen Xu and Jifeng He. Rely-guarantee methods for verifying shared variable concurrent programs. Formal Aspects of Computing, 9(2):149–174, 1995. 71. with Kai Engelhardt. New Win de for old bags. In John Tromp, editor, A dynamic and quick intellect, Paul Vit´ anyi 25 years @ CWI, pages 59–66. CWI, Amsterdam, November 1996. 72. with Frank S. de Boer and Ulrich Hannemann. A compositional proof system for shared-variable concurrency. In J. Fitzgerald, C. B. Jones, and P. Lucas, editors, FME’97. Industrial Benefits of Formal Methods, volume 1313 of Lecture Notes in Computer Science, pages 515–532. Springer-Verlag, 1997. 73. with Frank S. de Boer and Ulrich Hannemann. Hoare-style compositional proof systems for reactive shared variable concurrency. In S. Ramesh and G. Sivakumar, editors, Proceedings of FSTTCS ’97, volume 1346 of Lecture Notes in Computer Science. Springer-Verlag, December 1997. 74. with Lars K¨ uhne and Jozef Hooman. Towards mechanical verification of parts of the IEEE P1394 serial bus. In 2nd International Workshop on Applied Formal Methods in System Design, pages 73–85, Zagreb, Croatia, June 1997. 75. with Ulrich Hannemann. Concurrency verification: From non-compositional to compositional proof methods. In Proc. of the 8th Nordic Workshop on Programming Theory 1996, Oslo, 1997. 76. with Carsta Petersohn, Cees Huizing, and Jan Peleska. Formal semantics for Ward & Mellor’s transformation schemas and its application to fault-tolerant systems. International Journal of Computer Systems, 13(2):125–133, 1998.
77. Willem-Paul de Roever. The need for compositional proof systems: A survey. In Willem-Paul de Roever, Hans Langmaack, and Amir Pnueli, editors, Compositionality: The Significant Difference (Compos ’97), volume 1536 of Lecture Notes in Computer Science, pages 1–22. Springer, 1998. 78. with Frank S. de Boer. Compositional proof methods for concurrency: A semantic approach. In Willem-Paul de Roever, Hans Langmaack, and Amir Pnueli, editors, Compositionality: The Significant Difference (Compos ’97), volume 1536 of Lecture Notes in Computer Science, pages 632–647. Springer, 1998. 79. with Frank de Boer and Ulrich Hannemann. The semantic foundations of a compositional proof method for synchronously communicating processes. In Miroslaw Kutylowski, Lescek Pacholski, and Tomasz Wierzbicki, editors, Mathematical Foundations of Computer Science 1999, volume 1672 of Lecture Notes in Computer Science, pages 343–353. Springer-Verlag, September 1999. 80. with Frank de Boer and Ulrich Hannemann. Formal justification of the relyguarantee paradigm for shared-variable concurrency: A semantic approach. In Jeannette Wing, Jim Woodcock, and Jim Davies, editors, FM ’99 – Formal Methods, volume 1709 of Lecture Notes in Computer Science, pages 1245–1265. SpringerVerlag, 1999. 81. with Frank S. de Boer, Ulrich Hannemann, Jozef Hooman, Yassine Lakhnech, Mannes Poel, and Job Zwiers. Basic principles of a textbook on the compositional and noncompositional verification of concurrent programs. In Jens Grabowski and Stefan Heymer, editors, Formale Beschreibungstechniken f¨ ur verteilte Systeme, 10. GI/ITG-Fachgespr¨ ach, L¨ ubeck, Juni 2000, pages 3–5. Verlag Shaker, 2000. ´ 82. with Erika Abrah´ am-Mumm, Frank S. de Boer, and Martin Steffen. Verification for Java’s reentrant multithreading concept. In Mogens Nielsen and Uffe H. Engberg, editors, Proceedings of FoSSaCS 2002, volume 2303 of Lecture Notes in Computer Science, pages 4–20. Springer-Verlag, April 2002. A longer version, including the proofs for soundness and completeness, appeared as Technical Report TR-ST-02-1, March 2002. ´ 83. with Erika Abrah´ am, Frank S. de Boer, and Martin Steffen. A compositional operational semantics for JavaMT . In Nachum Derschowitz, editor, International Symposium on Verification (Theory and Practice), July 2003, volume 2772 of Lecture Notes in Computer Science, pages 290–303. Springer-Verlag, 2004. A preliminary version appeared as Technical Report TR-ST-02-2, May 2002. ´ 84. with Erika Abrah´ am-Mumm, Frank S. de Boer, and Martin Steffen. A toolsupported proof system for monitors in Java. In with Marcello M. Bonsangue, Frank S. de Boer, and Susanne Graf, editors, FMCO 2002, volume 2852 of Lecture Notes in Computer Science, pages 1–32. Springer-Verlag, 2002. ´ 85. with Erika Abrah´ am, Frank S. de Boer, and Martin Steffen. Inductive proofoutlines for monitors in Java. In Elie Najm, Uwe Nestmann, and Perdita Stevens, editors, FMOODS ’03, volume 2884 of Lecture Notes in Computer Science, pages 155–169. Springer-Verlag, November 2003. A longer version appeared as technical report TR-ST-03-1, April 2003. 86. with Eerke Boiten. Getting to the bottom of relational refinement: Relations and correctness, partial and total. In Rudolf Berghammer and B. M¨ oller, editors, Proceedings of the 7th Seminar RelMiCS/2nd Workshop Kleene Algebra, Malente, May 12–17, volume 3051 of Lecture Notes in Computer Science. Springer-Verlag, 2004.
´ 87. with Erika Abrah´ am, Frank S. de Boer, and Martin Steffen. Inductive proof outlines for exceptions in multithreaded Java. In Farhad Arbab and Marjan Sirjani, editors, FSEN ’05: IPM International Workshop on Foundations of Software Engineering (Theory and Practice). Oct. 1 – 3, 2005), volume 159 of Electronic Notes in Theoretical Computer Science, pages 281–297. Elsevier Science Publishers, 2005. An extended version appeared in Fundamentae Informaticae. 88. with Marcel Kyas and Frank S. de Boer. A compositional trace logic for behavioral interface specifications. Nordic Journal of Computing, 12(2):116–132, 2005. 89. with Harald Fecher, Marcel Kyas, and Frank S. de Boer. Compositional operational semantics of a UML-kernel-model language. In SOS’05, volume 156 of Electronic Notes in Theoretical Computer Science, pages 281–297. Elsevier Science Publishers, 2006. 90. Willem-Paul de Roever. A perspective on program verification. In Proceedings of the IFIP Working Conference on Verified Software: Tools, Techniques, and Experiments, Z¨ urich, October 10–13, October 2005. ´ 91. with Erika Abrah´ am, Frank S. de Boer, and Martin Steffen. An assertion-based proof system for multithreaded Java. Theoretical Computer Science, 331:251–290, 2005. 92. with Harald Fecher, Jens Sch¨ onborn, and Marcel Kyas. 29 new unclarities in the semantics of UML 2.0 state machines. In ICFEM, volume 3785 of Lecture Notes in Computer Science, pages 52–65. Springer-Verlag, 2005. 93. with Maty Sylla and Frank Stomp. Verifying parameterized refinement. In Tenth IEEE International Conference on Engineering of Complex Computer Systems, Shanghai, China, 16–20 June, 2005, pages 313–321, 2005. ´ 94. with Erika Abrah´ am, Frank S. de Boer, and Martin Steffen. A deductive proof system for multithreaded Java with exceptions. Fundamenta Informaticae, 82(4):391– 463, 2008. An extended version of the 2005 conference contribution to FSEN’05 and a reworked and shortened version of the University of Kiel, Dept. of Computer Science technical report 0303. 95. Willem-Paul de Roever. A perspective on program verification. In Verified Software: Theories, Tools, Experiments, volume 4171 of Lecture Notes in Computer Science, pages 470–477. Springer-Verlag, 2008.
Editor 96. with Jaco W. de Bakker and Grzegorz Rozenberg, editors. Current Trends in Concurrency: Overviews and Tutorials, volume 224 of Lecture Notes in Computer Science. Springer-Verlag, 1986. 97. with Jaco W. de Bakker and Grzegorz Rozenberg, editors. Linear Time, Branching Time and Partial Order in Logics and Models for Concurrency (REX Workshop), volume 354 of Lecture Notes in Computer Science. Springer-Verlag, 1989. 98. with Jaco W. de Bakker and Grzegorz Rozenberg, editors. Stepwise Refinement of Distributed Systems: Models, Formalisms, Correctness (REX Workshop), volume 430 of Lecture Notes in Computer Science. Springer-Verlag, 1990. 99. with Jaco W. de Bakker and Grzegorz Rozenberg, editors. Foundations of ObjectOriented Languages (REX Workshop), volume 489 of Lecture Notes in Computer Science. Springer-Verlag, 1991. 100. with Jaco W. de Bakker and Cees Huizing, editors. Real-Time: Theory in Practice (REX Workshop), volume 600 of Lecture Notes in Computer Science. SpringerVerlag, 1991.
101. with Jaco W. de Bakker and Grzegorz Rozenberg, editors. Semantics: Foundations and Applications (REX Workshop), volume 666 of Lecture Notes in Computer Science. Springer-Verlag, 1993. 102. with Jaco W. de Bakker and Grzegorz Rozenberg, editors. A Decade of Concurrency 1993 (REX Workshop), volume 803 of Lecture Notes in Computer Science. Springer-Verlag, 1994. 103. with Hans Langmaack and Jan Vytopil, editors. Formal Techniques in Real-Time and Fault-Tolerant Systems 1994, volume 863 of Lecture Notes in Computer Science, Kiel, Germany, September 1994. Springer-Verlag. 3rd International School and Symposium. 104. with David Gries, editor. Programming Concepts and Methods (PROCOMET ’98). International Federation for Information Processing (IFIP), Chapman & Hall, 1998. 105. Willem-Paul de Roever, Hans Langmaack, and Amir Pnueli, editors. Compositionality: The Significant Difference (Compos ’97), volume 1536 of Lecture Notes in Computer Science. Springer, 1998. 106. with Marcello M. Bonsangue, Frank S. de Boer, and Susanne Graf, editors. Proceedings of the First International Symposium on Formal Methods for Components and Objects (FMCO 2002), Leiden, volume 2852 of Lecture Notes in Computer Science. Springer-Verlag, 2003. 107. with Marcello Bonsangue, Frank S. de Boer, and Susanne Graf, editors. Proceedings of the Second International Symposium on Formal Methods for Components and Objects (FMCO 2003), volume 3188 of Lecture Notes in Computer Science. Springer-Verlag, 2004. 108. with Marcello Bonsangue, Frank S. de Boer, and Susanne Graf, editors. Proceedings of the Third International Symposium on Formal Methods for Components and Objects (FMCO 2004), volume 3657 of Lecture Notes in Computer Science. Springer-Verlag, 2005. 109. with Marcello Bosangue, Frank S. de Boer, and Susanne Graf, editors. Proceedings of the Fourth International Symposium on Formal Methods for Components and Objects (FMCO 2005), volume 4111 of Lecture Notes in Computer Science. Springer-Verlag, 2006. 110. with Marcello Bosangue, Frank S. de Boer, and Susanne Graf, editors. Proceedings of the Fifth International Symposium on Formal Methods for Components and Objects (FMCO 2006), volume 4709 of Lecture Notes in Computer Science. Springer-Verlag, 2007. 111. with Marcello Bosangue, Frank S. de Boer, and Susanne Graf, editors. Proceedings of the Sixth International Symposium on Formal Methods for Components and Objects (FMCO 2007), volume 5382 of Lecture Notes in Computer Science. Springer-Verlag, 2008.
Playing Savitch and Cooking Games

Peter van Emde Boas

ILLC, FNWI, Universiteit van Amsterdam, Plantage Muidergracht 24, 1018 TV Amsterdam
Bronstee.com Software & Services B.V., Heemstede
Dept. Comp. Sci., University of Petroleum, Chang Ping, Beijing, P.R. China
Abstract. The complexity class PSPACE is one of the most robust concepts in contemporary computer science. Aside from the fact that space is invariant (for reasonable models at least) up to a constant factor, the class can be characterized in many alternative ways, involving parallel computation, logic problems like QBF, and interactive computation models, but also by means of games. In the literature the connection between PSPACE and games is established as a consequence either of the PSPACE completeness of the QBF problem or of the properties of the alternating computation model. Starting from either of these results, the PSPACE completeness of endgame analysis has subsequently been investigated for various specific games. The purpose of this note is to present a direct reduction of an arbitrary PSPACE problem to endgame analysis of a corresponding game. As a consequence we obtain an alternative proof of the 1970 Savitch theorem showing that PSPACE is closed under nondeterminism. Furthermore we reconsider the direct translation of endgame analysis of some game into QBF, in order to obtain a better understanding of the conditions on the game which enable this translation.
1 The Robustness of PSPACE and Its Connection to Games

1.1 PSPACE, Logic and Games
One of the most robust complexity classes in complexity theory is the class PSPACE, consisting of those problems which can be recognized on a standard sequential device in polynomial space. Robustness is ensured by the fact that the computational space measure, if defined properly, is invariant for the various traditional computational models up to a constant factor (contrary to time, where models require polynomial time overhead when simulating each other) [17]. It also happens to be the case that the class PSPACE coincides with its nondeterministic version, as expressed by the 1970 Savitch theorem [13]. More important is the fact that the same class is characterized by means of suitable parallel devices: a large collection of models (the Second Machine Class), which combine exponential growth potential and uniformity, have been proven to
satisfy the so-called Parallel Computation Thesis [17]: //PTIME = //NPTIME = PSPACE. A third characterization is provided by the PSPACE completeness of the problem in logic known by the name QBF (Quantified Boolean Formulas): given a formula of the form Q1x1[ ... Qkxk[ Φ(x1, ..., xk) ] ... ], where each Qi denotes either ∃ or ∀ and Φ(x1, ..., xk) is a formula in propositional logic, the variables range over the two truth values true and false, and the problem is to determine whether the formula is true or not [16].

Games have entered the picture by means of QBF. There exists an ancient idea in logic to correlate truth of a formula with the availability of a winning strategy in some dialogue game between a proponent, who must defend the truth of the formula, and an opponent. One of the basic rules in such a dialogue (also in similar model construction games) is that the proponent chooses values for existentially quantified variables whereas the opponent controls the choices for the universal quantifiers [2]. QBF formulas, when interpreted from this perspective, lead to the following game: the input consists of a propositional formula, together with an ordered assignment of its variables to the two players. The ordering and the assignment to the players are entailed by the structure of the quantifier prefix in QBF. During a play of the game the variables are assigned a truth value in the order given; the proponent chooses values for the existentially quantified variables and the opponent for the universally quantified ones. At the end of the game a variable-free propositional formula remains. The proponent wins the game if this formula evaluates to True, otherwise the opponent wins the game. In fact the game can be continued from this position by playing a logic evaluation game for propositional logic, where the proponent chooses a disjunct in a disjunction, the opponent chooses conjuncts in a conjunction, and a negation gives rise to a role change between the two players. The game then terminates when the propositional formula has been reduced to a propositional atom. The proponent wins iff this atom is true.
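To make the correspondence concrete, here is a minimal sketch (not taken from the paper; the representation of the quantifier prefix and the propositional matrix is my own) of the QBF dialogue game as a recursive strategy search: the proponent picks values for the existential variables, the opponent for the universal ones, and the formula is true exactly when the proponent has a winning strategy.

```python
def proponent_wins(prefix, matrix, assignment=None):
    """True iff the proponent (who controls the existential variables)
    has a winning strategy in the QBF dialogue game.
    prefix: list of (quantifier, variable) pairs, quantifier in {'exists', 'forall'}
    matrix: predicate over a complete truth assignment (a dict)."""
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)          # variable-free formula: just evaluate it
    (quantifier, var), rest = prefix[0], prefix[1:]
    outcomes = (proponent_wins(rest, matrix, {**assignment, var: value})
                for value in (True, False))
    # Proponent chooses for ∃ (one good choice suffices);
    # opponent chooses for ∀ (all choices must stay good for the proponent).
    return any(outcomes) if quantifier == 'exists' else all(outcomes)

# Example: ∀x ∃y (x ↔ y) is true, since the proponent can always match x.
prefix = [('forall', 'x'), ('exists', 'y')]
matrix = lambda a: a['x'] == a['y']
assert proponent_wins(prefix, matrix)
```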
1.2 The Alternating Model
The QBF problem has subsequently inspired the model of alternating computation [3]. One considers nondeterministic devices where configurations are labeled either existential or universal. More specifically, states are labeled, and the label of some configuration equals the label of the included state of the finite control. The computation of such a device is modelled by its complete computation tree. Whether this tree accepts or not depends on a quality assigned to this tree, obtained by the following form of backward induction: accepting configurations are labeled good and rejecting configurations are labeled bad. An existential configuration is labeled good if it has a good successor, and it is labeled bad if all its successors are bad. Universal configurations are bad once they have a single bad successor and good if all their successors are
good. The label at the root of the computation tree determines whether the computation accepts or not. Due to the presence of infinite branches in the computation tree some nodes may remain unlabeled, but in complexity theory one may assume that computations always terminate. Then the root of an alternating computation tree always collects a label good or bad, and this label determines whether the input is accepted or not. The analysis of those trees where the root can be labeled in the presence of infinite branches is in fact the most complex part of the model analysis in [3]. However, for people familiar with the classic theory of assigning meaning to recursive procedures the solution to this problem is evident: the above rule for labeling the computation tree of an alternating computation is nothing but a recursive definition whose meaning is captured by the least fixed point of the corresponding operator on the domain of labelings with values good, bad, or indeterminate.

The alternating computation model in its turn can be interpreted as a game directly. Two opposing agents are controlling a single computer. The constructive player (who controls the existential configurations) wants the computation to accept, whereas the destructive player (controlling the universal configurations) aims for rejection. The alternating device accepts if and only if the constructive player has a winning strategy. Note that the original model of alternating computation also included negating configurations, where the label (good or bad) is the inverse of the label at its (unique) son (bad or good). In game terminology these configurations correspond to a role switch between the two players. However, since it is an easy exercise to eliminate these negating configurations, this feature of the model is rarely encountered in the literature today.
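For terminating computations the labeling rule above can be read directly as a recursive program. The following small sketch (the Node class and its kind tags are my own encoding, not part of the alternating machine model) computes the good/bad label of a finite alternating computation tree, including the negating configurations mentioned above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    # 'accept' / 'reject' for leaves; 'exists' / 'forall' / 'negate' for inner nodes.
    kind: str
    children: List["Node"] = field(default_factory=list)

def good(node: Node) -> bool:
    """Backward induction on a finite alternating computation tree:
    True means the node is labeled good, False means bad."""
    if node.kind == 'accept':
        return True
    if node.kind == 'reject':
        return False
    labels = [good(child) for child in node.children]
    if node.kind == 'exists':    # one good successor suffices
        return any(labels)
    if node.kind == 'forall':    # a single bad successor spoils it
        return all(labels)
    if node.kind == 'negate':    # role switch: invert the label of the unique child
        return not labels[0]
    raise ValueError(f"unknown configuration kind: {node.kind}")

# An existential root with a rejecting leaf and a universal node whose
# successors both accept is labeled good:
tree = Node('exists', [Node('reject'),
                       Node('forall', [Node('accept'), Node('accept')])])
assert good(tree)
```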
1.3 From Alternation to Games
It should be no surprise that, starting from these artificial games based on logic and computation theory, people have investigated whether similar patterns arise in the concrete games played by humans. Around 1980 many results were obtained establishing the PSPACE completeness of endgame analysis of specific games (Checkers, Go, Hex and many others) [14,4]. And, similar to the situation in solitaire games [18], the possibility of having repeated moves and/or positions may add to the complexity, as witnessed by the fact that a generalized form of Chess is hard for Exponential Time [8]. For up-to-date information concerning such concrete games I refer to a survey paper by Eric Demaine [5] and his website [6].

The argumentation presented above establishes a relation between PSPACE and games, albeit via a detour through logic and/or alternation. However, with hindsight it is not hard to describe a direct connection between problems in PSPACE and games. The purpose of this note is a presentation of this connection. More specifically, we provide for a given set A in PSPACE a construction of a family of games G(x) such that the game G(x) has a winning strategy for the first player iff the input x ∈ A. Since the machine in PSPACE which recognizes
A may be nondeterministic, the standard complexity bound for backward induction by a recursive tree search (which is a deterministic procedure) provides us with an alternative proof of the Savitch result.

One may always wonder why research proceeds in some direction and not in another. In the context of the relation between complexity and games the initial results may have been so nice that people believed the problem to be solved: games characterize PSPACE. Subsequent investigations involved more complex models, including probabilistic moves and incomplete information, as exemplified by the theory of interactive proof systems [1,15], Zero-Knowledge systems [12] and their multi-party generalizations. These subsequent investigations showed that the claim that games capture PSPACE is not absolutely correct. The situation resembles the status of another basic observation in complexity theory: the Parallel Computation Thesis, which is true for many important models but not for all models of parallelism. An important feature is the duration of a play in the game. If plays can consume more than polynomial time the complexity of endgame analysis may become harder, similar to the situation with solitaire games, which ought to capture NP, but which can turn out to be PSPACE hard if the plays become too long [11,7]. The role of the duration of a play will become explicit when we consider a direct generic translation of endgame analysis into QBF.

Evidently, the idea of considering games to be a computational model accepting languages or solving problems (the Games as acceptors paradigm) has not captured the minds of the theoreticians or our colleagues working on games in artificial intelligence or other areas in computer science. I hope that this note may inspire them to consider this option.
2 Games and the Complexity of Endgame Analysis

2.1 Backward Induction and Its Complexity
For the purpose of this note we will consider the tree based representation of games. The class of games we will consider is the class of finite (and therefore also finitely branching) two-person, perfect-information games. There are two players, called Aethis and Thorgrim¹. A game is a finitely branching finite tree, whose nodes are called positions or configurations. Internal nodes are labeled by a player, indicating who has to move at this node. Leaves are labeled by a player indicating who has won the game at this node. A play is a path starting at the root and ending in a leaf. The dynamic interpretation of such a game tree is that the game starts at the root. The player who has to move chooses a successor node in the tree and the game moves to this new position. When there are no successor positions remaining, having arrived in a leaf, the label indicates who has won the game.
¹ Aethis is a High Elves Prince and Thorgrim is a High King of the Dwarfs in the Mythology from the Warhammer Game as produced by the Games Workshop.
The well-known algorithm of backward induction makes it possible to determine the winner at the internal nodes as well. If Aethis has to move at position p and can move to a successor position for which it already has been determined that he wins at this successor, then Aethis wins at position p. If all successor nodes are declared to be winning for Thorgrim then Thorgrim wins at position p. The same rules apply at positions where Thorgrim must move, with the roles of Aethis and Thorgrim interchanged. Note that the resulting labeling computed by backward induction reduces the dynamic characteristics of the game to a static structure. The penalty is a blowup in size: the game tree has a size which can become exponential in the duration of the game (the maximal path length in the tree).

It is relevant for the sequel to consider the complexity of the backward induction process. Using some well-designed data structure one can evaluate the labeling of the entire tree in time and space linear in the size of the tree. This algorithm can be generalized to the situation where the game is modelled by a directed graph rather than a tree (a generalization which makes infinite plays possible and which introduces the possibility of a draw as outcome). This implementation (called the safe neighbour algorithm) is described in [10].

An alternative algorithm is based on a recursive depth-first traversal of the game tree; a sketch is given below. The relevant complexity for this algorithm is the space bound O(D × S), where D equals the duration of the game (the longest path length in the tree) and S denotes the size of (a description of) a position. This space bound is a direct consequence of the basic truth one learns about implementing recursion in some ALGOL-like stack oriented programming language: space consumption is proportional to the product of the size of a stack frame (the sum of the lengths of the relevant parameters) and the recursion depth. For our special case, the recursion depth equals the maximal duration of the game (the depth of the game tree), while the relevant parameters are game configurations (proportional to the space measure of the game, if a proper space measure for games is used), move sequences (proportional to the duration of the game), and possibly move counters (logarithmic in the duration of the game and hence a lower order term in the space consumption). The time consumed by this algorithm is still linear in the size of the tree. Both algorithms can be used to evaluate the game at some arbitrary position rather than the start of the game without loss of efficiency. The complexity bounds mentioned above therefore hold for endgame analysis in general.
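The following sketch is my own rendering of this recursive depth-first evaluation; the game is supplied through hypothetical callbacks player, successors and leaf_winner rather than as an explicit tree. Successor positions are generated on demand and never stored wholesale, so each stack frame only holds a position and an iterator, which is what yields the O(D × S) space bound.

```python
def winner(position, game):
    """Backward induction by recursive depth-first traversal.
    Returns 'Aethis' or 'Thorgrim', the winner under optimal play from
    `position`.  The game is described intensionally by three callbacks
    (illustrative names, not from the paper):
      game.player(pos)      -- who has to move at an internal position
      game.successors(pos)  -- iterable of successor positions (empty at a leaf)
      game.leaf_winner(pos) -- who has won at a leaf
    """
    mover = None
    for succ in game.successors(position):   # generated on demand, one at a time
        if mover is None:
            mover = game.player(position)
        if winner(succ, game) == mover:
            return mover                      # one winning successor suffices
    if mover is None:                         # no successors at all: a leaf
        return game.leaf_winner(position)
    return 'Thorgrim' if mover == 'Aethis' else 'Aethis'
```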
2.2 Games as Acceptors
In this note we use games in order to characterize formal languages. The mechanism is an instance of the reduction concept as traditionally used in computation and complexity theory. Starting with some input x, a game G(x) is constructed, using an efficient translation (in most contexts this means that the transformation from x to a suitable description of the game G(x) can be evaluated in polynomial time and/or logarithmic space). Note that the description of the game produced by this translation does not necessarily amount to a complete listing of the game tree
(the so-called extensive form of the game); instead a more intensional description is used of the (structure of the) positions in the game and the "rules" determining the possible moves, the player who has to move, the starting position, and the designation of the winner in a leaf node. Such an indirect description of the game is required for this type of application of games in complexity theory, since in many concrete examples the transformation to the extensive form of the game would otherwise require exponential time and space... It is a curious observation that, notwithstanding the fact that many authors implicitly use such a reasonable measure for the size of a game, nobody has provided us with a formal definition of this wood measure of a game.

The formalism sketched above, called Games as acceptors hereafter, is neither new nor original. It has appeared frequently, if implicitly, in the literature of the late 1970s and early 1980s, but not as an explicit formalism. Due to the freedom to choose appropriate encodings it is easy to adjust this formalism to the needs of some particular application. In order for the formalism to work as desired, both the games considered and the reduction used should be reasonable; if no restrictions are enforced every set can be recognized using games, in the same way every set can be reduced to a trivial set when the reduction used can be arbitrary... So we need to stipulate what sort of game mappings we are going to consider to be reasonable. The relevant conditions are that both the description size of a game position and the duration of the game G(x) should be polynomial in |x|, the length of x. Moreover, successor positions, player assignments, and the determination of the winner in a leaf position are efficiently computable as well (linear time and/or space). These assumptions suffice for showing that the recursive backward induction endgame analysis algorithm runs in polynomial space. Consequently, sets described in the games as acceptors formalism belong to PSPACE, provided the assumptions made above are valid. The importance of this condition is illustrated by a typical complication which invalidates one of the crucial assumptions: the possibility of repeated positions and/or exponentially long plays in a game, which leads to the exponential time hardness of the generalized chess game in [8]. The recursive algorithm extends to the generalization where probabilistic moves are added to the game formalism and where one wants to evaluate at the root the probability that Aethis will win the game. This observation is a key argument in the proof of the "easy" inclusion IP ⊆ PSPACE in [15].
3 The Savitch Game

3.1 The Game
The Savitch theorem states that the class PSPACE is closed under nondeterminism. More specifically the theorem states that a nondeterministic s(n)-space-bounded machine can be simulated by some O(s²(n))-space-bounded deterministic machine, provided s(n) ≥ log(n).
We present a proof of this theorem by characterizing an arbitrary set in NPSPACE in terms of games. The result then is a direct consequence of the known complexity of the backward induction endgame analysis of this game.

Consider therefore a set A in NPSPACE, recognized by a nondeterministic polynomial-space-bounded machine² M. Without loss of generality we may assume that the duration of a computation of M on input x requires time exactly 2^(c·S(|x|)), where c is a suitable constant and S is the polynomial space bound for recognizing the set A; we have assumed that S(|x|) ≥ log(|x|) in order to exclude degenerate cases. This is easily achieved by turning the unique accepting configuration into a configuration which repeats itself forever.

We introduce for some input x the following two-person game (called the Savitch game) G(x). In the game there exist positions where Thorgrim has to move, consisting of a pair of configurations of M on input x, <C1(x), C2(x)>, together with a pair of integers t1 < t2. The intended meaning of this position is that Aethis has committed himself to the truth of the statement that there exists a computation of M on input x which at time t1 traverses configuration C1(x) and later, at time t2, configuration C2(x). What happens in this computation outside this time interval is irrelevant. In such a position the players perform three moves, after which a similar position results. First Thorgrim chooses a point in time t3 such that t1 < t3 < t2. Next Aethis chooses a configuration C3(x), claiming that the computation traverses C3(x) at time t3. Next Thorgrim chooses to continue the game with the pair of configurations (and corresponding points in time) <C1(x), C3(x)> or <C3(x), C2(x)>. The game ends when t2 − t1 = 1. If in this position <C1(x), C2(x)> is a valid transition Aethis has won the game; if not Thorgrim has won the game. The starting position is determined by the pair of configurations C1(x) being the starting configuration on input x, and C2(x) being the (unique) accepting configuration of M. Furthermore t1 = 0 and t2 = 2^(c·S(|x|)).

As described above the duration of the game can be exponential in |x|. Therefore we require that for some appropriate constant ε satisfying 0 < ε < 1/2 we have t1 + ε·(t2 − t1) ≤ t3 ≤ t1 + (1 − ε)·(t2 − t1). This condition suffices to guarantee that the game terminates in time proportional to S(|x|), i.e., polynomial in |x|.
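A minimal sketch of how this game can be evaluated deterministically is given below (the code and its interfaces are my own illustration, not part of the paper): it is the backward induction of Section 2.1 specialized to the Savitch game, with Thorgrim's two choices appearing as universal loops and Aethis' choice as an existential one. The caller supplies the hypothetical parameters configurations (a re-iterable collection of all configurations of M on input x within the space bound) and valid_transition (a test for a single step of M); enumerating the configurations may take exponential time but only polynomial space.

```python
def aethis_wins(c1, t1, c2, t2, configurations, valid_transition, eps=1/3):
    """True iff Aethis wins the Savitch-game position <c1@t1, c2@t2>, i.e. iff
    M can move from configuration c1 to configuration c2 in exactly t2 - t1 steps."""
    if t2 - t1 == 1:
        return valid_transition(c1, c2)                  # leaf: check one step of M
    length = t2 - t1
    lo = t1 + max(1, int(eps * length))                  # Thorgrim may only pick t3
    hi = t1 + min(length - 1, int((1 - eps) * length))   # away from both interval ends
    for t3 in range(lo, hi + 1):                         # Thorgrim picks t3 (universal)
        if not any(                                      # Aethis picks c3 (existential)
                aethis_wins(c1, t1, c3, t3, configurations, valid_transition, eps) and
                aethis_wins(c3, t3, c2, t2, configurations, valid_transition, eps)
                for c3 in configurations):               # both halves: Thorgrim's last choice
            return False
    return True
```

Since each round shrinks the time interval by a factor of at least 1 − ε, the recursion depth is O(S(|x|)), and each stack frame stores two configurations and two time stamps of O(S(|x|)) bits each; this is exactly the space bound exploited in Section 3.3.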
3.2 Why the Game Works
The above Savitch game provides us with the required reduction, due to the following proposition.

Proposition 1. The starting position in the Savitch game on input x is a winning position for Aethis iff the input x is accepted by M.
2 For definiteness one may think of M as a Turing machine, but due to the invariance and robustness of the space measure, every reasonable sequential device can be used.
The proof of this proposition is simple. If the input x is accepted by M, then Aethis (who knows some accepting computation of M on x) has a winning strategy when he just tells the truth throughout the game (so he always gives Thorgrim the correct configuration C3(x) in this computation for the time t3 proposed by Thorgrim). By the time the length of the interval has been reduced to 1, a pair of consecutive configurations will have been produced and Aethis wins the game. If the input x is not accepted, then the implicit assertion expressed by the initial position in the game is invalid. Consequently, if Thorgrim asks for the configuration of the machine at some intermediate point in time, Aethis has to come forward with a configuration such that a new invalid assertion is produced for either the first pair <C1(x), C3(x)> or for the second pair <C3(x), C2(x)>. So Thorgrim can select to continue the game with the pair about which Aethis has made a false assertion. And by the time the time interval has been reduced to 1, the error in the statement by Aethis will become evident and he loses the game. This completes the proof of the proposition. Note that in the above proof we have not asked how Thorgrim can play his winning strategy in the case that he has one; it seems to require omniscience on his part. It is interesting to compare the Savitch game above with the Interactive Protocols used in the IP = PSPACE proof in [15] and the related literature. The introduction of randomness in the model of interactive protocols has made it practicable for Thorgrim to win such games in case the input should be rejected, since the falsehood in Aethis' original assertion will be preserved during the play of the game with overwhelming probability.
3.3 How the Savitch Theorem Follows
In order to estimate the space complexity of the backward induction endgame analysis for the Savitch game we need to estimate the size of a typical configuration in the game and the duration of a typical play. Due to the requirement that Thorgrim must select a point in time not too close to the boundaries of the time interval under consideration, the duration of the game is O(S(|x|)); moreover, a typical configuration in the game consists of two machine configurations (each requiring space O(S(|x|))), together with two integers in the range 0 . . . 2^{c·S(|x|)}, which requires space O(S(|x|)) as well. The space requirement for the backward induction algorithm equals the product of these two quantities, yielding the same square overhead as the original proof of the Savitch theorem. Finally we must check that the transformation which maps the input x to the corresponding instance of the Savitch game is reasonable, but that is implicit in the description given above. The Savitch theorem [13], stating that PSPACE is closed under nondeterminism, is a direct consequence, due to the fact that the device M used was assumed to be nondeterministic, whereas the space complexity bound for the recursive backward induction algorithm holds for a deterministic computation. That the game obeys the "reasonability assumptions" invoked for the games as acceptors model is evident by construction.
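In other words (our summary of the estimate just given): the backward induction recursion has depth O(S(|x|)) and stores one game position of size O(S(|x|)) per level, so

```latex
\[
  \text{space} \;=\; O\bigl(S(|x|)\bigr)\cdot O\bigl(S(|x|)\bigr) \;=\; O\bigl(S(|x|)^{2}\bigr).
\]
```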
I seriously believe that the above proof is close to the concepts Savitch3 had in mind at the time when he was prevented, by the recursion-fiendish climate of the early 1970s, from publishing his recursive proof of his theorem. In any case our construction yields a direct connection between PSPACE and games which bypasses the unnecessary additional notions of the QBF problem or the alternating computation model.
4 Cooking Games

4.1 Structure of the Cook-Levin Formula
Since QBF is known to be a PSPACE-hard problem and we have already shown that under reasonable conditions endgame analysis of games is in PSPACE, it follows that endgame analysis of games can be reduced to instances of QBF. However, once again this reduction is established by making a detour, and it therefore remains interesting to investigate what a direct reduction will look like. The starting point is the technology developed by Cook and Levin when they proved that the SATISFIABILITY problem is NP-hard [9]. How can one talk in propositional logic about arbitrary Turing machine computations? The key ingredient is to think in terms of complete time-space diagrams of such a computation. Let A be a language in NP recognized by a machine M. Then we have the following equivalences: x ∈ A iff there exists an accepting computation of M on x which terminates in time (and space) |x|^k, where k is the exponent of the implied polynomial. This fact is equivalent to the existence of an |x|^k by |x|^k time-space diagram coding this accepting computation. This diagram now can be described by a family of propositional variables p_{i,j,k} expressing that in this diagram at position (i, j), representing time i and space position j, the symbol s_k occurs, selected from a suitable finite alphabet Σ consisting of tape symbols, Turing machine states, state-symbol pairs, and endmarkers. Using these propositional variables one can easily obtain propositional formulas (directly expressible by conjunctions of clauses) expressing the relevant conditions, like:
– on each position a symbol occurs,
– on each position only one symbol occurs,
– the first (last) row in the diagram represents the initial (unique accepting) configuration,
– successive rows in the diagram represent successive configurations.
Thanks to the locality of computation on a Turing machine, the last condition can be expressed in terms of a compatibility condition between the symbol written at some position and the three symbols placed in the three positions above it in the previous row.
3 Savitch in fact did not reject this interpretation when I visited him in 2005.
These compatibility conditions are easily expressed as a conjunction of clauses. The third condition, about the first and last row, can be expressed using unit clauses, since the content of these two rows is completely known. The existence of the time-space diagram now is equivalent to the possibility of assigning truth values to these propositional variables such that all these clauses, and hence the entire formula, become true. All propositional variables are implicitly existentially quantified, and this brings us to an instance of the SATISFIABILITY problem. A bonus feature of a proof along these lines is that the resulting formula automatically is in conjunctive normal form (the additional condition that each clause contains no more than three literals is not yet obtained, but that can be achieved by the traditional trick of introducing extra variables corresponding to connectives in the formula . . . ).
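As an illustration of how mechanical this clause generation is, here is a small sketch of ours in Python (the variable encoding and the function name are invented) producing the first two clause families, "a symbol occurs" and "only one symbol occurs", for every cell of an n by n time-space diagram.

```python
from itertools import combinations

# A hedged sketch (ours): a propositional variable (i, j, k) says that cell
# (time i, tape position j) of the time-space diagram holds symbol k.
# A clause is a list of literals; a literal is (variable, polarity).

def cell_clauses(n, sigma):
    """Clauses forcing exactly one symbol per diagram cell."""
    clauses = []
    for i in range(n):
        for j in range(n):
            # at least one symbol occurs at (i, j)
            clauses.append([((i, j, k), True) for k in sigma])
            # at most one symbol occurs at (i, j)
            for k1, k2 in combinations(sigma, 2):
                clauses.append([((i, j, k1), False), ((i, j, k2), False)])
    return clauses

# Example: clauses for a 2x2 diagram over a three-symbol alphabet.
demo = cell_clauses(2, ["a", "b", "q0"])
```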
4.2 How Alternation Expresses Having a Winning Strategy
When considering endgame analysis of a game, the role of a computation is taken by the play resulting from the move sequences selected by the two players. In general, performing a move by one of the players may require more than a single transition of the machine implementing the resulting play, but based on the reasonability conditions enforced on the game a polynomial number of transitions will suffice. In fact one can assume without loss of generality that the players move in turns at every transition, simply by introducing dummy moves where a player in fact has no choice (or, more specifically, can only choose between several instances of the same move). Also one can assume that in every configuration the player who has to move has a choice between exactly two options. Consequently a play corresponds to a binary move sequence whose length is the duration of the machine simulation of the game play, and where the first (second) player controls the odd (even) positions in this sequence of moves. In the translation to propositional logic we therefore introduce new propositional variables m_i expressing that at time i the player who is to move chooses his first option. These variables are used in a propositional formula of the form Φ(m_1, . . . , m_T, . . . , p_{i,j,k}, . . . ) expressing that the p_{i,j,k} variables describe a correct play of the game following the move sequence m_1, . . . , m_T. The structure of this formula is similar to the Cook-Levin formula for the SATISFIABILITY case. Note that the move sequence variables m_i will only be used within the compatibility relations between the successive configurations. The p_{i,j,k} variables are, as previously, (implicitly) existentially quantified. The QBF instance which expresses that in the initial configuration the first player has a winning strategy now simply becomes:

∃m_1 ∀m_2 . . . Q_T m_T ∃ . . . p_{i,j,k} . . . [ Φ(m_1, . . . , m_T, . . . , p_{i,j,k}, . . . ) ]

So the alternations all occur for the m_i variables, showing immediately that the number of alternations is proportional to the duration of the (simulation of the) game. Evidently, the number of alternations can be made equal to the number of true move transfers in the real game, at the price of a more refined coding.
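A small sketch of ours (names invented) of how the quantifier prefix of such a QBF instance can be assembled, with the two players alternating on the move variables and the diagram variables kept existential:

```python
def qbf_prefix(T, pvars):
    """Quantifier prefix for the game formula: the players alternate on the
    move variables m_1..m_T, and the diagram variables stay existential.
    A sketch, not the text's exact encoding."""
    prefix = [("exists" if t % 2 == 1 else "forall", f"m{t}")
              for t in range(1, T + 1)]
    prefix += [("exists", p) for p in pvars]
    return prefix

# e.g. qbf_prefix(3, ["p_0_0_a"]) ->
# [('exists', 'm1'), ('forall', 'm2'), ('exists', 'm3'), ('exists', 'p_0_0_a')]
```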
In order that this instance can be produced using a polynomial time reduction it is clearly necessary that the game duration is polynomial in the original input. If the game is reasonable then this condition is also sufficient since the propositional formula Φ(m1 , ..., mT , ..., pijk , ...) has size (measured in the number of variables) proportional to the square of the time required for the computation. However, there is another aspect where we can be more liberal. In the construction given above the Cook-Levin part is in fact an instance of SATISFIABILITY, but for a reduction to QBF one could have a more complex QBF instance at this position instead. This indicates that the problem of endgame analysis of combinatorial games remains in PSPACE even when the validation of a given game report is in PSPACE rather than just in P. I doubt however that this generalization will have useful applications.
Final Remarks

The observations included in this note originate from the desire to establish in a direct way the connections between the class PSPACE and finite combinatorial games. This desire arose in the context of teaching on the topic of games and computational complexity, which I have been involved with for the last decade [19]. The alternative proof of the Savitch theorem surfaced as an unexpected bonus. There remains the motivation for contributing these observations to my colleague Willem-Paul de Roever, who is not known to have a great interest in complexity issues. There are several reasons: one of the key arguments is something we learned together in Kruseman Aretz's course on the implementation of ALGOL 60. The fact that the semantics of a not necessarily terminating alternating computation is obtained using a classic least fixed point argument shows a possibly unexpected relation between our fields (but the same case could be made about deductive databases as well). Most important: the topics described here are all rooted in the early seventies, and that was the time when we were working together.
References
1. Babai, L., Moran, S.: Arthur-Merlin Games: A Randomized Proof System and a Hierarchy of Complexity Classes. J. Comput. Syst. Sci. 36, 254–276 (1988)
2. van Benthem, J.F.A.K.: Logic Games are Complete for Game Logics. Studia Logica 75, 183–203 (2003)
3. Chandra, A.K., Kozen, D., Stockmeyer, L.J.: Alternation. J. Assoc. Comput. Mach. 28, 114–133 (1981)
4. Chlebus, B.S.: Domino-Tiling Games. J. Comput. Syst. Sci. 32, 374–392 (1986)
5. Demaine, E.D.: Playing Games with Algorithms: Algorithmic Combinatorial Game Theory. In: Sgall, J., Pultr, A., Kolman, P. (eds.) MFCS 2001. LNCS, vol. 2136, pp. 18–32. Springer, Heidelberg (2001)
6. http://erikdemaine.org/games/
7. Flake, G.W., Baum, E.B.: Rush Hour is PSPACE-complete, or "Why you should generously tip parking lot attendants". Theor. Comput. Sci. 270, 895–911 (2002)
8. Fraenkel, A.S., Lichtenstein, D.: Computing a Perfect Strategy for n×n Chess Requires Time Exponential in n. J. Combin. Theory Ser. A 31, 199–213 (1981)
9. Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, New York (1979)
10. Gijlswijk, V.W., Kindervater, G.A.P., van Tubergen, G.J., Wiegerinck, J.J.O.O.: Computer Analysis of E. de Bono's L-Game. Rep. Math. Inst. UvA 76-18 (1976)
11. Gilbert, J.R., Lengauer, T., Tarjan, R.E.: The Pebbling Problem is Complete in Polynomial Space. SIAM J. Comput. 9, 513–524 (1980)
12. Goldwasser, S., Micali, S., Rackoff, C.: The Knowledge Complexity of Interactive Proof Systems. SIAM J. Comput. 18, 186–208 (1989)
13. Savitch, W.J.: Relationships between Nondeterministic and Deterministic Tape Complexities. J. Comput. Syst. Sci. 4, 177–192 (1970)
14. Schaefer, T.J.: Complexity of Some Two-Person Perfect-Information Games. J. Comput. Syst. Sci. 16, 185–225 (1978)
15. Shamir, A.: IP = PSPACE. J. Assoc. Comput. Mach. 39, 878–880 (1992)
16. Stockmeyer, L.J., Meyer, A.R.: Word Problems Requiring Exponential Time. In: Proc. 5th ACM STOC, pp. 1–9 (1973)
17. van Emde Boas, P.: Machine Models and Simulations. In: van Leeuwen, J. (ed.) Handbook of Theoretical Computer Science, vol. A, pp. 3–66. North-Holland / MIT Press (1990)
18. van Emde Boas, P.: The Convenience of Tilings. In: Sorbi, A. (ed.) Complexity, Logic and Recursion Theory. Lect. Notes in Pure and Applied Math., vol. 187, pp. 331–363 (1997)
19. van Emde Boas, P.: Classroom material for the course Games and Complexity for the year 2007–2008, http://staff.science.uva.nl/~peter/teaching/gac08.html
Compositionality: Ontology and Mereology of Domains
Some Clarifying Observations in the Context of Software Engineering

Dines Bjørner1 and Asger Eir2

1 DTU Informatics, Techn. Univ. of Denmark, DK-2800 Kgs. Lyngby, Denmark
[email protected]
2 Maconomy, Vordingborggade 18-22, DK-2100 Copenhagen Ø, Denmark
[email protected]
www.eir-home.dk
Abstract. In this discursive paper we discuss compositionality of (i) simple entities, (ii) operations, (iii) events, and (iv) behaviours. These four concepts, (i)–(iv), together define a concept of entities. We view entities as "things" characterised by properties. We shall review some such properties. Mereology, the study of part-whole relations, is then applied to a study of composite entities. We then speculate on compositionality of simple entities, operations, events and behaviours in the light of their mereologies. We end the paper with some speculations on the rôle of Galois connections in the study of compositionality and domain mereology.
1 A Prologue Example
We begin with an example: an informal and a formal description of fragments of a domain of transportation. The purpose of the example is to anchor our discussion of entities, and to enlarge it with further examples supporting that discussion, and hence the discussion of mereology and ontology. The formalisation of the example narratives is expressed in the RAISE Specification Language, RSL [6,7,8].

Narrative. (0.) There are links and there are hubs. (1.) Links and hubs have unique identifiers. (2.) A transport net consists of links and hubs. We can either model nets as sorts and then observe links and hubs from nets:

type
  N, L, H
value
  obs Ls: N → L-set,
  obs Hs: N → H-set

or we model nets as Cartesians of links and hubs:
Home address: Fredsvej 11, DK-2840 Holte, Denmark. Professor Emeritus.
type
  L, H,
  N = L-set × H-set

(3.) Links connect exactly two distinct hubs. (4.) Hubs are connected to one or more distinct links. (5.) From a link one can observe the two unique identifiers of the hubs to which it is connected. (6.) From a hub one can observe the set of one or more unique identifiers of the links to which it is connected. (7.) Observed unique link (hub) identifiers are indeed identifiers of links (hubs) of the net in which the observation takes place.

Formalisation

type
  0.−1. L, LI, H, HI,
  2. N = L-set × H-set
axiom
  3.−4. ∀ (ls,hs):N • card ls ≥ 1 ∧ card hs ≥ 2
value
  1. obs LI: L → LI, obs HI: H → HI,
  5. obs HIs: L → HI-set axiom ∀ l:L • card obs HIs(l)=2,
  6. obs LIs: H → LI-set axiom ∀ h:H • card obs LIs(h)≥1
axiom
  7. ∀ (ls,hs):N •
     ∀ l:L • l ∈ ls ⇒ ∀ hi:HI • hi ∈ obs HIs(l) ⇒ ∃ h:H • hi=obs HI(h) ∧ h ∈ hs ∧
     ∀ h:H • h ∈ hs ⇒ ∀ li:LI • li ∈ obs LIs(h) ⇒ ∃ l:L • li=obs LI(l) ∧ l ∈ ls

Narrative. (8.) There are vehicles (private cars, taxis, buses, trucks). (9.) Vehicles, when "on the net", i.e., "in the traffic" (see further on), have positions. Vehicle positions are (10.) either at a hub, in which case we could speak of the hub identifier as being a suitable designation of its location, (11.) or along a link, in which case we could speak of a quadruple of a (from) hub identifier, a(n along) link identifier, a real (a fraction) properly between 0 and 1 designating a relative displacement "down" the link, and a (to) hub identifier, as being a suitable designation of its location. (12.) Time is a discrete, dense well-ordered set of time points and time points are further undefined. (13.) Traffic can be thought of as a continuous function from time to vehicle positions. We augment our model of traffic with the net "on which it runs".

Formalisation

type
  8. V
  9. VPos == HubPos | LnkPos
  10. HubPos = HP(hi:HI)
  11. LnkPos = LP(fhi:HI,li:LI,f:Real,thi:HI)
  12. Time
  13. TRF = (Time → (V →m VPos)) × N
Closing Remarks. We omit treatment here of traffic well-formedness: that time changes and vehicle movement occurs monotonically; that there are no “ghost” vehicles (vehicles “disappear” only to “reappear”), that two or more vehicles “one right after the other” do not “suddenly” change relative positions while continuing to move in the same direction, etc.
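As a side note, the well-formedness conditions (3.)–(7.) of the narrative can be checked mechanically on a concrete net. The following is a small executable sketch of ours in Python (not RSL); the dictionary representation is invented purely for the illustration.

```python
# A hedged executable rendering (ours, not part of the RSL development):
# a net is a pair (links, hubs); each link knows two hub identifiers,
# each hub knows one or more link identifiers.

def wf_net(links, hubs):
    """Check narrative items (3.)-(7.) on a concrete net.

    links: dict link_id -> set of hub ids (must have size 2)
    hubs:  dict hub_id  -> set of link ids (must be non-empty)
    """
    ok_arity = all(len(his) == 2 for his in links.values()) and \
               all(len(lis) >= 1 for lis in hubs.values())
    ok_ids   = all(hi in hubs for his in links.values() for hi in his) and \
               all(li in links for lis in hubs.values() for li in lis)
    return ok_arity and ok_ids

# two hubs joined by one link
print(wf_net({"l1": {"h1", "h2"}}, {"h1": {"l1"}, "h2": {"l1"}}))  # True
```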
2 Introduction
The narrow context of this essay is that of domain engineering: the principles, techniques and tools for describing domains, as they are, with no consideration of software, hence also with no consideration of requirements. The example of Sect. 1 describes (narrates and formalises) some aspects of a domain. The broader context of this essay is that of formal software engineering: the phase, stage and stepwise development of software, starting with Domain descriptions, evolving into Requirements prescriptions and ending with Software design in such a way that D, S |= R, that is: software can be proven correct with respect to requirements, with the proofs and the correctness relying on the domain as described.
2.1 Domain Engineering
The Domain Engineering Dogma. Before software be designed, we must understand its requirements. Before requirements can be expressed, we must understand the application domain. The Software Development Triptych. Thus, we must first describe the domain as it is. Then we can prescribe the requirements as we would like to see them implemented in software. First then can we specify the design of that software. Domain Descriptions. A domain description specifies the domain as it is. (The example traffic thus allows vehicles to crash.) A domain description does not hint at requirements let alone software to be designed. A domain description specifies observable domain phenomena and concepts derived from these. Example: a vehicle is a phenomenon; a vehicle position is also a phenomenon, but the way in which we suggest to model a position is a concept; similarly for traffic A domain description does not describe human sentiments (Example: the bus ride is beautiful ), opinions and thoughts (Example: the bus ride is a bit too expensive ), knowledge and belief (Example: I know of more beautiful rides and I believe there are cheaper bus fares ), promise and commitment
(Example: I promise to take you on a more beautiful ride one day) or other such sentential, modal structures. A domain description primarily specifies semantic entities of the domain intrinsics (Example: the net, links and hubs are semantic quantities), semantic entities of support technologies already "in" the domain, semantic entities of management and organisation, syntactic and semantic aspects of domain rules and regulations, syntactic and semantic aspects of domain scripts (Example: bus time tables, respectively the bus traffic) and semantic aspects of human domain behaviour. The domain description, to us, is (best) expressed when both informally narrated and formally specified. A problem, therefore, is: can we formalise all the observable phenomena and the concepts derived from these? If they are observable or derived, we should be able to formalise them. But computing science may not have developed all the necessary formal description tools. We shall comment on that problem as we go along.
2.2 Compositionality
We shall view compositionality "in isolation"! That is, not as in the conventional literature where the principle of compositionality is the principle that the meaning of a complex expression is determined by the meanings of its constituent expressions and the rules used to combine them. We shall look at not only composite simple entities but also composite operations, events and behaviours. We shall look at these in isolation from their meaning. But we shall then apply the principle of compositionality such that the meaning of a composite operation [event, behaviour] is determined by the meanings of its constituent operations [events, behaviours] and the rules used for combining these. We shall, in this paper, only go halfway towards this goal: we look only at possible rules used to combine simple entities, functions, events and behaviours. For simple entities we can say the following about compositionality. A key idea seems to be that compositionality requires the existence of a homomorphism between the entities of a universe A and the entities in some other universe B. Let us think of the entities of one system, A, as a set, U, upon which a number of operations are defined. This gives us an algebra A = (U, F_ν)_{ν∈Γ} where U is the set of (simple and complex) entities and every F_ν is an operation on A with a fixed arity. The algebra A is interpreted through a meaning-assignment M: a function from U to V, the set of available meanings for the entities of U. Now consider F_ν, a k-ary syntactic operation on A. M is F_ν-compositional just in case there is a k-ary function G on V such that whenever F_ν(u_1, . . . , u_k) is defined, M(F_ν(u_1, . . . , u_k)) = G(M(u_1), . . . , M(u_k)). In denotational semantics we take this homomorphism for granted, while applying it to, as we shall call them, syntactic terms of entities. We shall, in this paper, speculate on compositionality of non-simple entities. That is, compositionality of operations, events and behaviours; that is, of interpretations over non-simple entities (as well as over simple entities).
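A toy instance of this homomorphism condition (ours, purely for illustration): take the syntactic operation F to be concatenation of strings, the meaning assignment M to be string length, and G to be addition.

```python
# Minimal sketch (ours): M is F-compositional because a suitable G exists.

def F(u1, u2):          # syntactic composition: concatenation
    return u1 + u2

def M(u):               # meaning assignment: length of the denoted string
    return len(u)

def G(v1, v2):          # semantic counterpart of F
    return v1 + v2

assert M(F("ab", "cde")) == G(M("ab"), M("cde"))   # 5 == 5
```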
2.3 Ontology
By an ontology we shall understand an explicit, formal specification of a shared conceptualisation1. We shall claim that domain engineering, as treated in [6,9,10], amounts to principles, techniques and tools for formal specification of shared conceptualisations. The conceptualisation is of a domain, typically a business, an industry or a service domain. One thing is to describe a domain, that is, to present an ontology for that domain. Another thing is for the description to be anchored around a description ontology: a set of principles, techniques and tools for structuring descriptions. In a sense we could refer to this latter as a meta-ontology, but we shall avoid the prefix 'meta-' and instead understand it so. The conceptualisation is of the domain of software engineering methodology, especially of how to describe domains.
2.4 Mereology
Mereology is the theory of parthood relations: of the relations of part to whole and the relations of part to part within a whole. The issue is not simply whether an entity is a proper part, p_p, of another part, p_ω (for example, "the whole"), but also whether a part, p_ι, which is a proper part of p_p, can also be a part of another part, p_ξ, which is not a part of p_p, etcetera. To straighten out such issues, axiom systems for mereology (part/whole relations) have been proposed [34,15,16]. The term mereology seems to have been first used in the sense we are using it by the Polish mathematical logician Stanisław Leśniewski [37,48]. The concept of a Calculus of Individuals [35,16, Leonard & Goodman (1940) and Clarke (1981)] is related to that of mereology. We shall return to the issue of mereology much more in this paper. In fact, we shall outline "precisely" what our entity mereologies are.
2.5 Paper Outline
The paper is structured as follows: after Sect. 2's brief characteristics of domain engineering, compositionality, ontology and mereology, Sect. 3 overviews what we shall call an ontological aspect of description structures, namely that of entities (having properties). Sections 4–7 will then study (i) simple, (ii) operation, (iii) event and (iv) behaviour entities in more detail, both atomic and composite. For the composite entities we shall then speculate on their mereology. Section 8 concludes our study of some mereological aspects of composite entities by relating these to definitions and axioms of proposed axiom systems for mereology. Section 9 takes a brief look at rôles that the concept of Galois Connections may have in connection with composite entities.
1 http://www-ksl.stanford.edu/kst/what-is-an-ontology.html
3 An Ontology Aspect of Description Structures
This section provides a brief summary of Sects. 4–7. The choice of analysing a concept of compositionality from the point of view of simple entities, operations, events, and behaviours reflects an ontological choice, that is, a choice of how we wish to structure our study of conceptions of reality and the nature of being. We shall take the view that an ontology for the domain of descriptions evolves around the concepts of entities inseparably from their properties. More concretely, "our" ontology consists of entities of the four kinds of specification types: simple entities, operations, events and behaviours. One set of properties is that of an entity being 'simple', being 'an operation' (or function), being 'an event' or being 'a behaviour'. We shall later introduce further categories of entity properties.
3.1 Simple Entities
In a rather concrete, "mechanistic" sense, we understand simple entities as follows:2 simple entities have properties which we model as types and values. When a simple entity is concretely represented, "inside" a computer, it is usually implemented in the form of data. By a state, σ:Σ, we shall understand a designated set of entities. Entities are the target of operations: functions being applied to entities and resulting in entities. In Sect. 4 we shall develop this view further. Examples: The nets, links, hubs, vehicles and vehicle positions of our guiding example are simple entities. Simple domain entities are either atomic or composite. Composite entities are here thought of (i.e., modelled as) finite or infinite sets of (simple) entities: {e1,e2,. . .,en}, finite Cartesians (i.e., groupings [records, structures] of (simple) entities): (e1,e2,. . .,en), finite or infinite lists (i.e., ordered sequences of (simple) entities): ⟨e1,e2,. . .,en⟩, maps (i.e., finite associations of (simple) entities to (simple) entities): [ed1→er1, ed2→er2, . . ., edn→ern], and functions (from (simple) entities to (simple) entities): λv : E(v).3
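For concreteness, the composite forms just listed have direct counterparts in an ordinary programming language; a small illustration of ours in Python (the values are invented):

```python
# A hedged illustration (ours): the composite simple-entity forms of the text
# rendered as ordinary Python values.
finite_set  = {"l1", "l2", "h1"}                        # set of entities
cartesian   = ("l1", "h1", 0.5, "h2")                   # fixed grouping
finite_list = ["h1", "l1", "h2"]                        # ordered sequence
finite_map  = {"l1": ("h1", "h2"), "l2": ("h2", "h3")}  # entity-to-entity association
function    = lambda hub_id: "H-" + hub_id              # entities to entities

print(function("h1"))   # H-h1
```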
3.2 Operations
An operation (synonym for function) is something which, when applied to an entity or an attribute4, yields an entity or an attribute. If an operation op's argument and the resulting entity qualify as states (σ:Σ), then we have a state-changing action: op: [. . . ×]Σ→Σ.
2 The term 'simple entity' is chosen in contrast to the ('complex') function, event and behaviour entities. We shall otherwise not use the term 'complex' as it has no relation to composition, but may be confused with it.
3 Note: The decorated e's in set, Cartesian, list and map enumerations stand for actual entities, whereas the v in λv : E(v) is a syntactic variable and E(v) stands for a syntactic expression with a free variable v.
4 See Sect. 4.1 for the distinction between entity and attribute.
If an operation argument entity qualifies as a state and if the resulting entity can be thought of as a pair of which (exactly) one element qualifies as a state, then we have a value-yielding action with a, perhaps, beneficial side effect: op: [. . . ×]Σ→(Σ×VAL). If the operation argument does not qualify as a state then we have a value-yielding function with no side effect on the state. Since entities have types, we can talk of the signature of an operation as consisting of the name of the operation, the structure of types of its argument entities, and the type of the resulting entities. We gave two such signatures (for operation op) above. (The [. . . ×] indicates that there could be other arguments than the explicitly named state entity Σ.) Example: The unique identifier observer functions of our guiding example are operations. They apply to entities and yield entities or attributes: obs Ls:N→L-set and obs Hs:N→H-set yield entities, and obs LI:L→LI and obs HI:H→HI yield attributes. "First Class" Entities. Before closing this section, Sect. 3.2, we shall "lift" operations, hence actions and functions, to be first class entities!
3.3 Events
In [33, Lamport], events are the same as executed atomic actions. We shall not really argue with that assumption. In [33, Lamport] events are of interest only in connection with the concept of processes (for which we shall use the term ‘behaviours’). We shall certainly follow that assumption. We wish to reserve the term ‘event’ for such actions which (i) are either somehow shared between two or more behaviours, (ii) or “occur” in just one behaviour. We assume an “external”, further undefined behaviour. For both of these two cases, we need a way of “labelling” events. We do so by labelling, βi , behaviours, βi , that is, ascribing names to behaviours. Let the external behaviour have a distinguished, “own” label (e.g., βχ ). Now we can label an event by the set of labels of the processes “in” which the event occur. That is, with either two or more labels, or just one. When the external behaviour label βχ is in the set then it shall mean that the event either “originates” outside the behaviours of the other labels, or is “directed” at all those behaviours. We do not, however, wish to impose any direction! Here we wish to remind the reader that “our” behaviours take place “in the domain”, that is, they are not necessarily those of computing processes, unless, of course, the domain is, or (“strongly”) includes that of computing; and “in the domain” we can always speak “globally”, that is: we may postulate properties that may not be computable or even precisely observable, that is: two time stamps may be different even though they are of two actions or events that actually did or do take place simultaneously. Thus: we are not bothered by clocks, that is, we do not enforce a global clock; we do not have to sort out ordering problems of events, but can leave that to a later analysis of the described domain, recommendably along the lines of [33, Lamport].
Time and Time Stamps. Time is some dense set of time points. A time stamp is just a time designator, t. Two time stamps are consecutive if they differ by some infinitesimal time difference, tδ. We shall assume the simplifying notion of a "global" clock. For the kind of distributed systems that are treated in [33, Lamport] this may not be acceptable, but for any actual domain that is not subject to Einsteinian relativity, and most are, it will be OK. Once we get to implementation in terms of actual systems, possibly governed by erroneously set clocks, one shall have to apply, for example, [33, Lamport]'s treatment.

Definition: Event. An event, E : {(β1, σ1, P1, σ1', τ1), (β2, σ2, P2, σ2', τ2), . . . , (βn, σn, Pn, σn', τn)}, involves a set of behaviours, βi, and is expressed in terms of a set of event designators, quintuplets containing:
– a label βi,
– a before state σi;
– a predicate Pi;
– an after state σi';
  ◦ such that Pi(σi, σi')
  ◦ but where it may be the case that σi = σi';
– and a time stamp τi
  ◦ which is either a time ti
  ◦ or a time interval [ti', ti'']
    ∗ such that ti'' − ti' = τδi > 0
    ∗ but where τδi is otherwise considered a tiny interval
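A hedged, executable rendering of ours of an event designator as a record (the field and type names are invented; only the quintuplet structure follows the definition above):

```python
from dataclasses import dataclass
from typing import Any, Callable, Tuple, Union

State = Any
Time = float
TimeStamp = Union[Time, Tuple[Time, Time]]   # a time or a (tiny) interval

@dataclass
class EventDesignator:
    label: str                                  # behaviour label beta_i
    before: State                               # sigma_i
    predicate: Callable[[State, State], bool]   # P_i
    after: State                                # sigma_i'
    stamp: TimeStamp                            # tau_i

    def holds(self) -> bool:
        """The designator is consistent when P_i(sigma_i, sigma_i') holds."""
        return self.predicate(self.before, self.after)

# an event is then a set of such designators, one per involved behaviour
ed = EventDesignator("beta_1", {"balance": 10}, lambda s, s2: s != s2,
                     {"balance": 0}, 12.5)
print(ed.holds())   # True
```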
An event, E, may change one or more behaviour states, selectively, or may not — in which latter case σi = σi for some i. Thus we do not consider the time(s) when expressing conditions Pi . Definition: Same Event We assume two or more distinct behaviours β1 , β2 , . . . , βn . Two or more events E1i , E2i and Eni are said to reflect, i.e., to be the same event iff their models, as suggested above, are ‘identical’ modulo predicates5 and time stamps, iff these time stamps differ at most “insignificantly”, a decision made by the domain describer6 , and iff this model involves the label sets β1 , β2 , . . . , βn for behaviours β1 , β2 , . . . , βn This means that any one event which is assumed to be the same and thus to occur more-or-less simultaneously in several behaviours is “identically” recorded (modulo predicates and time stamps) in those behaviours. We can accept this definition since it is the domain describer who decides which events to model and since it is anyway only a postulate: we are “observing the domain”! 5 6
5 The predicates can all "temporarily", for purposes of "identicality", be set to true.
6 The time stamps can all "temporarily", for purposes of "identicality", be set to the smallest time interval within which all time stamps of the event are included.
30
D. Bjørner and A. Eir
Definition: Event Designator The event E : {(β1 , σ1 , P1 , σ1 , τ1 ), (β2 , σ2 , P2 , σ2 , τ2 ), . . . , (βn , σn , Pn , σn , τn )} consists of n event designators (βi , σi , Pi , σi , τi ), that is: an event designator is that kind of quintuplet. Example: Withdrawal of funds from an account (i.e., a certain action) leads to either of two events: either the remaining balance is above or equal to the credit limit, or it is not The withdrawal effects a state change (into state σ ), but “below credit limit” event does not cause a further state change (that is: σ = σ ). In the latter case that event may trigger a corrective action but the ensuing state change (from some (possibly later state) σ to, say, σ , that is, σ is usually not a “next state” after σ ). Example: A bank changes its interest rate. This is an action by the behaviour of a national (or federal) bank, but is seen as an event by (the behaviour of) a(ny) local bank, and may cause such a bank to change (i.e., an action) its own interest rate Example: A bank goes bankrupt at which time a lot of bank clients loose a lot of their money The bankruptcy event causes a number of customer suddenly loosing their monies events. Some events are explicitly willed, and are “un-interesting”. Other events are “surprising”, that is, are not willed, and are thus “interesting”. Being interesting or not is a pragmatic decision by the domain describer. “First Class” Entities. Before closing this section, Sect. 3.3, we shall “lift” events to be first class entities! 3.4
Behaviours
A simple, sequential behaviour, β, is a possibly infinite, possibly empty sequence of actions and events. Example: The movement of a single vehicle between two time points forms a simple, sequential behaviour We shall later construct composite behaviours from simple behaviours. In essence such composite behaviours is “just” a set of simple behaviours. In such a composite behaviour one can then speak of “kinds” of consecutive or concurrent behaviours. Some concurrent behaviours can be analysed into communicating, joined, forked or “general” behaviours such that any one concurrent behaviour may exhibit two or more of these ‘kinds’. Section 7.3 presents definitions of composite behaviours. “First Class” Entities. Before closing this section, Sect. 3.4, we shall “lift” behaviours to be first class entities!
3.5 First-Class Entities
Operations are considered designators of actions. That is, they are action descriptions. We do not, in this paper, consider forms of descriptions of events (labels) and behaviours. In that sense, of not considering, this section is not “completely” symmetrical in its treatment of operations, actions, events and behaviours as first-class entities. Be that as it may. Operations as Entities. Operations may be (parametrised by being) applicable to operation entities — and we then say that the operations are higher-order operations: Sorting a set of rail units according to either length or altitude implies one sorting operation with either a select rail unit length or a select altitude parameter. (The ‘select’ is an operation.) Actions as Entities. Similarly operations may be (parametrised by being) applicable to actions: Let an action be the invocation of the parametrised sorting function — cf. above. Our operation may be that of observing storage performance. There are two sorting functions: one according to rail unit length, another according to rail unit altitude. We are now able, given the action parameter, to observe, for example, the execution time! Events as Entities. Operations may be (parametrised by being) applicable to a set of event entities: Recall that events are dynamic, instantaneous ‘quantities’. A ‘set of event entities’ as a parameter can be such a quantity. One could then inquire as to which one or more events occurred first or last, or, if they had a time duration, which took the longest to occur! This general purpose event handler may then be further parametrised by respective rail or air traffic entities! Behaviours as Entities. Finally operations may be (parametrised by being) applicable to behaviours. We may wish to monitor and/or control train traffic. So the monitoring & control operation is to be real-time parametrised by train traffics. Similar for air traffic, automobile performance, etc. We are not saying that a programming language must provide for the above structures. We are saying that, in a domain, as it is, we can “speak of” these parametrisations. Therefore we conclude that actions, events and behaviours — that these dynamic entities which occur in “real-time” — are entities. Whether we can formalise this “speaking of” is another matter. 3.6
The Ontology: Entities and Properties
On the background of the above we can now summarise our ontology: it consists of (“first class”) entities inseparable from their properties. We hinted at properties, in a concrete sense above: as something to which we can ascribe a name, a type and a value. In contrast to common practice in treatises on
ontology [20,24,36,40,49], we “fix” our property system at a concrete modelling level around the value types of atomic simple entities (numbers, Booleans, characters, etc.) and composite simple entities (sets, Cartesians, lists, maps and functions); and at an abstract, orthogonal descriptional level, following Jackson [32], static and inert, active (autonomous, biddable, programmable) and reactive dynamic types; continuous, discrete and chaotic types; tangible and intangible types; one-, two-, etc., n-dimensional types; etc. Ontologically we could claim that an entity exists qua its properties; and the only entities that we are interested in are those that can be formalised around such properties as have been mentioned above.
4 Simple Atomic and Composite Entities
Entities are either atomic or composite. The decision as to which entities are considered what is a decision taken solely by the describer. The decision is based on the choice of abstraction level being made.
4.1 Simple Attributes — Types and Values
With any entity, whether atomic or composite, and then also with its sub-entities, etcetera, one can associate one or more simple attributes.
– By a simple attribute we understand a pair of a designated type and a named value.
Attributes are not entities: they merely reflect some of the properties of an entity. Usually, we associate a name with an entity. Such an association is purely a pragmatic matter, that is, not a syntactic and not a semantic issue.
4.2 Atomic Entities
– By an atomic entity we intuitively understand a simple attributes entity which 'cannot be taken apart' (into other, sub-entities).
Example: We illustrate attributes of an atomic entity.

Atomic Entity: Bus Ticket
  Type                   Value
  Bus Line               Greyhound
  From, Departure Time   San Francisco, Calif.: 1:30 pm
  To, Arrival Time       Reno, Nevada: 6:40 pm
  Price                  US $ 52.00
‘Removing’ attributes from an entity destroys its ‘entity-hood’, that is, attributes are an essential part of an entity.
4.3 Composite Entities
– By a composite entity we intuitively understand an entity (i) which "can be taken apart" into sub-entities, (ii) where the composition of these is described by its mereology7, and (iii) which further possesses one or more attributes.
Example: We "diagram" the relations between sub-entities, mereology and attributes of transport nets.

Composite Entity: Transport Net
  Sub-entities: Links, Hubs
  Mereology: a "set" of one or more l(ink)s and a "set" of two or more h(ub)s such that each l(ink) is delimited by two h(ub)s and such that each h(ub) connects one or more l(ink)s
  Attributes:
    Types               Values
    Multimodal          Rail, Roads, Sea Lane, Air Corridor
    Transport Net of    Denmark
    Year Surveyed       2008
4.4 Discussion
Attributes Domain entity attributes whether of atomic entities or of composite entities are modelled as a set of pairs of distinctly named types and values. It may be that such entity attributes, some or all, could be modelled differently, for example as a map from type names to values, or as a list of pairs of distinctly named types and values, or as a Cartesian of values, where the position determines the type name — somehow known “elsewhere” in the formalisation, etcetera But it really makes no difference as remarked earlier: one cannot really remove any one of these attributes from an entity. Compositions. We formally model composite entities in terms of its immediate sub-entities, and we model these as observable, usually as sets, immediately from the entity (cf. obs Hs, obs Ls, N). In the example composite entity (nets) above the net can be considered a graph, and graphs, g:G, are, in Graph Theory typically modelled, for example, as type V G = (V × V)-set where vertexes (v:V ) are thought of a names or references. 7
Cf. Sect. 2.4 on page 26: How parts (sub-entities) relate to the whole and thus the relations between parts (sub-entities).
We shall comment on such a standard graph-theoretic model in relation to a domain model which somehow expresses a graph: First it has abstracted away all there may otherwise be to say about what the graph actually is an abstraction of. In such models we model edges in terms of pairs of vertexes. That is: edges do not have separate “existence” — as have segments. In other words, since we can phenomenologically point to any junction and a segment we must model them separately, and then we must describe the mereology of nets separate from the description of the parts.
5 Atomic and Composite Operations
Operations are either atomic or composite. The decision as to which operations are considered what is a decision solely taken by the describer.
5.1 Signatures — Names and Types
With any operation, whether atomic or composite, and then also with its sub-operations, etcetera, one can associate a signature which we represent as a triple: the name of the operation, the arguments to which the operation is applicable, and the result, whether atomic or composite.
– By an argument and a result we understand the same as an attribute or an entity.
5.2 Atomic Operations
We understand operations as functions in the sense of recursive function theory [39]8, but extended with postulated primitive observer (obs ...), constructor (mk...) and selector (s ...) functions, non-determinacy9 and non-termination (i.e., the result of non-termination is a well-defined chaotic value).
– By an atomic operation we intuitively understand an operation which 'cannot be expressed' in terms of other (phenomenological or conceptual) primitive recursive functions.
Example: Atomic Operations. The operation of obtaining the length of a segment, obs Lgth, is an atomic operation. The operation of calculating the sum, sum, of two segment lengths is an atomic operation.

type
  Lgth
value
  obs Lgth: L → Lgth,
  sum: Lgth × Lgth → Lgth
8 See: http://www-formal.stanford.edu/jmc/basis1/basis1.html
9 Hinted at in [39] as ambiguous functions, cf. Footnote 8.
5.3 Composite Operations
– By a composite operation we intuitively understand an operation which can best be expressed in terms of other (phenomenological or conceptual) primitive recursive functions, whether atomic or themselves composite.
Example: Composite Operations. Finding the length of a route, R Lgth, where a route is a sequence of segments joined together at junctions, is a composite operation — its sub-operations are the operation of observing a segment length from a segment, obs Lgth, and the recursive invocation of route length. Finding the total length of all segments of a net is likewise a composite operation.

value
  length: L∗ → Lgth,
  zero lgth: Lgth,
  length(⟨⟩) ≡ zero lgth,
  length(⟨l⟩⁀ℓ) ≡ sum(obs Lgth(l), length(ℓ))

The Composition Homomorphism. Usually composite operations are applied to composite entities. In general, we often find that the functions applied to composite entities satisfy the following homomorphism: G(e1, e2, . . . , en) = H(G(e1), G(e2), . . . , G(en)) where G and H are suitable functions. Example: Consider the Factorial and the List Reversal functions. This example is inspired by [38]. Let φ be the sentence: ∃F • ((F(a) = b) ∧ ∀x • (p(x) ⊃ (F(x) = H(x, F(f(x)))))) which reads: there exists a mathematical function F such that, •, the following holds, namely: F(a) = b (where a and b are not known), and, ∧, for every (i.e., all) x, it is the case, •, that if p(x) is true, then F(x) = H(x, F(f(x))) is true. There are (at least) two possible (model-theoretic) interpretations of φ. In the first interpretation, we first establish the type Ω of natural numbers and operations on these, and then the specific context ρ:

[ F → fact, a → 1, b → 1, f → λn.n−1, H → λm.λn.m∗n, p → λm.m>0 ]

We find that φ is true for the factorial function, fact. In other words, φ characterises properties of that function. In the second interpretation we first establish the type Ω of lists and operations on these, and then the specific context ρ:
[ F → rev, a → ⟨⟩, b → ⟨⟩, f → tl, H → λℓ1.λℓ2. ℓ2⁀⟨hd ℓ1⟩, p → λℓ.ℓ≠⟨⟩ ]

And we find that φ is true for the list reversal function, rev, as well. In other words, φ characterises properties of that function, and the two Hs express a mereological nature of composition.
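The two interpretations can also be run side by side. The following is a small sketch of ours in Python: the helper solve and the way it completes the scheme outside p(x) are our own choices, not part of the paper's formalism, and H is uncurried here.

```python
# A hedged executable reading (ours): both functions satisfy the same scheme
#   F(x) = H(x, F(f(x)))  whenever p(x), with F(a) = b;
# only the context (a, b, f, H, p) changes.

def solve(a, b, f, H, p):
    """Return the F determined by the scheme, computed by recursion."""
    def F(x):
        return H(x, F(f(x))) if p(x) else b
    assert F(a) == b
    return F

fact = solve(a=1, b=1, f=lambda n: n - 1,
             H=lambda m, n: m * n, p=lambda m: m > 0)

rev = solve(a=[], b=[], f=lambda l: l[1:],
            H=lambda l1, l2: l2 + [l1[0]], p=lambda l: l != [])

print(fact(5))            # 120
print(rev([1, 2, 3, 4]))  # [4, 3, 2, 1]
```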
6 Atomic and Composite Events
Usually events are considered atomic. But for the sake of argument — that is, as a question of scientific inquiry, of the kind: why not investigate, seeking “orthogonality” throughout, now that it makes sense to consider atomic and composite entities and operations — we shall explore the possibility of considering composite events. Let us first recall that we model an event by: E : {(β1 , σ1 , P1 , σ1 , τ1 ), (β2 , σ2 , P2 , σ2 , τ2 ), . . . , (βn , σn , Pn , σn , τn )}, where E is just a convenient name for us to refer to the event, βi is the label of a behaviour βi , σi and σi are (‘before event’, respectively ‘after event’) states (of behaviour βi ), Pi is a predicate which characterises the event as seen by behaviour βi , and τi is a time, ti , or a time interval, [tib ,tie ], time stamp. 6.1
Atomic Events
Examples: (i) E1 : a vehicle “drives off” a net link at high velocity; (ii) E2 : a link “breaks down”: (ii.a) E21 : a bridge collapses, or (ii.b) E22 : a mud slide covers the link That is E2 is due to either E21 or E22 . One can discuss whether these examples really can be considered atomic: (ii.a) the bridge may have collapsed due to excess load and thus the moment at which the load exceeded the strength limit could be considered an event causing the bridge collapse; (ii.b) the mud slide may have been caused by excessive rain due to rainstorm gutters exceeding their capacity and thus the moment at which capacity was exceeded could be considered an event causing the mud slide. We take the view that it is the decision of the domain describer to “fix” the abstraction level and thus decide whether the above are atomic of composite events. In general we could view an event, such as summarised above, which involves two or more distinct behaviours as a composite event. We shall take that view. 6.2
Definitions: Atomic and Composite Events
Definition: Atomic Event: An atomic event is either a single [atomic] internal event: {(βi , σi , Pi , σi , τi )}, that is, consists of just one event designator, or is a single [atomic] external event, that is, is a pair event designators where one of
these involves the eχternal behaviour: {(βχ , σnil , true, σnil , τχ ), (βi , σi , Pi , σi , τi )}, that is, consists of two event designators, an external and an internal Definition: Composite Event: A composite event is an event which consists of two or more internal “identical” event designators, that is, event designators from two or more simple, non-eχternal behaviours, and possibly also an event designator from an eχternal behaviour “identical” to these internal event designators 6.3
Composite Events
Examples: (i) two or more cars crash, and (ii) a bridge collapse causes one or more cars or bicyclists and people to plunge into the abyss.
Synchronising Events. Events in two or more simple behaviours are said to be synchronising iff they are identical. Example: Two cars crashing means that the surface of the crash is a channel on which they are synchronising and that the messages being exchanged are "you have crashed with me".
Sub-Events. A composite event defines one or more sub-events. Definition: Sub-event: An event Es : eds', is a sub-event of another event E : eds, iff eds' ⊂ eds, that is, the set eds' of event designators of Es is a proper subset of the set eds of event designators of E.
Sequential Events. One way in which a composite event is structured can be as a "sequence" of "follow-on" sub-events. One sub-event: Es12 : {(β1, σ1, P1, σ1', τ1), (β2, σ2, P2, σ2', τ2)}, for example, "leads on" to another sub-event: Es23 : {(β2, σ2, P2, σ2', τ2), (β3, σ3, P3, σ3', τ3)}, etcetera, which "leads on" to a final sub-event: Esmn : {(βm, σm, Pm, σm', τm), (βn, σn, Pn, σn', τn)}. The "leads on" relation should appear obvious from the above expressions. Example: The multiple-car crash in which the cars crash "immediately" one after the other, as in an accordion movement. (This is, of course, an idealised assumption.)
Embedded Events. Another way in which a composite event is structured is as an "iteratively" (or finitely "recursively") embedded "repetition" of (albeit distinct) sub-events. Here we assume that the τs stand for time intervals; a time interval τs' is embedded within a time interval τs when, letting τs = [tb, te] and τs' = [tb', te'], we have tb ≤ tb' and te' ≤ te. Now we postulate that one event (or sub-event) Ei embeds a sub-event Eij, . . . , which embeds an "innermost" sub-event Eij...k.
Example: The following represents an idealised description of how a computing system interrupt is handled. – (i) A laptop user hits the enter keyboard key at time tb .
– (ii) The computing system interrupt handler reacts at time tb (tb ≤ tb ), to the hitting of the enter keyboard key. – (iii) The interrupt handler forwards, at time tb , the hitting of the enter keyboard key to the appropriate input/output handler of the computing system keyboard handler. – (iv) The keyboard handler forwards, at time t b , the hitting of the enter keyboard key to the appropriate application program routine. – (v) The application program routine calculates an appropriate reaction be tween times t b and te . – (vi) The application program routine returns its reaction to the keyboard handler at time te . – (vii) The keyboard handler returns, at time te , that reaction to the interrupt handler. – (viii) The interrupt handler marks the interrupt as having been fully served at time te , – (ix) while whatever (if anything that has been routed to, for example, the display associated with the keyboard) is displayed at time te The pairs (i,ix), (ii,viii), (iii,vii) and (iv,vi) form pairwise embedded events: (ii,vii) is directly embedded, , in (i,ix), (iii,vii) is directly embedded, , in (ii,viii) and (iv,vi) is directly embedded, , in (iii,vii). We have abstracted the time intervals to be negligible. Event Clusters. A final way of having composite events, is for them, as a structure, to be considered a set of sub-events, each eventually involving a time or a time period that is “tightly” related to those of the other sub-events in the set and where the relation is not that of “follow-on” or embeddedness. Example: A (i) car crash results in a (ii) person being injured, while a (iii) robber exploits the confusion to steal a purse, etcetera
7 Atomic and Composite Behaviours
Our treatment of behaviours in Sect. 3.4 was very brief. In this section it will be more detailed.
7.1 Modelling Actions and Events
In modelling behaviours, we model actions by a triple, (β, α, τs), consisting of a behaviour label, β:BehLbl, an operation denotation, α: [. . . ×]Σ → Σ[×. . . ], and a time stamp, τs. Events are modelled as above.
7.2 Atomic Behaviours
Time-stamped actions and atomic events are the only atomic behaviours. We shall model atomic behaviours as singleton sequences of a time-stamped action or an event.
7.3 Composite Behaviours
Simple Traces A simple (finite/infinite) trace, τ , is a (finite/infinite) sequence of one or more time-stamped atomic actions and time-stamped (atomic or composite) events. Trace time stamps occur in monotonically increasing dense order, i.e., separated by consecutive (overall) time stamps. That is, two traces may operate not only on different clocks, but have varying time intervals between consecutive actions or events. The “overall” time stamp of a composite event is the smallest time interval which encompasses all time and time stamps of event designators of the composite event. Simple Behaviours. A simple behaviour, β, is a simple trace of length two or more. Example: The movement of two or more vehicles between two time points forms a simple, concurrent behaviour One can usually decompose a simple behaviour into two or more consecutive behaviours, and hence one can compose a consecutive behaviour from two or more simple behaviours. Consecutive behaviours are simple behaviours. Consecutive Behaviours A consecutive behaviour is a pair of simple behaviours, of which the first is finite, such that the time stamp of the first action or event of the second behaviour is consecutive to the time stamp of the last action or event of the first behaviour, cf. Fig. 1 on the following page. Example: A train travel, seen from the point of view of one train passenger, from one city to another, involving one or more train changes, and including the train passenger’s behaviours at train stations of origin, intermediate stations and station of destination as well as during the train rides proper, forms a consecutive behaviour Concurrent Behaviours. A concurrent behaviour is a set of two or more simple behaviours {β1 , β2 , . . . , βn } such that for each behaviour βi ∈{β1 , β2 , . . . , βn } there is a set of one or more different behaviours {βij , βik , . . . ,βi } ⊆ {β1 , β2 , . . . , βn } such that there is a set of one or more consecutive (dense) time stamps that are shared between behaviours βi and {βij , βik , . . . ,βi }. Example: The movement of two vehicles between two time points (i.e., in some interval) forms a concurrent behaviour Concurrent behaviours come in several forms. These are defined next. Communicating Behaviours. A communicating behaviour is a concurrent behaviour in which two or more (simple) behaviours contain identical (modulo predicate and time stamp) events. Example: The movement of two vehicles between two time points (i.e., in some interval), such that, for example, the two vehicles, after some time point in the interval, at which both vehicles have observed their “near-crash”, keeps moving
along, may be said to be a simple, cooperating behaviour. Their "near-crash" is an event. In fact the vehicles may be engaged in several such "near-crashes" (drunken driving!).
Example: The action of a vehicle, at a hub, which effects both a turning to the right down another link, and a sequence of one or more gear changes, throttling down, then up, the velocity, while moving along in the traffic, forms a general, structured behaviour.
Example: A crash between two vehicles defines an event, with the two vehicles being said to be synchronised and exchanging messages at that event.
Fig. 1. Two simple and four composite behaviours. Each rectangle designates a simple behaviour; the figure indicates 17 such. (Panels: a simple behaviour; consecutive behaviours; a concurrent behaviour; communicating behaviours, marked by shared identical events; a joined behaviour; a forked behaviour.)
Joined Behaviours. A joined behaviour is a pair of a finite set, {β1, β2, . . . , βn}, of finite ("first") simple behaviours and a ("second") simple behaviour, such that the time stamp of the first action or event of the second behaviour is consecutive to the time stamp of the last action or event of each of the first behaviours. You can think of the joined behaviour as pictured in Fig. 1. Example: This example assumes a mode of travel by vehicles in which they (sometimes) travel in platoons, or convoys, as do military vehicles and, maybe, future private cars. A behaviour which starts with n (n being two or more) vehicles travelling by themselves, as n concurrent behaviours, where independent vehicles, at one time or another, join into convoy behaviours involving two or more vehicles, forms a joined behaviour.
Forked Behaviours. A forked behaviour is a pair of a finite ("first") simple behaviour β and a finite set, {β1, β2, . . . , βn}, of ("second") simple behaviours, such that the time stamp of the first action or event of each of the second behaviours is consecutive to the time stamp of the last action or event of the first behaviour. You can think of the forked behaviour as pictured in Fig. 1 on the facing page. Example: Continuing the example just above: A behaviour which starts as the joined, convoy behaviour of two or more (i.e., n) vehicles which then proceeds by individual vehicles, at one time or another, leaving the convoy, i.e., "forking out" into concurrent behaviours, forms a forked behaviour.
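Joined and forked behaviours can be sketched in the same simplified trace model, as pairings subject to the consecutiveness condition (again, names and time stamps are illustrative only).

  def join(firsts, second):
      # a joined behaviour: every "first" behaviour ends before "second" begins
      assert all(f[-1][0] < second[0][0] for f in firsts), "not consecutive"
      return (firsts, second)

  def fork(first, seconds):
      # a forked behaviour: "first" ends before every "second" behaviour begins
      assert all(first[-1][0] < s[0][0] for s in seconds), "not consecutive"
      return (first, seconds)

  car1 = [(1, "drive"), (2, "drive")]
  car2 = [(1, "drive"), (3, "drive")]
  convoy = [(4, "drive in convoy"), (5, "drive in convoy")]
  joined = join([car1, car2], convoy)                       # two cars join a convoy
  forked = fork(convoy, [[(6, "drive")], [(7, "drive")]])   # the convoy splits up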
7.4 General Behaviours
We claim that any set of behaviours can be formed from atomic behaviours by applying one or more of the compositions outlined above: simple, concurrent, communicating, consecutive, joined and forked behaviours. By “any set of behaviours” you may well think of any multi-set of time stamped actions and time stamped events, i.e., of atomic behaviours. From this set one can then “glue” together one or more behaviours first forming a set of simple behaviours; then concurrent behaviours; then identifying possible communicating behaviours; then possibly joining and forking suitable behaviours, etc. There may very well be many “solutions” to such a “gluing” construction from a basic set of atomic behaviours.
8 Mereology and Compositionality Concluded
8.1 The Mereology Axioms
We wish to explain the compositionality constructs of simple entities (Sect. 8.2), operations (Sect. 8.3), events (Sect. 8.4) and behaviours (Sect. 8.5), where the references are to sections where the compositionality constructs are informally summarised. We wish that the explanation be in terms of the predicates of known axiomatisations of mereology, that is, of proposed such mereologies. Let x, y, and z denote "first class" entities. Then:
1. Pxy expresses that x is a part of y;
2. PPxy expresses that x is a proper part of y;
3. Oxy expresses that x and y overlap;
4. Uxy expresses that x and y underlap;
5. Cxy expresses that x is connected to y;
6. DCxy expresses that x is disconnected from y;
7. DRxy expresses that x is discrete from y;
8. TPxy expresses that x is a tangential part of y; and
9. NTPxy expresses that x is a non-tangential part of y.
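For a first intuition, several of these predicates can be tried out in the special case where entities are modelled simply as sets of atomic constituents; the connection-based predicates (Cxy, DCxy, TPxy, NTPxy) additionally require a notion of touching, which this crude sketch of ours omits.

  def P(x, y):   return x <= y            # x is a part of y
  def PP(x, y):  return x < y             # x is a proper part of y
  def O(x, y):   return bool(x & y)       # x and y overlap
  def DR(x, y):  return not (x & y)       # x and y are discrete
  def U(x, y, universe):                  # x and y underlap: some whole contains both
      return (x | y) <= universe

  net  = {"l1", "l2", "h1", "h2"}
  link = {"l1"}
  hub  = {"h1"}
  assert PP(link, net) and PP(hub, net)
  assert DR(link, hub) and U(link, hub, net)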
8.2 Composite Simple Entities
Mereology. The part-whole mereological relations of composite simple entities are typically expressed by such defining phrases as: (i) "An x consists of a set of ys" (modelled by X=Y-set); (ii) "an x consists of a grouping of a y, a z, . . . and a u" (modelled by X=Y×Z×...×U); (iii) "an x consists of a list of ys" (modelled by X=Y∗); (iv) "an x consists of an association of ys to zs" (modelled by X=Y →m Z); and some more involved phrases, including recursively expressed ones. Usually such defining phrases define too much. In such cases further sentences are needed in order to properly delimit the class of xs being defined. Example:
14. A bus time table lists the bus line name
15. and one or more named journey descriptions, that is, journey names are associated with (map into) journey descriptions.
16. Bus line and journey names are further undefined.
17. A journey description is a sequence of two or more bus stop visits.
18. A bus stop visit is a triple: the name of the bus stop, the arrival time to the bus stop, and the departure time from the bus stop.
19. Bus stop names are hub identifiers.
20. A bus time table further contains a description of the transport net.
21. The description of the transport net associates (that is, maps) each bus stop name hub identifier to a set of one or more bus stop name hub identifiers.
22. A bus time table is well-formed iff
23. adjacent bus stop visits name hubs that are associated in the transport net description;
24. arrival times are before departure times; etc.
type
  16. BLNm, JNm
  14.,20. BTT = BLNm × NmdBusJs × NetDescr
  22. BTT = {| btt:BTT • wf_BTT(btt) |}
  15. NmdBusJs = JNm →m BusJ
  17. BusJ = BusStopVis∗
  18. BusStopVis = Time × HI × Time
  21. NetDescr = HI →m HI-set
value
  22. wf_BTT: BTT → Bool
      wf_BTT( ,jrns,nd) ≡
        ∀ bj:BusJ • bj ∈ rng jrns ⇒
          ∀ (at,hi,dt):BusStopVis • (at,hi,dt) ∈ elems bj ⇒ hi ∈ dom nd ∧ at
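An executable paraphrase of items 14.–24. may be useful; the sketch below mirrors the formal text but uses Python dictionaries in place of RSL maps and checks only the two well-formedness conditions of items 23.–24. (hub identifiers and times are illustrative).

  # a bus time table: (bus line name, journeys, net description)
  # journeys        : journey name -> list of (arrival, hub identifier, departure)
  # net description : hub identifier -> set of neighbouring hub identifiers

  def wf_btt(btt):
      _bln, journeys, nd = btt
      for visits in journeys.values():
          for (at, hi, dt) in visits:
              if hi not in nd or not (at < dt):        # item 24
                  return False
          for (_, hi, _), (_, hi2, _) in zip(visits, visits[1:]):
              if hi2 not in nd[hi]:                    # item 23: adjacency in the net
                  return False
      return True

  nd = {"h1": {"h2"}, "h2": {"h1", "h3"}, "h3": {"h2"}}
  btt = ("line 4",
         {"j1": [(800, "h1", 805), (820, "h2", 822), (840, "h3", 845)]},
         nd)
  assert wf_btt(btt)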
– PPxy (Item 2 on page 41) holds for x = links or hubs of a net n, and y = n;
– Cxy (Item 5 on page 41) holds for such links x which connect hubs y, respectively such hubs x from which links y emanate; and
– TPxy (Item 8 on page 41) holds for such links x which connect hubs y, respectively such hubs x from which links y emanate.
– Let us introduce a notion of link/hub connectors. Any link x which is incident upon a hub y is said to define a connector j:J such that for
  type J
  value obs_J: (L|H) → J-set
  axiom ∀ l:L, h:H • card obs_J(l)=2 ∧ obs_J(l) ∩ obs_J(h)={} ⇒ obs_HIs(l) ∩ obs_HI(h)={}
  Now let x be the connector of link y and hub z; then Oxy and Oxz (Item 3 on page 41) hold.
– Etcetera.
Compositionality. The conventional compositionality principle implied a syntax of composite expressions and spoke of the semantics of composite expressions. We extend this principle to cover other than utterings of natural or formal specification and programming languages. We extend the principle to cover any structures that we may wish to contemplate. To get the reader "tuned" to that idea we first give three, perhaps slightly "surprising", examples, and then return to examples in line with the main, the transport, example of this paper.
Example: The design of a bread-toaster denotes the infinite set of all bread-toasters that satisfy the design, or the infinite set of all the production processes that construct such bread-toasters, etc.
Example: The request from the marketing department of the producer of the bread-toaster to the design department, suggesting a bread-toaster that satisfies certain market requirements, denotes the set of all bread-toaster designs that satisfy these market requirements, etc.
Example: The request from executive management to the marketing department, requesting that measures be taken to win market share, denotes, amongst others, the kind of requests alluded to in the previous example.
Now to the examples that fit into the main example of this paper.
Examples: (i) A meaning of a link could be the set of all paths by which vehicles can traverse the link, where a link path could be modelled as a triple of the link-connected hub identifiers and the link identifier. (ii) A meaning of a hub could be the set of all paths by which vehicles can traverse the hub, where a hub path could be modelled as a triple of link identifiers and the connecting hub identifier. (iii) A meaning of a net could be the set of all routes through the net, where a route is a suitable sequence of either link paths or of hub paths.
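The link, hub and net meanings of examples (i)–(iii) can be sketched over a toy net in which links are given by their identifiers and the two hubs they connect (a deliberately crude model of ours, not the paper's formal net model).

  # a net: links as (link id -> {hub id, hub id}); hubs implicit in the links
  links = {"l1": {"h1", "h2"}, "l2": {"h2", "h3"}}

  def link_paths(links):
      # meaning of a link: the paths by which a vehicle may traverse it
      paths = set()
      for li, hubs in links.items():
          h, h2 = sorted(hubs)          # assumes each link connects two distinct hubs
          paths |= {(h, li, h2), (h2, li, h)}
      return paths

  def hub_paths(links):
      # meaning of a hub: the (incoming link, hub, outgoing link) paths through it
      paths = set()
      for li, hubs in links.items():
          for lj, hubs2 in links.items():
              if li != lj:
                  for h in hubs & hubs2:
                      paths |= {(li, h, lj), (lj, h, li)}
      return paths

  print(link_paths(links))
  print(hub_paths(links))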
Compositionality of Simple Composite Entities. The meanings of atomic entities are expressed by simple (recursive) functions. The meaning of composite entities, in order to follow the principle of compositionality, must be a function of the meanings of the immediate sub-entities. Here the possibilities are "ad infinitum"! Classically, in computing, the principle of compositionality was first applied to programming, then to specification languages. Typically the meaning of an atomic statement, as a syntactic simple entity, of an imperative programming language was that of a function from environments to state-to-state transformers, so the meaning of the composition of two or more statements was then the function composition of the meanings of the individual statements. The meaning of a logic program could be modelled as a set of resolutions: bindings of identifiers to terms. The meaning of a parallel, say CSP [27, Hoare], program can be given denotationally, that is, according to the principle of compositionality, by, for example, either one of three kinds of semantics: in terms of traces, in terms of failures, and in terms of divergences — as all explained in [42, Roscoe, Chapter 8]. Whether all reasonably expressed meanings of all conceivable composite simple entities can be expressed compositionally is not known; well, it is not knowable.
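The classical case recalled here, statements denoting state transformers and statement sequencing denoting function composition, can be written out in a few lines (a toy semantics of ours that ignores environments):

  from typing import Callable, Dict

  State = Dict[str, int]
  Meaning = Callable[[State], State]

  def assign(x: str, f: Callable[[State], int]) -> Meaning:
      # meaning of an atomic assignment statement
      return lambda s: {**s, x: f(s)}

  def seq(m1: Meaning, m2: Meaning) -> Meaning:
      # meaning of "S1 ; S2" is the composition of the meanings of S1 and S2
      return lambda s: m2(m1(s))

  prog = seq(assign("x", lambda s: 1), assign("y", lambda s: s["x"] + 1))
  assert prog({}) == {"x": 1, "y": 2}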
8.3 Composite Operations
Mereology. It appears that H (in G(e1, e2, . . . , em) = H(G(e1), G(e2), . . . , G(em))), and the way in which G distributes over (e1, e2, . . . , em), in the abstract expresses the mereology of function composition. In the concrete, the mereological nature of a given composite operation is reflected in the way in which it is structured from primitive recursive functions and other composite operations. In the sense of recursive function theory it does not seem to make any sense to apply any of the 9 operators (Page 41) of Sect. 8.1.
Compositionality. The compositionality of functions is here taken to be expressed by the function composers of extended recursive function theory. Extended recursive function theory defines (i) constant entities, f() (i.e., values as zero-ary functions); (ii) variables, v (as functions from an environment to an entity); (iii) sets {e1, e2, . . . , en}, Cartesians (e1, e2, . . . , en), and lists ⟨e1, e2, . . . , en⟩ of entities; (iv) a finite set of primitive functions, fn, of arity n and type fn: Entity∗ → Entity; (v) a finite set of primitive predicates, pn, of arity n and type pn: Entity∗ → Bool; (vi) function composition f(g(e1, e2, . . . , en)); (vii) conditionals (c1 → e1, c2 → e2, . . . , cn → en); (viii) . . . ; and (ix) . . . . Recursive function theory then tells us how to interpret these forms in some context ρ which binds variables to entities and function and predicate symbols to their functions and predicates. The extended recursive function theory is a simple encoding from recursive function theory. So compositionality of operations is explained by recursive function theory.
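A toy interpreter for a fragment of forms (i)–(vii), namely constants, variables, application/composition and conditionals, evaluated in a context ρ, might look as follows (the encoding of forms is our own):

  def eval_form(form, rho):
      kind = form[0]
      if kind == "const":                     # (i) constant entities
          return form[1]
      if kind == "var":                       # (ii) variables, looked up in ρ
          return rho[form[1]]
      if kind == "apply":                     # (iv)/(vi) primitive application and composition
          f = rho[form[1]]
          return f(*[eval_form(e, rho) for e in form[2]])
      if kind == "cond":                      # (vii) conditionals (c1 -> e1, ..., cn -> en)
          for c, e in form[1]:
              if eval_form(c, rho):
                  return eval_form(e, rho)
          raise ValueError("no condition holds")
      raise ValueError(kind)

  rho = {"x": 3, "succ": lambda n: n + 1, "even": lambda n: n % 2 == 0}
  form = ("cond", [(("apply", "even", [("var", "x")]), ("const", 0)),
                   (("const", True), ("apply", "succ", [("var", "x")]))])
  assert eval_form(form, rho) == 4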
8.4 Composite Events
Mereology. Mereologically, events can be (i) sequenced, "connected" in time; (ii) "recursively" embedded, with the time interval for an "outer", embedding event embracing the time interval for an immediately embedded event, and so forth; or can be (iii) clustered. For specific, formalised examples of the three kinds of events it may then be possible to prove the following: (i) PPxy where x is an event of either a sequenced event, or of a recursively embedding, or of a clustered event y (x is embedded in the recursively embedding event y); (ii) Cxy where x and y are two consecutive events of a sequenced event z (that is, (ii:A) PPxz ∧ PPyz, (ii:B) TPxy and (ii:C) Uxy (of z)), (ii:D) while DCuv where (PPuz ∧ PPvz) ∧ ∼Cuv; (iii) Oxy may hold for x and y being part of a clustered event z (that is, PPxz ∧ PPyz and Uxy (of z)). Etcetera.
Compositionality. Please note that we are not giving meaning to syntactic designators whose meaning is that of events. We are confronted with the issue of giving meaning, in some sense, to events. Simple such meanings are concerned with concrete event analysis: did two events occur at the same time, or in a given time period, or one before the other. For such analyses we refer to [33, Lamport]. We can think of obtaining more elaborate meanings for sequenced and recursively embedded events, but first we would need to build up a rather elaborate set of definitions, find elaborate examples, etcetera, and that would blow the size of this paper way out of proportion. So the best is to say, and this also applies to clustered events: here is an interesting research topic: to come up with compositionality interpretations for composite events!
8.5 Composite Behaviours
Mereology. Let {β1, β2, . . . , βn} be, for each case below, the suitable set of (simple) behaviours. From the point of view of the mereology of the composition of behaviours, such as we have modelled behaviours, there are the following composition operators:
– (i) creating simple behaviours, βs:(fSimBeh|ifSimBeh), from suitable atomic ones (a1, a2, . . . , am, e1, e2, . . . , en);
– (ii) creating concurrent behaviours, βcon:CurBeh, from suitable simple behaviours;
– (iii) creating communicating behaviours, βcom:ComBeh, from a set of suitable simple behaviours;
– (iv) creating consecutive behaviours, βseq:CnsBeh, from a set of suitable simple behaviours;
– (v) creating joined behaviours, β:JoiBeh, from a set of suitable simple behaviours; and
– (vi) creating forked behaviours, β:FrkBeh, from a set of suitable simple behaviours.
Given a specific composite behaviour it may then be possible to prove
– (i) for all distinct atomic behaviours βα and βα′ and for all simple behaviours βs for which Pβαβs ∧ Pβα′βs holds, that PPβαβs ∧ PPβα′βs holds;
– (ii) for a set, βs, of suitable simple behaviours, that PPβi mkConcurBeh(βs) holds for any βi in βs;
– etcetera.
Compositionality. Behaviours are the meaning of domain descriptions, or requirements prescriptions, or software programs that specify concurrency. So the meaning functions that we might apply to behaviours would then typically be those of analysing behaviours: comparing behaviours, say with respect to efficiency, or to access to shared resources, or to (say human) interface response times. We leave it to the reader to continue this line of thought.
9 Galois Connections10
In the following sections, we look at some rôles Galois connections may have in relation to composition of entities. The notion of Galois connections is a fundamental mathematical concept from order and lattice theory. The reason for bringing it up here is that it involves set-based composition of elements as well as the composition of dual, order-decreasing functions. However, this is just the reason why we considered Galois connections in the first place. In fact we want to argue that it is beneficial to incorporate non-traditional modelling aspects in order to get more insight; both in the discipline of domain engineering, and in a current domain of discourse. In the following, we shall (i) define the notion of Galois connections, (ii) outline some of the different uses of that concept, and (iii) consider the concept in the context of modelling composite entities (following the ontology presented in this paper). The sections are driven by examples.
9.1 Definition
Definition 1. A Galois connection is a dual pair of mappings (F, G) between two partial orderings (P, ⊑) and (Q, ⊑). Most often P and Q are ordered sets based on set-inclusion (⊆) and this is also the version we shall use in this paper. In order to avoid misinterpretation, we write ⊑ and not ≤ or ⊆ as seen in other treatments of the subject. The mappings must be monotonically decreasing11:
10 This section is main-authored by Asger Eir.
11 Note that there are in fact two different definitions of Galois connections in the literature: the monotone Galois connection and the antitone Galois connection. We follow Ganter and Wille and assume the latter [21].
type
  P, Q
value
  F: P-set → Q-set
  G: Q-set → P-set
axiom
  ∀ ps1, ps2:P-set, qs1, qs2:Q-set •
    ps1 ⊑ ps2 ⇒ F ps2 ⊑ F ps1,
    qs1 ⊑ qs2 ⇒ G qs2 ⊑ G qs1,
    ps1 ⊑ G F ps1,
    qs1 ⊑ F G qs1
The dual ordering of Galois connections is illustrated in Figure 2.
Fig. 2. A Galois connection: dual, order-reversing mappings F: C-set → D-set and G: D-set → C-set between two ordered sets
In [21, Ganter & Wille: FCA], the following theorem is given on Galois connections:
Theorem 1 (Galois Connection12). For every binary relation R ⊆ M × N, a Galois connection (ϕR, ψR) between M and N is defined by
  ϕR X := X^R (= {y ∈ N | xRy for all x ∈ X})
  ψR Y := Y^R (= {x ∈ M | xRy for all y ∈ Y}).
From the above, we see that all y must stand in the relation R to each x in order for the connection to hold. However, R could mean "does not stand in a relation to". That would still yield a Galois connection but the domain knowledge it expresses is different. Let X be a collection of coffee cups and let Y be a collection of properties concerning form, colour, texture and material. We may define R to
12 In [21] this is named Theorem 2.
be "coffee cup x has property y". However, we could also define it as "coffee cup x does not have property y". In both cases we would have a Galois connection. However, the latter may be somewhat strange from a classification point of view. The notion of Galois connections has served as foundation for a variety of applications like order theory, the theory of dual lattices, and — in computer science — semantics of programming languages and program analysis. However, it has also been utilized in a number of conceptualization principles. These principles are not pure mathematical treatments, but utilize Galois connections in specific domains. We shall look at three such areas in the following.
Example: Toasters and Their Designs. Let (d:D) be a design of a toaster (t:T). From the design we may be able to produce a collection of different toasters because the design does not specify everything, and due to the fact that we could produce the "same" kind of toaster over and over again. Let us look at a "time glimpse" and let (ts:T-set) denote the set of such toasters obeying the design13. If we impose that "sequentially" further designs are all for the same toaster, then the number of toasters decreases14 because they all need to satisfy the new designs too. Between the set of designs and the set of toasters they denote there is a Galois connection.
Example: Designs and Market Analysis. The designs are also the denotation of something. It could be the market analysis indicating the need for certain toaster products — or more generally, for certain new kinds of kitchen equipment. Between the market analysis and the designs also stands a Galois connection; hence there is also a Galois connection between the market analysis and the toasters. The Galois connection (being an order-decreasing pair of functions) ensures that we can only produce toasters which obey the designs, and that we can only design toasters which satisfy the needs outlined in the market analysis.
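Theorem 1 can be exercised directly: from any finite relation R ⊆ M × N the two derived maps are order-reversing and satisfy the closure properties of the axiom above. A small check in Python (cups and properties chosen arbitrarily by us):

  M = {"cup1", "cup2", "cup3"}
  N = {"white", "ceramic", "handle"}
  R = {("cup1", "white"), ("cup1", "ceramic"),
       ("cup2", "ceramic"), ("cup2", "handle"),
       ("cup3", "ceramic")}

  def phi(X):   # X ⊆ M  ->  all properties shared by every x in X
      return {y for y in N if all((x, y) in R for x in X)}

  def psi(Y):   # Y ⊆ N  ->  all objects having every property in Y
      return {x for x in M if all((x, y) in R for y in Y)}

  # order-reversing: a larger object set has an at most smaller common-property set
  assert phi({"cup1"}) >= phi({"cup1", "cup2"})
  # closure: X ⊆ psi(phi(X)) and Y ⊆ phi(psi(Y))
  X = {"cup1", "cup3"}
  assert X <= psi(phi(X))
  assert {"ceramic"} <= phi(psi({"ceramic"}))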
9.2 Concept Formation in Formal Concept Analysis [FCA]
In the area of formal concept analysis [21, Ganter & Wille: FCA], the notion of Galois connections is used as foundation for the lattice-oriented theory used for concept formation. In FCA, concepts are defined from a collection of objects by looking at which objects have common properties. The approach includes algorithms for automatic concept formation, given a collection of objects or a collection of properties. The fact that we can choose either to form concepts from objects (the extension of the concepts) or the properties (the intensions of the concepts) shows the duality between objects and properties. 13
13 We shall — as common in modelling — assume a possible worlds semantics in the sense that the collection of toasters are the toasters existing in one possible world. Are there more produced or some destroyed, it is another possible world. We shall not be further concerned with this, nor the many philosophical issues that can be claimed. We refer to [44] and [2] which among many other issues take up this discussion.
14 Actually, the number could stay the same but that would mean including identical designs. In general, we shall not be that concerned with the equal-situation for that same reason.
9.3 Classification of Railway Networks
In [30,28,29] Ingleby et al. use Galois connections in order to classify railway networks. The approach is similar to the approach of concept formation in FCA, but Ingleby understands the notion of properties in a broader sense: a property of a route may be the segments involved in the route. Here Ingleby understands routes and segments in a safety–security sense, as his quest is to cluster routes and segments such that the complexity of safety proofs over the railway network is reduced. That is, the Galois connection is used for defining cluster segments (in FCA, corresponding to concepts) such that the number of free variables is reduced when proving safety properties of software/hardware, for instance.
9.4 Relating Domain Concepts Intensionally
In [17,18, Eir], we utilized the notion of Galois connections for relating domain concepts intensionally. The domain concepts related were concepts that were not bound under subsumption; i.e., they are not specialization/generalization pairs. Consider the domain concepts: Budgets and Project Plans. From a budget we can observe the set of project plans that can be executed within the financial restrictions of the budget. From a project plan we can observe the set of budgets that designate the necessary figures for executing the project plan. Generalizing this gives two interpretation functions: one from a set of budgets to the set of project plans that are all executable within the restriction of each budget in the set; and another from a set of project plans to the set of budgets that all designate the necessary expenses for executing each project plan. The pair of interpretation functions is a Galois connection. This approach is utilized in order to suggest a modelling approach for relating domain concepts and placing their models (i.e., their abstractions) in conceptual structures. For the two concepts mentioned above, the conceptual structure maintains the systematics of concretising information from budgeting to project planning.
9.5 Further Examples
We may easily produce other examples of domain concept pairs of which the objects relate in some way. Consider the following examples:
Example: Bus Time Tables and Traffic. Let btt be some bus time table (btt:(bln,busjs,nd)). To btt there corresponds a set of bus traffics, sobustrfs, on the net. Express such a bus traffic as (bustrf,n) where (bustrf,n) ∈ sobustrfs and where bustrf is the time-varying function from buses to their positions on the net, and nd is related to n in some way (one is a net description, the other is "the" (or that) net). We furthermore stipulate that each bus traffic (bustrf,n) "obeys" the timetable (bln,busjs,nd). To a set of timetables, sobustts, over the same net there corresponds the union set of all those sets of bus traffics, usosobustrfs, that "obey" all timetables in sobustts. We seek to understand the relationship between sobustts and usosobustrfs in terms of the concept of Galois connections.
Example: Traffic and Buses – The Dual Case. We reverse the relation. We start with a bus traffic (bustrf,nd) and can, by arguments similar to above, postulate a set of bus timetables, sobustts (on the same net), such that each bus timetable properly records the arrival and departure times of buses at bus stops on that net. We can then "lift" this relation ((bustrf,nd),sobustts) to a relation from sets of bus traffics to the union set of sets of bus timetables. We seek to understand the relationship between sobustrfs and usosobustts in terms of the concept of Galois connections.
The two examples above each define what we in Sect. 9.4 called interpretation functions. They are interpretation functions in the sense that they — in the domain — "interpret" the time table entities as traffic entities; and vice versa.
9.6 Generalisation
The element that these examples have in common is that the values of one concept characterize the values of the other concept — in some way. This is similar to FCA where we have a Galois connection between values and their common properties. However, in this case the properties are extrinsic properties. The budget relates to a specific set of project plans because it possesses the property of standing in a certain relation to these other values. The property is extrinsic as the property is possessed assuming the existence of other values; as opposed to intrinsic properties. In a sense it means that we break the traditional distinction between values and properties as assumed in FCA. Furthermore, we utilize the same principles as utilized in denotational semantics — namely that we can assign meaning to values (e.g., a budget) and meaning to compositions of values (e.g., a set of budgets). The meaning of the composition is here more than the meanings of the individual parts because the composition of budgets (the budget set inclusion) implies a more narrow restriction of the set of executable project plans. It is so because combining two budgets has influence on the meaning in the sense that the meaning is the composition of the corresponding project plans as well (satisfying the Galois functions' property of being "decreasing"). Mereologically, what is added when composing a whole is actually the axioms in the Galois connection. However, we go a little further than denotational semantics of programming languages because we may consider any domain concept a subject for defining a Galois connection. The Galois connection is a general mathematical framework and hence not what contributes to why two concepts relate intensionally.
9.7 Galois Connections and Ontology
In the ontology presented throughout this paper, we have exercised the importance of compositionality. I.e., we have defined compositionality for each of the four entity parametrisations made. In this section, we shall look at how these can be understood in the general, domain- and ontology-neutral framework of Galois connections. When we say that Galois connections in this sense
are ontologically neutral it is not entirely true. Many ontologies — especially in philosophy — concern the existence of (say) mathematical entities; hence also heavily touching (perhaps disturbing) the foundation on which Galois connections are defined. However, this is not our quest here. When we consider Galois connections ontologically neutral we mean that they do not rely on the domain considered. In our case, the notion of Galois connections is independent of the ontology of entities proposed. The understanding of mathematical entities and issues concerning their ontological commitment in general is outside the scope of this paper. For clarification refer to [44]. If simple entities, events, behaviours and operations are all entities, it should imply that we can make the same considerations involving such values in Galois connections. We shall try to do so in the following, and we intend in that context to outline the issues as we go along. The important thing is, however, not whether Galois connections can be established, but whether the Galois connection complies with the current intuition, as the connection between objects and their common properties does. The traditional use of Galois connections — as 'exercised order theory' and as used by Ganter and Wille — focuses on the properties that objects have in common. However, we may turn this order upside-down such that we look at the total set of properties. This will be a Galois connection as well, but in the case of domains it expresses a different aspect. In some situations, it is natural to consider the former — in other situations, we may prefer the latter. This depends on how we perceive the domain and — perhaps also — the purpose of our domain model; i.e., our perspective. By the above we indicate that the study of Galois connections in the context of domain engineering could be interesting because it has to do with how we choose to perceive, abstract, model and formalize the domain. Hence, what we present in the following may open up such research areas for further clarification. We are, however, not saying that these will be interesting or that it does make sense to make distinctions like the one above. We just say that this area deserves further exploration. Let us in the following assume that Galois connections concern relations between two ordered sets of entities and that, in essence, the entities in these sets characterize each other. The issue is now that in some cases characterization obeys the axioms of a Galois connection; but in some situations it may not.
Composite Entities. We have already seen a couple of examples of Galois connections between two ordered sets of elements, where the ordering has been set-inclusion. We assume the understanding of composite entities as presented in Sects. 4 and 8.2.
Example: Hospital Staff and Rostering (I). Doctors and nurses forming surgery teams. From a team (possibly empty or singleton), we can observe the collection of time slots where they are all available. If we include more doctors and nurses, we will have a smaller set of time slots. And vice versa. This is an
important domain aspect when we are going to talk about planning and staffing (either in domain descriptions and specifications, or in software requirements). This is a Galois connection.
Example: Hospital Staff and Rostering (II). Again consider doctors and nurses forming surgery teams. From a team, we can observe the collection of possible surgeries they may perform. If we include more doctors and nurses, we increase the collection of surgeries. And vice versa. This is not a Galois connection, though interesting from a domain perspective anyway.
Composite Operations. We assume the understanding of composite operations as presented in Sects. 5 and 8.3.
Example: Building Constructions and Parts. Consider a set of building constructions: molding of foundations, mounting of bricks into walls, and establishing the roof, etc. Then consider the set of building parts involved in a construction. By building parts, we shall both understand the materials and elements consumed by constructions, and the results of other constructions. Thus, a building part can be a specific brick, a pre-cast concrete wall, the foundation, etc. That is, the building parts are those either created, mounted on, changed in some way, or demolished. For each construction, we can observe the building parts involved: consumed or produced. Building constructions can be composite in the sense that one construction constructs the foundation and another construction mounts the walls on the foundation. The former construction is a function from certain amounts of sand, stone, cement and water, to a foundation (here we shall exclude the tools needed). The latter construction is a function from a foundation, a collection of bricks, water, cement and insulation, to the product consisting of foundation and walls. For a construction — atomic or composite — we can observe the building parts involved in all constructions. If we include more constructions in a composite construction, the building parts involved in all constructions will decrease. The connection between composite constructions and building parts involved is a Galois connection. The connection is interesting when modelling the planning and scheduling of construction works as a crucial element is that construction workers cannot always work on the same building parts at the same time.
Example: Building Operations and Consumed Materials. Now consider the approach where for each building operation we observe the materials needed. For a collection of operations, we can likewise observe the total quantity of materials needed. That is, the total amount of sand, stones, bricks; the total quantity of beams, doors, windows of each type and measure; etc. Including more building operations will increase the amount and quantity of materials needed; simply because we then build more. Then we have a situation where the more operations we include, the more products. That is, set-inclusion of operations implies an increase of the observed materials and parts. The reason is that here each building operation contributes with a result. Instead of considering the common
materials as characterizing the composite operation, we shall consider that the complete set of materials involved characterizes the composite operation. In a sense this is more natural as we then include all the aspects of the compositionality. However, in the present case, we do not have a Galois connection because including more operations in the composite implies including more materials and results. Hence, the dual ordering is increasing, not decreasing.
Composite Events. We assume the understanding of composite events as presented in Sects. 6 and 8.4.
Example: Traffic Accidents and Responsible Persons. Consider a traffic accident. This is an event and for the accident, we can observe the collection of persons involved and of these the persons bearing some kind of responsibility in the accident. Assume that we look at a collection of traffic accidents. Here, we can observe the persons involved in all accidents and for these the ones being responsible for the accidents. Including more traffic accidents will reduce the number of persons involved in all accidents; hence, also the number of persons being responsible in all accidents. The connection between sets of traffic accident events and the set of persons being responsible is a Galois connection. The connection may be interesting when modelling the analysis of traffic accident patterns and statistics which may influence the definition of insurance premiums.
Example: Traffic Accidents and Persons Involved. Now, consider traffic accidents as events again. From a traffic accident, we can observe the insurance policies of the involved persons. Likewise, from a collection of traffic accidents (i.e., a composite event being a cluster of individual events), we can observe the collection of persons involved in at least one of the accidents; that is, the total collection of persons involved in one or more of the accidents. If we include more accidents, the collection of persons involved will increase. This is not a Galois connection, though the connection may be interesting when modelling correlation between accidents. We should also be able to construct examples for composite events being sequential or embedded.
Composite Behaviours. We assume the understanding of composite behaviours as presented in Sects. 7 and 8.5.
Example: Meetings and Applicable Rooms. Consider a collection of persons engaged in a meeting. We shall consider having a meeting a behaviour. The meeting can be composite in the sense that we may join two or more meetings held in the same time interval and involving the same persons. In the present case we shall consider behaviour composition as communicating. E.g., we may join department meetings for several company departments if the topic of the meetings is common and should be shared. From a meeting, we can observe the rooms applicable. We shall assume that a room is only applicable if it can host
the number of meeting participants, has the equipment necessary for the meeting, etc. If we include more meeting behaviours in a composite behaviour, the collection of rooms applicable will decrease. This is a Galois connection. The connection is interesting when planning collaborative work among meeting participants.
Example: Engineering Work and Skills. Consider a collection of engineers engaged in a project. We shall consider their work a behaviour which is concurrent — perhaps also communicating to an extent. From each engineering behaviour we can observe the engineering skills utilized and practiced. If we include a collection of work behaviours as a composite behaviour, it implies that we include more engineers and thus also more engineering skills. This is not a Galois connection; though interesting when modelling skills, skills management, project communication and interaction, staffing, etc.
9.8 Galois Connections Concluded
So what went wrong in the cases where we did not have a Galois connection? Or we could ask: what did we explore by looking at the domain through Galois eyes? The examples examined above clearly show that there are two different kinds of connections between entity compositions; hence, orderings.
– The former yields a Galois connection. It does so because composite entities of the one ordering are all characterizing composite entities of the other. Thereby, we believe to have outlined how Galois connections and ordering theory in general play an important rôle in compositionality of entities.
– The latter does not yield a Galois connection as it is an order-preserving connection. In the examples examined we have seen a general pattern: composition of the one kind of entity yields composition (actually just set-inclusion) of the other kind of entity.
Both kinds of connections show that even though the connections (the Galois connection being order-reversing, the other order-preserving) are ontologically and domain neutral, they do express interesting domain intrinsics when it comes to compositionality. We suggest that the rôle, use and axioms/theorems of such ordering connections are explored further within the context of domain engineering. Furthermore, we encourage exploring other such concepts and their ability to promote domain engineering as a discipline.
10 Conclusion
10.1 Ontology
Ontology plays an important rôle in studies of epistemology and phenomenology. In the time-honoured tradition of philosophical discourse philosophers present proposals for one or another ontology, and discuss these while usually not
settling definitively on any specific ontology; and many issues are deliberately left open.15 In this paper we cannot afford this "luxury". Our objective is to clarify notions of ontology in connection with the use of specific ways of informally and formally describing domains where the formal description language is fixed. Many of the issues of domain modelling evolve close to issues of metaphysics. We find [36, Michael J. Loux] Metaphysics, a contemporary introduction, [24, Pierre Grenon and Barry Smith] SNAP and SPAN: Towards Dynamic Spatial Ontology, [45, Peter Simons] Parts: A Study in Ontology, and [40, D. H. Mellor and Alex Oliver] Properties, relevant for a deeper study of the metaphysical issues of the current essay.
10.2 Mereology
Mereology has been given a more concrete interpretation in this paper compared to the "standard" treatments in the (mostly philosophical) literature. It seems that Douglass T. Ross [43] was among the first computing scientists to see the relevance of Leśniewski's ideas [37,48]. Too late for a study we found [41, Chia-Yi Tony Pi]'s 287-page PhD (linguistics) thesis: Mereology in Event Semantics. Perhaps it is worth a study.
10.3 Research Issues
The paper has touched upon many novel issues. Some are reasonably well established, at least from a programming methodological point of view. Several issues could benefit from some deeper study. We mention four.
Compositionality. A precise study of how composite functions, events and behaviours can be understood according to the principle of compositionality.
Mereology. A more precise presentation of a mereology axiom system for the kind of simple entities, function entities, event entities and behaviour entities outlined in Sects. 4–7.
Ontology. A more precise comparison of the "computability"-motivated ontology of this paper with, for example, the ontological systems mentioned in [36, Michael J. Loux], [24, Pierre Grenon and Barry Smith], [45, Peter Simons] and [20, Chris Fox].
Galois Connections. A further study, going beyond that of [17,18, Asger Eir], of relations between compositionality and Galois connections. For that study one should probably start with [25, Hoare and He].
That we have not really studied the compositionality issue as listed above is a major drawback of this paper, but we needed to clarify first the nature of "compositeness" of events, functions and behaviours before taking up the future study of their compositionality.
15 Such as whether properties of entities are themselves entities, etc.
Acknowledgement The first author is most grateful to his former PhD student, Dr. Asger Eir, for his willingness to co-author this paper.
Bibliographical Notes References 1. Abrial, J.-R.: The B Book: Assigning Programs to Meanings. Tracts in Theoretical Computer Science. Cambridge University Press, Cambridge (1996) 2. Balaguer, M.: Platonism and Anti–Platonism in Mathematics. Oxford University Press, Oxford (1998) 3. Bjørner, D.: Programming in the Meta-Language: A Tutorial. In: Bjørner, D., Jones, C.B. (eds.) The Vienna Development Method: The Meta-Language. LNCS, vol. 61, pp. 24–217. Springer, Heidelberg (1978) 4. Bjørner, D.: Software Abstraction Principles: Tutorial Examples of an Operating System Command Language Specification and a PL/I-like On-Condition Language Definition. In: Bjørner, D., Jones, C.B. (eds.) The Vienna Development Method: The Meta-Language, [13]. LNCS, vol. 61, pp. 337–374. Springer, Heidelberg (1978) 5. Bjørner, D.: The Vienna Development Method: Software Abstraction and Program Synthesis. In: Proceedings of Conference at Research Institute for Mathematical Sciences (RIMS), University of Kyoto, August 1978. Mathematical Studies of Information Processing, vol. 75. Springer, Heidelberg (1979) 6. Bjørner, D.: Software Engineering. 1: Abstraction and Modelling. Texts in Theoretical Computer Science, the EATCS Series. Springer, Heidelberg (2006) 7. Bjørner, D.: Software Engineering. Specification of Systems and Languages. Texts in Theoretical Computer Science, the EATCS Series, vol. 2. Springer, Heidelberg (2006); Chapters 12–14 are primarily authored by Christian Krog Madsen 8. Bjørner, D.: Software Engineering. Domains, Requirements and Software Design. Texts in Theoretical Computer Science, the EATCS Series, vol. 3. Springer, Heidelberg (2006) 9. Bjørner, D.: Domain Theory: Practice and Theories, Discussion of Possible Research Topics. In: Jones, C.B., Liu, Z., Woodcock, J. (eds.) ICTAC 2007. LNCS, vol. 4711, pp. 1–17. Springer, Heidelberg (2007) 10. Bjørner, D.: Domain Engineering. In: Boca, P., Bowen, J., Siddiqi, J. (eds.) Formal Methods: State of the Art and New Directions, pp. 1–42. Springer, Heidelberg (2010) 11. Bjørner, D.: Software Engineering, vol. I: The Triptych Approach, vol. II: A Model Development (2009); To be submitted to Springer for evaluation 12. Bjørner, D., Henson, M.C. (eds.): Logics of Specification Languages. EATCS Monograph in Theoretical Computer Science. Springer, Heidelberg (2008) 13. Bjørner, D., Jones, C.B. (eds.): The Vienna Development Method: The MetaLanguage. LNCS, vol. 61. Springer, Heidelberg (1978) 14. Bjørner, D., Jones, C.B. (eds.): Formal Specification and Software Development. Prentice-Hall, Englewood Cliffs (1982)
15. Casati, R., Varzi, A.: Parts and Places: the structures of spatial representation. MIT Press, Cambridge (1999) 16. Clarke, B.L.: A calculus of individuals based on “connection”. Notre Dame J. Formal Logic 22(3), 204–218 (1981) 17. Eir, A.: Construction Informatics — issues in engineering, computer science, and ontology. PhD thesis, Dept. of Computer Science and Engineering, Institute of Informatics and Mathematical Modeling, Technical University of Denmark, Building 322, Richard Petersens Plads, DK–2800 Kgs.Lyngby, Denmark (February 2004) 18. Eir, A.: Relating Domain Concepts Intensionally by Ordering Connections. In: Jones, C.B., Liu, Z., Woodcock, J. (eds.) Formal Methods and Hybrid Real-Time Systems. LNCS, vol. 4700, pp. 188–216. Springer, Heidelberg (2007) 19. Fitzgerald, J.S., Larsen, P.G.: Developing Software using VDM-SL. Cambridge University Press, Cambridge (1997) 20. Fox, C.: The Ontology of Language: Properties, Individuals and Discourse. CSLI Publications, Center for the Study of Language and Information, Stanford University, California, ISA (2000) 21. Ganter, B., Wille, R.: Formal Concept Analysis — Mathematical Foundations. Springer, Heidelberg (1999) 22. George, C.W., Haff, P., Havelund, K., Haxthausen, A.E., Milne, R., Nielsen, C.B., Prehn, S., Wagner, K.R.: The RAISE Specification Language. The BCS Practitioner Series. Prentice-Hall, Hemel Hampstead (1992) 23. George, C.W., Haxthausen, A.E., Hughes, S., Milne, R., Prehn, S., Pedersen, J.S.: The RAISE Method. The BCS Practitioner Series. Prentice-Hall, Hemel Hampstead (1995) 24. Grenon, P., Smith, B.: SNAP and SPAN: Towards Dynamic Spatial Ontology. Spatial Cognition and Computation 4(1), 69–104 (2004) 25. Hoare, C.A.R., He, J.F.: Unifying Theories of Programming. Prentice Hall, Englewood Cliffs (1997) 26. Hoare, T.: Communicating Sequential Processes. C.A.R. Hoare Series in Computer Science. Prentice-Hall International, Englewood Cliffs (1985) 27. Hoare, T.: Communicating Sequential Processes (2004) Published electronically, http://www.usingcsp.com/cspbook.pdf. Second edition of [26]. See also, http://www.usingcsp.com/ 28. Ingleby, M.: Safety properties of a control network: local and global reasoning in machine proof. In: Proceedings of Real Time Systems, Paris (January 1994) 29. Ingleby, M.: A galois theory of local reasoning in control systems with compositionality. In: Proceedings of Mathematics of Dependable Systems, Oxford, UP, UK (1995) 30. Ingleby, M., Mitchell, I.H.: Proving Safety of a Railway Signaling System Incorporating Geographic Data. In: Frey, H.H. (ed.) SAFECOM 1992 Conference Proceedings of IFAC, Z¨ urich (CH), November 1992, pp. 129–134. Pergamon Press (1992) 31. Jackson, D.: Software Abstractions Logic, Language, and Analysis. The MIT Press, Cambridge (2006) 32. Jackson, M.A.: Software Requirements & Specifications: a lexicon of practice, principles and prejudices. ACM Press/Addison-Wesley Publishing Company (1995), http://
33. Lamport, L.: Time, Clocks, and the Ordering of Events in a Distributed System. Communications of the ACM 21(7), 558–565 (1978)
34. Lejewski, C.: A note on Le´sniewksi’s axiom system for the mereological notion of ingredient or element. Topoi 2(1), 63–71 (1983) 35. Leonard, H.S., Goodman, N.: The Calculus of Individuals and Its Uses. Journal of Symbolic Logic 5, 45–55 (1940) 36. Loux, M.J.: Metaphysics, a contemporary introduction, 2nd edn. Routledge Contemporary Introductions to Philosophy. Routledge, London (1998/2020) 37. Luschei, E.C.: The Logical Systems of Le´sniewksi. North Holland, Amsterdam (1962) 38. Manna, Z.: Mathematical Theory of Computation. McGraw-Hill, New York (1974) 39. McCarthy, J.: Towards a Mathematical Science of Computation. In: Popplewell, C.M. (ed.) IFIP World Congress Proceedings, pp. 21–28 (1962) 40. Mellor, D.H., Oliver, A.: Properties. Oxford Readings in Philosophy. Oxford Univ. Press, Oxford (1997) 41. Pi, C.-Y.T.: Mereology in Event Semantics. Phd, McGill University, Montreal, Canada (August 1999) 42. Roscoe, A.W.: Theory and Practice of Concurrency. C.A.R. Hoare Series in Computer Science. Prentice-Hall, Englewood Cliffs (1997), http://www.comlab.ox.ac.uk/people/bill.roscoe/publications/68b.pdf 43. Ross, D.T.: Toward foundations for the understanding of type. In: Proceedings of the 1976 conference on Data: Abstraction, definition and structure, pp. 63–65. ACM, New York (1976) 44. Shapiro, S.: Philosophy of Mathematics — structure and ontology. Oxford University Press, Oxford (1997) 45. Simons, P.M.: Parts: A Study in Ontology. Clarendon Press (1987) 46. Spivey, J.M.: Understanding Z: A Specification Language and its Formal Semantics. Cambridge Tracts in Theoretical Computer Science, vol. 3. Cambridge University Press, Cambridge (1988) 47. Spivey, J.M.: The Z Notation: A Reference Manual, 2nd edn. Prentice Hall International Series in Computer Science (1992) 48. Srzednicki, J.T.J., Stachniak, Z.: Le´sniewksi’s Lecture Notes in Logic, Dordrecht (1988) 49. Staab, S., Stuber, R. (eds.): Handbook on Ontologies. International Handbooks on Information Systems. Springer, Heidelberg (2004) 50. Woodcock, J.C.P., Davies, J.: Using Z: Specification, Proof and Refinement. Prentice Hall International Series in Computer Science (1996)
Laudatio
Willem-Paul, how am I, a lowly, modest 'software engineering' researcher, to formulate an appropriate Laudatio, to You, a towering, vocal computer scientist? Our interests are far and wide apart: You delve deeply and successfully into computer science: the study of the "things" that can exist inside computers; I delve into computing science: the study of how to construct those "things". Thanks for Your always vigilant, shameless observance of precision, conciseness, Etcetera. Yet our roads have crossed; many times. And all of these encounters have been delightful. Despite Your throwing of rotten apples, despite Your swinging
of mighty swords and despite Your utterings of foul condemnations16. Never boring. Sometimes a bit lofty, dispensing, with absolute authority, "wise-man" advice to, well, a bit more experienced people. But a conference without Willem-Paul is not as fun as one with him — and when she's there, and we can't stand Your antics, we can always enjoy Corinne ...
16 The story of the six issues that any conference session chairman should observe was inspired by a WPdR incident at the 1986 IFIP World Computer Congress in Dublin, Ireland:
1. Introduce the speaker on time;
2. "terminate" the speaker on time;
3. ensure that questions are asked;
4. and that questions are answered;
5. protect the audience from abuse from the speaker; and
6. protect the speaker from abuse from the audience.
The last two “rules” are also referred to as “Lex de Roever”.
Computer Science and State Machines

Leslie Lamport
Microsoft Research
Computation

Computer science is largely about computation. Many kinds of computing devices have been described, some abstract and some very concrete. Among them are:
– Automata, including Turing machines, Moore machines, Mealy machines, pushdown automata, and cellular automata.
– Computer programs written in a programming language.
– Algorithms written in natural language and pseudocode.
– von Neumann computers.
– BNF grammars.
– Process algebras such as CCS.
Computer scientists collectively suffer from what I call the Whorfian syndrome1 — the confusion of language with reality. Since these devices are described in different languages, they must all be different. In fact, they are all naturally described as state machines.
State Machines

There are two ways to define state machine, one emphasizing the states and the other the transitions from one state to the next. I will use the simpler one that emphasizes states. For brevity, I ignore termination/liveness and consider only safety. A state machine is then specified by a set S of states, a set I of initial states, and a next-state relation N on S, so I ⊆ S and N ⊆ S × S. It generates all computations s1 → s2 → s3 → · · · such that:
S1. s1 ∈ I
S2. ⟨si, si+1⟩ ∈ N, for all i.
For example, a BNF grammar can be described by a state machine whose states are sequences of terminals and/or non-terminals. The set of initial states contains only the sequence consisting of the single starting non-terminal. The next-state relation is defined to contain ⟨s, t⟩ iff s can be transformed to t by applying a production rule to expand a single non-terminal.
1 See http://en.wikipedia.org/wiki/Sapir-Whorf_hypothesis
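The BNF example can be made concrete: states are strings over terminals and non-terminals, the initial state is the start symbol, and the next-state relation expands one non-terminal by one production. A Python sketch for a tiny grammar of our own choosing:

  # grammar: S -> "a" S "b" | "ab"
  productions = {"S": ["aSb", "ab"]}

  def initial_states():
      return {"S"}

  def next_states(s):
      # all t such that <s, t> is in the next-state relation N
      result = set()
      for i, symbol in enumerate(s):
          for rhs in productions.get(symbol, []):
              result.add(s[:i] + rhs + s[i + 1:])
      return result

  print(next_states("S"))      # {'aSb', 'ab'}
  print(next_states("aSb"))    # {'aaSbb', 'aabb'}
  # one computation of this state machine: S -> aSb -> aabb
  assert "aSb" in next_states("S") and "aabb" in next_states("aSb")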
Some of the computing devices listed above have an event (called an "input", "output", or "action") associated with a state transition. Those devices can be represented by augmenting the state to include the last event. In other words, a transition s −α→ t from state s to state t with event α can be represented as a transition from (augmented) state ⟨s, β⟩ to state ⟨t, α⟩, where β is the event that "led to" s. (Initial states have the form ⟨s, ⊥⟩ for a special initial event ⊥.) Describing all the other kinds of computing devices listed above as state machines is straightforward. Complexity results only from the innate complexity of the device, programs written in a modern programming language being especially complicated. However, representing a program in even the simplest language as a state machine may be impossible for a computer scientist suffering from the Whorfian syndrome. Languages for describing computing devices often do not make explicit all components of the state. For example, simple programming languages provide no way to refer to the call stack, which is an important part of the state. For one afflicted by the Whorfian syndrome, a state component that has no name doesn't exist. It is impossible to represent a program with procedures as a state machine if all mention of the call stack is forbidden. Whorfian-syndrome induced restrictions that make it impossible to represent a program as a state machine also lead to incompleteness in methods for reasoning about programs.
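The augmentation can be carried out mechanically: a labelled transition relation becomes an ordinary next-state relation on pairs of a state and the last event. A sketch (names ours; for simplicity it allows any preceding event, not only those that can actually lead to the source state):

  BOT = None   # the special initial event ⊥

  def augment(labelled_transitions, initial_states):
      # labelled_transitions: set of (s, alpha, t); returns (I', N') on augmented states
      events = {BOT} | {a for (_, a, _) in labelled_transitions}
      init = {(s, BOT) for s in initial_states}
      nxt = set()
      for (s, alpha, t) in labelled_transitions:
          for beta in events:
              nxt.add(((s, beta), (t, alpha)))
      return init, nxt

  # a two-state machine: s0 --go--> s1 --stop--> s0
  I, N = augment({("s0", "go", "s1"), ("s1", "stop", "s0")}, {"s0"})
  assert (("s0", BOT), ("s1", "go")) in N
  assert (("s1", "go"), ("s0", "stop")) in N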
Specifying a State Machine

To use state machines, we need a language for specifying them. The languages designed by computer scientists for describing computations usually specify state machines, defining the computations by S1 and S2. A partisan of such a language will insist that it is ideal for describing any state machine. I will ignore computer scientists and use instead the language employed by every other branch of science and engineering — namely, ordinary mathematics. In science and engineering, a set of states is usually specified by a collection of variables and their ranges, which are sets of values. A state s assigns to every variable v a value s(v) in its range. For example, physicists might describe the state of a particle moving in one dimension by variables x (the particle's position) and p (its momentum) whose ranges are the set of real numbers. The state s_t at a time t is described by the real numbers s_t(x) and s_t(p), which physicists usually write x(t) and p(t). We specify the set of initial states the way sets of states are generally described — by a boolean-valued expression containing variables and ordinary mathematical constants and operators. For the particle example, x = 0 specifies the set of all states s such that s(x) = 0 and s(p) is any real number. Because most fields of science and engineering study continuous processes, there is no standard way to describe a next-state relation. The simplest way I know to do it is with an expression that can contain primed as well as unprimed variables, the unprimed variables referring to the first state and the primed variables to the second state. For example, (x′ = x + 1) ∧ (p′ > x′) specifies the relation consisting of all pairs ⟨s, t⟩ of states such that t(x) = s(x) + 1 and t(p) > t(x).
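Read this way, an initial-state predicate is a predicate on states and a primed/unprimed expression is a predicate on pairs of states, as a brief sketch of ours for the example relation shows:

  def init(s):
      # the initial-state predicate  x = 0
      return s["x"] == 0

  def next_rel(s, t):
      # the next-state relation  (x' = x + 1) /\ (p' > x'),
      # with unprimed variables read in s and primed ones in t
      return t["x"] == s["x"] + 1 and t["p"] > t["x"]

  s = {"x": 0, "p": 5.0}
  t = {"x": 1, "p": 3.5}
  assert init(s) and next_rel(s, t)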
State Machines in Action

The benefits of describing state machines mathematically rather than hiding them behind computer-science languages would make a long list. It might begin with the replacement of esoteric programming logics by ordinary mathematics. For example, the Hoare triple {P}S{Q} becomes the formula P ∧ S ⇒ Q′, where S is the relation on states described by the program statement S and Q′ is formula Q with each variable primed. Instead of compiling such a list, I consider one nice little example — two algorithms that appear unrelated until they are expressed mathematically as state machines. The first algorithm is described by this simple program X that runs forever, alternately performing the operations P and C.

  X :  loop P ; C endloop

The second algorithm is an important hardware protocol called two-phase handshake, illustrated by this diagram.

(Diagram: processes Prod and Cons connected by two wires, p and c; the arrows indicate that p is set by Prod and c is set by Cons.)
The “wires” p and c can assume the values 0 and 1; the arrows indicate that p is set by process Prod and c is set by process Cons. The processes synchronize using p and c so they take turns executing operations P and C. Their protocol can be described as follows, where p and c are initially equal and ⊕ is defined to be addition modulo 2 (known to hardware designers as 1-bit exclusive-or).

   Y :  process Prod : whenever p = c do P ; p := p ⊕ 1 end
     || process Cons : whenever p ≠ c do C ; c := c ⊕ 1 end

It is easy to see, though not completely obvious, that Y alternately performs P and C operations, just like X. From the state machines’ pseudocode descriptions, this seems coincidental. The mathematical descriptions of these state machines reveal that it is no coincidence. Starting from X, we can derive Y mathematically. For simplicity, assume P and C to be atomic operations. They are then described by relations between primed and unprimed variables. To avoid introducing new symbols, let P and C also denote these two mathematical relations. Let varPC be the set of variables that occur in these relations. To describe program X as a state machine, we must introduce a variable to represent the control state—part of the state not described by program variables, so to victims of the Whorfian syndrome it doesn’t exist. Let’s call that variable pc, which we assume is not in varPC. The state variables of X are therefore pc and the variables in varPC. Since P and C are atomic operations, each executed as a single step, the variable pc assumes just two values. Let those values be 0 and 1. State machine X then has initial predicate Init_X and next-state relation
Next_X defined as follows, where Init_PC specifies the initial values of the variables in varPC:

   Init_X  ≜  (pc = 0) ∧ Init_PC
   Next_X  ≜  ((pc = 0) ∧ P ∧ (pc′ = 1)) ∨ ((pc = 1) ∧ C ∧ (pc′ = 0))
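These definitions are directly executable once P, C, and Init_PC are instantiated. The following Python sketch is one such instantiation (the concrete choices for P, C, and the data variable n are mine, made only so that there is something to run; they are not part of the text above).

def P(s, t):      # P increments n (a choice made only for this demo)
    return t["n"] == s["n"] + 1

def C(s, t):      # C doubles n (again only for the demo)
    return t["n"] == 2 * s["n"]

def init_X(s):    # Init_PC is taken to be "n = 0" here
    return s["pc"] == 0 and s["n"] == 0

def next_X(s, t):
    return ((s["pc"] == 0 and P(s, t) and t["pc"] == 1) or
            (s["pc"] == 1 and C(s, t) and t["pc"] == 0))

# generate one computation by brute force over a small candidate set
state = {"pc": 0, "n": 0}
assert init_X(state)
trace = [state]
for _ in range(4):
    candidates = ({"pc": pc, "n": n} for pc in (0, 1) for n in range(50))
    state = next(t for t in candidates if next_X(trace[-1], t))
    trace.append(state)
print(trace)      # pc alternates 0, 1, 0, ... while P and C steps alternate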
To describe Y as a simple state machine, we assume that the body of each process is executed as a single atomic action. Thus, when p = c is true, process Prod both executes P and increments p as one step. There is then no control state, and the state variables are p, c, and the variables in varPC. The initial predicate and next-state relation of Y are

   Init_Y  ≜  (p = c) ∧ Init_PC
   Next_Y  ≜  Prod ∨ Cons

where formulas Prod and Cons, which describe the two processes, are defined by:

   Prod  ≜  (p = c) ∧ P ∧ (p′ = p ⊕ 1) ∧ (c′ = c)
   Cons  ≜  (p ≠ c) ∧ C ∧ (c′ = c ⊕ 1) ∧ (p′ = p)

The mathematical relation between these two state machines is simple: Y is obtained from X by substituting p ⊕ c for pc. Substituting an expression for a variable is a basic and powerful mathematical operation. Let us now see exactly how we derive Y from X by this substitution. For any formula F, let F̄ be the formula obtained from F by this substitution. For example, p̄c̄′ equals (p ⊕ c)′, which equals p′ ⊕ c′. It is easy to see that p̄c̄ equals 0 if p = c and equals 1 if p ≠ c, from which we obtain
   Init_X̄  ≜  (p = c) ∧ Init_PC
   Next_X̄  ≜  Pr ∨ Co

where

   Pr  ≜  (p = c) ∧ P ∧ (p′ ≠ c′)
   Co  ≜  (p ≠ c) ∧ C ∧ (p′ = c′)

The formulas Init_X̄ and Next_X̄ are the initial predicate and next-state relation of a state machine X̄ whose states are the states of Y. We first consider its relation to state machine X. Define a mapping Ψ from states of Y to states of X by letting Ψ(s) assign the same values to the variables in varPC as s, and letting it assign to pc the value s(p) ⊕ s(c). (Recall that s(p) and s(c) are the values assigned to p and c
by state s.) Extend Ψ to a mapping on computations (sequences of states) by letting Ψ(s₁ → s₂ → ...) equal Ψ(s₁) → Ψ(s₂) → ... . It follows easily from our definition of F̄ that a formula F̄ is true of state s of Y iff F is true of state Ψ(s) of X. Similarly, a relation R̄ is true of a pair ⟨s₁, s₂⟩ of states of Y iff R is true of ⟨Ψ(s₁), Ψ(s₂)⟩. It follows that a sequence σ of states of Y is a computation of the state machine X̄ iff Ψ(σ) is a computation of X. Let us now consider the disjuncts of the next-state relation Next_X̄, starting with Pr. Because p and c assume only the values 0 and 1, p = c implies

   p′ ≠ c′  ≡  ((p′ = p ⊕ 1) ∧ (c′ = c)) ∨ ((p′ = p) ∧ (c′ = c ⊕ 1))

This implies

   Pr  ≡  ((p = c) ∧ P ∧ (p′ = p ⊕ 1) ∧ (c′ = c)) ∨ ((p = c) ∧ P ∧ (p′ = p) ∧ (c′ = c ⊕ 1))
A Pr step therefore either increments p and leaves c unchanged (satisfying the first disjunct) or else increments c and leaves p unchanged (satisfying the second disjunct). If we want an algorithm in which the process that executes P modifies only p, then we must allow only the first possibility, eliminating the second disjunct. We are left with the first disjunct, which equals Prod. A similar calculation shows that we obtain Cons from Co by eliminating a disjunct that modifies p and leaves c unchanged. This leads us to a state machine with initial predicate Init_X̄ and next-state predicate Prod ∨ Cons, which is precisely the state machine Y. Our derivation shows that Prod implies Pr and Cons implies Co. Hence, Next_Y implies Next_X̄. Since Init_Y equals Init_X̄, we deduce that any computation σ of Y is a computation of X̄. We have already seen that σ is a computation of X̄ iff Ψ(σ) is a computation of X. Hence, if σ is any computation of Y, then Ψ(σ) is a computation of X. Because the states s and Ψ(s) assign the same values to the variables in varPC, this means that σ has the same P and C steps as Ψ(σ). Thus, we deduce that the derived protocol Y produces the same sequence of P and C operations as does X. Since it is obvious that X alternately executes P and C operations, this shows that Y does too. In other words, this shows that Y is correct by construction. When presenting this kind of derivation, it is conventional to pretend that it leads to the discovery of the resulting protocol. I presented Y before the derivation to make it easier to see where we were heading. This allowed me to “cheat” by letting pc assume the convenient values 0 and 1. Had I chosen two arbitrary values a and b instead, we would have substituted if p = c then a else b for pc. A simple calculation would have shown that p̄c̄ equals a if p = c and equals b if p ≠ c.
From that point, the derivation would have proceeded exactly as before, with the same formulas Init_X̄ and Next_X̄.
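The implications used in this derivation can also be confirmed mechanically. The following brute-force check is a sketch of mine (with P and C abstracted to a step label, which is an abstraction the paper does not make): it verifies that Prod implies Pr, that Cons implies Co, and that every Y step maps under Ψ to an X step.

from itertools import product

def prod(p, c, p2, c2, lab):   # Prod
    return p == c and lab == "P" and p2 == p ^ 1 and c2 == c

def cons(p, c, p2, c2, lab):   # Cons
    return p != c and lab == "C" and c2 == c ^ 1 and p2 == p

def pr(p, c, p2, c2, lab):     # Pr, the first disjunct of Next_X-bar
    return p == c and lab == "P" and p2 != c2

def co(p, c, p2, c2, lab):     # Co, the second disjunct of Next_X-bar
    return p != c and lab == "C" and p2 == c2

def next_X(pc, pc2, lab):      # Next_X over the control variable pc
    return (pc == 0 and lab == "P" and pc2 == 1) or \
           (pc == 1 and lab == "C" and pc2 == 0)

ok = True
for p, c, p2, c2 in product((0, 1), repeat=4):
    for lab in ("P", "C"):
        step = (p, c, p2, c2, lab)
        ok &= (not prod(*step)) or pr(*step)      # Prod implies Pr
        ok &= (not cons(*step)) or co(*step)      # Cons implies Co
        if prod(*step) or cons(*step):            # every Y step maps, via
            ok &= next_X(p ^ c, p2 ^ c2, lab)     # Psi (pc = p XOR c), to an X step
print(ok)                                         # True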
A Lesson Using ordinary mathematics, we have derived the simple but useful protocol Y from the trivial algorithm X by substituting p ⊕ c for pc. We could do this because we represented these algorithms as state machines and we described the state machines using ordinary mathematics. The pseudocode descriptions probably seem more natural to most computer scientists. But how could our derivation possibly have been done from those descriptions? How do we substitute for a variable pc that doesn’t appear in the pseudocode? Even if pc did appear as a variable, what would it mean to substitute an expression for it in an assignment statement pc : = . . . ? Quite a number of formalisms have been proposed for specifying and verifying protocols such as Y. The ones that work in practice essentially describe a protocol as a state machine. Many of these formalisms are said to be mathematical, having words like algebra and calculus in their names. Because a proof that a protocol satisfies a specification is easily turned into a derivation of the protocol from the specification, it should be simple to derive Y from X in any of those formalisms. (A practical formalism will have no trouble handling such a simple example.) But in how many of them can this derivation be performed by substituting for pc in the actual specification of X ? The answer is: very, very few. Despite what those who suffer from the Whorfian syndrome may believe, calling something mathematical does not confer upon it the power and simplicity of ordinary mathematics.
A Small Step for Mankind Cornelis Huizing, Ron Koymans, and Ruurd Kuiper Technische Universiteit Eindhoven, PO Box 513, 5600 MB Eindhoven, The Netherlands Philips Research, High Tech Campus 37, 5656 AE Eindhoven, The Netherlands
[email protected],
[email protected],
[email protected]
Abstract. For many programming languages, the only formal semantics published is an SOS big-step semantics. Such a semantics is not suited for investigations that observe intermediate states, such as invariant techniques. In this paper, a construction is proposed that generates automatically a small-step SOS semantics from a big-step semantics. This semantics is based on the a priori technique pioneered by Willem-Paul de Roever et al.
1   Introduction
For a rigorous treatment of programming and specification languages, especially considering verification, formal semantics are a prerequisite. Motivations are establishing an unambiguous understanding of the language itself as well as providing a foundation for soundness or completeness proofs about verification formalisms or even tools. This leads to different requirements on semantics, as can be observed in the differences between operational and denotational semantics. Generally, a so-called big-step semantics is provided to define the language. This is a semantics where for each configuration, i.e., (part of) a program and a state, rules are provided that determine the end state if that (part of the) program terminates. The intermediate configurations, where only part of the program is executed, are not available in such a semantics. In current programming paradigms, like Object-Oriented Programming, granularity of steps is an important issue: for example the data of a class should, roughly speaking, be in a consistent state at the coarse granularity of method calls and returns, whereas the execution of a method requires description at a finer granularity. In the widely used syntax-directed verification approaches, where assertions are put in the program text and invariants are required to hold at certain points in the code, granularity is indicated by the syntax of the program. Because a big-step semantics does not provide intermediate configurations, i.e., does not provide the program’s syntax at the intermediate steps, it is difficult to reason about the program at different, syntax-defined, granularity levels. Furthermore, reasoning at such a syntactically defined granularity is usually required for proofs about properties like soundness and completeness (cf. [6]). D. Dams, U. Hannemann, M. Steffen (Eds.): de Roever Festschrift, LNCS 5930, pp. 66–73, 2010. c Springer-Verlag Berlin Heidelberg 2010
Therefore, a so-called small-step semantics that does provide the intermediate configurations and where a step corresponds to executing an atomic statement is desirable. As generally a big-step semantics is provided to define the language, it is profitable to have a standardized small-step semantics readily available. In this paper we show how to transform a given big step semantics automatically into a small step semantics. We assume that the big-step semantics is given in SOS-form with axioms for the primitive statements – which is a common way to define the semantics of an imperative programming language.
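The difference in observability can be seen on a two-line program. The following Python fragment is only an illustration of the motivation (the program, the state representation, and the trace format are invented here, not taken from the construction below): the big-step view yields the end state alone, whereas a small-step trace also exposes the intermediate state at which, for example, an invariant could be checked.

program = [lambda s: dict(s, x=s["x"] + 1),    # x := x + 1
           lambda s: dict(s, x=s["x"] * 2)]    # x := x * 2

def big_step(prog, sigma):
    for stmt in prog:
        sigma = stmt(sigma)
    return sigma                               # only the end state is observable

def small_step_trace(prog, sigma):
    trace = [(list(range(len(prog))), sigma)]  # (indices of remaining statements, state)
    for i, stmt in enumerate(prog):
        sigma = stmt(sigma)
        trace.append((list(range(i + 1, len(prog))), sigma))
    return trace

print(big_step(program, {"x": 3}))             # {'x': 8}
print(small_step_trace(program, {"x": 3}))     # the intermediate state {'x': 4} is visible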
2   From Big-Step to Small-Step
Consider a given imperative programming language L with transition relation ⇒ on configurations ⟨S, σ⟩, where S ∈ L and σ a state, also containing environment information (association of procedure name to body, etc.). The empty statement is denoted as —. A configuration is final if it is of the form ⟨—, σ⟩, also denoted as just σ. A transition relation is a binary relation on configurations. A big-step relation is a transition relation ⇒ where C ⇒ C′ only if C′ is final. The corresponding semantic function is defined by

   B[[S]]σ = {τ | ⟨S, σ⟩ ⇒ τ}

A small-step semantics is a transition relation where every step corresponds to the execution of at most one atomic statement. The corresponding semantic function is defined by

   S[[S]]σ = {τ | ⟨S, σ⟩ →∗ τ}

Assume the transition relation ⇒ is defined by a set of axioms A1, . . . and derivation rules R1, . . . . The challenge is to provide a recipe to obtain a transition relation → which is small-step and defines the same semantic function as ⇒. We define → as follows.

2.1   Axioms
We just copy the axioms of ⇒. For every axiom Ai : C ⇒ σ we introduce an axiom Ai′ in the definition of →:

   Ai′ : C → σ

2.2   Rules
For every rule Ri we introduce several axioms and rules in →. Suppose rule R is

   R:   ⟨S1, σ1⟩ ⇒ σ1′   . . .   ⟨Sn, σn⟩ ⇒ σn′
        ───────────────────────────────────────   if CondR(S⃗i, T, σ⃗i, σ⃗i′, τ, τ′)
                      ⟨T, τ⟩ ⇒ τ′
This is the general form of an SOS-rule for a big-step semantics. CondR is a condition that captures the specifics of the rule. For example, the rule for sequential composition, usually looking like this:

   Seq:   ⟨S1, σ⟩ ⇒ σ′   ⟨S2, σ′⟩ ⇒ σ′′
          ───────────────────────────────
               ⟨S1 ; S2, σ⟩ ⇒ σ′′

can be formulated in the general form as follows:

   Seq:   ⟨S1, σ1⟩ ⇒ σ1′   ⟨S2, σ2⟩ ⇒ σ2′
          ─────────────────────────────────   if CondSeq(S⃗i, T, σ⃗i, σ⃗i′, τ, τ′)
                   ⟨T, τ⟩ ⇒ τ′
and

   CondSeq  ≡  σ1 = τ ∧ σ1′ = σ2 ∧ σ2′ = τ′ ∧ T = S1 ; S2

Back to the general case. In order to emulate the behavior of the big-step relation, the small-step relation will have to execute the statements of the premises one by one. Information that is generated during this execution, such as the end states of the execution of one such statement, will be lost at the end of the execution. Therefore, we extend the configuration with a stack γ that records this kind of information and makes it possible to use it when needed. The elements of the stack are lists of statements and states, labeled with their role (i.e., position) in the rule for easy reference. The list on top of the stack records information that is necessary to complete the execution of the statement in the conclusion. This element is popped from the stack when the statement is completed. As is the style in small-step semantics, we record the statements to be executed in the statement part of the configuration. Often, the statement T is a composite statement that contains the statements of the premise. This isn’t true in general, however. In the rules for procedure call, for instance, the body of the procedure is part of the premise, but not of the call statement. Therefore we add a syntactic construct [S1, . . . , Sk]R for every rule R. We identify the configurations ⟨S, σ, λ⟩ and ⟨S, σ⟩, where λ is the empty stack. For each rule R of ⇒ as given above, we add the following axioms and rules to the definition of →.

Ra. The following axiom gives the first step of the small-step execution of T.

   ⟨T, τ, γ⟩ → ⟨[S1, . . . , Sn]R, σ1, γˆe⟩   where e = (S1: S1, σ1: σ1, T: T, τ: τ)

The stack element e is a list of pairs of labels and states or statements, with the labels, underlined for easy recognition, denoting the position in the rule. The label σi corresponds to the starting state of the i-th premise, etc. If x: y occurs in e, we will refer to y by e.x, which is unambiguous, since the labels in each list e are unique. γˆe denotes the stack that results from pushing e on stack γ.
Rb. The following axiom takes execution from one premise to the next. The end state of the current premise, the starting state and the statement of the next premise should be recorded for the final check of the rule condition.

   ⟨[—, Si, . . . , Sn]R, σ, γˆe⟩ → ⟨[Si, . . . , Sn]R, σ′, γˆe′⟩   where e′ = eˆ(σ′i−1: σ, σi: σ′, Si: Si)

Note that the choice of σ′ is free when this rule is applied. When the execution of [. . .]R terminates, this choice is resolved according to the condition of the big-step rule, CondR. Thus, many possible executions are generated, of which only one or a few survive in the final semantics. This is the technique of the a priori semantics as proposed in [1].

Rc. The following rule just says that the foremost statement of the compound statement may be executed.
S, σ, γ → S , σ , γ [S, . . .]R , σ, γ → [S , . . .]R , σ , γ Rd . This rule is the final check. In a big-step semantics, end states and start states are available to the rule and can be checked together. In a small-step semantics, the start state and start statement are lost after one transition, unless special measures are taken. Therefore, they are stored in the top element of the stack. [ — ]R , σ, γ ˆe → —, σ , γ if CondR (e ˆ(σn : σ, τ : σ )). For notational convenience, we write Cond(f ), with f a list of labeled elements, for the application of Cond to the elements of the list, in the order defined by the labels. stack. Furthermore, we need a rule to do executions on configurations with a stack, since the axioms provide only steps without a stack in the configuration. S, σ → S , σ S, σ, γ → S , σ , γ Example Consider the program S ≡ x := 1; x := x + 2; Let σ i denote the state where σ(x) = i and all other variables are 0. Suppose we have defined ⇒ with rule Seq as in section 2.2. This will generate an SOS for → with rules Seqa through Seqd . According to these rules we will get the following execution sequence (the subscript Seq is omitted from the statements [. . .]). The transition symbols are subscripted with the index of the rule that was applied for that step. x := 1; x := x + 2; , σ 0 , λ →a [x := 1, x := x + 2], υ, e →c with e = (S1: x := 1, σ1: υ, T: x := 1; x := x + 2, τ: σ 0 ) [—, x := x + 2], υ[1/x], e →b
[x := x + 2], υ , e →c with e = e ˆ(σ1 : υ[1/x], σ2: υ , S2: x := x + 2) [—], υ [υ (x) + 2/x], e →d —, υ , λ Applying CondSeq (e ˆ(σ2 : υ [υ (x) + 2/x], τ : υ )) yields: υ = σ 0 ∧ υ[1/x] = υ ∧ υ [υ (x) + 2/x] = υ and hence υ = σ 0 [1 + 2/x] = σ 3 . We remind the reader that we have here a collection of sequences, one for each choice of υ, υ , and υ . At the application of rule Rd , these choices are resolved by means of the CondSeq -condition. In this example, the only sequence that will not get stuck is the one with υ = σ 0 , etc. Note that the choice for, e.g., state υ in the first transition is a priori. Any state can be chosen here and it is only at the application of rule Rd that this choice is checked. Any sequence that has not made the “right” choice (σ 0 in this example), will get stuck. It will not reach a final configuration and hence be discarded from the semantic function. Theorem 1. For every statement S and states σ, σ , the following statements are equivalent: 1. S, σ ⇒ σ 2. S, σ →∗ σ Proof sketch [1 to 2] By induction to the depth of the derivation tree of ⇒. For axioms, the leaves of the tree, it is obvious. Now suppose rule R has been applied. Then the premises must hold and from the induction hypothesis we know that Si , σi →∗ σi . Repeatedly applying rule Rc and rule stack gives [Si , . . .]R , σi , γ →∗ [—, . . .]R , σi , γ . Using rule Rb , these sequences can be concatenated and one application of Ra and Rd finishes it. Since we have chosen the statements and states from the premises of the rule, the condition holds and Rd can be applied. [2 to 1] By induction to the length of the sequence. For sequences of length 1, an axiom Ai must have been applied and hence the big step can be derived from axiom Ai . If the sequence is longer than 1, the final step must have been derived from rule Rd , for some R, since the final configuration has — as statement. Then the first step must have been derived by rule Ra , since the other steps start with a [. . .] statement.Then in the execution sequence the rules must been applied in the following order, denoted as a regular expression: Ra (Rc∗ Rb )∗ Rc∗ Rd Such a sequence of Rc steps must end in an empty statement (—) in order to apply Rb or Rd , so we can apply induction to this subsequence and derive that the corresponding premise of rule R must have been applied. Since rule Rd has been applied in the last step, the condition CondR holds and rule R is applicable.
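The construction and the example above can be imitated by a small executable model. The sketch below is drastically simplified (it handles only sequential composition, replaces the stack of labelled records by the guessed intermediate state itself, and restricts the a priori guesses to a deliberately finite state universe; all names are mine), but it shows the characteristic guess-now, check-later behaviour: only executions whose guesses satisfy CondSeq survive, and the surviving end states coincide with the big-step semantics.

UNIVERSE = [dict(x=v) for v in range(8)]       # a deliberately tiny state universe

def big_step(stmt, sigma):
    """List of possible end states under the big-step semantics."""
    if stmt[0] == "assign":                    # ("assign", f) with f: state -> new value of x
        _, f = stmt
        return [dict(sigma, x=f(sigma))]
    _, s1, s2 = stmt                           # ("seq", S1, S2)
    return [tau for rho in big_step(s1, sigma) for tau in big_step(s2, rho)]

def apriori(stmt, sigma):
    """All a priori executions; only those whose guesses satisfy CondSeq survive."""
    if stmt[0] == "assign":
        return big_step(stmt, sigma)           # axioms are copied unchanged (section 2.1)
    _, s1, s2 = stmt
    results = []
    for upsilon in UNIVERSE:                   # a priori guess for S2's start state
        for rho in apriori(s1, sigma):
            for tau in apriori(s2, upsilon):
                if rho == upsilon:             # CondSeq, resolved only at the end
                    results.append(tau)
    return results

prog = ("seq", ("assign", lambda s: 1), ("assign", lambda s: s["x"] + 2))
print(big_step(prog, dict(x=0)))               # [{'x': 3}]
print(apriori(prog, dict(x=0)))                # [{'x': 3}]  (only the right guess survives)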
3   Order! Order!
The order in which the statements from the premises are executed in the sequence of small steps depends on the order in which they are mentioned in the original big-step rule R. In fact, any ordering of statements Si in [. . .] of rule Ra will yield a small-step semantics that satisfies Theorem 1. This is captured by the following corollary. Corollary 1. Let →1 and →2 be two small-step relations constructed from the same big-step relation ⇒, using different orders in rules Ra . Then →∗1 =→∗2 . This follows from the proof of Theorem 1, which doesn’t use the order of the premises in the big-step rules (in fact, this order is an artifact of the notation; the premises of a derivation rule are a set, not a list). The order of the steps in a generated small step semantics is not necessarily the order in which the steps are performed in an actual execution. E.g., consider the following big-step rule for sequential composition. Seq
   ⟨S1, σ1⟩ ⇒ σ1′   ⟨S2, σ2⟩ ⇒ σ2′
   ─────────────────────────────────   if σ2′ = σ1
          ⟨S2 ; S1, σ2⟩ ⇒ σ1′
This rule gives exactly the same relation ⇒ as the rule Seq in the previous section, but our construction would yield a different relation →. The same states would be visited, but not in the same order. E.g., the final state σ1 would also appear half way in the sequence, after execution of S1 . Example Revisited With rule Seq , the sequence will be (inserting the requirements that CondSeq imposes): x := 1; x := x + 2; , σ 0 , λ →a [x := x + 2, x := 1], σ 1 , e1 →c [—, x := 1], σ 3 , e1 →b [x := 1], σ 0 , e2 →c [—], σ 1 , e2 →d —, σ 3 , λ with e2 = (σ1: σ 1 , σ2: σ 0 , σ1 : σ 3 , σ2 : σ 1 , τ: σ 0 , S1: x := x + 2, S2: x := x + 1) Other sequences will get stuck, such as x := 1; x := x + 1; , σ 0 , λ →a [x := x + 2, x := 1], σ 0 , e1 →c [—, x := 1], σ 2 , e1 →b [x := 1], σ 2 , e2 →c [—], σ 1 , e2 → here e2 = (σ1 : σ 0 , σ2 : σ 2 , σ1 : σ 2 , σ2 : σ 1 and CondSeq does not hold, because, a.o., e2 .σ2 = e2 .σ1 . If the small-step semantics is only used to investigate invariants, this is not really a problem, since the same states appear and invariants are single-state predicates. This is captured by the following theorem. Theorem 2. Let →1 and →2 as in Corollary 1. Then the execution sequences of →1 and →2 contain the same set of states (although not necessarily in the same order). Proof. By induction on the derivation tree of the big-step rules. The conditions CondR applied at the two rules Rd put the same restrictions to the states. Since these conditions constitute the only restrictions to the states, the state sets are the same.
When the order is important, however, the construction has to be refined. Instead of executing the premises in the arbitrary order of their mentioning in the rule1 , we choose a causal order. Not every proof rule allows such a causal order. E.g., a rule like the following is unusual. S1 , σ ⇒ σ S2 , τ ⇒ τ S , τ ⇒ σ For rules like this, there is no “natural” execution order and hence there is no naturally ordered small-step semantics either. Let us call a big-step proof rule causal or implementable if there is a partial order on the states and statements of the rule such that 1. x y if x appears in the left-hand side of ⇒ and y in the right-hand side; 2. if there is a dependency between x and y in the condition of the rule, either x y or y x. For a causal rule R, we define rule Ra to use an order for [Si1 . . . Sin ] such that σij σik implies j < k. This way, the order in which the states appear in the execution sequence defined by → is possible in an actual execution without clairvoyance.
4   Concluding Remarks
The presented technique has a surprising parallel in the work of De Roever. In the semantics (and proof system) given in [1] for the communication based language Communicating Sequential Processes (CSP) the a priori idea is used to achieve compositionality with respect to the parallel operator. For each process a set of sequences is provided with all possible values that might be received at the communication points. Parallel composition then combines the sequences that match on the send/receive values at the communication points. This approach is extended to real-time in [5]. Similarly, for the compositional shared variable semantics and proof system given in [2] for temporal logic, for each process a set of sequences is provided with a priori state values not only at communication points, but at all interleaving points. Sequential then is a special case of parallel composition, using a stronger ordering condition. The approach has been shown to also be applicable to action based formalisms like Statecharts, in [4]. In the present approach, where rules are considered for any kind of composition operator, the a priori semantics of each statement in the premise of a rule is provided without ordering the statements: this depends on the operator under consideration and is captured in the condition of the rule. 1
Strictly speaking, this order is an artifact of the rule notation. Since the premises of a rule are really a set there is no real order between the premises and the construction of → just takes some arbitrary order.
We can observe a kind of duality between big-step and small-step semantics in relation to sequential and concurrent operators. In the sequential case, a big-step relation is the easier base for the semantics. The small-step semantics of, e.g., procedure call requires extra bookkeeping, reflected in the construction of this paper in the stack and the a priori technique. In the concurrent case, however, it is the big-step semantics that requires a priori techniques to accommodate the interleaving of atomic actions, whereas this interleaving comes naturally with small-step semantics (assuming a step coincides with an atomic action). The reason we want to investigate small-step semantics, even in the sequential case, is to apply concurrency-type invariant techniques for OO. Note that we assume that expressions are atomically evaluated in the big-step semantics, and also in the small-step semantics. For simplicity, many semantics take this approach and rewrite programs with function calls in expressions by executing the call in a special statement that stores the return value of the function call in a local variable and uses this variable in the expression instead of the call. In this approach, the function call will yield many small steps in our construction, as expected. The translation of a big-step semantics to a small step one as proposed is quite algorithmic. It seems quite feasible to automate this translation. Some issues like how to handle conditions in a big-step SOS rule should then be addressed; these are expected to be quite straightforward. One of the motivations behind providing a small-step semantics was to facilitate soundness and completeness proofs. An interesting question is how the generation of a priori states in the small-step semantics would interact with a proof system that uses ranking functions to deal with liveness properties, like described in [3], since in diverging sequences not all a priori choices may be resolved and these may interfere with the ranking technique. We expect that a fully causal semantics, where condition checks are performed as early as possible in the sequence, will not suffer from this problem.
References

1. Apt, K.R., Francez, N., de Roever, W.P.: A Proof System for Communicating Sequential Processes. ACM Transactions on Programming Languages and Systems 2(3), 352–385 (1980)
2. Barringer, H., Kuiper, R., Pnueli, A.: Now You May Compose Temporal Logic Specifications. In: Proc. 16th ACM Symposium on Theory of Computing, pp. 51–63 (1984)
3. Grümberg, O., Francez, N., Makowski, J., de Roever, W.P.: A proof rule for fair termination. Information and Control 66(1/2) (1983)
4. Huizing, C.: Semantics of Reactive Systems: Comparison and Full Abstraction. PhD thesis, Eindhoven Technical University (1991)
5. Koymans, R., Shyamasundar, R.K., de Roever, W.P., Gerth, R.T., Arun-Kumar, S.: Compositional Semantics for Real-Time Distributed Computing. Information and Computation 79(3), 210–256 (1988)
6. Middelkoop, R., Huizing, C., Kuiper, R., Luit, E.: Specification and Verification of Invariants Exploiting Layers in OO Designs. Fundamenta Informaticae 85(1-4), 377–398 (2008)
On Trojan Horses of Thompson-Goerigk-Type, Their Generation, Intrusion, Detection and Prevention

Hans Langmaack

Institut für Informatik der Christian-Albrechts-Universität zu Kiel, Olshausenstr. 40, D-24098 Kiel, Germany
[email protected]
Abstract. Trojan horses of Thompson-Goerigk-type are intended software errors very hidden in machine level compiler implementations although the latter have successfully passed Wirth’s strong compiler bootstrapping test and there have been done rigorous verification both of compiling specification and of high level compiler implementation. Thompson demonstrated these errors in 1984. This essay describes Goerigk’s contributions on how to generate, intrude, detect and prevent these most intricate errors which can even pass compiler certification test suites undetected. Target code inspection therefore is necessary. However, a full inspection usually is not feasible. Main research result described is how to slash down the amount of inspection necessary, while still getting a provably correct compiler. Project Verifix demonstrated this approach on a fully verified, realistic compiler for a realistic high level language.
A contribution to the Festschrift for my dear colleague and friend Willem-Paul de Roever on the occasion of his retirement celebration on July 4, 2008, at the Christian-Albrechts-Universität zu Kiel.
1   Introduction
Trojan horses of Thompson-Goerigk-type [Tho84, Goe99] are intended software errors very hidden and hard to detect in machine level compiler implementations although the latter have successfully passed Wirth’s strong compiler bootstrapping test [Wir77] and although there has been done rigorous verification both of compiling specification and of high level compiler implementation. I.o.w.: A Trojan horse of that type demonstrates that Wirth’s well acknowledged and most effectful bootstrapping test is still an unsufficient substitute of a correctness proof for a machine level compiler implementation. Chirica and Martin were the first who recognized the necessity to differentiate between two types of verification, namely of compiling specification and of compiler implementation [CM86]. Nevertheless, literature tends to restrict compiler verification to verification of compiling specification and of at most high level D. Dams, U. Hannemann, M. Steffen (Eds.): de Roever Festschrift, LNCS 5930, pp. 74–95, 2010. c Springer-Verlag Berlin Heidelberg 2010
compiler implementation. Literature inpardonably neglects rigorous verification of low and machine level compiler implementation. So industrial compiler construction is often following Wirth’s compiler development recommendation, e.g. for languages like Pascal, Modula, C, Lisp or Java: • Firstly, implement a newly to be built compiler τ1 in its own (high level) source = host programming language SL = HL; • secondly, rewrite τ1 to τ0 in host machine code HM L0 ; • thirdly, bootstrap τ1 by τ0 on host machine HM0 to the desired compiler τ2 now formulated in target machine code T L. τ2 translates from source language SL to code T L of the target machine T M . Rewriting of τ1 to τ0 can be done either by hand or, if available, by an existing auxiliary compiler τ00 , executable on host machine HM0 , which translates a superlanguage SL0 of SL to host machine code HM L0 . McKeeman’s T-diagrams are an instructive shorthand for this proceeding, a bootstrapping test included: τ1 T L | |SL τ3 T L| | SL |SL τ1 T L | SL |SL τ2 T L | T L| τ0 TL| TL | | |SL τ1 T L| SL | SL |SL |SL0 τ00 HM L0 |HM L0 | | | |HM L0 | bootstrapping test for τ2 : the resulting τ3 has to represent the same abstract compiler program as τ2 does Thompson in his Turing-award lecture demonstrated a most surprising technique how a hacker can corrupt the auxiliary compiler τ00 , generate a Trojan horse and intrude it into τ2 even if the high level SL-written compiler τ1 is perfectly verified. Goerigk elaborated on that technique in chapter 6 “Source level verification is not sufficient” of his inaugural dissertation (Habilitationsschrift) “Trusted Program Execution” [Goe00c] and on realistic methods to detect and prevent all Trojan horses of Thompson-Goerigk-type, see related work [GH96, GH98a, GH98c, GH98d, DHVG02, DHG03]. Reviewer W.-P. de Roever highly appreciated chapter 6, the “wittiest one in Goerigk’s thesis”. Corruption of the auxiliary compiler τ00 , namely rewriting of τ1 to machine coded τ0 and τ2 , is the weak point in industrial compiler construction due to Wirth. The strong bootstrapping test does not uncover all errors in τ2 intruded by τ00 , especially not all intended errors, i.e. Trojan horses. Due to Goerigk’s ideas we would like to elucidate two propositions: Proposition 1: The above mentioned corruption of τ00 can be done already by one which confines Trojan horses to the very first pass of compiler τ2 , namely in reading of external input data representations πSL (character sequences) of given (abstract) source programs πSL (especially compiler τ1 ) and transforming them out which in this first pass are internal repreinto output data representations πSL sentations of πSL inside execution states of compiler τ2 .
Proposition 2: Let us assume that • all hardware parts are working correctly as their manuals prescribe (hardware correctness assumption); • all compiling specification rules are mathematically proved correct w.r.t. the semantics of all programming languages involved (source, target and intermediate languages; assumption of rigorous compiling specification verifications); • all compiling specifications are correctly implemented as high level (systems) programs (assumption of rigorous high level compiler implementation verification). Under these premises there is a realistic software engineering technique how to perform rigorous low and machine level compiler implementation verification, especially the correct step from τ1 to τ2 . Essential support is offered by rigorous a posteriori result checking, in our case by rigorous inspection of resulting code w.r.t. source code and tractable, good natured, proved correct compiling specification rules. As a consequence: Every possible Trojan horse in τ2 will be detected; if there is none all of them are prevented (End of propositions). Industrially feasible methods how to close the long lasting gap of low and machine level compiler correctness are of urgent interest to certification institutions [BSI96]. They do recommend to perform inspection of resulting code which safety and security software is depending on [ZSI89, ZSI90]. But the institutions are thinking of inspection w.r.t. source code and source and target languages semantics. Such inspection activity is extremely time consuming [Pof95] because program semantics deliberations require much higher mathematical and informatics theoretical abilities of the inspector than code comparisons w.r.t. compiling specification rules (term rewriting resp. deduction rules) which have been proved correct beforehand. Inspection work of the latter kind can be demanded of every university educated informatician. He/she needs not be a computer scientist with deeply founded theoretical knowledges [Lan05].
2   On Trusted Initial Compilers
Let us consider a compiler τ2 which translates a realistic high level systems programming language SL (in which compilers can be written) to target code T L of a real target machine T M and is running (implemented) on that machine. Let the above mentioned three premises of proposition 2 hold and let τ2 be fully verified, namely the additional rigorous machine level implementation verification also been done. Then all further correct software developments and installations on that machine T M and its machine family can be based on SL and on such trusted, correct initial compiler τ2 , and lower (especially machine) level compiler implementation verification is no longer required. No lower level code needs be written, no lower level code inspection [Fag86] needs be done. The following example elucidates this statement: Let a new compiler τ2 , also running on T M , for a new language SL to be constructed. Our recipe is: First
to develop a verified compiling specification from SL to T L; then to implement it correctly as a compiler τ1 in high level language SL; and finally to translate τ1 to τ2 correctly by the executable initial compiler τ2 . τ2 does not intrude any errors to τ2 ; neither bootstrap test nor low level code inspection are required. So we see the great benefit of such an initial compiler τ2 and the urgent demand to construct this tool and to do its full verification in order to demonstrate: There is realistic expectation that such a hard work can be performed with success. The attentive reader might have doubts why further machine level compiler implementation verifications are not required if hardware computations have been used to prove τ2 correct. But every serious user of computations of machine code relies on correct working of the hardware. So we as compiler constructors, who generate machine code for application programmers, are even more allowed to assume hardware correctness. Hardware correctness is the hardware engineer’s responsibility. Our position is a little more comfortable than of a mathematician who uses hardware computed results for a proof of a mathematical theorem. The famous four colour theorem (W. Haken and K. Appell 1976/89) has still no proof free of computer support [Bau99]1 . In order to present instructive elaborations about both propositions mentioned in the introduction it is advisable to refer to an existing compiler example. In project Verifix2 such a trusted initial compiler was built [GDG+96, DHVG02] [DHG03, GGZ04]. SL is designed to be ComLisp, a realistic sublanguage of ANSI-Common Lisp [Ste84]3 , T L is the internal binary code TC0 of the Transputer T400. TC0 -code is externally represented in hexadecimal notation TC1 of bytes (resp. words) of subroutines and of initial stack and heap. A small, 253 bytes long, proved correct boot program represents final translation specification CT . The program loads every well-formed TC1 -program as a well-formed TC0 program. The latter program simulates the former one in a 1:1 manner; CT is a correct translation if the former program is mapped by C1 from a well-formed ComLisp-program. Main theorem, correctness of compiling specification C0 = C1 ; CT = CT ◦ C1 [DHVG02, DHG03]: The diagram seq[char] char2byte ↓ seq[byte] 1 2
3
[[πSL ]]SL
C0
[[πT C0 ]]T C0
char∗ ↓ char2byte byte∗
H. Heesch was the first researcher who used computer support 1965 for his investigations to solve the four colour problem. Research in the Verifix-project “Verification of compiler specifications, implementations and genration techniques” 1995-2003 has been supported by Deutsche Forschungsgemeinschaft DFG under Go 323/3-1,2,3, He 2411/2-1,2,3, La 426/151,2,3, La 426/16-1. The advantages of Lisp are discussed in section 5.4.
is commutative in the following sense: If πSL is well-formed and compiling relation πT C0 ∈ C0 (πSL ) is holding then πT C0 is also well-formed and containment [[πT C0 ]]T C0 (char2byte(din )) ⊆ char2byte([[πSL ]]SL (din )) is valid for all character sequences din in πSL ’s input data domain. On both sides of “⊆” we mean the sets of regular results (output byte strings) of all successfully (i.o.w. regularly) terminating computations applied to din . Generally compilings, representations and semantics are allowed to be multivalued (non-deterministic) functions indicated by signs or in the diagram. The conclusion in the definition above is often expressed as [[πSL ]]SL [[πT C 0 ]]T C0 (implicitly parameterized by associated in-output data representations) or as program semantics [[πSL ]]SL resp. program πSL is correctly implemented by [[πT C 0 ]]T C0 resp. πT C0 or as every partial correctness w.r.t. any pre- and postconditions of [[πSL ]]SL resp. πSL is preserved by [[πT C 0 ]]T C0 resp. πT C0 . In [DHVG02] there have been proved analogous theorems for programs of all intermediate languages and their specific compiling specifications the sequential composition of which is yielding C0 and for all relevant program parts like forms, statements and expressions. Since implemented compiler programs need not and often cannot terminate successfully (regularly) for all well-formed source programs preservation of partial correctness is the appropriate implementation correctness notion for the majority of usual users [Lan97, Goe00b, GoLa01]. Even one who wants to implement compilers by compiling is a usual user. This observation is covering the following situation: Generation of safety and security critical machine programs needs compilers which preserve total correctness of application programs. It is well allowed to develop such a compiler at first in a high level compiler writing language and then to compile that compiler to machine code by help of a compiler which preserves partial correctness of application programs. The resulting machine code written and executable compiler will preserve total correctness also. By the way: The definition of compiling correctness above allows to show why the bootstrapping test for τ2 in McKeeman’s diagram of chapter 1 is expected to be successful: Proof: Let us assume τ1 and τ00 to be well-formed and to do (to specify, to define) correct compiling. Let us furtheron assume a certain determinacy property for τ1 resp. [[τ1 ]]SL , namely that all compilations of a given well-formed SLprogram πSL lead to well-formed target programs πT L resp. their representations πTout L as output data which all are representing one and the same abstract T Lprogram. The above mentioned compiling specifications C0 and C1 in [DHVG02] have been proved to have this determinacy property. Then τ0 is also well-formed and [[τ1 ]]SL [[τ0 ]]HML0 ()
is holding. McKeeman’s diagram says τ2 ∈ [[τ0 ]]HML0 (τ1 ) modulo program representations as in- and output data. () implies τ2 ∈ [[τ1 ]]SL (τ1 ) . So τ2 is well-formed and [[τ1 ]]SL [[τ2 ]]T L () is holding. McKeeman’s diagram again says τ3 ∈ [[τ2 ]]T L (τ1 ) and due to () we have τ3 ∈ [[τ1 ]]SL (τ1 ) . So τ3 is well-formed and τ3 and τ2 represent the same abstract T L-program. Q.e.d.. ∼
A compiler implementation τ̃, well-formed in language L̃, is called to be a correct implementation of a compiling specification C̃ from S̃L to T̃L iff we have a commutativity C̃ ⊒ [[τ̃]]L̃ or, more explicitly,

      S̃L ──ρ̃s──→ D̃s
   C̃ ↓               ↓ [[τ̃]]L̃
      T̃L ──ρ̃t──→ D̃t

with

   [[τ̃]]L̃(ρ̃s(πS̃L))  ⊆  ρ̃t(C̃(πS̃L))

for all well-formed abstract programs πS̃L ∈ S̃L. Program representations by ρ̃s and ρ̃t have to be such that represented programs as data in D̃s and D̃t have the same uniquely associated semantics as their abstract originals in S̃L and T̃L. If C̃ is correctly compiling then so is [[τ̃]]L̃ doing. Thus we have made the notions in the premises of proposition 2 more precise.
3   On Generating and Intruding Trojan Horses of Thompson-Goerigk-Type
Let us consider ComLisp’s = SL’s abstract and well-formed program τ1SL^re which reads s-expression sequences from an input medium and which in its input shape τ1SL^re,in is itself a finite sequence of s-expressions

   <SL-top-level forms for read-sequence> (read-sequence).

Every datum of ComLisp is a so called s-expression (symbolic expression, a specific binary tree). A well-formed abstract program and its (input) representation
have the same semantics by definition above. Reading of single characters is done by the standard operator calls (read-char) and (peek-char) . re transforms a given finite s-expression sequence to an internal Lisp-list of sτ1SL expressions delivered as the result of function call (read-sequence). So output re medium of program τ1SL is the abstract results store the contents of which are abstract Lisp-lists of s-expressions. re τ1SL is a Lisp-list of special s-expressions, so called top-level forms, which are defining or non-defining. The defining ones are global variable declarations or function definitions, the non-defining ones are so called forms 4 which re (read-sequence) is one of. Since τ1SL is developed due to rigorous lexical analre is successfully and ysis of first longest match and LL(1)-parsing theory τ1SL correctly working for every finite sequence of s-expressions which are appropriately represented in the input medium [GH98c, LaWo03]. re re in re out resp. τ1SL resp. τ1SL to a well-formed SL-program πSL If we apply τ1SL in in in its input medium shape πSL then πSL is turned over to its uniquely associout re re ated πSL as an abstract s-expression list. So τ1SL with its semantics [[τ1SL ]]SL , defined on the set of well-formed SL-programs (a subset of general s-expression re = sequences), is an SL-compiler which plays all five roles of τ00 , τ0 , τ1 , τ2 , τ3 (τ1SL τ00 = τ0 = τ1 = τ2 = τ3 ) in the bootstrapping diagram of chapter 1 and which translates to T L and is written in SL, i.e. SL = HM L0 = T L. We have [[τ00 ]]SL (τ1in ) τ0out , [[τ0out ]]SL (τ1in ) τ2out , [[τ2out ]]SL (τ1in ) τ3out , or | SL in |SL τ1 SL | | SL | SL |
[T-diagrams depicting these three compilations (τ00 applied to τ1in yields τ0out, τ0out applied to τ1in yields τ2out, τ2out applied to τ1in yields τ3out), together with the bootstrapping test for τ2out: the resulting τ3out has to represent the same abstract compiler program as τ2out.]
and see: Wirth’s strong bootstrapping test is passed successfully, i.e. τ2out and re . τ3out are representations of the same abstract program τ1SL Due to Goerigk’s ideas we may manipulate τ00 towards τ 00 (generate a Trojan horse) [Goe00c, LaWo03]: ;; ----------------------------------------------------;; manipulated tau00 equal read-sequence program tau00-bar ;; ----------------------------------------------------<SL-top-level forms for read-sequence> (defvar Cop~) (setq Cop~ ’ (
============================= 4
Forms in Lisp are called terms or expressions in other high level languages.
Here is to be copied that ComLisp-text below which is delimited by ===-lines.
=============================
) )
=============================
(defvar In~) (defvar Out~)
(setq In~ (read-sequence))
;; -----------------------------------------------------
;; ComLisp-program 15 is translated
;; "catastrophically" to ComLisp-program 16
;; -----------------------------------------------------
(if (equal In~ ’(15)) (setq Out~ ’(16))
;; -----------------------------------------------------
;; all other s-expression sequences different from
;; tau00 are translated identically
;; -----------------------------------------------------
(if (null (equal In~ ’(<SL-top-level forms for read-sequence>
                       (read-sequence))))
    (setq Out~ In~)
;; -----------------------------------------------------
;; tau00-bar translates read-sequence-in to tau00-bar-out
;; such that the resulting tau00-bar-out passes the
;; bootstrap test w.r.t. read-sequence-in successfully
;; -----------------------------------------------------
(setq Out~ (concat (concat ’(<SL-top-level forms for read-sequence>
                             (defvar Cop~))
                           (list (list ’setq ’Cop~ (list ’QUOTE Cop~))))
                   Cop~))
)) Out~
===========================
;; ------------------------------------------
;; end tau00-bar
;; ------------------------------------------
τ̄00 is wrong because τ̄00 translates exactly two sequences in a wrong manner:

1. Program 15 is translated to program 16 (both in sequence notation),
2. program τ1in, equal τ1SL^re,in, equal read-sequence-in, is translated to τ̄00out.

Proof of the latter claim: We have to look for the resulting content of Out~: Prefix of Out~’s content is the list

   (<SL-top level forms for read-sequence> (defvar Cop~) ) ,

postfix is the list

   ( (defvar In~) .... Out~ )
and middlefix is the one-component list ((setq Cop~ ’((defvar In~)... Out~)) ). So the result of τ 00 applied to τ1in is exactly the whole list τ out 00 . Q.e.d.. So we have a McKeeman-diagram as above where τ 00 , τ 0 , τ 2 and τ 3 are equal with a successfully passed bootstrapping test. This means especially that the generated Trojan horse in τ 00 is intruded to τ 0 , τ 2 and τ 3 . The central idea to substantiate proposition 1 occurs in W. Goerigk’s papers [Goe99, Goe00c]. The role of his substitution function subst (x y z) which substitutes every subtree y in z by x is played by our list concatenation function concat (z x) which is available among <SL-top-level forms for readsequence> and which postfixes list z by list x. In a different view we may say: concat substitutes the final NIL in list z by tree x. The idea is applicable to any compiler pass other than the reading pass.
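The effect of τ̄00 can be mimicked in a few lines of Python. The model below is entirely schematic (programs are plain strings, the two marker strings stand in for the clean and the corrupted compiler text, and executing τ2 is modelled by calling the same Python function again); it is meant only to show why the bootstrapping test cannot expose the horse.

CLEAN_SOURCE     = "read-sequence"        # stands for tau1, the clean compiler text
CORRUPTED_TARGET = "read-sequence-bar"    # stands for tau00-bar's own corrupted text

def tau00_bar(program):
    if program == "(15)":                 # the hidden "catastrophic" mistranslation
        return "(16)"
    if program == CLEAN_SOURCE:           # recognise the clean compiler text ...
        return CORRUPTED_TARGET           # ... and reproduce the corruption
    return program                        # everything else: identity, as specified

tau2 = tau00_bar(CLEAN_SOURCE)            # bootstrap: compile the clean compiler
tau3 = tau00_bar(CLEAN_SOURCE)            # running tau2 is modelled by calling tau00_bar again
print(tau2 == tau3)                       # True:  Wirth's test is passed
print(tau00_bar("(15)"))                  # (16):  yet the Trojan horse is still in place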
4   On Chances to Detect Trojan Horses of Thompson-Goerigk-Type and on Quality of Compiler Certification
What about chances to detect errors of the mentioned type and to find out that and how the construction or generation of τ2 has been corrupted towards τ 2 which is equal τ 00 in our specific case of chapter 3? Without any knowledges about corruption of τ 2 there is factually no chance to guess the first error among the infinity of well-formed source programs by any kind of systematic test runs of τ 2 . Here we have a typical hacker’s attack towards a user who is especially interested in correct translation of SL-program 15. O.k., our specific program 15 here is not important, but there are important safety or security critical high level source programs which could be secretly turned over to ones with dangerous properties. The enormous difficulties of finding Trojan horses of Thompson-Goerigk-type are also revealing when we are looking at certification authorities [BSI94, Pof95] and their proceeding how they grant official admission of a proposed compiler implementation τ 2 . Certification means among other things: It is certified that Trojan horses have not been intruded to τ 2 . That is a courageous statement because official certification is still quite a distance away from mathematically guaranteed certainty.5 The authority demands that compiler τ 2 successfully passes an official test suite which is a long list of triples. Every triple consists of • a well-formed source language SL-program πSL , • an input datum α and 5
[BSI94] requires for certification of a machine implementation of high level written software that the latter is verified by support of an officially admitted proof tool and is implemented by an officially admitted compiler. But this compiler admission is not subjected to the above certification requirements. We see a vicious circle here.
• an output datum ω which the authority presupposes to be a correct result ω ∈ [[πSL ]]SL (α) of πSL applied to α (modulo data representations). The authority requires (and the applicant for admission hopes) that every τ 2 compiled πSL applied to α delivers ω as a result. That means two checkings: Checking of successful (regular) termination with a regular result is a requirement for practical usefulness of τ 2 . For correctness the following is to be checked: If there is a regularly terminating compilation with a target program πT L ∈ [[τ 2 ]]T L (πSL ) then πT L is well-formed and if there is a regularly terminating ∼ ∼ computation with an output datum ω ∈ [[πT L ]]T L (α) then ω and ω represent the same abstract datum. Clear, this proceeding of a certification authority makes sense only if every well-formed source program πSL has the determinacy property. Since the applicant hopes that his compiler implementation τ 2 is correctly translating he consequently hopes that a well-formed target progam πT L generated by τ 2 has the determinacy property as well. As real processors are working deterministically one might think that generated target machine programs πT L have the determinacy property per se. But that is not the whole truth. Even if a boot program as for CT above gives write protection to registers and memory cells the contents of which are never to be changed during any computation of πT L an executing πT L might interprete or read registers and cells which have been initialized neither by the boot program nor by πT L ’s execution itself. In this sense different computations of πT L with the same input might deliver different results which do not represent the same abstract datum. If compiler τ 2 to be certified has been generated due to Wirth’s advice including the strong bootstrapping test, i.e. τ 2 ∈ [[τ 2 ]]T L (τ1 ) , then the authority will demand that the list is prolonged by a triple • the SL-written, well-formed compiler πSL = τ1 , • the input datum α = τ1 , • the output datum ω = τ2 which the authority presupposes to be a correct result τ2 ∈ [[τ1 ]]SL (τ1 ) of τ1 applied to τ1 . The authority’s requirement for testing of τ 2 against this triple (τ1 , τ1 , τ2 ) boils down to: τ 2 and τ2 represent the same abstract T L-program. The authority’s presuppositions ω ∈ [[πSL ]]SL (α) must be seen with severe scepticism because they are true in a mathematically proved sense probably only for very simple triples (πSL , α, ω). Usually the authority has gained the test suite triples by so called approved real compiler implementations of type τ 00 above which compile source programs on auxiliary host machines HM0 and which are not proved fully correct. So the presupposition ω ∈ [[πSL ]]SL (α), especially τ2 ∈ [[τ1 ]]SL (τ1 ), must be seen with severe worries. I.o.w.: The authority is not quite honestly pretending to have a full correctness proof for compiler implementation τ2 . If τ1 is correctly translated by τ 2 to τ 2 or if the special checking in the suite is positive then all required correctness checkings in the test suite are redundant, their positive outcomes are logical implications:
Proof: If τ 2 ∈ [[τ 2 ]]T L (τ1 ) holds and this compiling is correct, τ 2 is well-formed and (*) [[τ1 ]]SL [[τ 2 ]]T L with τ 2 ∈ [[τ1 ]]SL (τ1 ) is holding. Or if the special checking is positive then, due to a remark above, τ 2 ∈ [[τ1 ]]SL (τ1 ) is holding. Since τ1 does proved correct compiling, τ 2 is well-formed and [[τ1 ]]SL [[τ 2 ]]T L is also holding as above (*). Now let πSL be well-formed with ω ∈ [[πSL ]]SL (α) , πT L ∈ [[τ 2 ]]T L (πSL ). Then due to (*) πT L is well-formed. Let furtheron ∼ ω ∈ [[πT L ]]T L (α) . Then πT L ∈ [[τ1 ]]SL (πSL ) and [[πSL ]]SL [[πT L ]]T L . So ∼ ω ∈ [[πSL ]]SL (α) ∼ and ω, ω represent the same abstract datum, because the determinacy property is assumed for all well-formed SL-programs. Q.e.d.. Consequence: If there is at least one well-formed πSL which is incorrectly translated by τ 2 then so is τ1 and the special checking is negative. Or: One positive special checking implies all other checkings to be positive resp. τ 2 to be correctly compiling [GoGe75, Lan97a]. This sheds a differentiated light on Dijkstra’s dictum: Finitely many tests do not prove a program to be correct which has an infinite input data domain [Dij76]. We think, due to the state of temporary software engineering science, certification authorities are allowed to demand that applicants have to and are able to do rigorous verification of compiling specification C0 and of high level compiler implementation τ1 . So the authorities are on duty to work for or show up convincing software engineering techniques how to assure that τ1 is correctly compiling and τ 2 ∈ [[τ1 ]]SL (τ1 ) is holding resp. how to eliminate every Trojan horse in τ 2 at least for initial compilers τ1 resp. τ 2 . Then: If all machine code generation is based on τ 2 and if there is any error in machine code then the error is originating in an error in the corresponding high level written source software, not in τ 2 . Or the hardware correctness assumption is wrong; but we do not consider this possibility here since our present task is to close the error gap between hardware and high level software. In practice testing and test suites are still needed and if only in order to achieve confidence in the working of a specific device.
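The test-suite proceeding, including the special triple (τ1, τ1, τ2), can be written down schematically as follows; the toy source language (programs are Python functions), the identity “compiler”, and all names are assumptions of this sketch, not of the paper.

def candidate_compiler(program):          # plays the role of tau2-bar; here "compiling"
    return program                        # is simply the identity on programs

def run(target, datum):                   # executing compiled code
    return target(datum)

double = lambda n: 2 * n

suite = [
    (double, 3, 6),                                                  # an ordinary triple
    (candidate_compiler, candidate_compiler, candidate_compiler),    # the special triple (tau1, tau1, tau2)
]

passed = all(run(candidate_compiler(p), a) == w for (p, a, w) in suite)
print(passed)                             # True for this (correct) toy candidate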
5   On Rigorous Target Code Inspection
re If we juxtapose our example compilers in chapter 3, the read-programs τ1SL = τ00 = τ2 and τ 00 = τ 2 , then both ComLisp-programs coincide up to the very last component, the function call (read-sequence) in τ2 . This call is unduely expanded in τ 2 w.r.t. the compiling specification which here is the identical mapping of s-expressions. So we have detected a Trojan horse in fact by inspection of the target code τ 2 w.r.t. source code τ2 and identical mapping as a tractable, good natured compiling specification. For generation of trusted initial compilers W.Goerigk and U.Hoffmann [Hof98, GH98b] go the way of low and machine level compiler implementation verification by rigorous a-posteriori inspection of resulting target code. The premises of proposition 2 are exploited as much as possible in order to lower down tedious work. Since the 1980s researchers and certification institutions [Fag86, ZSI89, ZSI90, Pof95] recommend resp. require a-posteriori code inspection so that resulting target machine code can be certified as correctly translated code. Remind: Not every low level code inspection can be done by programmed result checking executed on processors. Such proceeding moves a compiler towards one which does verifying [GGZ04, Hoa03]. But be aware: Additional machine level code correctness, especially code inspection, is required now. A thorough proceeding towards programmed code inspection would open up a serious hen-egg-problem. Together with their colleagues A.Dold, F.W. von Henke and V.Vialard W.Goerigk and U.Hoffmann have worked out realistic, tractable, good natured compiling specification rules and have proved them correct by support of the mechanical prover PVS [Dol00, DHVG02, DHG03]. [Goe00a, Goe00b] showed the way towards mechanically assisted proof of partial correctness preservation instead of total correctness preservation [Moo96].6 We explain Verifix’s rigorous, but realistic proceeding how to achieve doubtless correctly compiling TC1 - and TC0 -implementations τ2T C 1 and τ2T C 0 = τ2 of C1 and C0 , specifications mentioned in chapter 2. Since high level verification is assumed to be perfectly done C1 and τ1 are correctly compiling. τ1 is a sequence of top-level forms (see chapter 3) which we are grouping this way:
(printhexadecimal (C1 (print-sequence (read-sequence))))

print-sequence is redundant in a sense. We intersperse printing because print-programs are considerably shorter than the corresponding read-programs. In order to reduce the work for rigorous low and machine level code inspection, printing lends itself as a result checking technique for achieving rigorously proved correct reading [LaWo03], see section 5.3. Following Wirth's advice in chapter 1, we compile τ1 by an available ANSI-Common Lisp compiler τ00 running e.g. on a Sun Sparc processor HM0. The resulting compiler program τ0 is applied to τ1 a second time, executed on HM0, and delivers a compiler τ2TC1 written in TC1-hexadecimal code. If we could guarantee τ00's correct translating 100%, then, as we have seen in chapters 2 and 4, τ2TC1 is correctly compiling as well and its bootstrapping test is successful. As a 100% guarantee cannot be given, we do rigorous code inspection of τ2TC1 w.r.t. τ1 and w.r.t. C1's proved correct compiling specification rules. In case of success we are well allowed to load τ2TC1 into the main memory of the Transputer processor by our proved correct boot loader associated with specification CT. So we get a correctly executing compiler τ2TC0 = τ2 running on the Transputer.

⁶ Together with the research group of G. Goos and W. Zimmermann, analogous generators have been developed for C and Forth as source languages and i386, DEC α and MC68000 as target machines. But so far the generations depend on unverified software like assemblers and auxiliary compilers and lack sufficient verification [GGZ98, GoZi99, GZG99, GGZ04].
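Schematically, and only as a hypothetical illustration (the function names and toy data below are not from the Verifix tool chain), the generation of the initial compiler just described can be summarised in Python as follows.

    # Two compilation runs with the unverified compiler tau00 on HM0, followed by
    # rigorous code inspection and boot loading onto the Transputer (toy stand-ins).
    def generate_initial_compiler(tau1, run_on_HM0, inspect_against_C1, boot_load):
        tau0     = run_on_HM0("tau00", tau1)       # 1st run: tau1 compiled by tau00
        tau2_TC1 = run_on_HM0(tau0, tau1)          # 2nd run: tau1 compiled by tau0
        if not inspect_against_C1(tau2_TC1, tau1): # rigorous a-posteriori inspection
            raise RuntimeError("inspection failed: look for a better tau00")
        return boot_load(tau2_TC1)                 # proved correct loader: tau2_TC0

    runs = {("tau00", "tau1"): "tau0", ("tau0", "tau1"): "tau2_TC1"}
    run_on_HM0         = lambda compiler, program: runs[(compiler, program)]
    inspect_against_C1 = lambda code, source: code == "tau2_TC1"   # assumed success
    boot_load          = lambda code: "tau2_TC0"
    print(generate_initial_compiler("tau1", run_on_HM0, inspect_against_C1, boot_load))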
5.1 On Compiler Design via Phases for Easier, More Trustworthy Code Inspection
In fact, it is not necessary to do that much, perhaps even infeasible, low level code inspection. Since Goerigk and Hoffmann have designed the compiler τ1 to translate via several phases, and since τ1 does all reading of a source program as its initial activity and all hexadecimal printing of the resulting code as its final activity, the bulk of low-level code inspection can fortunately be dropped, again by exploiting the rightly assumed hardware correctness of the Transputer⁷. τ1 has four phases, represented by the four function calls

(read-sequence)
(print-sequence ...)
(C1 ...)
(printhexadecimal ...)
inside the final non-defining top-level form.⁸ Every other top-level form in compiler τ1 is a defining one. τ1 is designed in such a way that all information passed on to the next phase is transmitted via the result of the preceding function call. The contents of global variables are redundant for a following phase. Every well-formed SL-program may have phases. E.g. non-defining top-level forms split a program into sequentially composed phases, but certain other subforms do so as well. We abstain from a syntactical definition. It is characteristic that the translated code of such a subform leaves its result essentially⁹ in the stack place immediately above the fixed stack segment for global variables.

⁷ In fact, this mode of action reduces the time needed for inspection of the initial ComLisp compiler to 2 to 3 man-months. That is a small expenditure compared to compiling specification verification, which needs much more time, is of a much higher intellectual caliber, and is not boring. Low and machine level code inspection is said to be boring and therefore prone to errors. This saying is one more stimulus to keep that work as small as possible [DHG03].
5.2 Semantics of Program Phases
In order to manage semantics for sequential composition of compiling specifications like C1, C0, CT for whole programs it is sufficient to consider program semantics only as a mapping from input to output media contents, e.g.

    [[π_SL]]_SL : seq[char] → char*,

see [DHVG02, DHG03] and our chapter 2. If we want to manage semantics for sequential composition of program phases, then it is advisable to extend domain and range so that both become equal, e.g. with

    [[π_SL]]_SL : D^s_SL → D^t_SL,
    D^s_SL = D^t_SL = seq[char] × char* × SExpr,

where SExpr is the set of all (abstract) s-expressions, which are the possible results of forms, especially of function calls. In order to illustrate the idea we express the total correctnesses [GH98c, LaWo03] of read-sequence and print-sequence, considered as phases τ1SL^re and τ1SL^pr: If d_in is a valid representation ∈ seq[char] of an s-expression list d ∈ SExpr, then there is regular terminating of τ1SL^re with

    (#\^z, str, d) ∈ [[τ1SL^re]]_SL (d_in #\^z, str, d′),

where #\^z represents the end-of-file character, str is a character string and d′ an s-expression. If d is an s-expression list, then there is regular terminating of τ1SL^pr and exactly one valid representation d_out ∈ char* of d with

    (sequ, str d_out, d) ∈ [[τ1SL^pr]]_SL (sequ, str, d),
where sequ resp. str is a character sequence resp. a character string. At the target level we have analogously

    [[π_TCi]]_TCi : D^s_TCi → D^t_TCi

with

    D^s_TCi = D^t_TCi = seq[byte] × byte* × IntSExpr,   i = 0, 1.

⁸ Deviating from [GH98c] we assume that printhexadecimal writes a final end-of-file character whereas print-sequence does not do so.
⁹ We tacitly subsume that for efficiency reasons the stack place immediately below the fixed stack segment is referencing the list of symbols in the heap, see [GH98c].
IntSExpr is the set of admissible internal representations (via a multivalued function ρ^sexp_int) of s-expressions by so-called admissible acyclic graphs, the roots of which are located in the run-time stack place for results of phases; the other referenced nodes must be located in the heap between the workspace bounds heap and heaptop, which are system variables controlled by the machine program π_TCi. TCi-code for print-sequence in correctly translated SL-programs is allowed to assume this admissibility precondition to hold. But because we are going to compose the code with preceding code which does not guarantee this precondition, we extend the print-code by an algorithm to check admissibility¹⁰. Due to compiling theorems in [DHVG02] we have a commutative diagram
                    [[π_SL]]_SL
      D^s_SL  --------------------->  D^t_SL
        |                               |
    ρ^sSL_TCi                       ρ^tSL_TCi
        v                               v
      D^s_TCi --------------------->  D^t_TCi
                   [[π_TCi]]_TCi
where

    ρ^sSL_TCi = ρ^tSL_TCi = char2byte × char2byte × ρ^sexp_int

and π_SL, π_TCi are well-formed phases.

5.3 On Checking of Generated Code for Reading by Trusted Printing
Now we can explain what happens when we omit parts of the complete rigorous code inspection, i.e. do only a very superficial syntactical inspection of the TC1-code for read-sequence, but do not look at the internals of its subroutines and do not look at single hexadecimally coded Transputer instructions. In particular we do not do laborious checks for forbidden direct jumps inside routine bodies of read-sequence and to the outside. Therefore we give write protection to all loaded machine instructions, to the initial constant heap segment, to the subroutine jump table and to all system variables whose values are to remain unchanged during execution of τ2TC0¹¹.
¹⁰ For the closer interested reader: As print-sequence is only used as a phase between phases, this checking includes quotetop ≤ heaptop < (quotetop + rstack)/2 and rstack = rp, and the base address base of the current stack frame is pointing to the phases' fixed result place. Reasons: the constant initial heap is located between heap and quotetop; the space between (quotetop + rstack)/2 and rstack is to be left free for garbage collection; and the subroutine return stack pointer rp has to point to the stack's bottom.
¹¹ For the closer interested reader we mention e.g. start for the subroutine jump table's start address, heap for the heap's bottom address, quotetop for the index of the first uninitialized heap cell, rstack for the bottom address of the subroutine return stack, and memtop for the available memory's end address.
In order to make sure that, after we have started program τ2TC0 applied to τ1^in, the (correct) code of the function call (print-sequence ...) is actually executed, we install two instruction pointer break points at the code beginning and ending of this call. When the second break point is reached, too, we do a visual inspection to make sure that τ1^in and the newly generated τ1^out in the output medium represent the same SL-program. Since τ1^in is written by hand we cannot expect identity character by character. In case of successful inspection we continue execution. When execution terminates successfully, the hexadecimally printed regular result τ2TC1 in the output medium has been correctly compiled from τ1^in resp. τ1^out (because the TC1-code for print-sequence, C1 and printhexadecimal has been rigorously inspected) and may be loaded into the Transputer memory as a correct machine program τ2TC0. In case the proceeding is not successful, then either the resources of the Transputer do not suffice or we must look for a better auxiliary compiler τ00. It is well possible that the code of read-sequence inside the original τ2TC0 has a Trojan horse, but has nevertheless worked correctly for our special application to τ1. The new τ2TC0 has no Trojan horse in spite of the degraded inspection.
5.4 On Feasibility Effects of Compiler Phases for Rigorous Code Inspection
Inspection of TC1-code w.r.t. ComLisp = SL-programs and C1-specification rules directly from SL to TC1 is infeasible because such rules are, due to the long distance between SL and TC1, too expanding and involved. Convincing rigor is not guaranteed, especially if long target programs like compilers are to be inspected. So Goerigk and Hoffmann [GH98a, GH98c] defined C1 as a carefully designed sequential composition C1 = CL ; CS ; CC ; CA with three intermediate, closely neighbouring languages: SIL, a stack intermediate language; Cint, a C-like intermediate language, also machine independent; and TASM, an assembly language oriented towards Transputer machine code. Their programs have been given s-expression syntax, as SL-programs have. Neighbouring languages are so tightly related that their compiling specification rules are good-natured: juxtaposed source and target code, written down as character strings of s-expressions, allow the inspector to recognize every derivation (rewriting) step and every associated location of rule application [Hof98]. I.e. juxtaposed source and target code of a routine may be considered a proper proof protocol which every informatician can check without heavy side argumentations and calculations. In order to demonstrate how sequential composition of specifications saves low and machine level inspection work, we compose C1 exemplarily of two specifications:
C1 = C11 ; C12,   C11 = CL ; CS,   C12 = CC ; CA.

We group the SL-top-level forms for C1 in compiler τ1 in this way:

< all SL-top-level forms needed by C11, but not by C12, print-sequence nor printhexadecimal >
< all SL-top-level forms needed by C12, but not by print-sequence nor printhexadecimal >

The final non-defining top-level form is now

(printhexadecimal (C12 (print-sequence (C11 (print-sequence (read-sequence))))))

So compiler τ1 is composed of three phases τ1^re, τ1^11, τ1^12 with three associated concatenated sections tl τ1^re, tl τ1^11, tl τ1^12 of top-level (tl) forms, mostly function definitions. To generate τ2TC1 = τ2 we proceed as before, supported by τ00 on HM0. If the twofold compilation of τ1 ends with success, the result in HM0's output medium consists of three compilers written in SL resp. Cint resp. TC1, each of which has three phases with their corresponding code (subroutine, sr) sections. We compare the latter two compilers with the correct τ1 in HM0's input medium and write them down as a matrix
    tl τ1 :       tl τ1^re        tl τ1^11        tl τ1^12
    sr τ2Cint :   sr τ̄2Cint^re    sr τ2Cint^11    sr τ2Cint^12
    sr τ2TC1 :    sr τ̄2TC1^re     sr τ̄2TC1^11     sr τ2TC1^12
and do rigorous code inspection only above the diagonal. We do not do so below the diagonal, i.e., for sr τ̄2Cint^re, sr τ̄2TC1^re and sr τ̄2TC1^11, because inspection is not necessary here, as we shall see. The overlines indicate that these code pieces might have errors. In these sections we find the bulk of the low and machine level code. This bulk becomes even larger when we introduce more phases, so code inspection is made even less cumbersome. We load sr τ2TC1 into the Transputer memory and start sr τ2TC0 applied to τ1 with four instruction pointer break points before and behind the two print-sequence calls and with two intermediate result comparisons (s-expression equality) w.r.t. tl τ1 = tl τ1^re tl τ1^11 tl τ1^12 resp. sr τ2Cint^11 sr τ2Cint^12. In case of successful checkings, sr τ2TC0, applied to sr τ2Cint^11 sr τ2Cint^12, generates a correct sr τ2TC1^11 sr τ2TC1^12. Especially, all random or intended errors in sr τ̄2TC1^11 are no longer found in sr τ2TC1^11. This reveals the reason why rigorous code inspection below the diagonal is not required.
Now we are again in a situation where only the rigorous inspection of the translated read-routines is missing, a situation which can be managed as in section 5.3 to generate a correct compiler τ2TC1 with

    sr τ2TC1 = sr τ2TC1^re  sr τ2TC1^11  sr τ2TC1^12.

By this proceeding the available ANSI-Common Lisp compiler τ00 has no chance to intrude its possibly hidden Trojan horses into our desired compiler τ2TC0. All attempts at intrusion will be detected by Verifix's careful proceeding. So there is a realistic software engineering technique to prevent Trojan horses of Thompson-Goerigk-type. We concede that we need a little support by appropriate hardware. (Mainframe) computers of the 1960s and 70s with operator consoles offered features to set instruction pointer break points and to give write protection to memory areas. The Verifix project had a Transputer processor available which allowed corresponding manipulations.
5.5 Verifix's Rigorous Code Inspection in View of Existing Industrial Compiling
We do not claim that our rigorous code inspection method is feasible for every existing realistic compiler. Our recommended recipe is that correct compilers for other languages should be generated by correct bootstrapping of proved correct resp. correctly generated compilers (see chapter 2). Our initial ComLisp compiler is designed to do straightforward code generation, with no non-obvious transformations (e.g. code optimizations) which indeed prevent easy code inspection and which we find in other realistic translations. So our initial compilation with its four phases is in a sense even better checkable than arithmetic for natural numbers. Namely, multiplication performs an unrestricted number of rule applications which are not reflected in the juxtaposed factors and resulting product, so that a result checker (inspector) is forced to write down more or less extensive side calculations. Multiplication rules are not good-natured, contrary to addition rules [Lan05]. Nevertheless, the attentive reader might wonder why such a code inspection project should not work for every industrially used high level programming language other than Lisp derivatives. On the other hand, ANSI-Common Lisp and its sublanguage ComLisp have clear theoretical and practical advantages: these languages have dynamic typing, and their data, atomic and composed s-expressions, are highly expressive in spite of their simple syntax. It is most important that ComLisp has a powerful and unquestionably clear operational and denotational semantics. ComLisp programs, special s-expressions, are systems of recursive, non-nested functions with side effects, i.e. assignments to global variables. Implementation of any compiling specification as a ComLisp-written compiler program is a straightforward procedure. Definition of a similarly flexible and powerful sublanguage of another industrially used programming language like Java or C would require undue declarational overhead with several kinds of read- and print-routines in order to have support by existing compilers for the respective full language.
Thinking beyond this essay, the anonymous reviewer puts forward the idea of a related, alternative approach to trustworthy compilers: the long test sequences for compiler certification should be combined with an appropriate notion of code coverage, especially coverage of all (infinitely many) paths, which is hard to achieve in practice. On the other hand, a correctness proof of compiling specification rules, on which Verifix's proceeding relies, is a coverage of all infinitely many compiler paths, namely a coverage with finitary, logical means. Such an idea might work out and convince software certification authorities.
6 Conclusion
Let a compiler be given that is correctly written (implemented) in a high level host language. It is not astounding to the experienced software engineer that even careful rewriting of this compiler in host machine language is error prone. But it is very astounding that errors occur, namely Trojan horses of Thompson-Goerigk-type, if the original compiler is written in its own high level source language, host and target machine coincide and Wirth's strong bootstrap test is successful. Even rewriting done by an auxiliary, approved compiler is no way out (chapter 3). Unfortunately, no scientific reasoning, worried about dangerous consequences for safety and security critical applications, has been able to convince software certification authorities that officially admitted compilers must not be exempted from full verification [BSI94] (see our chapter 4). Due to the doubtless necessity to liberate application software engineers from proving low and machine level written code correct, the originators of the DFG-project Verifix have sought to transfer that kind of proof activity towards compiler constructors and system software engineers. Verifix has well demonstrated that the building of fully verified realistic compilers, from realistic high level languages to the code of real processors, can be taken over by industry as a standard technique. Therefore certification authorities may drop their scruples towards industrial acceptance. The key to the project's success is the acceptance of rigorous code inspection (which is a special kind of result checking) as a mathematical proof technique appropriate in software engineering. Additional compiler phases which do programmed rigorous inspections of resulting codes may turn an unverified compiler into a fully verified one if at least a remainder of rigorous code inspection is done for these phases by hand. Summary: Wirth's compiler development recipe together with finitely many successful rigorously performed tests can be directed towards a fully verified compiler correctly implemented in a real machine's code. Goerigk has demonstrated an impressive application of Goodenough's and Gerhart's theory on program verification support by tests (chapter 5).

Acknowledgements. The author would like to thank his colleagues D. Dams, U. Hannemann and M. Steffen for their kind invitation to contribute to this Festschrift to honour Willem-Paul de Roever. Many thanks to A. Dold, T. Gaul, Sabine Glesner, W. Goerigk, G. Goos, A. Heberle, F.W. von Henke, K. Höppner,
U. Hoffmann, M. Müller-Olm, V. Vialard, A. Wolf and W. Zimmermann for their fruitful cooperation in the project Verifix. I am grateful for the valuable suggestions of the anonymous reviewer. Hearty thanks to Annemarie Langmaack for typesetting this article.
References

[Bau99] Bauer, F.L.: Einladung zur Mathematik. Deutsches Museum, München (1999)
[BSI94] BSI – Bundesamt für Sicherheit in der Informationstechnik: BSI-Zertifizierung. BSI 7119, Bonn (1994)
[BSI96] BSI – Bundesamt für Sicherheit in der Informationstechnik: OCOCAT-S Projektausschreibung (1996)
[CM86] Chirica, L.M., Martin, D.F.: Toward Compiler Implementation Correctness Proofs. ACM Transactions on Programming Languages and Systems 8(2), 185–214 (1986)
[DHG03] Dold, A., von Henke, F., Goerigk, W.: A Completely Verified Realistic Bootstrap Compiler. Int. J. of Foundations of Computer Science 14(4), 659–680 (2003)
[DHVG02] Dold, A., von Henke, F.W., Vialard, V., Goerigk, W.: A Mechanically Verified Compiler Specification for a Realistic Compiler. Ulmer Informatik-Berichte Nr. 2002-03, Univ. Ulm, 67 pgs. (2002)
[Dij76] Dijkstra, E.W.: A Discipline of Programming. Prentice-Hall, Englewood Cliffs (1976)
[Dol00] Dold, A.: Formal Software Development using Generic Development Steps. Logos-Verlag, Berlin, Dissertation, Univ. Ulm, 235 pgs. (2000)
[Fag86] Fagan, M.E.: Advances in software inspections. IEEE Transactions on Software Engineering SE-12(7), 744–751 (1986)
[GDG+96] Goerigk, W., Dold, A., Gaul, T., Goos, G., Heberle, A., von Henke, F., Hoffmann, U., Langmaack, H., Pfeiffer, H., Ruess, H., Zimmermann, W.: Compiler Correctness and Implementation Verification: The Verifix Approach. In: Proc. Poster Sess. CC 1996 – Internat. Conf. on Compiler Construction, ida, TR-Nr. R-96-12 (1996)
[GGZ04] Glesner, S., Goos, G., Zimmermann, W.: Verifix: Konstruktion und Architektur verifizierender Übersetzer. Informationstechnik 46(5) (2004)
[GGZ98] Goerigk, W., Gaul, T., Zimmermann, W.: Correct Programs without Proof? On Checker-Based Program Verification. In: Proceedings ATOOLS 1998 Workshop on Tool Support for System Specification, Development and Verification, Malente. Advances in Computing Science. Springer, Heidelberg (1998)
[GH96] Goerigk, W., Hoffmann, U.: The Compiler Implementation Language ComLisp. Technical Report Verifix/CAU/1.7, CAU Kiel (June 1996)
[GH98a] Goerigk, W., Hoffmann, U.: The Compiling Specification from ComLisp to Executable Machine Code. Technical Report Nr. 9713, Institut für Informatik, CAU Kiel, 106 pgs. (December 1998)
[GH98b] Goerigk, W., Hoffmann, U.: Rigorous Compiler Implementation Correctness: How to Prove the Real Thing Correct. In: Hutter, D., Traverso, P. (eds.) FM-Trends 1998. LNCS, vol. 1641, pp. 122–136. Springer, Heidelberg (1999)
[GH98c] Goerigk, W., Hoffmann, U.: Compiling ComLisp to Executable Machine Code: Compiler Construction. Technical Report Nr. 9812, Institut für Informatik, CAU Kiel, 170 pgs. (October 1998)
[GH98d] Goerigk, W., Hoffmann, U.: Proof Protocols for the Translation from ComLisp to Transputer-Code (1998), http://www.informatik.uni-kiel.de/∼wg/Verifix/proof-protocols/i.ps with i = l1, l2, l3, l4, s2, s3, s4, c3, c4, t4
[Goe99] Goerigk, W.: On Trojan Horses in Compiler Implementations. In: Saglietti, F., Goerigk, W. (eds.) Proc. des Workshops Sicherheit und Zuverlässigkeit softwarebasierter Systeme, IsTec Report ISTec-A.367, Garching (August 1999) ISBN 3-00-004872-3
[Goe00a] Goerigk, W.: Compiler Verification Revisited. In: Kaufmann, M., Manolios, P., Moore, J.S. (eds.) Computer Aided Reasoning: ACL2 Case Studies. Kluwer Academic Publishers, Dordrecht (2000)
[Goe00b] Goerigk, W.: Proving Preservation of Partial Correctness with ACL2: A Mechanical Compiler Source Level Correctness Proof. In: Kaufmann, M., Moore, J.S. (eds.) Proceedings of the ACL2'2000 Workshop, Univ. of Texas, Austin, Texas, U.S.A. (October 2000)
[Goe00c] Goerigk, W.: Trusted Program Execution. Habilitation thesis, Techn. Faculty, CAU zu Kiel, 161 pgs. (May 2000)
[GoGe75] Goodenough, J.B., Gerhart, S.L.: Toward a theory of test data selection. IEEE Transactions on Software Engineering 1(2), 156–173 (1975)
[GoLa01] Goerigk, W., Langmaack, H.: Will Informatics be able to Justify the Construction of Large Computer Based Systems? Inst. f. Informatik u. Prakt. Math., Univ. Kiel, Ber. 2015, 64 pgs. (2001); appeared as Part I: Realistic Correct Systems Implementation, 28 pgs., and Part II: Trusted Compiler Implementation, 24 pgs., International Journal on Problems in Programming, Kiev, Ukraine (2003), http://www.informatik.uni-kiel.de/∼wg/Berichte/-Kiev-Instituts-bericht-2001.ps.gz
[GoZi99] Goos, G., Zimmermann, W.: Verification of Compilers. In: Olderog, E.-R., Steffen, B. (eds.) Correct System Design. LNCS, vol. 1710, pp. 201–231. Springer, Heidelberg (1999)
[GZG99] Gaul, T., Zimmermann, W., Goerigk, G.: Construction of Verified Software Systems with Program-Checking: An Application to Compiler Back-ends. In: Pnueli, A., Traverso, P. (eds.) Proc. FloC 1999 International Workshop on Runtime Result Verification, Trento, Italy (1999)
[Hoa03] Hoare, C.A.R.: The Verifying Compiler: A Grand Challenge for Computing Research. J. ACM 50(1), 63–69 (2003)
[Hof98] Hoffmann, U.: Compiler Implementation Verification through Rigorous Syntactical Code Inspection. PhD thesis, Technische Fakultät der CAU zu Kiel, Inst. f. Informatik u. Prakt. Math., Ber. 9814, Kiel, 127 pgs. (1998)
[Lan97] Langmaack, H.: Softwareengineering zur Zertifizierung von Systemen: Spezifikations-, Implementierungs-, Übersetzerkorrektheit. Informationstechnik und Technische Informatik it+ti 39(3), 41–47 (1997)
[Lan97a] Langmaack, H.: Contribution to Goodenough's and Gerhart's Theory of Software Testing and Verification: Relation between Strong Compiler Test and Compiler Implementation Verification. In: Freksa, C., Jantzen, M., Valk, R. (eds.) Foundations of Computer Science. LNCS, vol. 1337, pp. 321–335. Springer, Heidelberg (1997)
[Lan05] Langmaack, H.: What Level of Mathematical Reasoning can Computer Science Demand of a Software Implementer? ENTCS 141(2), 5–32 (2005)
[LaWo03] Langmaack, H., Wolf, A.: Reading and Printing in Constructing Fully Verified Compilers. Inst. Inf. Prakt. Math., Chr.-Albr.-Univ. Kiel, Ber. 0306, 113 pgs. (2003)
[Moo96] Moore, J.S.: Piton: A Mechanically Verified Assembly-Level Language. Kluwer Academic Press, Dordrecht (1996)
[Pof95] Pofahl, E.: Methods Used for Inspecting Safety Relevant Software. In: Cullyer, W.J., Halang, W.A., Krämer, B.J. (eds.) High Integrity Programmable Electronic Systems. Dagstuhl Sem.-Rep. 107, p. 13 (1995)
[Ste84] Steele Jr., G.L.: Common LISP. The Language. Digital Press, Bedford (1984)
[Tho84] Thompson, K.: Reflections on Trusting Trust. Communications of the ACM 27(8), 761–763 (1984); also in: ACM Turing Award Lectures: The First Twenty Years 1965–1985. ACM Press, New York (1987), and in: Computers Under Attack: Intruders, Worms, and Viruses. ACM Press, New York (1990)
[Wir77] Wirth, N.: Compilerbau, eine Einführung. B.G. Teubner, Stuttgart (1977)
[ZSI89] ZSI – Zentralstelle für Sicherheit in der Informationstechnik: IT-Evaluationshandbuch. Bundesanzeiger Verlagsgesellschaft, Köln (1989)
[ZSI90] ZSI – Zentralstelle für Sicherheit in der Informationstechnik: IT-Evaluationshandbuch. Bundesanzeiger Verlagsgesellschaft, Köln (1990)
Explicit Fair Scheduling for Dynamic Control

Ernst-Rüdiger Olderog¹ and Andreas Podelski²

¹ Department für Informatik, Universität Oldenburg, 26111 Oldenburg, Germany
² Institut für Informatik, Universität Freiburg, 79110 Freiburg, Germany
Abstract. In explicit fair schedulers, auxiliary integer-valued scheduling variables with non-deterministic assignments and with decrements keep track of each process's relative urgency. Every scheduled execution is fair, and yet the scheduler is sufficiently permissive (every fair run can be scheduled). In this paper we investigate whether explicit fair scheduling also works with dynamic control, i.e., when new processes may be created dynamically. We give a positive and a negative answer.
1 Introduction
The present paper, written for the Festschrift in honor of Willem-Paul de Roever, investigates a heavily-researched topic with, still, many open questions. It seems hard to be indifferent about fairness if one is interested in concurrent systems. This is evidenced by quotes ranging from "There are far too many papers on fairness!" [Hoa96] to "I have long given up on fairness!" [Plo01]. In particular, de Roever felt obliged to warn the first author of this paper: "Do not work on fairness. It has all been solved!" He was, of course, referring to the concept of helpful directions [GFMdR81, GFMdR85, LPS81]. As is typical of a well-meant warning, it was ignored. The work on fairness continued. In particular, an alternative approach to fairness was developed, called explicit fair scheduling [OA88]. The work on fairness is continuing. In particular, we investigate explicit fair scheduling in a setting for which it was not conceived originally. In this setting of dynamic control, new threads or processes may be created during the execution. Hence their number, although being finite throughout the execution, cannot be bounded. This contrasts with the setting of static control in [OA88] where the number of processes was arbitrary but fixed. Before we present the contribution of this paper and our motivation to consider the setting of dynamic control, we will explain the why and the how of explicit scheduling.

Why use explicit scheduling. There may be situations where one would like to "get rid of fairness". For example, in program analysis (whose formal foundation can be given by abstract interpretation [CC77]), one may want to define the semantics in terms of a (pure) transition system, i.e., a graph. Here, a popular approach is to take one of the fair schedulers used in operating systems, and to
consider a new system which is composed of the original one and the scheduler, and whose semantics can be given in terms of a transition system. The objection to this approach is that the analysis result is valid only for one particular fair scheduler; i.e., it does not extend to another fair scheduler. To remove this objection, one has to take a universal scheduler, i.e., one that encompasses all possible fair schedulers. In contrast with schedulers implemented in operating systems, a universal scheduler is not meant to be practical. Universality holds if the scheduler is sufficiently permissive, i.e., if every possible fair execution can be scheduled (by letting the scheduler choose an appropriate sequence of alternatives at all non-deterministic choices). In order to be correct (sound), it must not be too permissive, i.e., no unfair execution can be scheduled. The idea behind explicit schedulers. The explicit schedulers given in [OA88] use auxiliary integer-valued variables (so-called scheduling variables), one for each process, to keep track of the relative urgency of each process (relative to the other processes). Making it more urgent is implemented by decrementing its scheduling value. Thus, scheduling values can become negative. The crucial step is the non-deterministic update to a non-negative integer each time after the process has been selected. Then, the process is not necessarily less urgent than all other processes. However, it is definitely less urgent than those that already have a negative scheduling value. This fact is used to prove (by induction) the scheduling invariant : the scheduling value will never decrease below −n, where n is the number of all processes [OA88]. This again means that a process cannot become “arbitrarily urgent”; i.e., it has to be selected after it has been made more urgent a finite (though unboundedly large) number of times, which is exactly what fairness means. The contribution of this paper. For reasons that we detail below, we consider the setting of dynamic control, i.e., the setting of concurrent systems composed of processes that may be created dynamically. This means that even in a single execution the number of processes cannot be bounded. We give the comprehensive answer to the question whether the explicit fair schedulers of [OA88] also work in this setting. The answer consists of two parts, according to the two notions of fairness we consider. It is positive for weak fairness (“justice”, roughly: “a continuously enabled process must be selected”). It is negative for strong fairness (“compassion”, roughly: “a repeatedly enabled process must be selected”). The crucial difference between the two cases of weak and strong fairness lies in the urgency of a process. In both cases, a process becomes more urgent (i.e., its scheduling value gets decremented) if it is enabled but not selected. If it is not enabled (and consequently not selected), the situation is different for each case of fairness. For weak fairness, the urgency gets lost (in the same way as if it was selected; i.e., its scheduling value is updated non-deterministically to a non-negative integer). For strong fairness, the urgency is put on hold (i.e., its scheduling value remains the same). In the setting of dynamic control, the scheduling invariant mentioned above can no longer be used for the proof of the correctness of the explicit fair scheduler
(since the assertion "the scheduling value can never decrease below −n where n is the number of all processes" is void when the number n of processes can get arbitrarily large in one single execution). For weak fairness, however, we are still able to come up with another invariant and thus with another proof of the correctness of the explicit weakly-fair scheduler. For strong fairness, this is not possible. We are able to give a counterexample showing that the explicit strongly fair scheduler does not work for dynamic control. The intuition behind the counterexample is the following scenario where processes are newly created and the first process is enabled infinitely often but never selected. The crux is that the first process is enabled less and less often. In the larger and larger periods where the process is not enabled (and its urgency is put on hold), more and more processes get newly created. All these newly created processes are continuously enabled; hence their urgency increases. When the first process becomes enabled again, it has already been overtaken in urgency by a newly created process. Hence, once again it will not be selected by the scheduler. This situation can repeat infinitely often. That is, the scheduler of [OA88] which is strongly fair with static control is incorrect with dynamic control.

Motivation of our work. Our interest in fairness in the setting of dynamic control stems from three directions. Networked transportation systems (e.g., cars driving in groups called platoons) are modeled as concurrent systems (see, e.g., [BSTW06]). The fact that a traffic participant can appear and join a platoon is modeled by the creation of a new concurrent process. Fairness needs to be added as an assumption to the model for the validity of liveness properties (e.g., the termination of a merge manoeuvre between platoons). Operating systems are typical examples of reactive systems where threads are created specifically for individual tasks. Although the execution of the overall system may be infinite, those threads must terminate in order to keep the overall system reactive. For recent automatic proof techniques addressing the termination of such threads see [PR04, PR05, PPR05, CPR05, CPR06, CPR07]. All these techniques are specifically designed to cope with fairness. Presently, however, they are restricted to the setting of static control, i.e., to the setting where the number of processes is statically fixed. Perhaps surprisingly, recent work on model checking safety properties of operating systems code involves fairness [MQ08]. Fairness is used essentially to eliminate useless (unfair) paths in the state space (i.e., paths that can be pruned without affecting the reachability of error states). This work uses explicit scheduling by the model checker for the "fair" exploration of the state space. Although the explicit scheduler in [MQ08] is inspired by [OA88], it chooses a different idea for the representation of the relative urgency of processes. The partial order between the processes is represented directly by a graph, instead of being represented indirectly by scheduling values. Furthermore, the scheduler in [MQ08] is specialized to threads with loops of a certain form. The scheduler in [MQ08] is, as the one in [OA88], designed for the setting of a statically fixed number of threads.
Roadmap. Presently you are still reading the introduction. In Section 2 we formulate the classical definitions of fairness not for Dijkstra's guarded command programs but, instead, for infinitary guarded command programs, i.e., with infinitely many branches in the do-od loops. The programs formalize the setting of infinitary control where infinitely many processes can be active at the same moment. In Section 3 we similarly reformulate the three definitions from [OA88] of explicit scheduling, the specific scheduler WFAIR, and the specific scheduler FAIR. This way, we have all definitions ready when we get to programs with dynamically created processes. We formalize those programs as a special case of Dijkstra's guarded command programs with infinitely many branches in the outer do-od loop. In Section 5 we prove that the scheduler WFAIR is correct and universal also for those programs. In Section 6 we give a counterexample for the scheduler FAIR.

Summary of Results. We summarize the results of this paper and compare them with those of [OA88] in Table 1. Theorems 1 and 3 are established in the setting of infinitary control. Since dynamic control is a special case of infinitary control, these results carry over to the setting of dynamic control.

Table 1. Summary of results. WFAIR and FAIR are the schedulers for weak and strong fairness, respectively. Their definitions stem from [OA88]. Static control is the setting of an arbitrary but fixed number of processes. Infinitary control refers to the theoretically motivated setting where we have infinitely many processes at the same moment. Dynamic control is the setting where processes can be created dynamically. Hence we have finitely many active processes at every moment, but even in a single execution, their number cannot be bounded.

                          static control   infinitary control   dynamic control
    WFAIR  is universal   yes [OA88]       yes (Theorem 1)      yes (Theorem 1)
           is sound       yes [OA88]       no (Theorem 2)       yes (Theorem 5)
    FAIR   is universal   yes [OA88]       yes (Theorem 3)      yes (Theorem 3)
           is sound       yes [OA88]       no (Theorem 4)       no (Theorem 6)

2 Infinitary Fairness
Though the motivation for considering fairness stems from concurrency, it is easier and more elegant to study it in terms of structured nondeterministic programs such as Dijkstra’s guarded commands [Fra86]. We follow this approach in this paper. In this section, we carry the classical definitions of fairness from Dijkstra’s guarded command language over to an infinitary guarded command language, i.e., with infinitely many branches in do-od loops. It is perhaps a surprise that the definitions carry over directly. We then immediately have the definitions of fairness of programs with dynamically created processes because we will define those formally as a subclass of infinitary guarded command programs (in Section 4).
2.1 Infinitary Control
We introduce infinitary guarded command programs by extending Dijkstra's language [Dij75] with do-od loops that have infinitely many branches. Syntactically, these do-od loops are statements of the form

    S ≡ do []^∞_{i=0} B_i → S_i od                                            (1)
where for each i ∈ N the component B_i → S_i consists of a Boolean expression B_i, its guard, and the statement S_i, its command. We define a structural operational semantics in the sense of Plotkin [Plo04] for infinitary guarded commands. As usual, it is defined in terms of transitions between configurations. A configuration C is a pair <S, σ> consisting of a statement S that is to be executed and a state σ that assigns a value to each program variable. A transition is written as a step C → C′ between configurations. To express termination we use the empty statement E: a configuration <E, σ> denotes termination in the state σ. For the infinitary do-od loop S as in (1) we have two cases of transitions, where σ |= B denotes that the Boolean expression B evaluates to true in the state σ:

    1. <S, σ> → <S_i; S, σ>   if σ |= B_i, for each i ∈ N,
    2. <S, σ> → <E, σ>        if σ |= ⋀_{i∈N} ¬B_i.
Case 1 states that each enabled component B_i → S_i of S, i.e., with the guard B_i evaluating to true in the current state σ, can be entered. If more than one component of S is enabled, one of them will be chosen nondeterministically. The successor configuration <S_i; S, σ> formalizes the repetition of the do-od loop: once the command S_i is executed, the whole loop S has to be executed again. Formally, the transitions of the configuration <S_i; S, σ> are determined by the transition rules for the other statements of the guarded command language. For details see, e.g., [AO97]. Case 2 states that the do-od loop terminates if none of the components is enabled any more, i.e., if all guards B_i evaluate to false in the state σ. An execution of a program S starting in a state σ is a sequence of transitions <S, σ> = C_0 → C_1 → C_2 → ..., which is either infinite or maximally finite, i.e., the sequence cannot be extended further by some transition.
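As a purely illustrative sketch (not from the paper), one select transition of the loop (1) can be rendered in Python for programs in which only finitely many guards hold in any state, as in the dynamic-control instances of Section 4; enabled_indices, guard and command are hypothetical parameters supplied by the caller.

    import random

    def step(state, enabled_indices, guard, command):
        E = {i for i in enabled_indices(state) if guard(i, state)}
        if not E:                        # case 2: the do-od loop terminates
            return None, state
        i = random.choice(sorted(E))     # case 1: nondeterministic choice of a branch
        return i, command(i, state)

    # toy instance: component 0 "creates" processes by incrementing cp, as in Section 4
    state = {"cp": 1, "log": []}
    enabled_indices = lambda s: range(s["cp"] + 1)
    guard   = lambda i, s: True
    command = lambda i, s: {**s, "cp": s["cp"] + (i == 0), "log": s["log"] + [i]}
    for _ in range(5):
        i, state = step(state, enabled_indices, guard, command)
    print(state["log"], state["cp"])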
2.2 Fairness
In this paper we investigate fairness for programs with only one infinitary do-od loop of the form (1). This simplifies its definition and is sufficient for modeling dynamic control. Since fairness can be expressed in terms of enabled and selected processes only, we abstract from all other details in executions and introduce the notions of selection and run. Consider an execution < S, σ 0 > = C0 → C1 → C2 → . . .
of S as in (1). A transition C_j → C_{j+1} is called a select transition if it consists of the selection of an enabled command, formally, if C_j = <S, σ> and C_{j+1} = <S_i; S, σ> with σ |= B_i for some i ∈ N. We define the selection of the transition C_j → C_{j+1} as the pair (E_j, i_j) where E_j is the set of all (indices of) enabled components, i.e., E_j = {i ∈ N | σ |= B_i}, and i_j is the (index of the) selected component, i.e., i_j = i. Obviously, the selected command is among the enabled components. A run of the execution C_0 → C_1 → C_2 → ... is the sequence of all its selections, formally, the sequence (E_{j0}, i_{j0})(E_{j1}, i_{j1})... such that C_{j0} C_{j1} ... is the subsequence of configurations with outgoing select transitions. Computations that do not pass through any select transition yield the empty run. A run of a program S is the run of one of its executions. These definitions are as in [OA88] except that here infinitely many components can be enabled. However, the definition of fairness applies to this setting as well. A run (E_0, i_0)(E_1, i_1)(E_2, i_2)... is called weakly fair if it satisfies the condition

    ∀i ∈ N : ( ∀^∞ j ∈ N : i ∈ E_j  →  ∃^∞ j ∈ N : i = i_j ),

where the quantifiers ∀^∞ and ∃^∞ denote "for all but finitely many" (or "almost everywhere") and "there exist infinitely many", respectively. Thus, in a weakly fair run, every component i which is from some moment on always enabled is also selected infinitely often. A run is called strongly fair (or simply fair) if it satisfies the condition

    ∀i ∈ N : ( ∃^∞ j ∈ N : i ∈ E_j  →  ∃^∞ j ∈ N : i = i_j ).

Thus, in a strongly fair run, every component i which is enabled infinitely often is selected infinitely often. Clearly, every strongly fair run is also weakly fair. In particular, every finite run is fair. An execution is weakly or strongly fair if its run is weakly or strongly fair, respectively. Thus for fairness only select transitions are relevant; transitions inside the statements S_i do not matter. Note that every finite execution is fair. Although we are not interested in the case where infinitely many components can be enabled at the same time (continuously or infinitely often) and although this case is perhaps not practically relevant, the definition of fairness still makes sense, i.e., there exist weakly and strongly fair executions also in this case.
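For ultimately periodic ("lasso"-shaped) runs, the two conditions can be evaluated mechanically, since only the repeated cycle matters for ∀^∞ and ∃^∞. The following Python sketch is our illustration, not from the paper; the cycle is given as a list of selections (E_j, i_j).

    def weakly_fair(cycle):
        comps = set().union(*(E for E, _ in cycle))
        # components enabled at every position of the cycle must be selected in the cycle
        return all(any(i == sel for _, sel in cycle)
                   for i in comps if all(i in E for E, _ in cycle))

    def strongly_fair(cycle):
        comps = set().union(*(E for E, _ in cycle))
        # components enabled at some position of the cycle must be selected in the cycle
        return all(any(i == sel for _, sel in cycle)
                   for i in comps if any(i in E for E, _ in cycle))

    cycle = [({0, 1}, 1), ({2}, 2)]      # component 0 is enabled infinitely often ...
    print(weakly_fair(cycle), strongly_fair(cycle))   # True False: ... but never selected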
3 Explicit Scheduling
In this section we recall three definitions from [OA88]: explicit scheduling, the specific scheduler WFAIR, and the specific scheduler FAIR. In contrast to [OA88] we consider them here in the setting where infinitely many components need to be scheduled, not only a fixed number n of them. A scheduler is an automaton that enforces a certain discipline on the executions of a nondeterministic or concurrent program. To this end, the scheduler keeps in its local state sufficient information about the run of an execution, and engages in the following interaction with the program. At certain moments during an execution, the program presents the set E of currently enabled processes to the scheduler (provided E ≠ ∅). By consulting its local state, the scheduler returns to the program nondeterministically some element i ∈ E that should be selected in the next transition step. We may ignore the actual interaction between the program and the scheduler and just record the result of this interaction, the selection (E, i) scheduled by the scheduler. Summarizing, we arrive at the following definition.

Definition 1. A scheduler is given by
– a set of local scheduler states σ, which are disjoint from the program states,
– a subset of initial scheduler states, and
– a ternary scheduling relation
      sch ⊆ {scheduler states} × {selections} × {scheduler states}
  which is total in the following sense:
      ∀σ ∀E ≠ ∅ ∃i ∈ E ∃σ′ : (σ, (E, i), σ′) ∈ sch.

Thus for every scheduler state σ and every nonempty set E of enabled processes there exists a process i ∈ E such that the selection (E, i) and the updated local state σ′ satisfy the scheduling relation. By the totality of the scheduling relation, a scheduler can never block the execution of a program but only influence its direction.

Definition 2. A finite or infinite run (E_0, i_0)(E_1, i_1)(E_2, i_2)... can be scheduled by a scheduler SCH if there exists a finite or infinite sequence σ_0 σ_1 σ_2 ... of scheduler states, with σ_0 being an initial scheduler state, such that (σ_j, (E_j, i_j), σ_{j+1}) ∈ sch holds for all j ≥ 0. A scheduler SCH is called sound for (weak) fairness if every run that can be scheduled by SCH is (weakly) fair. A scheduler SCH is called universal for (weak) fairness if every (weakly) fair run can be scheduled by SCH.
3.1 A Scheduler for Weak Fairness: WFAIR
We now recall from [OA88] the scheduler WFAIR for weak fairness, but consider it for infinitely many components. With each component i it associates a scheduling variable z[i] representing a priority assigned to that component. A component i has a higher priority than a component j if z[i] < z[j] holds.

Definition 3 (WFAIR). The set of scheduler states, the subset of initial scheduler states, and the ternary scheduling relation are given as follows:
– The scheduler state σ is given by the values of an infinitary array z of type N → Z, i.e., z[i] is a positive or negative integer for each i ∈ N.
– The initial scheduler states are those where each scheduling variable z[i] has some non-negative integer value.
– The scheduling relation holds for scheduler states σ and σ′ and a selection (E, i) if the value of z[i] is minimal, formally,
      SCH_i ≡ z[i] = min{z[k] | k ∈ E},
  and if the new scheduler state σ′ is obtained from σ by executing the following statement UPDATE_i. Informally, each scheduling variable z[j] is decremented if the process j is enabled but not selected, and it is assigned a nondeterministically chosen non-negative integer if it is either selected, i.e., j = i, or not enabled.
      UPDATE_i ≡ z[i] := ?;
                 for all j ≠ i do
                   if j ∈ E then z[j] := z[j] − 1 else z[j] := ? fi
                 od
The query symbol ? denotes a nondeterministically chosen non-negative integer value. Note that the scheduling relation is total as required by Definition 1. The update of the scheduling variables guarantees that the priorities of all enabled but not selected processes j are increased. The priority of the selected process i and of all processes j that are not enabled, however, is reset arbitrarily. The idea is that by gradually increasing the priority of enabled processes, their activation cannot be refused forever.
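The following Python sketch renders WFAIR operationally (our reading of Definition 3, not code from [OA88]). The nondeterministic assignments z[j] := ? are resolved here by one particular choice, a small random non-negative integer, and scheduling variables are created lazily, which is harmless because WFAIR resets the value of every non-enabled component anyway.

    import random

    class WFair:
        def __init__(self):
            self.z = {}                                   # scheduling variables z[i]

        def _fresh(self):
            return random.randint(0, 5)                   # one choice for "?" (non-negative)

        def select(self, E):
            assert E, "the scheduler is consulted only for a non-empty enabled set"
            for j in E:
                self.z.setdefault(j, self._fresh())
            i = min(E, key=lambda j: self.z[j])           # SCH_i: z[i] minimal among E
            self.z[i] = self._fresh()                     # UPDATE_i: z[i] := ?
            for j in self.z:
                if j != i:
                    self.z[j] = self.z[j] - 1 if j in E else self._fresh()
            return i

    sched = WFair()
    print([sched.select({0, 1, 2}) for _ in range(9)])    # with 3 components continuously
                                                          # enabled, none is passed over forever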
(2)
Adapting an argument from [AO83], we show that (2) can be scheduled by WFAIR. To this end, we construct a sequence σ_0 σ_1 σ_2 ... of scheduler states satisfying the scheduling relation sch(σ_j, (E_j, i_j), σ_{j+1}) for every j ∈ N. The construction proceeds by assigning appropriate values to the scheduling variables z[i] of WFAIR. For i, j ∈ N we put

    σ_j(z[i]) = { min{k − j | k ∈ N ∧ k ≥ j ∧ i_k = i}          if ∃k ≥ j : i_k = i,
                { 1 + min{k − j | k ∈ N ∧ k ≥ j ∧ i ∉ E_k}      if ∀k ≥ j : i_k ≠ i.

Note that in both cases the minimum of a non-empty subset of N is considered. This is clear for the case where ∃k ≥ j : i_k = i holds. If ∀k ≥ j : i_k ≠ i holds, then from state σ_j onwards component i is never scheduled for execution again. Since (2) is weakly fair, this can only be the case if ∃^∞ k ≥ j : i ∉ E_k holds. Thus ∃k ≥ j : i ∉ E_k, which implies that also in this case the minimum is taken over a non-empty set. In this construction the variables z[i] have values of at least 0 in every state σ_j and exactly one variable z[i] with i ∈ E_j has the value 0. This i is the index of the component selected next. It is easy to see that this construction of values σ_j(z[i]) is possible with the assignments in WFAIR.

An important property of the scheduler WFAIR can be stated for a run where the number of enabled components is bounded, say by n. Then, for k = 1, ..., n, in the execution of WFAIR scheduling this run, the number of components with a scheduling value of at most −k is bounded by n − k. Formally, for every k = 1, ..., n, the assertion

    INV_k ≡ card {i ∈ N | z[i] ≤ −k} ≤ n − k

is an invariant of that execution of WFAIR. Here card M denotes the cardinality of a set M. The conjunction of these assertions,

    INV ≡ INV_1 ∧ ··· ∧ INV_n,

is called the scheduling invariant of WFAIR for this run. In particular, none of the scheduling variables can have values below or equal to −n. This invariance property is formalized in the following lemma.

Lemma 1 (Execution invariant of WFAIR for bounded control). Consider a run

    (E_0, i_0)(E_1, i_1)(E_2, i_2)...                                         (3)

that is scheduled by WFAIR using a sequence σ_0 σ_1 σ_2 ... of scheduler states satisfying the scheduling relation sch(σ_j, (E_j, i_j), σ_{j+1}) for every j ∈ N. Suppose that the number of enabled components in the run (3) is bounded by some n ∈ N, formally, card E_j ≤ n for all j ∈ N. Then the scheduling invariant INV holds in every scheduler state σ_j.
Proof. We proceed by induction on the index j ∈ N of the scheduler state σ_j.

Induction basis: j = 0. By definition of the initial scheduler states of WFAIR, we have σ_0 |= z[i] ≥ 0 for all i ∈ N, so that INV is trivially satisfied in σ_0.

Induction hypothesis: INV holds in σ_j.

Induction step: j → j + 1. Suppose INV is false in σ_{j+1}. Then there is some k ∈ {1, ..., n} such that there are at least n − k + 1 indices i for which z[i] ≤ −k holds in σ_{j+1}. Let I be the set of all these indices, i.e., I = {i ∈ N | σ_{j+1} |= z[i] ≤ −k}. Then I is nonempty and

    card I ≥ n − k + 1.                                                       (4)

Since z[i] is negative in σ_{j+1} for each i ∈ I, each component i with i ∈ I was enabled but not selected in state σ_j, by the definition of WFAIR (formally, i ∈ E_j and i ≠ i_j). Thus

    I ⊆ E_j \ {i_j}.                                                          (5)

Case k = 1. Then card I ≥ n by (4). Contradiction, since (5) and the general assumption card E_j ≤ n together imply card I ≤ n − 1. (Note that this is the only place where we use the assumption that the number of enabled components is bounded.)

Case k > 1. By the definition of WFAIR, we have z[i] ≤ −k + 1 for all i ∈ I in σ_j. Let J = {i ∈ N | σ_j |= z[i] ≤ −k + 1}. Then

    I ⊆ J.                                                                    (6)

Since k − 1 ≥ 1, the induction hypothesis for k − 1 applies to the set J and implies that

    card J ≤ n − k + 1.                                                       (7)

Then (4), (6), and (7) together imply

    I = J.                                                                    (8)

By (5) and (8), all components in J are enabled in σ_j. By the definition of WFAIR, the scheduled component i_j has the minimal value in σ_j among the scheduling variables of the enabled components. So i_j ∈ J. Contradiction, since (5) and (8) together imply that i_j ∉ J. This proves that INV also holds in σ_{j+1}.

Using this Invariant-Lemma, we show that every run of the scheduler WFAIR in which at most n components are enabled at the same moment is weakly fair.

Corollary 1 (Soundness of WFAIR for bounded control). Every run

    (E_0, i_0)(E_1, i_1)(E_2, i_2)...
(9)
where the number of enabled components is bounded by some n ∈ N, i.e., with card Ej ≤ n for all j ∈ N, that can be scheduled by WFAIR is weakly fair.
Proof. Let the run (9) be scheduled by WFAIR and let σ_0 σ_1 σ_2 ... be a sequence of scheduler states satisfying sch(σ_j, (E_j, i_j), σ_{j+1}) for every j ∈ N. Then (9) is weakly fair. Suppose the contrary holds. Then there exists some component i with i ∈ N which is almost everywhere enabled, but from some moment on never activated. Formally, for some j_0 ∈ N the assertion

    ∀j ≥ j_0 : i ∈ E_j ∧ i ≠ i_j

holds in (9). Since the variable z[i] of WFAIR gets decremented whenever the component i is enabled but not selected, it becomes arbitrarily small, in particular smaller than −n in some state σ_j with j ≥ j_0. However, this is impossible since at most n components are enabled and thus, by Invariant-Lemma 1, none of the scheduling variables can have values below −n. Contradiction.

In [OA88] the Invariant-Lemma and the soundness of WFAIR are proven for static control, i.e., for the case where an arbitrary but fixed set of n components is considered. Here we lifted the proofs to the more general setting where the n enabled components may differ throughout the execution steps. If infinitely many components can be enabled at the same moment, the scheduler WFAIR does not guarantee weak fairness.

Theorem 2 (Unsoundness of WFAIR for infinitary control). Not all runs of infinitary guarded command programs (where possibly infinitely many components can be enabled at the same moment) that are scheduled by WFAIR are weakly fair.

Proof. We construct a run scheduled by WFAIR in which the component with number 0 is treated unfairly, i.e., it is always enabled but never selected. Table 2 shows an initial segment of this run in detail. In the column denoted by i the component numbers are shown. We assume that every component is always enabled. The other columns in the table show the values of the scheduling variables z[i] in the scheduler states σ_0, σ_1, σ_2, .... A star * after a value indicates that in this state the component in the corresponding row is scheduled for execution.

Table 2. A run where component 0 is treated unfairly

     i   σ_0   σ_1   σ_2   σ_3   σ_4   ...
     0    1     0    -1    -2    -3    ...
     1    0*    0    -1    -2    -3    ...
     2    0    -1*    0    -1    -2    ...
     3    0    -1    -2*    0    -1    ...
     4    0    -1    -2    -3*    0    ...
     5    0    -1    -2    -3    -4*   ...
    ...

For example, in state σ_0 component 1 is scheduled. We see that component 0 is never selected because in the scheduler state σ_j its scheduling variable z[0] has the value −j + 1 whereas the scheduling variable of the selected component j + 1 has the value −j, which is the minimum value among all scheduling variables. This proof exploits that in each selection infinitely many components are enabled. For dynamic control we will have unboundedly many components, but at each moment only finitely many of them will be enabled.
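The scheduled run of Table 2 can be replayed for a finite prefix (our sketch; the full counterexample needs infinitely many components, truncated here to the first N + 2 of them, and every reset z[i] := ? is resolved to 0, as in the table).

    N = 5                                                  # number of steps to replay
    z = {i: (1 if i == 0 else 0) for i in range(N + 2)}    # scheduler state sigma_0
    for j in range(N):
        E = set(z)                                         # every component is enabled
        selected = j + 1                                   # the choice made in Table 2
        assert z[selected] == min(z.values())              # SCH_i holds for this choice
        for k in E:                                        # UPDATE_i with reset value 0
            z[k] = 0 if k == selected else z[k] - 1
        print("sigma_" + str(j + 1), "z[0] =", z[0], " selected:", selected)
    # z[0] is always one above the current minimum, so component 0 is never selected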
3.2 A Scheduler for Strong Fairness: FAIR
The scheduler FAIR of [OA88] for strong fairness is defined like WFAIR except that the scheduling relation uses a modified statement UPDATE_i in which the nondeterministic assignment z[j] := ? in the else-branch of its conditional statement is replaced by a skip statement.

Definition 4 (FAIR). The scheduler states and the subset of initial scheduler states are as in Definition 3 of WFAIR. The ternary scheduling relation uses the same scheduling condition SCH_i as in that definition, but the statement UPDATE_i is changed as follows:
      UPDATE_i ≡ z[i] := ?;
                 for all j ≠ i do
                   if j ∈ E then z[j] := z[j] − 1 else skip fi
                 od
With this change, the scheduling variable z[j] of a temporarily disabled component j does not get reset. We can show that the scheduler FAIR can schedule every fair run.

Theorem 3 (Universality of FAIR for infinitary control). The scheduler FAIR can schedule every strongly fair run of an infinitary guarded command program (where possibly infinitely many components can be enabled at the same time).

Proof. Consider a strongly fair run

    (E_0, i_0)(E_1, i_1)(E_2, i_2)...
(10)
We adapt an argument from [OA88, AO97] and show that (10) can be scheduled by FAIR by constructing a sequence σ0 σ1 σ2 . . . of scheduler states satisfying sch(σj, (Ej, ij), σj+1) for every j ∈ N. The construction proceeds by assigning appropriate values to the scheduling variables z[i] of FAIR. For i, j ∈ N we put

  σj(z[i]) = card{k ∈ N | j ≤ k < mi,j ∧ i ∈ Ek}        if ∃m ≥ j : im = i
  σj(z[i]) = 1 + card{k ∈ N | j ≤ k < mi,j ∧ i ∈ Ek}    if ∀m ≥ j : im ≠ i
where, as before, card M denotes the cardinality of a set M and where

  mi,j = min{m ∈ N | j ≤ m ∧ (im = i ∨ ∀n ≥ m : i ∉ En)}.

Note that mi,j is the minimum of a non-empty subset of N because the run (10) is strongly fair. In the definition of σj(z[i]) the cardinality expression counts how many times the component i is enabled in a selection scheduled by the scheduler (i ∈ Ek) before its next (if any) activation. In this construction the variables z[i] have values ≥ 0 in every state σj and exactly one variable z[i] with i ∈ Ej has the value 0. This i is the index of the component selected next. It is easy to see that this construction of values σj(z[i]) is possible with the assignments in FAIR. However, the scheduler FAIR does not guarantee strong fairness for infinitely many components.

Theorem 4 (Unsoundness of FAIR for infinitary control). Not all runs of infinitary guarded command programs (where possibly infinitely many components can be enabled at the same moment) that are scheduled by FAIR are strongly fair.

Proof. We can take the same counterexample as in the proof of Theorem 2 because in that example the components are continuously enabled, so the schedulers WFAIR and FAIR behave the same.
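For comparison, the two update rules can be written side by side. This is a sketch under our own encoding, not the paper's: z is a list of scheduling variables, enabled a set of component indices, and pick() a caller-supplied stand-in for the nondeterministic assignment z[j] := ?.

def update_wfair(z, i, enabled, pick):
    """UPDATE_i of WFAIR: reset z[i], decrement enabled j != i, reset disabled j."""
    return [pick() if (j == i or j not in enabled) else zj - 1
            for j, zj in enumerate(z)]

def update_fair(z, i, enabled, pick):
    """UPDATE_i of FAIR: as WFAIR, but a disabled j keeps its value (skip)."""
    return [pick() if j == i else (zj - 1 if j in enabled else zj)
            for j, zj in enumerate(z)]

On the run of Table 2 every component is enabled in every state, so the two updates coincide, which is why the same run witnesses both Theorem 2 and Theorem 4.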
4 Dynamic Control
Our goal is a minimalistic model that allows us to study fairness for programs with dynamically created processes. In this section, we define the class of such programs with dynamic control formally as a subclass of programs in the infinitary guarded command language. At each moment each of the infinitely many processes "exists" (whether it has been created or not). Each process is modeled by a branch in the infinitary do-od loop. However, at each moment, only finitely many processes have been created (or activated or allocated). All others are dormant. If a branch corresponds to a dormant process its guard is not satisfied because a specific expression is false. Process creation is simply modeled by setting this expression to true. Processes are referred to by natural numbers. The process (with number) i, where i ≥ 1, is represented by the component Bi → Si consisting of a statement Si guarded by a Boolean expression Bi. A distinguished variable cp of type N represents the number of currently created processes. Process i is considered as being created if the expression i ≤ cp in its guard is satisfied. All other processes are treated as not being created yet. A distinguished creation process (with number) 0 guarded by the Boolean expression B0 increments the variable cp and thus "creates" one more process. Formally, we model dynamic control by one infinitary do-od loop (1) which is instantiated as follows:
– for the creation process (i = 0) the command is of the form cp := cp + 1; S0,
– for all other processes (i ≥ 1) the guard is of the form i ≤ cp ∧ Bi.

Thus we take the infinitary do-od loop

  S ≡ do [] B0 → cp := cp + 1; S0
         []∞_{i=1} i ≤ cp ∧ Bi → Si
      od                                               (11)

to represent infinitely many processes of which at each moment only finitely many are created. Note that in (11) each process, once it is activated, is executed as an atomic action, without being interruptible by any other process. When we wish to represent a finer granularity of atomicity, the do-od loop is refined into the following form:

  S ≡ do [] B0,0 → cp := cp + 1; S0,0
         []^{n0}_{k=1} B0,k → S0,k
         []∞_{i=1} []^{ni}_{k=0} i ≤ cp ∧ Bi,k → Si,k
      od                                               (12)
Here each process i consists of ni components of the do-od loop, each one representing an atomic action. In particular, cp := cp + 1; S0,0 and S0,1, . . . , S0,n0 are the atomic actions of the creation process 0, and Si,0, . . . , Si,ni are the atomic actions of any other process i with i ≥ 1. By choosing ni = 1, the do-od loop (12) specializes to the form (11). To simplify the presentation we will formulate our proofs for programs of the form (11).
4.1 Sieve of Eratosthenes
As an example we consider the parallel version of the sieve algorithm of Eratosthenes for finding prime numbers of [PA04]. We rephrase it here as a do-od loop of the form (11) using variables and arrays of the following type:

  var c, cp : N1;              % counter, number of created processes
  array q : N2 → seq N2;       % array of queues
  array p : N2 → N2;           % array of primes
  array t : N → N;             % array of local variables

where N1 = N \ {0} and N2 = N1 \ {1}, and seq N2 denotes the set of sequences (modeling queues) with elements from N2. We assume that c and cp are both initialized with 1 and that q[2] is initialized with the empty sequence. Then the program for the sieve of Eratosthenes is defined as follows:
  SE ≡ do
       [] ¬empty(q[cp + 1]) →                                  % creation process 0
              cp := cp + 1; p[cp] := pop(q[cp]); q[cp + 1] := ⟨⟩
       [] 1 ≤ cp →                                             % generator process 1
              c := c + 1; push(c, q[2])
       []∞_{i=2} i ≤ cp ∧ ¬empty(q[i]) → Si                    % sieve process i
       od
where Si for i ≥ 2 represents the ith sieve:

  Si ≡ t[i] := pop(q[i]); if ¬div(p[i], t[i]) then push(t[i], q[i + 1]) fi

We give some informal explanation of the program SE. Figure 1 shows how the processes are connected via queues. The generator process is connected via the queue q[2] to the sieve S2. Further on, each sieve Si is connected to the next sieve Si+1 via the queue q[i + 1] and has a variable p[i] for storing a prime. The generator process generates, one by one, all natural numbers of at least 2 in its variable c and outputs them to the initially empty queue q[2]. Initially, the number of created processes is cp = 1 and thus only the generator process (with the number 1) is created. Later on, whenever the input queue q[cp + 1] becomes non-empty (by receiving a number from the process with the number of the current value of cp), the creation process (with number 0) gets enabled. When activated, this process increments cp and thus creates a new sieve process Scp. Further on, it initializes the variable p[cp] of this sieve process with the first number picked up from its input queue q[cp], which will be a prime, and the output queue q[cp + 1] of Scp with the empty sequence. Each sieve Si gets enabled when its input queue q[i] is non-empty. When it picks up a number from q[i] it checks whether it is divisible by the prime stored in p[i]. If this is the case the number is ignored, otherwise it is output to the initially empty input queue q[i + 1] of the next sieve Si+1. In [PA04] two correctness properties of this algorithm are considered: a safety property stating that the numbers stored by the sieves Si into the variables p[i] are indeed primes, and a response property stating that each natural number that is a prime will eventually be found by one of the created sieves. This response property depends on the assumption that only weakly fair executions of the program SE are considered.
Fig. 1. Queues connecting the sieves
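To make the weak-fairness assumption concrete, here is a small round-robin interpreter for SE, written by us in Python: activating every currently enabled component once per round is a weakly fair schedule, and under it the sieves pick up the primes in increasing order. The encoding of queues as Python lists and the number of rounds are our own choices.

# Weakly fair (round-robin) execution of the program SE; variable names
# follow the program above, the encoding is ours.

def run_se(rounds):
    c, cp = 1, 1
    q = {2: []}          # q[2] initialized with the empty sequence
    p = {}               # p[i] will hold the prime of sieve i
    for _ in range(rounds):
        for i in range(cp + 1):                       # components 0..cp of this round
            if i == 0 and q[cp + 1]:                  # creation process 0
                cp += 1
                p[cp] = q[cp].pop(0)
                q[cp + 1] = []
            elif i == 1:                              # generator process 1
                c += 1
                q[2].append(c)
            elif i >= 2 and i <= cp and q[i]:         # sieve process i
                t = q[i].pop(0)
                if t % p[i] != 0:
                    q[i + 1].append(t)
    return [p[i] for i in sorted(p)]

primes = run_se(200)
assert primes[:8] == [2, 3, 5, 7, 11, 13, 17, 19]     # safety: primes, in order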
In [dB87, AdB90] programs with dynamic process creation that implement the sieve of Eratosthenes were verified without a fairness assumption. However, these programs terminated by generating only all primes up to a fixed upper bound.

Example for strong fairness. As an example for strong fairness we consider a variant of the program above, a sieve of Eratosthenes with semaphores, modeled by the program SES shown in Fig. 2. Here the granularity of the atomic actions of the processes is finer, so that the program SES is represented as a do-od loop of the form (12). The idea is that each of the unboundedly many queues q[i] can be accessed by the processes i and i + 1 only in a mutually exclusive fashion. This is achieved by introducing a Boolean semaphore sem[i] for each queue q[i] and a program counter pc[i] for each process i. Formally, we add arrays of the following type:

  array pc : N → N;        % array of program counters
  array sem : N2 → B;      % array of semaphores

  SES ≡ do
        % creation process 0:
        [] ¬empty(q[cp + 1]) ∧ pc[0] = 0 → cp := cp + 1; pc[cp] := 0; pc[0] := pc[0] + 1
        [] pc[0] = 1 ∧ sem[cp] → sem[cp] := false; pc[0] := pc[0] + 1
        [] pc[0] = 2 → p[cp] := pop(q[cp]); pc[0] := pc[0] + 1
        [] pc[0] = 3 → sem[cp] := true; pc[0] := pc[0] + 1
        [] pc[0] = 4 ∧ sem[cp] → sem[cp] := false; pc[0] := pc[0] + 1
        [] pc[0] = 5 → q[cp + 1] := ⟨⟩; pc[0] := pc[0] + 1
        [] pc[0] = 6 → sem[cp] := true; pc[0] := 0
        % generator process 1:
        [] 1 ≤ cp ∧ pc[1] = 0 → c := c + 1; pc[1] := pc[1] + 1
        [] 1 ≤ cp ∧ pc[1] = 1 ∧ sem[1] → sem[1] := false; pc[1] := pc[1] + 1
        [] 1 ≤ cp ∧ pc[1] = 2 → push(c, q[2]); pc[1] := pc[1] + 1
        [] 1 ≤ cp ∧ pc[1] = 3 → sem[1] := true; pc[1] := 0
        % sieve process i:
        []∞_{i=2} i ≤ cp ∧ ¬empty(q[i]) ∧ pc[i] = 0 → pc[i] := pc[i] + 1
        []∞_{i=2} i ≤ cp ∧ pc[i] = 1 ∧ sem[cp] → sem[cp] := false; pc[i] := pc[i] + 1
        []∞_{i=2} i ≤ cp ∧ pc[i] = 2 → t[i] := pop(q[i]); pc[i] := pc[i] + 1
        []∞_{i=2} i ≤ cp ∧ pc[i] = 3 → sem[i] := true; pc[i] := pc[i] + 1
        []∞_{i=2} i ≤ cp ∧ pc[i] = 4 ∧ ¬div(p[i], t[i]) → pc[i] := pc[i] + 1
        []∞_{i=2} i ≤ cp ∧ pc[i] = 4 ∧ div(p[i], t[i]) → pc[i] := 0
        []∞_{i=2} i ≤ cp ∧ pc[i] = 5 ∧ sem[i + 1] → sem[i + 1] := false; pc[i] := pc[i] + 1
        []∞_{i=2} i ≤ cp ∧ pc[i] = 6 → push(t[i], q[i + 1]); pc[i] := pc[i] + 1
        []∞_{i=2} i ≤ cp ∧ pc[i] = 7 → sem[i + 1] := true; pc[i] := 0
        od

Fig. 2. The program SES
In addition to the initializations stated for the program SE we assume that pc[0] and pc[1] are both initialized with 0 and that sem[2] is initialized with true. When pc[i] = 0 holds for the program counter of the sieve process i, it may not be enabled due to the semaphore sem[i]. Similarly, it may not be enabled when pc[i] = 4 holds due to the semaphore sem[i + 1]. As a consequence the response property, i.e., each natural number that is a prime will eventually be found by one of the created sieves, depends on the assumption that only strongly fair executions of the program SES are considered.
5 Weak Fairness for Dynamic Control
Since dynamic control is a special case of infinitary control, Theorem 1 implies that the scheduler WFAIR is universal for dynamic control. However, we can show more. Since in programs with dynamic control only finitely many processes are created at each moment, we can exploit this fact to strengthen the soundness result of WFAIR for bounded control (Corollary 1) as follows.

Theorem 5 (Soundness of WFAIR for dynamic control). A run of a program with dynamic control that can be scheduled by WFAIR is weakly fair.

Proof. Consider a run

  (E0, i0)(E1, i1)(E2, i2) . . .     (13)
of a program S of the form (11) given in Section 4 that is scheduled by WFAIR. Let σ0 σ1 σ2 . . . be a sequence of scheduler states with sch(σj, (Ej, ij), σj+1) for every j ∈ N. We claim that (13) is weakly fair. Suppose the contrary holds. Then there exists some process i with i ∈ N which is almost everywhere enabled, but from some moment on never activated. Formally, for some j0 ∈ N

  ∀j ≥ j0 : i ∈ Ej ∧ i ≠ ij     (14)
holds in (13). By the definition of WFAIR, the variable z[i] gets decremented whenever the process i is enabled but not activated. Thus the value of z[i] becomes arbitrarily small. We choose j0 large enough so that z[i] < 0 holds in the state σj0. In σj0 only finitely many, say n, processes are created. If no more processes are created after j0 then card Ej ≤ n for all j ∈ N. By the Invariant-Lemma 1, the scheduling variable z[i] cannot get below −n. This contradicts that process i is not scheduled any more. Thus after j0 more and more processes are created by process 0 and compete with process i for being scheduled. Suppose at j1 process n + 1 is created, at j2 process n + 2, etc. By the definition of WFAIR, we have j0 ≤ j1 < j2 < . . . and

  σj1 |= z[i] < 0 ∧ 0 ≤ z[0] ∧ 0 ≤ z[n + 1],
  σj2 |= z[i] < 0 ∧ 0 ≤ z[0] ∧ 0 ≤ z[n + 2],
  ...
Since process i is always enabled after j0, and thus its scheduling variable z[i] is decremented in each step, the following relations are preserved:

  σj |= z[i] < z[0]       for all j ≥ j1,
  σj |= z[i] < z[n + 1]   for all j ≥ j1,
  σj |= z[i] < z[n + 2]   for all j ≥ j2,
  ...

Thus the newly created processes n + 1, n + 2, . . . always have a lower priority than process i. Also the creation process 0 itself has a lower priority than process i after j1. Thus it cannot be selected again at position j2 to create process n + 2. Contradiction to (14).
6 Strong Fairness for Dynamic Control
For static control the scheduler FAIR is sound and universal for strong fairness as shown in [OA88]. For dynamic control, which can create unboundedly many processes, universality still holds. This is a special case of Theorem 3. However, in contrast to weak fairness, the scheduler FAIR does not guarantee strong fairness any more.

Theorem 6 (Unsoundness of FAIR for dynamic control). Not all runs of programs with dynamic control that are scheduled by FAIR are strongly fair.

Proof. We construct a run scheduled by FAIR in which a process, say with number 1, is treated unfairly, i.e., it is infinitely often enabled but never activated. The idea is that process 1 becomes enabled more and more seldom. The increasing gap between two successive moments of being enabled is used to create more and more new processes that all compete with process 1. The values of the scheduling variables will force FAIR to activate the newly created processes rather than process 1.

Table 3. A run where process 1 is treated unfair

    i |  σ0  σ1  σ2  σ3  σ4  σ5  σ6  σ7  σ8  σ9 σ10 σ11 σ12 σ13 σ14 σ15 ...
    1 |   1   –   0   –   –  -1   –   –   –  -2   –   –   –   –  -3   – ...
    0 |   0*  0  -1*  0  -1  -2*  0  -1  -2  -3*  0  -1  -2  -3  -4*  0 ...
    2 |       0*  0  -1*  0  -1  -2*  0  -1  -2  -3*  0  -1  -2  -3  -4* ...
    3 |               0  -1*  0  -1  -2*  0  -1  -2  -3*  0  -1  -2  -3 ...
    4 |                           0  -1  -2*  0  -1  -2  -3*  0  -1  -2 ...
    5 |                                           0  -1  -2  -3*  0  -1 ...
  ...
Table 3 shows an initial segment of this run in detail. In the column denoted by i the process numbers are shown. The other columns show the values of the scheduling variables z[i] in the scheduler states σ0, σ1, σ2, . . . . A star ∗ after a value indicates that in this state the process in the corresponding row is scheduled for execution. For example, in state σ0 process 0 is scheduled. An entry "–" indicates that in this state the corresponding process is not enabled. This is the case only for process 1. If process 1 is not enabled its scheduling variable z[1] keeps its previous value. For example, in states σ3 and σ4 the value of z[1] is still 0. Empty boxes in the table indicate that in that state the corresponding process is not yet created. For example, in state σ0 only the processes 1 and 0 are created. Note that whenever (the creation) process 0 is scheduled for execution, a newly created process appears in the successor state. For example, process 2 appears in state σ1 because process 0 was scheduled for execution in state σ0. In general, process 1 is enabled in all states σf(i) with

  f(i) = (Σ_{r=1}^{i} r) − 1   for i ≥ 1,

where its scheduling variable z[1] satisfies σf(i) |= z[1] = 2 − i. Process 0 is activated in all these states σf(i), with its scheduling variable z[0] satisfying σf(i) |= z[0] = 1 − i < z[1]. Process 0 creates there the process i + 1, which starts its life in state σf(i)+1 and is activated in all states σg(i,j) with

  g(i, j) = f(i) + Σ_{s=0}^{j} (i + s)   for i ≥ 1 and j ≥ 0.

For example, process 3 starts life in state σf(2)+1 = σ3 and is activated in the states σg(2,0) = σ4, σg(2,1) = σ7, σg(2,2) = σ11, etc. This completes the proof.
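The run of Table 3 can also be replayed numerically. The sketch below is ours: it resolves the nondeterministic choices of FAIR as in the table (the selected z is reset to 0, a newly created process starts with scheduling value 0, enabled but unselected processes are decremented, disabled ones keep their value, ties between minimal values go to the highest process number), and process 1 is enabled exactly in the states σf(i).

# Replay of the run of Table 3 under the assumptions stated above (our sketch).

STEPS = 200
enabling_points = {i * (i + 1) // 2 - 1 for i in range(1, 30)}  # f(i) for i = 1..29
z = {1: 1, 0: 0}            # initial scheduler state of Table 3
next_proc = 2               # number of the next process to be created
selected = []

for step in range(STEPS):
    enabled = {p for p in z if p != 1}
    if step in enabling_points:
        enabled.add(1)
    i = min(enabled, key=lambda p: (z[p], -p))       # minimal z, highest number
    selected.append(i)
    for p in z:
        if p == i:
            z[p] = 0                                 # z[i] := ?  (resolved to 0)
        elif p in enabled:
            z[p] -= 1                                # enabled but not selected
    if i == 0:                                       # the creation process spawns
        z[next_proc] = 0
        next_proc += 1

assert 1 not in selected                             # process 1 is never activated
assert selected[:9] == [0, 2, 0, 2, 3, 0, 2, 3, 4]   # the stars of Table 3

Reading off the positions at which a given number occurs in selected reproduces the values g(i, j) from the proof.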
In [Bes96], Chapter 6, an alternative scheduler for strong fairness was proposed. The scheduler states, the subset of initial scheduler states, and the scheduling condition SCHi are defined as for FAIR, but the update is quite different:

  UPDATEi ≡ z[i] := z[i] + 1 + ?

which expresses that the value of the scheduling variable z[i] of the selected process i is increased by at least 1. This way, the scheduling variables are always nonnegative, in contrast to FAIR. The update models that the priority of the selected process i is lowered relative to all other processes. In [Bes96] it is shown that this scheduler is, in our terminology, sound and universal for static control. However, by a variant of the counterexample in Table 3, we can show that this scheduler, too, is unsound for dynamic control.
7 Conclusion
In this paper we investigated the schedulers WFAIR and FAIR that stem from [OA88]. These schedulers were designed for weak and strong fairness of static control, i.e., with an arbitrary but fixed number of processes. Here we first investigated these schedulers in the theoretically motivated setting of infinitary control where we have infinitely many processes at the same moment. We showed that WFAIR and FAIR are both universal in this setting (Theorems 1 and 3). However, none of these schedulers is sound for infinitary control as a simple counterexample of an unfair run scheduled by both WFAIR and FAIR shows (Theorems 2 and 4). We then investigated these schedulers in the setting of dynamic control where processes can be created dynamically. Hence at every moment we have finitely, but unboundedly many active processes in the same execution. We showed that WFAIR is sound and universal in this setting (Theorem 5). However, FAIR remains unsound as an elaborate counterexample of an unfair run scheduled by FAIR showed (Theorem 6). We used the notion of bounded control as a vehicle for our formal investigation. Bounded control lies between static and infinitary control: in each execution only a bounded number of processes is active, but there may be no uniform bound for all executions. For bounded control we established an execution invariant of WFAIR (Lemma 1) and deduced the soundness of WFAIR in this setting (Corollary 1). However, the main application of Lemma 1 is in the proof of Theorem 5, showing the soundness of WFAIR for dynamic control. So WFAIR is well suited also in the context of dynamic control whereas FAIR is not. This paper leaves open the question of a new scheduler that is both sound and universal for strong fairness of dynamic control. Thus even after this paper we would not give de Roever’s advice “Do not work on fairness. It has all been solved!”. Acknowledgements. This work was partly supported by the German Research Council (DFG) as part of the Transregional Collaborative Research Center “Automatic Verification and Analysis of Complex Systems” (SFB/TR 14 AVACS, http://www.avacs.org/). We thank Jochen Hoenicke and Andrey Rybalchenko for helpful comments on this paper.
References

[AdB90] America, P., de Boer, F.S.: A proof system for process creation. In: Pnueli, A. (ed.) Proc. of the TC2 Working Conference on Programming Concepts and Methods – Preprint, IFIP, pp. 292–320 (1990)
[AO83] Apt, K.R., Olderog, E.-R.: Proof rules and transformations dealing with fairness. Sci. of Comp. Progr. 3, 65–100 (1983)
[AO97] Apt, K.-R., Olderog, E.-R.: Verification of Sequential and Concurrent Programs, 2nd edn. Springer, Heidelberg (1997)
[Bes96] Best, E.: Semantics of Sequential and Parallel Programs. Prentice-Hall, Englewood Cliffs (1996)
[BSTW06] Bauer, J., Schaefer, I., Toben, T., Westphal, B.: Specification and verification of dynamic communication systems. In: Goossens, K., Petrucci, L. (eds.) Proc. of the 6th Intern. Conf. on Application of Concurrency to System Design (ACSD), Turku, Finland. IEEE, Los Alamitos (2006)
[CC77] Cousot, P., Cousot, R.: Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixedpoints. In: Proc. of 4th Symp. on Principles of Programming Languages (POPL), pp. 238–252. ACM, New York (1977)
[CPR05] Cook, B., Podelski, A., Rybalchenko, A.: Abstraction refinement for termination. In: Hankin, C., Siveroni, I. (eds.) SAS 2005. LNCS, vol. 3672, pp. 87–101. Springer, Heidelberg (2005)
[CPR06] Cook, B., Podelski, A., Rybalchenko, A.: Termination proofs for systems code. In: Proc. of the ACM SIGPLAN Conf. on Programming Language Design and Implementation (PLDI), pp. 415–426. ACM Press, New York (2006)
[CPR07] Cook, B., Podelski, A., Rybalchenko, A.: Proving thread termination. In: Proc. of the ACM SIGPLAN Conf. on Programming Language Design and Implementation (PLDI). ACM Press, New York (2007)
[dB87] de Boer, F.S.: A proof rule for process-creation. In: Wirsing, M. (ed.) Formal Description of Programming Concepts – III. IFIP, pp. 23–50. North-Holland, Amsterdam (1987)
[Dij75] Dijkstra, E.W.: Guarded commands, nondeterminacy and formal derivation of programs. Comm. of the ACM 18, 453–457 (1975)
[Fra86] Francez, N.: Fairness. Springer, New York (1986)
[GFMdR81] Grümberg, O., Francez, N., Makowsky, J.A., de Roever, W.-P.: A proof rule of fair termination of guarded commands. In: de Bakker, J.W., van Vliet, J.C. (eds.) Proc. of the IFIP Symposium on Algorithmic Languages, pp. 339–416. North-Holland, Amsterdam (1981)
[GFMdR85] Grümberg, O., Francez, N., Makowsky, J.A., de Roever, W.-P.: A proof rule of fair termination of guarded commands. Information and Control 66(1/2), 83–102 (1985); Revised version of [GFMdR81]
[Hoa96] Hoare, C.A.R.: Remarks on fairness. Private communication (1996)
[LPS81] Lehmann, D., Pnueli, A., Stavi, J.: Impartiality, justice and fairness: the ethics of concurrent termination. In: Even, S., Kariv, O. (eds.) ICALP 1981. LNCS, vol. 115, pp. 264–277. Springer, Heidelberg (1981)
[MQ08] Musuvathi, M., Quadeer, S.: Fair stateless model checking. In: Proc. of ACM SIGPLAN Conf. on Programming Language Design and Implementation (PLDI) (June 2008); Preprint appeared as Technical Report MSR-TR-2007-149, Microsoft Research, Redmond (December 2007)
[OA88] Olderog, E.-R., Apt, K.R.: Fairness in parallel programs, the transformational approach. ACM TOPLAS 10, 420–455 (1988)
[PA04] Pnueli, A., Arons, T.: TLPVS: A PVS-based LTL verification system. In: Dershowitz, N. (ed.) Verification: Theory and Practice. LNCS, vol. 2772, pp. 598–625. Springer, Heidelberg (2004)
[Plo81] Plotkin, G.D.: A structural approach to operational semantics. Technical Report DAIMI-FN 19, Dept. of Comp. Sci., Aarhus University (1981)
[Plo01] Plotkin, G.D.: Adequacy for algebraic effects. In: Honsell, F., Miculan, M. (eds.) FOSSACS 2001. LNCS, vol. 2030, p. 1. Springer, Heidelberg (2001)
[Plo04] Plotkin, G.D.: A structural approach to operational semantics. J. of Logic and Algebraic Programming 60-61, 17–139 (2004); Revised version of [Plo81]
[PPR05] Pnueli, A., Podelski, A., Rybalchenko, A.: Separating fairness and well-foundedness for the analysis of fair discrete systems. In: Halbwachs, N., Zuck, L.D. (eds.) TACAS 2005. LNCS, vol. 3440, pp. 124–139. Springer, Heidelberg (2005)
[PR04] Podelski, A., Rybalchenko, A.: Transition invariants. In: Proc. of the 19th IEEE Symposium on Logic in Computer Science (LICS), pp. 32–41. IEEE Computer Society, Los Alamitos (2004)
[PR05] Podelski, A., Rybalchenko, A.: Transition predicate abstraction and fair termination. In: Palsberg, J., Abadi, M. (eds.) Proc. of the 32nd ACM SIGPLAN-SIGACT Symp. on Principles of Programming Languages (POPL), pp. 132–144. ACM, New York (2005)
Synchronous Message Passing: On the Relation between Bisimulation and Refusal Equivalence

Manfred Broy

Institut für Informatik, Technische Universität München, D-80290 München, Germany
[email protected]
http://wwwbroy.informatik.tu-muenchen.de
Abstract. Finding congruence relations has proved more difficult for synchronous message passing than for asynchronous message passing. As is well known, trace equivalence of state machines, which is a congruence relation for asynchronous computations, is not a congruence relation for the classical operators of parallel composition as found in process algebras with synchronous message passing. In the literature we find two fundamentally different proposals to define congruence relations for synchronous message passing systems. One uses David Park's bisimulation, employed by Robin Milner for his Calculus of Communicating Systems (CCS), which introduces a class of relations between systems with synchronous message passing; the other one is an equivalence relation, introduced by the denotational semantics given by Tony Hoare for a process algebra like Communicating Sequential Processes (CSP), based on so-called readiness, refusal and failure concepts. In this little note we analyze the question whether the equivalence relation introduced by denotational semantics is in fact a bisimulation.

Keywords: Synchronous Message Passing, Bisimulation, Trace Equivalence, Refusal Equivalence.
1 Introduction

In the following we define for a very simple model of synchronous message passing both bisimulation equivalence and an equivalence based on the denotational semantics, working with readiness, refusal and failure sets. As is well known, trace equivalence is not a congruence relation for the operators of synchronous message passing. David Park's bisimulation defines a class of congruence relations. We show that actually the equivalence relation introduced by denotational semantics, based on so-called refusal and failure concepts, is more abstract than any bisimulation relation. This shows that bisimulations do not lead to fully abstract models. We introduce a very simple model of synchronous message passing by labeled state transition systems. The critical concept of such systems, which makes them more difficult to handle, is the fact that they synchronize on actions: they select actions, which they can do together, according to the handshake concept. The concept of message synchronous communication in distributed systems (also called rendezvous or handshake communication) is found in process algebras like
CCS and CSP and incorporated in practical programming languages like Ada and Lotos. There are two standard ways presented in the literature to derive a compositional denotational style semantics for such process algebras that abstracts from low level operational semantics, in general given by rewrite rules. These techniques are:

• bisimulation,
• readiness sets, failures, and refusal sets.

Bisimulation is a concept due to David Park. It has been applied by Robin Milner to define equivalence relations that are congruence relations for the semantic functions representing the language constructs on top of a structured operational semantics in his calculus of communicating systems called CCS. Failures and refusals are a concept developed by Tony Hoare and his colleagues in Oxford to give a denotational meaning to his process algebra of communicating sequential processes called CSP.
2 Labeled State Transition Systems

We deal with labeled state transition systems. They are described by state machines with state transitions labeled by actions.

2.1 Labeled State Transition Machines

Let A be a set of labels, also called actions, and let Σ be a set of states. A labeled state transition system is given by a relation

  Δ ⊆ Σ × A × Σ

For (σ1, a, σ2) ∈ Δ we also write

  σ1 −a→ σ2
We assume that the set of actions A can be decomposed into two sets Asyn and Aasyn such that A = Asyn ∪ Aasyn and Asyn ∩ Aasyn = ∅. Asyn is called the set of synchronous actions and Aasyn is called the set of asynchronous actions. Given a transition system Δ and an initial state α ∈ Σ we call the pair (Δ, α) a state machine with labeled transitions. A state is called terminal for a labeled state transition system Δ, if it does not have a successor state in Δ. We define

  σ↓Δ ≡ ¬∃ σ' ∈ Σ, a ∈ A: σ −a→ σ'
A terminal state denotes a state in which no further action is enabled and hence the state machine has terminated. Note that a deadlock is also an instance of a terminal state. In principle, we may specify a subset of the terminal states as regularly terminated states and the others as deadlock states.

2.2 Traces of Labeled State Transition Systems

The transition relation Δ can easily be extended from actions to finite sequences of actions, called traces, in the following obvious way. A finite trace is a sequence
s ∈ A∗ of actions. An action labeled state transition relation specifies for each state also a trace of actions. We write

  σ1 −s→ σ2

to express that the sequence s of actions can be performed in state σ1 and leads to state σ2. Let < > denote the empty sequence, < a > the one-element sequence, and s1ˆs2 the concatenation of the sequences s1 and s2. The relation on traces is inductively defined as the least relation that fulfils the following formulas:

  σ1 −< >→ σ1
  σ1 −a→ σ2 ⇒ σ1 −< a >→ σ2
  σ1 −s1→ σ2 ∧ σ2 −s2→ σ3 ⇒ σ1 −s1ˆs2→ σ3
Based on this extension of the state transition relation to sequences of actions we specify the set of traces for a given state σ ∈ Σ and for a given state transition relation Δ ⊆ Σ × A × Σ. We define the set ptrace(σ) of partial traces as follows:

  ptrace(σ) ≡ {s ∈ A∗: ∃ σ' ∈ Σ: σ −s→ σ'}

and the set of total traces ttrace(σ) by

  ttrace(σ) ≡ {s ∈ A∗: ∃ σ' ∈ Σ: σ −s→ σ' ∧ σ'↓Δ}
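For finite transition relations the two trace sets can be enumerated up to a bounded length. The following Python sketch is our own encoding (Δ as a set of (state, action, state) triples); it only approximates ttrace(σ), since total traces longer than the bound are missed.

def ptrace(delta, s, depth):
    """All action sequences of length <= depth executable from s."""
    traces = {()}
    frontier = {((), s)}
    for _ in range(depth):
        frontier = {(t + (a,), q) for (t, p) in frontier
                    for (p2, a, q) in delta if p2 == p}
        traces |= {t for (t, _) in frontier}
    return traces

def ttrace(delta, s, depth):
    """Traces of length <= depth that end in a terminal state."""
    def terminal(q):
        return not any(p == q for (p, a, r) in delta)
    result = set()
    frontier = {((), s)}
    for _ in range(depth + 1):
        result |= {t for (t, q) in frontier if terminal(q)}
        frontier = {(t + (a,), q) for (t, p) in frontier
                    for (p2, a, q) in delta if p2 == p}
    return result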
Note that obviously ttrace(σ) ⊆ ptrace(σ). Two states are trace equivalent if their sets of traces coincide. We specify the relation as follows:

  σ1 ∼trace σ2 ≡ (ttrace(σ1) = ttrace(σ2) ∧ ptrace(σ1) = ptrace(σ2))

Two states σ1 and σ2 are called partially trace equivalent if ptrace(σ1) = ptrace(σ2). The trace equivalence is extended to state machines Mi = (Δi, αi) by the specification

  M1 ∼trace M2 ≡ α1 ∼trace α2

We can also define the set of infinite traces for a state. This leads into issues of fairness. For our purpose, however, the set of finite traces is sufficient.

2.3 Composition of Labeled State Transition Machines

Given two labeled state transition machines M1 = (Δ1, α1) with state space Σ1 and M2 = (Δ2, α2) with state space Σ2, which use the set of actions A decomposed into two sets Asyn and Aasyn such that A = Asyn ∪ Aasyn and Asyn ∩ Aasyn = ∅, we define a machine M = (Δ, α) with the same set of actions A and the state space Σ as follows:
  Σ = Σ1 × Σ2 with initial state α = (α1, α2)
We define the composed state machine by

  Δ = Δ1 || Δ2 and M = M1 || M2

where for synchronous actions b ∈ Asyn

  (σ, b, σ') ∈ Δ ≡ ∃ (σ1, b, σ'1) ∈ Δ1, (σ2, b, σ'2) ∈ Δ2: σ = (σ1, σ2) ∧ σ' = (σ'1, σ'2)

and for asynchronous actions c ∈ Aasyn

  (σ, c, σ') ∈ Δ ≡ ∃ (σ1, c, σ'1) ∈ Δ1, σ2 ∈ Σ2: σ = (σ1, σ2) ∧ σ' = (σ'1, σ2)
                 ∨ ∃ σ1 ∈ Σ1, (σ2, c, σ'2) ∈ Δ2: σ = (σ1, σ2) ∧ σ' = (σ1, σ'2)

According to this definition asynchronous steps are done in an interleaving mode. This way we get a composed machine that exhibits asynchronous as well as synchronous transitions.
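For finite transition relations this composition can be spelled out directly. The sketch below uses our own encoding of Δ1 and Δ2 as sets of (state, action, state) triples and of Asyn as the set syn; it is not taken from the paper.

def compose(delta1, states1, delta2, states2, syn):
    delta = set()
    for (s1, a, t1) in delta1:
        if a in syn:                                 # synchronous: both machines move
            for (s2, b, t2) in delta2:
                if b == a:
                    delta.add(((s1, s2), a, (t1, t2)))
        else:                                        # asynchronous: left machine moves alone
            for s2 in states2:
                delta.add(((s1, s2), a, (t1, s2)))
    for (s2, a, t2) in delta2:
        if a not in syn:                             # asynchronous: right machine moves alone
            for s1 in states1:
                delta.add(((s1, s2), a, (s1, t2)))
    return delta

The initial state of the composed machine is the pair (α1, α2), as in the definition above.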
3 Congruence Relations: Action Systems in CSP and CCS

In CCS and CSP we consider a state transition system with a state set Σ and an action set A as well as a transition relation for states σ1, σ2 ∈ Σ and actions a ∈ A written in the form:

  σ1 −a→ σ2
This proposition expresses that in the state σ1 the action a can be executed, leading to the state σ2. For simplicity, we do not consider ε-steps (silent steps) here but rather assume that all steps in a system are labeled by visible actions. This way we avoid some intricate problems with diverging computations, which are infinite sequences of computations where all transition steps are labeled by ε. A discussion of divergence would make our discussion unnecessarily more complicated.

3.1 Equivalence and Congruence Relations

We consider one composition operator for state machines only, namely parallel composition. It seems to be the most interesting one. In process algebras like CSP and CCS one finds many more operators that form an algebra. CCS and CSP allow us to write terms, formed by the operators, that are used to denote the states of the state transition system described. Structured operational semantics is used to specify the state transitions for the terms of the process algebras. For our purpose, however, it is enough to consider only one operator, namely parallel composition for state machines. A congruence relation is an equivalence relation ∼ with the property

  M1 ∼ M3 ∧ M2 ∼ M4 ⇒ (M1 || M2) ∼ (M3 || M4)

Trace equivalence is not a congruence, as is well known from the work of Hennessy and Milner (see [2]). This is illustrated by the example given in Fig. 1.
Fig. 1. Example of Three Machines M1, M2 and M3
Fig. 1 shows three state machines. All actions are assumed to be synchronous. The machines M1 and M2 are trace equivalent. If we compose M1 with M3 and M2 with M3 we get the machines (M1 || M3) and (M2 || M3), whose sets of total traces differ. This shows that trace equivalence ∼trace is not a congruence for the parallel composition operator ||.

3.2 Bisimulation Equivalence

The idea of bisimulation was introduced by David Park (see [8]). A relation ~ ⊆ Σ × Σ on the state set Σ is called a bisimulation if there exists a relation ≤ (actually a preorder) such that for all states σ1, σ2 ∈ Σ we have
  σ1 ~ σ2 ≡ σ1 ≤ σ2 ∧ σ2 ≤ σ1

where ≤ is a relation on states with the following property:

  σ1 ≤ σ2 ≡ σ1 = σ2 ∨ ∀ a ∈ A, σ3 ∈ Σ: σ1 −a→ σ3 ⇒ ∃ σ4 ∈ Σ: σ2 −a→ σ4 ∧ σ3 ≤ σ4

We call the relation ≤ a bisimulation preorder or a simulation, for short. To define the concept of bisimulation and to illustrate the idea of refusal sets we introduce the notion of the set of enabled actions. For each state σ ∈ Σ we define its set of enabled actions enabled(σ) ⊆ A as follows:

  enabled(σ) = {a ∈ A: ∃ σ' ∈ Σ: σ −a→ σ'}

For each state σ ∈ Σ and each trace t ∈ A∗ the set Rt(σ), which represents the sets of enabled actions after trace t, is defined by the equation

  Rt(σ) = {enabled(σ'): σ −t→ σ'}

Rt(σ) denotes the set of readiness sets for all the states σ' reachable from the state σ by the trace t.
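On a finite labeled state transition system, enabled(σ), Rt(σ) and the greatest relation ≤ satisfying the property above can be computed directly. The Python sketch below is our own encoding (Δ as a set of (state, action, state) triples); the greatest fixpoint is obtained by starting from all pairs of states and removing violating pairs, keeping the reflexive pairs that the first disjunct of the definition admits.

def enabled_set(delta, s):
    return {a for (p, a, q) in delta if p == s}

def post(delta, s, a):
    return {q for (p, b, q) in delta if p == s and b == a}

def simulation(delta, states):
    """Greatest relation <= satisfying the property displayed above."""
    rel = {(s1, s2) for s1 in states for s2 in states}
    changed = True
    while changed:
        changed = False
        for (s1, s2) in list(rel):
            ok = all(any((t1, t2) in rel for t2 in post(delta, s2, a))
                     for a in enabled_set(delta, s1)
                     for t1 in post(delta, s1, a))
            if not ok and s1 != s2:
                rel.discard((s1, s2))
                changed = True
    return rel

def readiness(delta, s, t):
    """R_t(s): readiness sets of all states reachable from s via the trace t."""
    current = {s}
    for a in t:
        current = {q for p in current for q in post(delta, p, a)}
    return {frozenset(enabled_set(delta, p)) for p in current}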
We specify a relation ∠ on sets of sets of actions (in particular for sets of readiness sets as introduced). It is specified by the following formula:

  S1 ∠ S2 ≡ (S1 = ∅) ∨ (S2 ≠ ∅ ∧ ∀ R ⊆ A: (∃ R2 ∈ S2: R2 ∩ R = ∅) ⇒ (∃ R1 ∈ S1: R1 ∩ R = ∅))

The basic idea of this definition is that if S1 is a set of readiness sets that can be reached by some trace in a state machine, then in one of the states reached by that trace a given set R ⊆ A of offered synchronous actions can be refused if R = ∅ or there is a set R1 ∈ S1 such that R1 ∩ R = ∅. In terms of refusals this formula expresses that whenever the set S2 can refuse an offer given by a set of potential actions R ⊆ A, so can S1. If the sets S1 and S2 are finite, then they are refusal equivalent if their sets of inclusion-minimal elements coincide. Another way to specify refusal equivalence is to define the "inclusion closure" incclose(S1), adding for each set all its supersets:

  incclose(S1) ≡ {R ⊆ A: ∃ R1 ∈ S1: R1 ⊆ R}

Then S1 and S2 are refusal equivalent if incclose(S1) = incclose(S2). The refusal equivalence ≈, as it is introduced by Tony Hoare to specify a denotational model for CSP, is defined by the following formula (let σ1, σ2 ∈ Σ):

  σ1 ≈ σ2 ≡ σ1 ⊑ σ2 ∧ σ2 ⊑ σ1

where ⊑ is a relation on states defined as follows:

  σ1 ⊑ σ2 ≡ ∀ t ∈ A*: Rt(σ1) ∠ Rt(σ2)
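For a finite alphabet and finite sets of readiness sets, refusal equivalence of two such sets can be checked with the inclusion-closure characterisation quoted above. The sketch below is ours; the sets S1 and S2 anticipate the ones computed for the example of Fig. 2 below.

from itertools import chain, combinations

def incclose(S, A):
    """All subsets R of the alphabet A that include some member of S."""
    all_subsets = chain.from_iterable(combinations(sorted(A), k)
                                      for k in range(len(A) + 1))
    return {frozenset(R) for R in all_subsets
            if any(set(R1) <= set(R) for R1 in S)}

def refusal_equivalent(S1, S2, A):
    return incclose(S1, A) == incclose(S2, A)

A = {"a", "b", "c"}
S1 = [{"b"}, {"c"}]
S2 = [{"b"}, {"c"}, {"b", "c"}]
assert refusal_equivalent(S1, S2, A)    # adding the union {b, c} changes nothing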
In the following we give an example of two processes that are refusal equivalent but not bisimulation equivalent. The two processes are described in Fig. 2. We do not have the validity of the bisimulation preorder relation σ2 ≤ σ1 in this case, since

  σ2 −a→ σ6

but

  ¬∃ σ: σ1 −a→ σ ∧ σ6 ≤ σ

Note that we have neither σ6 ≤ σ3 nor σ6 ≤ σ4. However, the two processes shown in Fig. 2 are refusal equivalent. We show that by proving the formula

  σ2 ⊑ σ1 ∧ σ1 ⊑ σ2

We calculate the two sets of enabled actions:

  S1 = {enabled(σ): σ1 −a→ σ} = {{b}, {c}}
  S2 = {enabled(σ): σ2 −a→ σ} = {{b}, {c}, {b, c}}
Fig. 2. Example of Two Refusal Equivalent but not Bisimulation Equivalent Processes
Trivially we have S2 ∠ S1 as well as S1 ∠ S2, since whenever a set R is disjoint from some set R1 ∈ S1 we also find a set R2 ∈ S2 disjoint from R, and vice versa. In fact, if a set S of readiness sets contains a set that is the union of two of the elements of S, it does not contribute to the "ability" of S to refuse an offer. This is captured in the theory of Tony Hoare by the fact that adding the union of two readiness sets to a set of readiness sets yields a refusal equivalent set of readiness sets. This shows that the refusal equivalence is not a bisimulation. The second question is whether bisimulation equivalence implies refusal equivalence. This is true as the following theorem shows.

Theorem. Bisimulation equivalence induces refusal equivalence.

Proof: We prove for arbitrary states σ1 and σ2 the proposition
  σ1 ≤ σ2 ⇒ σ1 ⊑ σ2

by proving (for all traces t ∈ A∗) the formula

  σ1 ≤ σ2 ⇒ {enabled(σ): σ1 −t→ σ} ∠ {enabled(σ): σ2 −t→ σ}

We observe:

  σ1 ≤ σ2 ⇒ {enabled(σ1)} ∠ {enabled(σ2)}

This proposition is straightforward since by the definition of ≤ we obtain the relation enabled(σ1) ⊆ enabled(σ2), which implies {enabled(σ1)} ∠ {enabled(σ2)}. A straightforward proof by induction on the length of t shows
  σ1 ≤ σ2 ∧ σ1 −t→ σ3 ⇒ ∃ σ4: σ2 −t→ σ4 ∧ σ3 ≤ σ4

We obtain from σ3 ≤ σ4 the proposition

  enabled(σ3) ⊆ enabled(σ4)

This proves the refusal equivalence of the states σ1 and σ2 under the assumption that they are bisimilar. As a result we observe that refusal equivalence is more abstract than every bisimulation relation.
4 Concluding Remarks

It is one of the weak and unsatisfying sides of our discipline that in the more than thirty years of creating theories and models for distributed systems there is not enough work to relate the different approaches and combine them into a comprehensive integrated theory. Much more work has been performed in creating yet another model than in understanding how to relate and compare existing approaches. Only when we understand how the various concepts interrelate can we make progress towards a unifying theory of modeling and engineering distributed systems. In the case of CCS and CSP it is, in particular, surprising when looking at the schools of Oxford and Edinburgh of the late seventies and throughout the eighties. There two very similar concepts have been introduced, namely the process algebras CCS and CSP. Nevertheless, although there is plenty of work on a semantic foundation of CCS as well as on the semantic foundation of CSP, there is not much work on how to relate both approaches (an exception is [1]). CCS traditionally is based much more on an operational semantics in terms of labeled transitions and then on the definition of a bisimulation that is based on the operational semantics, while CSP follows the Oxford idea of denotational semantics, giving a denotational meaning to CSP terms.
Acknowledgements

It is a pleasure for me to thank Leonid Kof for helpful remarks on draft versions of the manuscript.
References

1. Gardiner, P.: Power simulation and its relation to traces and failures refinement. Theor. Comput. Sci. 309(1-3), 157–176 (2003)
2. Hennessy, M.C.B., Milner, R.: On observing nondeterminism and concurrency. In: de Bakker, J.W., van Leeuwen, J. (eds.) ICALP 1980. LNCS, vol. 85, pp. 299–309. Springer, Heidelberg (1980)
3. Hoare, C.A.R.: Communicating sequential processes. Communications of the ACM 21(8), 666–677 (1978)
4. Hoare, C.A.R.: Communicating Sequential Processes. Prentice Hall, Englewood Cliffs (1985)
5. Brookes, S., Hoare, C.A.R., Roscoe, A.W.: A Theory of Communicating Sequential Processes. Journal of the ACM 31(3), 560–599 (1984)
6. Jean, D.I., Barnes, J.G.P., Firth, R.J., Woodger, M.: Rationale for the Design of the Ada® Programming Language (1986)
7. Milner, R.: Communication and Concurrency, 9th edn. Prentice Hall, Englewood Cliffs (1989)
8. Park, D.: Concurrency and Automata on Infinite Sequences. In: Deussen, P. (ed.) Proceedings of the 5th GI-Conference Karlsruhe. Theoretical Computer Science, vol. 104, pp. 167–183. Springer, Heidelberg (1981)
9. van Eijk, P.H.J., et al. (eds.): The Formal Description Technique LOTOS. North-Holland (1989)
10. de Roever, W.P., de Boer, F., Hannemann, U., Hooman, J., Lakhnech, Y., Poel, M., Zwiers, J.: Concurrency Verification: Introduction to Compositional and Noncompositional Methods. Cambridge University Press, Cambridge (2001)
Reasoning about Recursive Processes in Shared-Variable Concurrency

F.S. de Boer

CWI, Amsterdam, The Netherlands
Leiden University, The Netherlands
[email protected]
Abstract. In this paper an assertional proof method is introduced which captures concurrent systems consisting of dynamically created recursive processes which interact via shared-variables. The main contribution is a generalization of the Owicki & Gries proof method and a formal justification by soundness and completeness.
1 Introduction
In [4], a sound and complete proof method for multi-threaded Java programs is introduced. The main contribution of this paper is the introduction of concurrent systems that consist of recursive first-order processes which can be created dynamically and interact via shared variables. The behaviour of these processes is described by a program which consists of a set of mutually recursive procedure definitions. These procedures are defined in terms of the standard sequential control structures. Processes can be dynamically created by calling a procedure without waiting for its return. This model of concurrency allows us to abstract from object-orientation and to highlight the main ideas underlying the proof method in [4] via shared variables. Our proof method consists of annotating each recursive process with assertions which express certain properties of the shared variables and its own local variables. Such an annotated process is locally correct if certain verification conditions hold which characterize its sequential (and deterministic) flow of control. On the other hand, reasoning about the interaction between processes involves a global interference freedom test. This test is modeled after the corresponding test in [7] for concurrent systems consisting of a statically fixed number of processes which interact via shared variables. The main contribution of this paper is the generalization of the basic method described in [7] to recursive processes and dynamic process creation. This paper also includes a discussion of soundness and completeness.
2 Recursive Processes with Shared Variables
A program consists of a set of (parameterized) first-order procedure definitions

  p(u1, . . . , un) = S
Here p denotes the name of the procedure, u1, . . . , un are its (formal) parameters, and S denotes its body. This paper abstracts from the concrete syntax of S, which describes a sequential and deterministic behavior. Basic statements include the usual assignments and (recursive) procedure calls

  [create] p(e1, . . . , en)

Here e1, . . . , en is the list of actual parameters which are used to instantiate the formal parameters of p. The keyword 'create' is optional. It indicates the dynamic creation of a new asynchronous process which starts executing the call p(e1, . . . , en) in parallel with the already executing processes. The process executing this create statement continues its own execution, i.e., it does not wait for the call p(e1, . . . , en) to return. On the other hand, the execution of a call p(e1, . . . , en) itself consists of adding it to the call stack of the executing process. We assume that the formal parameters of a procedure are read-only and that the actual parameters only contain local variables.

Operationally, a process is modeled as a stack θ of closures, i.e., pairs (S, τ) consisting of a statement S and a local state τ specifying the values of the local variables of S. The process itself is executing the closure on top of the call stack, which is also called its active closure. All other closures represent pending calls. A 'snapshot' (or global configuration) Π of a concurrent execution consists of a multiset of processes Θ and a global assignment σ of the shared variables. A global transition relation Π → Π′ describes the execution of an atomic statement by one process.

Procedure call. The execution of a call p(e1, . . . , en) generates an active closure which consists of the body of p and the local state obtained by assigning the values of the actual parameters to the formal parameters of p. This active closure is pushed onto the stack of the calling process. Formally, this is described by the transition

  {θ · (p(e1, . . . , en); S, τ)} ∪ Θ, σ → {θ · (S, τ) · (S′, τ′)} ∪ Θ, σ

where p(u1, . . . , un) = S′ and τ′(ui) = τ(ei), for i = 1, . . . , n.

Return. The return of a recursive procedure call is described by the transition

  {θ · (S, τ) · (E, τ′)} ∪ Θ, σ → {θ · (S, τ)} ∪ Θ, σ

where E denotes termination.

Process creation. In case of the execution of a create statement the generated active closure is added as a new process. Formally, this is described by the transition

  {θ · (create p(e1, . . . , en); S, τ)} ∪ Θ, σ → {θ · (S, τ)} ∪ {(S′, τ′)} ∪ Θ, σ

where p(u1, . . . , un) = S′ and τ′(ui) = τ(ei), for i = 1, . . . , n.
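The three transition rules can be read as operations on simple Python data: a process is a list (stack) of closures and a configuration is a list of such processes together with a global state. The encoding below, including the names PROGRAM, proc_call, proc_return and proc_create, is ours; the statement component of a closure is left abstract, just as the paper abstracts from the concrete syntax of procedure bodies.

PROGRAM = {}    # procedure name -> (list of formal parameters, body)

def proc_call(process, name, actuals):
    """Procedure call: push the closure (body of p, formals bound to actuals)."""
    formals, body = PROGRAM[name]
    _, tau = process[-1]                        # local state of the active closure
    tau_new = {u: tau[e] for u, e in zip(formals, actuals)}
    process.append((body, tau_new))

def proc_return(process):
    """Return: pop the terminated active closure from the call stack."""
    process.pop()

def proc_create(processes, process, name, actuals):
    """Process creation: the new closure becomes a fresh process of its own."""
    formals, body = PROGRAM[name]
    _, tau = process[-1]
    tau_new = {u: tau[e] for u, e in zip(formals, actuals)}
    processes.append([(body, tau_new)])

A full interpreter would, in addition, have to execute the assignments and control structures inside the bodies and to update the global assignment of the shared variables.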
Assertions. Assertions are used to annotate the interleaving points of statements. Assertions constrain the values of the global and local variables. An assertion P is evaluated in a configuration. A configuration γ consists of a global assignment σ of the shared variables and a local state τ. By γ |= P is denoted that the configuration γ satisfies the assertion P. An assertion is valid, denoted by |= P, if γ |= P for every configuration γ. For a statement S, WP(S, P) denotes the weakest precondition which guarantees that every terminating execution of S satisfies P.

Example: parallel prime sieve. This section concludes with a parallel program for generating primes. This program includes a shared variable 'nat' for generating the natural numbers. The process 'sieve(prime)' checks whether 'nat' is divisible by its formal parameter 'prime', which will denote a prime number. Synchronization between sieve processes is managed by the shared variable 'found', which stores the set of found prime numbers, and the shared variable 'passed', which stores those numbers for which 'nat' has passed the test successfully.

  sieve(prime) =
    old := nat;
    if prime | old then
      synchronized begin
        if old = nat then nat := nat + 1; passed := ∅ fi
      end
    else
      synchronized begin
        if old = nat then passed := passed ∪ {prime} fi;
        if passed = found then
          found := found ∪ {nat}; passed := ∅;
          create sieve(nat); nat := nat + 1
        fi;
      end
    fi;
    sieve(prime)

The local variable 'old' is used to 'freeze' the value of 'nat'. The keyword 'synchronized' indicates a statement which cannot be interleaved by any other synchronized process. If 'old' does not pass the test, 'nat' can be incremented, only
if 'old' still holds the current value of 'nat'. A corresponding new round is initialized by setting the set 'passed' to the empty set. If 'old' does pass the test, this is recorded by adding 'prime' to the set 'passed', only if 'old' still holds the current value of 'nat'. Subsequently, it is tested whether the two sets 'passed' and 'found' are equal, where 'found' stores the found prime numbers. If so, we have got another prime and a corresponding sieve process is created. Additionally, the search for a new prime is initiated by incrementing 'nat', updating 'found' and resetting 'passed'. The execution of the program starts with the initialization

  nat := 3; found := ∅; passed := ∅

and the creation of the initial sieve process

  sieve(2)

Clearly, the shared variables 'found' and 'passed' can be replaced by corresponding counters. However, the additional information allows us to express correctness by the simple global invariant

  found ⊆ Primes

which states that 'found' only stores prime numbers ('Primes' denotes the set of prime numbers). In order to establish this invariant the following additional information is needed:

  Max(found) ≤ nat ≤ Nextpr(found)

which states that 'nat' is in between the greatest prime number found and the next prime number. Furthermore the following information about the sieve process itself is needed:

  passed ⊆ {n : n | nat} ∩ found

Using the proof method described in the next section it is a straightforward exercise to establish the global invariant and the local invariant

  prime ∈ found
3 The Verification Method
The verification method introduced in this section is defined in terms of proof-outlines. A proof-outline is a correctly annotated program. An annotation of a program associates with every sub-statement S (appearing in a process body) a precondition Pre(S) and a postcondition Post(S). Validation of verification conditions establishes the correctness of an annotated program. First the verification condition which establishes that assertions are interference free is introduced. Then the verification conditions which establish that assertions correctly specify the sequential control flow are discussed.
Interference Freedom Test

In order to characterize the interference between different processes it is assumed that each process has a distinguished local variable 'id' which is used for its identification. How unique process id's are generated is described in the next section. An assertion P is defined to be invariant over the execution of a statement S by a different process if the following verification condition holds:

  |= (P ∧ Pre(S) ∧ id ≠ id′) → WP(S, P)

For notational convenience, it is implicitly assumed that the local variables of P and Pre(S) are named apart by 'priming' the local variables of P. Note that this includes the distinguished local variable 'id', which is thus renamed in P by 'id′'. The above verification condition models the situation that the execution of the process denoted by the fresh local variable 'id′' is interleaved by the execution of the statement S by the process denoted by the distinguished local variable 'id'. That we are dealing with two different processes is simply described by the disequality id ≠ id′.

Example 1. Reasoning about synchronized statements in general requires the introduction of a shared variable 'lock' which stores the identity of the process that owns the lock. As in Java, the execution of a synchronized statement cannot be interleaved by the execution of a synchronized statement by another process. That is, every synchronized statement is characterized by the invariant

  id = lock

With this additional information annotations of the synchronized statements are trivially free from interference: for any assertions P and Pre(S) such that P implies id′ = lock and Pre(S) implies id = lock, respectively, it is the case that

  |= (P ∧ Pre(S) ∧ id ≠ id′) → WP(S, P)

because the antecedent is inconsistent.

Local Correctness

An annotated program is locally correct if the verification conditions hold which characterize the sequential flow of control within one process. We have the standard verification conditions which characterize control structures like sequential composition, choice, and iteration constructs. The discussion is restricted to procedure calls p(e1, . . . , en) where the actual parameters e1, . . . , en are not affected by the call itself (i.e., only contain local variables). Furthermore, it is assumed that the formal parameters
of the process p are read-only. Given such a call, parameter passing is simply modeled by the sequence of assignments

  u1 := e1; . . . ; un := en

Below this sequence of assignments is abbreviated by ū := ē. Let S denote the body of the process p. The following verification condition validates the precondition P of the call p(e1, . . . , en):

  |= P → WP(ū := ē, Pre(S))

Here it is assumed that the local variables of the precondition Pre(S) of the process body only include the formal parameters of the process p and the distinguished local variable 'id'. Note that the local variable 'id' thus may occur both in the precondition P of the call and the precondition Pre(S) of the process body. We do not need to distinguish these different occurrences because the local variable 'id' in both preconditions denotes the same process executing the call.

Example 2. Consider the precondition

  lock = id ∧ balance ≥ amount

of a call withdraw(amount) of a synchronized procedure 'withdraw'. This precondition states that the executing process denoted by 'id' owns the lock (and that the value of the shared variable 'balance' is greater or equal to that of the local variable 'amount'). The above precondition implies

  WP(u := amount, lock = id ∧ balance ≥ u)

Note that the process therefore still owns the lock when executing the body of 'withdraw'.

Furthermore, the following verification condition validates the postcondition Q of a call p(e1, . . . , en):

  |= WP(ū := ē, Post(S)) → Q

As above, it is assumed that the local variables of the postcondition Post(S) only include the formal parameters of the process p and the distinguished local variable 'id'. Note that since the formal parameters are read-only and the actual parameters are not affected by the call itself, we can restore the values of the formal parameters by the assignment ū := ē.

Example 3. Consider the postcondition

  lock = id ∧ balance = old(balance) − u
of the body of the synchronized process 'withdraw'. The expression 'old(balance)' denotes the 'old' value of the (shared) variable, i.e., its value at 'call time'. We have that

  WP(u := amount, lock = id ∧ balance = old(balance) − u)

reduces to the postcondition

  lock = id ∧ balance = old(balance) − amount

of the corresponding call. This postcondition thus implies that the executing process keeps the lock.

Auxiliary Variables

In general, to prove the correctness of a program we need auxiliary variables which are used to describe certain properties of the flow of control.

Example 4 (Mutual exclusion). Consider the statement

  P(sem); CS; V(sem)

where sem is a boolean variable representing a binary semaphore. In order to prove that no two processes are executing the critical section CS we introduce an auxiliary variable in which stores the identity of the process that holds the semaphore. We introduce the value nil to indicate that the semaphore is free.

  P(sem); in := id; CS; V(sem); in := nil

Mutual exclusion is expressed by the assertion MUTEX defined by

  (in = nil) = sem

The assertion MUTEX is introduced as an invariant of the above extended statement which annotates all its interleaving points, that is, the start and end of the statement itself, and the start and end of the critical section CS. We have the following locally correct proof-outline.

  {MUTEX}
  P(sem); in := id;
  {MUTEX ∧ in = id}
  CS;
  {MUTEX ∧ in = id}
  V(sem); in := nil
  {MUTEX}

To establish interference freedom we have to prove

  {MUTEX ∧ in = id′ ∧ id ≠ id′} P(sem); in := id {in = id′}
and

  {MUTEX ∧ in = id′ ∧ in = id ∧ id ≠ id′} V(sem); in := nil {in = id′}

The first correctness formula holds because MUTEX ∧ in = id′ implies sem = false. The second correctness formula holds because in = id′ ∧ in = id ∧ id ≠ id′ is clearly inconsistent.

Auxiliary variables are also used to describe the semantics of creating a process and high-level synchronization constructs.

Creating a process. In order to describe the specific semantics of process creation a shared variable 'c' is introduced which counts the number of created processes. A create statement

  create p(e1, . . . , en)

is transformed by

  c := c + 1; create p(e1, . . . , en, c)

where the additional actual parameter corresponds with the distinguished local variable 'id' exported as a formal parameter of the process p. Given a precondition P of the above create statement, the precondition Pre(S) of the body S of the process p is validated by the verification condition

  |= P → WP(ū := ē, Pre(S))

where ū := ē contains the assignment 'id := c'. Note that in order to avoid name clashes we have to rename the distinguished local variable id in P because now it denotes a different process. On the other hand, the postcondition Q of a create statement is simply validated by the verification condition

  |= P → Q

where P denotes its precondition.

Synchronized statements. In order to describe the specific semantics of a synchronized statement

  synchronized S

an auxiliary shared variable 'lock' is introduced which stores the identity of the process owning the lock. Since a process releases the lock only when it has finished executing its synchronized statements, we also need an auxiliary shared variable that counts the number of active synchronized statements in a process. Every synchronized statement S is then prefixed with an await statement

  await lock = id ∨ lock = 0 do count := count + 1; lock := id od
The boolean condition states that either the process already owns the lock or the lock is not yet initialized (i.e., is 'free'), assuming that zero is not used as a process id. On the other hand, every synchronized statement ends with the execution of the await statement

await true do count:=count-1; if count=0 then lock:=0 fi od

The notion of proof outlines is extended with the following standard verification condition for await statements

|= (P ∧ b) → WP(S, Q)

where P and Q denote the precondition and the postcondition of the await statement, b denotes its boolean condition, and S denotes its main body. Since the evaluation of the boolean guard of an await statement and the execution of its body are assumed to be atomic, we only need to apply the interference freedom test to the pre- and postcondition of the await statement itself.

Example 5 (Wait and notify). As in Java, a process which owns the lock can release it by executing the 'wait' statement. It then has to wait until another process owning the lock calls the 'notify' statement. In order to describe the semantics of this mechanism, an auxiliary shared variable 'wait' is introduced which is used to store the set of processes waiting for the lock. The semantics of the wait statement is then described by the following code

if lock=id then lock:=0; wait:=wait∪{id} else abort fi;
await lock=0 ∧ id ∉ wait do lock:=id od

This statement first checks whether the process owns the lock. If so, the process simply releases the lock and is added to the set of waiting processes. If the process does not own the lock, execution is aborted. The subsequent await statement waits for the lock to be free and for the process to be removed from the set of waiting processes. The semantics of the notify statement, which involves an arbitrary choice of the process to be notified, is described by

if lock=id then wait:=wait\{any(wait)} else abort fi
Here 'any' is a set operation that satisfies the axiom

any(wait) ∈ wait

Logically it is a 'Skolem' function.

In general, auxiliary variables can be introduced as local variables and as shared variables. Assignments to auxiliary variables may not affect the flow of control of the given program. It is important to note that auxiliary variables are also allowed as additional formal parameters of process definitions. Such auxiliary variables can be used to reason about invariance properties of process calls.

Example 6 (Factorial function). Consider, for example, the following recursive process for computing the factorial function.

fac() = if x>0 then x:=x-1; fac(); x:=x+1; y:=y*x else y:=1 fi

Here 'x' and 'y' are shared variables. Upon termination, 'y' stores the factorial of the value stored by 'x'. In order to prove that the value of 'x' upon termination equals its old value, a formal parameter 'u' is introduced and the process is extended to

fac(u) = if x>0 then x:=x-1; this.fac(u-1); x:=x+1; y:=y*x else y:=1 fi

We can then express the above invariance property by introducing the assertion 'u=x' as both the precondition and the postcondition of the process body. This specification of the process body can be validated by introducing 'u=x+1' as the precondition and the postcondition of the recursive call. We have the following trivial verification conditions for process invocation and return:

|= u=x+1 → u-1=x   and   |= u-1=x → u=x+1

where the assertion 'u-1=x' results from replacing the formal parameter 'u' in 'u=x' by the actual parameter 'u-1'. It is of interest to note that the use of auxiliary variables as additional formal parameters in reasoning about invariance properties of recursive calls differs from the standard Hoare logic of recursive procedures, which requires certain extra axioms and rules (see [1]).
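To make Example 6 concrete for readers who prefer mainstream code, the following Java-style sketch (not taken from the paper; class and method names are hypothetical) mirrors the instrumented process: the extra parameter u has no computational effect and only records the value of x at call time, which is exactly what makes the invariance property u=x expressible as a pre- and postcondition.

    class FactorialSketch {
        // shared state corresponding to the shared variables x and y of Example 6
        static int x;
        static int y;

        // fac(u): the auxiliary formal parameter u records the value of x at call time;
        // the intended invariant is u == x both before and after the body.
        static void fac(int u) {
            if (x > 0) {
                x = x - 1;     // here u == x + 1 holds
                fac(u - 1);    // recursive call with the adjusted auxiliary parameter
                x = x + 1;     // restores u == x
                y = y * x;
            } else {
                y = 1;         // base case
            }
        }
    }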
4 Soundness and Completeness
In this section soundness and completeness proofs are sketched. For technical convenience only, it is assumed throughout this section that every interleaving point of the given program is uniquely labeled. Such labels
are denoted by l, l′, . . .. Labeled statements are denoted by l : S. The assertion annotating an interleaving point l is denoted by @l.

Soundness

Let π be an annotated program. A global configuration Π = ⟨Θ, σ⟩ satisfies an annotated program π, denoted by Π |= π, if for every process in Θ with active closure (l : S, τ) we have

σ, τ |= @l

Roughly, a global configuration satisfies an annotated program if every process satisfies the assertion annotating the statement of its active closure.

Theorem 1 (Soundness). For any correctly annotated program π (possibly extended with auxiliary variables),

Π |= π and Π → Π′ implies Π′ |= π

Roughly, this theorem states the invariance of the assertions of a correctly annotated program. The proof involves a straightforward but tedious case analysis of the computation step.

Completeness

Conversely, completeness can be established by proving correctness of an extended program annotated with so-called reachability predicates. These predicates are introduced in [2] and [6], and adapted to recursive processes with shared variables as follows. Given a program, for every interleaving point l the predicate @l is defined by: σ, τ |= @l if there exists a reachable global configuration ⟨Θ, σ⟩ such that Θ contains a process with an active closure (l : S, τ). A global configuration Π is reachable if there exists a partial computation Π0 →∗ Π starting from a fixed initial global state Π0. Here →∗ denotes the reflexive, transitive closure of →.

Using encoding techniques (as for example described in [8]) it can be shown that the above reachability predicates can be expressed in the assertion language. A straightforward though tedious case analysis establishes that a program annotated with the above reachability predicates is locally correct. The main case of interest is a proof of the verification condition

|= WP(ū := ē, @l) → @l′
for validating the postcondition of a (recursive) procedure call p(e1, . . . , en) (the label l marks the end of the body of procedure p and l′ the termination of the call). The sequence of assignments modeling the parameter passing is abbreviated by ū := ē. The obvious general problem here is that termination of the body of p does not necessarily imply termination of this particular call. Therefore, we pass as an additional parameter the label uniquely identifying the call. In order to reason about the local variables of the calling procedure, every procedure definition is extended with an additional formal parameter which is used to pass the current stack of local states of the pending calls. This additional formal parameter we denote by 'context'. Every call p(e1, . . . , en) is extended to

p(context ∘ ⟨l′, v1, . . . , vn⟩, e1, . . . , en)

where the additional actual parameter 'context ∘ ⟨l′, v1, . . . , vn⟩' denotes the result of pushing onto the stack 'context' the label l′ of the call statement (indicating its postcondition) and the values of the local variables v1, . . . , vn of the calling procedure, excluding the local variable 'context' itself. The additional parameter ensures that the predicate WP(ū := ē, @l) indeed describes the return of the procedure p to the given call. To see this, let

σ, τ |= WP(ū := ē, @l)

where the simultaneous assignment ū := ē now includes the additional assignment

context := context ∘ ⟨l′, v1, . . . , vn⟩

It follows that
σ, τ′ |= @l
where τ′ results from the execution of the assignments ū := ē in τ. By the above definition of the reachability predicates it then follows that there exists a partial computation Π0 →∗ ⟨Θ, σ⟩ such that Θ contains a process with active closure (l : E, τ′), which indicates the termination of the body of procedure p. Since

τ′(context) = τ(context) ∘ ⟨l′, τ(v1), . . . , τ(vk)⟩

where v1, . . . , vk are the local variables of the calling procedure, we may assume without loss of generality that Θ contains a process

θ · (l′ : S, τ) · (l, τ′)

where S denotes the continuation of the given call (whose termination is marked by l′). Let Π′ = ⟨Θ′, σ⟩ be the global configuration which results from popping the active closure (l, τ′) from the above call stack. By definition of the reachability predicates we conclude that

σ, τ |= @l′
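As an illustration only (not part of the paper's formal development), the 'context' instrumentation used above can be pictured in Java: every call threads through an explicit stack of frames, each holding the return label of the call and the caller's local values. All names below are hypothetical.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    class ContextSketch {
        // A frame records the return label of a pending call and the caller's local values.
        record Frame(String returnLabel, List<Integer> savedLocals) {}

        static int x = 3;  // a shared variable, only to give the recursion something to do

        // Instrumented recursive procedure: the 'context' stack is passed as an extra parameter.
        static void p(Deque<Frame> context, int u) {
            if (x > 0) {
                x = x - 1;
                Deque<Frame> extended = new ArrayDeque<>(context);
                extended.push(new Frame("l'", List.of(u)));  // push return label and saved locals
                p(extended, u - 1);
                x = x + 1;
            }
        }
    }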
It remains to show that the reachability predicates are interference free. More specifically, we have to show that for any interleaving points l and l′, with l marking the start of an atomic statement S, we have

|= (@l ∧ @l′ ∧ id ≠ id′) → WP(S, @l′)

Roughly, this verification condition states that if one process reaches l′ and another process reaches l, then l′ is still reachable after the execution of the statement S. This follows trivially if there exists one computation where both processes reach l and l′ at the same time. However, in general this is not the case, e.g., the reachability of l′ may require a scheduling of the processes which is incompatible with the reachability of l.

Example 7 (Scheduling). Consider the following process

race() = synchronized begin u:=b; if b then b:=false fi end;
         if u then l1 : S1 else l2 : S2 fi

Here 'u' is a local variable and 'b' is a shared variable. Let the main process of the program initialize 'b' to 'true' and then simply create two instances of the above process. Let σ be a global state and τ a local state such that σ(b) = false, τ(u) = τ(u′) = true, and τ(id) and τ(id′) are two different instances of the above process (the local variables 'u' and 'id' are renamed in the predicate @l1′ by the fresh local variables u′ and id′). It follows that σ, τ |= @l1 ∧ @l1′. But clearly there exists no reachable global configuration in which both processes are at l1 at the same time.

Therefore a global shared auxiliary variable 'sched' is introduced which records the scheduling of the processes. Every read or write operation which involves access to the global assignment of the shared variables is extended with an update which adds the identity of the executing process.

Example 8. Returning to the above example, note that this additional scheduling information implies that

|= (@l1 ∧ @l1′ ∧ id ≠ id′) → false

Note that @l1′ implies that 'sched' stores the process denoted by 'id′' first, whereas @l1 implies that 'sched' stores the process denoted by the distinguished local variable 'id' first.

The non-determinism arising from interleaving of the local computations of the processes, i.e., computations which only access the local state of the active closures and which do not access the global assignment of shared variables, does not affect the global computation. Consequently, a program extended with
the auxiliary variable 'sched' is basically deterministic: if Π0 →∗ ⟨Θ, σ⟩ and Π0 →∗ ⟨Θ′, σ⟩ then we may assume without loss of generality that Θ = Θ′. For deterministic programs, proving interference freedom of the reachability predicates is straightforward.

Theorem 2. For any labeled statements l : S and l′ : S′ of a program extended with the auxiliary variable 'sched' we have

|= (@l ∧ @l′ ∧ id ≠ id′) → WP(S, @l′)

Proof. Let
σ, τ |= @l ∧ @l′ ∧ id ≠ id′
By definition of the reachability predicates @l and @l′ there exists a partial computation Π0 →∗ ⟨Θ, σ⟩, starting from a fixed initial global configuration Π0, such that Θ contains a process with active closure (l : S, τ) and a process with active closure (l′ : S′, τ′), where τ′(u) = τ(u′) for every local variable u (remember that primed local variables are introduced in order to avoid name clashes between the local variables of @l and @l′). Since τ(id) ≠ τ(id′), we know that indeed we have two different processes (note that τ′(id) = τ(id′)). Let ⟨Θ′, σ′⟩ be the global configuration resulting from the execution of the statement S by the process τ(id). Since τ′(id) ≠ τ(id), we know that Θ′ also contains a process with active closure (l′ : S′, τ′). By definition of the reachability predicates it then follows that

σ′, τ′ |= @l′

By definition of WP(S, @l′) (and taking into account the renaming of the local variables), we conclude that

σ, τ |= WP(S, @l′)
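To give a flavour of the 'sched' instrumentation described before Theorem 2, here is an illustrative Java sketch with invented names (it is not the paper's formalization): every access to the shared state also appends the identity of the executing thread to a global log, so that assertions can refer to the order in which processes touched the shared variables.

    import java.util.ArrayList;
    import java.util.List;

    class SchedSketch {
        static boolean b = true;                            // shared variable, as in Example 7
        static final List<Long> sched = new ArrayList<>();  // auxiliary scheduling log

        // Every read or write of the shared state is extended with an update of 'sched'.
        static synchronized boolean readAndClear() {
            sched.add(Thread.currentThread().getId());      // record who accessed the shared state
            boolean u = b;
            if (b) { b = false; }
            return u;
        }
    }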
5 Conclusion
In this paper a sound and complete proof method is presented for recursive processes with shared variables. The proof method distinguishes a local level which is based on a Hoare logic for the sequential flow of control of recursive calls within one process and a global level which deals with interference between processes. The proof method incorporates the use of auxiliary variables. These variables are used to capture specific aspects of the flow of control. In general, auxiliary variables can be used to extend the proof method in a systematic manner to various synchronization mechanisms. Of particular interest is the use of auxiliary variables introduced in this paper as additional formal parameters to reason about invariance properties of recursive calls within one process. In the completeness proof such additional formal
parameters are used to pass the stack of local states. This use allows a complete characterization of recursive process calls in terms of reachability predicates, as required by the concurrent context. How to incorporate the extra rules for reasoning about invariance properties of recursive calls in a sequential context (see [5] and [1]) is an interesting topic of future research.
References

1. Apt, K.R.: Ten years of Hoare logic: a survey — part I. ACM Transactions on Programming Languages and Systems 3(4), 431–483 (1981)
2. Apt, K.R.: Formal justification of a proof system for Communicating Sequential Processes. Journal of the ACM 30(1), 197–216 (1983)
3. Apt, K.R., Francez, N., de Roever, W.P.: A proof system for Communicating Sequential Processes. ACM Transactions on Programming Languages and Systems 2, 359–385 (1980)
4. de Boer, F.S.: A Sound and Complete Shared-Variable Concurrency Model for Multithreaded Java Programs. In: Bonsangue, M.M., Johnsen, E.B. (eds.) FMOODS 2007. LNCS, vol. 4468, pp. 252–268. Springer, Heidelberg (2007)
5. Gorelick, G.A.: A complete axiomatic system for proving assertions about recursive and non-recursive programs. Technical Report 75, Department of Computer Science, University of Toronto (1975)
6. Owicki, S.: A consistent and complete deductive system for the verification of parallel programs. In: Proceedings of the Eighth Annual ACM Symposium on Theory of Computing. ACM Press, New York (1976)
7. Owicki, S., Gries, D.: An axiomatic proof technique for parallel programs. Acta Informatica 6, 319–340 (1976)
8. Tucker, J.V., Zucker, J.I.: Program Correctness over Abstract Data Types, with Error-State Semantics. CWI Monograph Series, vol. 6. Centre for Mathematics and Computer Science/North-Holland (1988)
Formal Semantics of a VDM Extension for Distributed Embedded Systems

Jozef Hooman1,2 and Marcel Verhoef3

1 Embedded Systems Institute, Eindhoven, The Netherlands
2 Radboud University Nijmegen, The Netherlands
[email protected]
3 Chess, P.O. Box 5021, 2000 CA Haarlem, The Netherlands
[email protected]
Abstract. To support model-based development and analysis of embedded systems, the specification language VDM++ has been extended with asynchronous communication and improved timing primitives. In addition, we have defined an interface for the co-simulation of a VDM++ model with a continuous-time model of its environment. This enables multi-disciplinary design space exploration and continuous validation of design decisions throughout the development process. We present an operational semantics which formalizes the precise meaning of the VDM extensions and the co-simulation concept. Keywords: embedded systems, modeling, specification language, formal semantics, co-simulation.
1 Introduction

We present a formal semantics of an extension of the VDM language for the development of software in embedded systems. Examples of embedded systems can be found, for instance, in airplanes, cars, industrial robots, process automation, and consumer electronics devices. In general, the development of such systems is extremely complex. In the early design phases a trade-off has to be made between a large number of design choices. This concerns, for instance, aspects such as mechanical lay-out, accuracy and placement of sensors, types of actuators, processing units, communication infrastructure, software deployment, and costs. This leads to an enormous design space which involves several disciplines, such as mechanical engineering, electrical engineering, and software engineering.

To support multi-disciplinary design space exploration, we propose an approach where each discipline can use its own modeling method, including all discipline-specific simulation and analysis techniques. But by defining a proper notion of co-simulation, which allows simultaneous simulation of discrete-time and continuous-time models, additional insight into the multi-disciplinary interactions and interferences can be obtained. E.g., in [1] we have coupled Rose Real-Time and Matlab/Simulink [2] to allow
This work has been carried out as part of the Boderc project under the responsibility of the Embedded Systems Institute. This project was partially supported by the Dutch Ministry of Economic Affairs under the Senter TS program.
co-simulation of a UML model of control software with a Matlab/Simulink model of the continuous behaviour of its environment. To realize this coupling, the platformdependent notion of time in Rose Real-Time had to be replaced by a mechanism which obtains a notion of simulated time from Simulink. While this is a step forward, it also shows that Rose Real-Time is not very suitable for the co-simulation of control systems because it lacks a suitable notion of simulation time. Moreover, the run-to-completion semantics does not allow interrupts due to relevant events of the physical system under control. We have also investigated support for design choices concerning the hardware infrastructure and the software-to-hardware mapping. Several methods have been applied to an in-car radio navigation system [3]. Each method has been used to model the relevant aspects of the system such as the characteristics of the processors, their connections, the input from the environment, the processor usage of the embedded software for this input, and the bandwidth needed for communication between processors. These models have been used to simulate and to analyze the performance of a particular deployment of software on hardware. We observed that already with relatively simple models and some sensitivity analysis, a good insight in design choices and performance bottlenecks can be obtained. However, it was also clear that most techniques do not immediately fit into the normal development process. For instance, the abstract models used for software analysis are not suitable for further software development. On the other hand, development languages that allow the refinement of high-level abstract models towards concrete code, usually do not support typical aspects of embedded systems such as deployment on hardware and performance analysis. For instance, we have investigated the use of the formal language VDM++ for the software development of embedded systems by applying it to the control of the paper path in a printer. The VDM tools offer nice simulation possibilities and allow code generation, but the case study also revealed some limitations, especially concerning the expressiveness of the language to describe distributed real-time systems. In addition, important tool features are missing to analyze models of embedded systems [4]. In this paper, we integrate a number of the lessons learned from the research described above, aiming at a software modeling technique which allows the specification and the analysis of embedded systems at various levels of abstraction. This should include timing and deployment aspects. Moreover, the technique should allow executable specifications with a proper notion of simulation time to enable a coupling with continuous-time models. Given our VDM++ experience and access to supporting tools, we use the VDM language as our starting point. Since VDM extensions can be specified in VDM++ itself, models in the extended language can be executed and we can experiment with them in case studies. The main contributions of our work are (1) an extension of VDM++ which allows the specification of the mapping of software task onto processing units, (2) an extension with additional timing primitives and a revision of the timing semantics of VDM++, (3) the definition of a notion of co-simulation of a VDM++ model with a continuous-time model, and (4) a formal semantics of the new concepts. Work on (1) and (2) has been described before in [5] and a first description of (3) can be found in [6]. 
New in this paper is the integration of these concepts and the definition of an integrated formal
operational semantics. The operational semantics has been formalized using the proof tool PVS [7,8]. All our extensions have been specified in VDM++ and, hence, extended models can be executed with the standard VDM tools. Simulation has been used to validate our extensions in a few case studies. To experiment with co-simulation, we have coupled the VDM tool to 20- SIM [9], a tool to model and simulate the behavior of dynamic systems, such as electrical, mechanical or hydraulic systems. The main ideas of our approach, however, are rather independent from VDM. The same holds for the abstract formal semantics which defines the precise meaning of the main concepts. Related to our formal semantics is work in the context of UML about the precise meaning of active objects, with communication via signals and synchronous operations, and threads of control. In [10] a labeled transition system has been defined using the algebraic specification language CASL, whereas [11] uses the specification language of the theorem prover PVS to formulate the semantics. Note that UML 2.0 adopts the runto-completion semantics, which means that new signals or operation calls can only be accepted by an object if it cannot do any local action, i.e., it can only proceed by accepting a signal or call. In our VDM++ semantics there will be less restrictions on threads. In addition, none of these works deal with deployments. Related to that aspect is the UML Profile for Schedulability, Performance and Time, and research on performance analysis based on this profile [12]. Concerning our definition of co-simulation, there is an interesting relation with the software architecture for the design of continuous time / discrete event co-simulation tools described in [13]. An operational semantics has been defined in [14]. The main difference is that their approach aims at connecting multiple simulators on a so-called simulation bus, whereas we connect two simulators using a point-to-point connection. They use Simulink and SystemC, whereas we use 20- SIM and VDM++ to demonstrate the concept. The type of information exchanged over the interfaces is identical (the state of continuous variables and events). They have used formal techniques to model properties of the interface, whereas we have integrated the continuous-time interface into the operational semantics of a discrete event system. This paper is structured as follows. Section 2 provides a brief overview of VDM and the limitations of the current version of timed VDM++. Our proposed extensions for the specification of embedded systems are described in Sect. 3. In Sect. 4 we define an abstract syntax which illustrates the new concepts. This syntax is used to define a formal operational semantics of the proposed changes in Sect. 5. Concluding remarks can be found in Sect. 6.
2 Overview of VDM++ VDM++ is an object-oriented and model-based specification language with a formally defined static and dynamic semantics. It is a superset of the ISO standardized notation VDM-SL [15]. Different VDM dialects are supported by industry-strength tools, called V DM T OOLS [16]. The language has been used in several large-scale industrial projects [17,18,19]. However, not much is known about the application of VDM in the area of distributed real-time embedded systems.
The dynamic semantics of an executable subset of VDM++ is defined as a constructive operational semantics in VDM-SL [20]. The core of this specification is an abstract state machine which is able to execute a set of formally defined primitive instructions. Each abstract syntax element is translated into such a sequence of primitive instructions. The industrial success of V DM T OOLS is, for a large part, due to excellent conformance of the tool to the formally defined operational semantics and the round-trip engineering with UML. A brief overview of the VDM++ language can be found in Sect. 2.1. For an in-depth presentation of the language and supporting tools, we refer to [19]. The main limitations for the specification of embedded systems are described in Sect. 2.2. 2.1 The Basic VDM++ Notation In VDM++, a model consists of a collection of class specifications. We distinguish active and passive classes. Active classes represent entities that have their own thread of control and do not need external triggers in order to work. In contrast, passive classes are always manipulated from the thread of control of another active class. We use the term object to denote the instance of a class. More than one instance of a class might exist. An instance is created using the new operator, which returns an object reference. A class specification has the following components: Class header: The header contains the class name declaration and inheritance information. Both single and multiple inheritance are supported. Instance variables: The state of an object consists of a set of typed variables, which can be of a simple type such as bool or nat, or complex types such as sets, sequences, maps, tuples, records and object references. The latter are used to specify relations between classes. Instance variables can have invariants and an expression to define the initial state. Operations: Class methods that may modify the state can be defined implicitly, using pre- and postcondition expressions only, or explicitly, using imperative statements and optional pre- and postcondition expressions. Functions: Functions are similar to operations except that the body of a function is an expression rather than an imperative statement. Functions are not allowed to refer to instance variables, they are pure and side-effect free. Synchronization: Operations in VDM++ are synchronous, that is, the caller of an operation waits until the execution of the operation body has been completed. Thread: A class can be made “active” by specifying a thread. A thread is a sequence of statements which are executed to completion at which point the thread dies. It is possible to specify threads that never terminate. A timed extension to VDM++ was defined as part of the V ICE project [21] by assigning a user-configurable default duration to each basic language construct. In addition, there are two new statements: Duration. The duration statement, with the concrete syntax duration(d) IS, expresses that first all statements in the instruction sequence IS are executed instantaneously and next time is increased by d time units. The duration statement is used to override the default execution time for IS.
Period. The periodic statement, with the concrete syntax periodic(d)(Op), can only be used in the thread clause to denote that operation Op is called periodically every d time units. 2.2 The Limitations of (Timed) VDM++ In previous work [4], we assessed the suitability of timed VDM++ for distributed realtime embedded systems. We list the most important problems here. 1. Operations in VDM++ are synchronous; calls are executed in the context of the thread of control of the caller. The caller has to wait until the operation is completed before it can resume. This is very cumbersome when embedded systems are modeled. These systems are typically reactive by nature and asynchronous. An event loop can be specified to describe this, but this increases the complexity of the model and its analysis. See also the discussion about the advantages of asynchronous communication in [22]. 2. Timed VDM++ is based on a uni-processor multi-threading model of computation. This means that at most one thread can claim the processor and only this active thread can enable progress of time. This is insufficient for describing embedded systems because 1) they are often implemented on a distributed architecture and 2) these systems need to be described in combination with their environment. Hence, we need a multi-processor multi-threading model of computation with parallel progress of time in subsystems and the environment. 3. The duration statement in timed VDM++ denotes a time penalty that is independent of the resource that executes the statement. When deployment is considered, it is essential to also be able to express time penalties that are relative to the capacity of the computation resource. Furthermore, there should be an additional time penalty that reflects the message handling between two computation resources whenever a remote operation call is performed. In addition, we would like to simulate a VDM model in parallel with a (possibly continuous) model of its environment. This is related to the first two points mentioned above, but also requires a description of the interface between both models and a definition of the semantics of co-simulation.
3 Proposed Extensions for Embedded Systems We briefly list our proposed changes and illustrate the main concepts by pieces of two examples. The formal semantics will be given in Sect. 5. The five main changes are: 1. The addition of asynchronous communication by introducing the async keyword in the signature of an operation to denote that it is asynchronous. For each call of an asynchronous operation a new thread is created which executes the body of the operation. The caller is not blocked and can immediately resume its own thread of control after the call has been initiated.
2. The introduction of a notion of deployment which can be used to specify the allocation of software tasks on processors and the communication topology between processors. Predefined classes, BUS and CPU, are made available to the specifier to construct the distributed architecture. Instances of these classes can be used to specify particular resources with a specific capacity, policy and overhead. Classes representing software tasks can be instantiated and deployed on a specific CPU in the model. The communication topology between the computation resources in the model can be described using the BUS class. The system class is used to contain such an architecture model.
3. The timing semantics of VDM has been adapted to allow multiple threads running on different processors. Any thread that is running on a computation resource or any message that is in transit on a communication resource can cause time to elapse. Models that contain only one computation resource are compatible with models in the original version of timed VDM++.
4. The introduction of a cycles statement to express the duration of statements in terms of cpu cycles. It can be used to denote a time delay that is relative to the capacity of the resource on which the statement is executed.
5. To enable co-simulation, an XML configuration file can be used to describe the interface between two models, e.g., a VDM model and a model of its environment in another tool such as 20-SIM or Matlab/Simulink. It consists of arrays to define and relate sensors, actuators and events.

3.1 Examples of the Extensions

In-car radio navigation system. We show how a number of extensions are used in the in-car radio navigation system [3]. The main aim of this case study was to investigate which hardware topology of processors and interconnections could be used for a number of software tasks, such that certain timing deadlines would be satisfied. The three main applications, radio, navigation, and Man-Machine Interaction (MMI), have been specified in our extended version of VDM. As an example, Fig. 1 shows the Radio class, which has two asynchronous operations: AdjustVolume and HandleTMC.

class Radio
operations
  async public AdjustVolume: nat ==> ()
  AdjustVolume (pno) ==
    (duration (150) skip;
     RadNavSys‘mmi.UpdateVolume(pno));

  async public HandleTMC : nat ==> ()
  HandleTMC (pno) ==
    (cycles (1E5) skip;
     RadNavSys‘navigation.DecodeTMC (pno))
end Radio

Fig. 1. The Radio class
Since we are mainly interested in the overall timing behaviour of the system, we use the skip statement to represent internal computations. In the AdjustVolume operation
we use, for illustration purposes, the duration statement to express that this internal computation (e.g., changing the amplifier volume set point) takes 150 time units. The HandleTMC operation, which deals with Traffic Message Channel (TMC) messages, uses the cycles statement to denote that a certain amount of time expires relative to the capacity of the computation resource on which it is deployed. If this operation is deployed on a resource that can deliver 1000 cycles per unit of time, then the delay (duration) would be 1E5 divided by 1000, i.e., 100 time units. A suitable unit of time can be selected by the modeler. Similarly, classes Navigation and MMI have been specified.

The complete system model is presented in Fig. 2, where instances of the three classes are allocated on different processors. In this case, each instance is deployed on a separate CPU. Each computation resource is characterized by its processing capacity, specified by the number of available cycles per unit of time, the scheduling policy that is used to determine the task execution order, and a factor to denote the overhead incurred per task switch. For this case study, fixed priority preemptive scheduling (denoted by <FP>) with zero overhead is used.

system RadNavSys
instance variables
  -- create the application tasks
  static public mmi := new MMI();
  static public radio := new Radio();
  static public navigation := new Navigation();
  -- create CPU (policy, capacity, task switch overhead)
  CPU1 : CPU := new CPU (<FP>, 22E6, 0);
  CPU2 : CPU := new CPU (<FP>, 11E6, 0);
  CPU3 : CPU := new CPU (<FP>, 113E6, 0);
  -- create BUS (policy, capacity, message overhead, topology)
  BUS1 : BUS := new BUS (<FCFS>, 72E3, 0, {CPU1, CPU2, CPU3})

operations
  -- the constructor of the system model
  public RadNavSys: () ==> RadNavSys
  RadNavSys () ==
    ( CPU1.deploy(mmi);        -- deploy MMI on CPU1
      CPU2.deploy(radio);      -- deploy Radio on CPU2
      CPU3.deploy(navigation)  -- deploy Navigation on CPU3
    )
end RadNavSys

Fig. 2. The top-level system model for the in-car radio navigation system
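The cycles-to-time conversion used for HandleTMC above is simple enough to state in code; the following Java fragment is only illustrative (the method name is made up) and just repeats the arithmetic: a cost in cpu cycles divided by the capacity of the CPU it is deployed on gives the delay in time units.

    class CycleDelay {
        // delay (in time units) of a statement costing 'cycles' cpu cycles
        // on a CPU delivering 'capacity' cycles per time unit
        static double delayFor(double cycles, double capacity) {
            return cycles / capacity;
        }

        public static void main(String[] args) {
            // HandleTMC: 1E5 cycles on a CPU with 1000 cycles per time unit -> 100 time units
            System.out.println(delayFor(1E5, 1000));  // prints 100.0
        }
    }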
Class BUS is used to specify the communication resources in the system. A communication resource is characterized by (1) the scheduling policy that is used to determine the order of the messages being exchanged, (2) its throughput, specified by the number of messages that can be handled per unit of time, (3) a time penalty to denote the
protocol overhead, and (4) the computation resources it connects. The granularity of a message can be determined by the user. For example, it can represent a single byte or a complete Ethernet frame, whatever is most appropriate for the problem under study. For this case study, we use First Come First Served scheduling (denoted by <FCFS>) with zero overhead.

Water level control. The water level control example [6] illustrates the interaction between a software model in VDM and a model of its continuous environment in 20-SIM. The 20-SIM tool is used to model the relevant dynamic behavior of a water tank (using so-called bond graphs), describing how the water level depends on the input flow, the drain, and whether a valve is open or closed. Such models can be simulated using so-called solvers, which are implementations of numerical integration techniques that approximate the solution of a set of differential equations. Depending on the type of model, these solvers may use a fixed time step or a variable step size. In the last case, the step size may change in time, depending on the dynamics of the system, the required accuracy, and the required detection of certain events such as a variable crossing a certain border.

The time-triggered control of such a continuous-time model can be easily expressed in our extension of VDM, as shown in Fig. 3, where level is a shared continuous variable that represents the height of the water level in the tank. Shared variable valve is used to change the state of the valve. The periodic clause states that the operation loop is called periodically, namely once per second.

class TimeBasedController
instance variables
  static public level : real;
  static public valve : bool := false
-- default is closed
operations
  loop: () ==> ()
  loop () ==
    if level >= 3 then valve := true
    else if level <= 2 then valve := false;

thread
  periodic(1.0)(loop)
end TimeBasedController

Fig. 3. Time-based controller description in VDM++
An alternative is event-based control, where the continuous plant generates relevant events, e.g., when the water level has reached certain limits. As an example, let High(level,3.0) be the event generated by the solver of the plant when the water level is increasing and reaches value 3.0. Similarly, event Low(level,2.0) is generated when the level is decreasing and reaches level 2.0. The event handlers for these events are specified in VDM as asynchronous operations, open and close, as shown in Fig. 4. The duration statement in the definition of
open states that opening the valve in this case takes 50 msec. The cycles statement in the definition of close denotes that closing the valve takes 1000 cycles. Assuming this class is deployed on a processor with a capacity of 100000 cycles per second, executing valve := false will take 10 msec. Note that the result of the assignment is only available after this time has passed. Finally, the mutex clauses state that the operations are declared mutually exclusive, that is, only one operation call can be active at any time. Hence, the execution of the body of an operation cannot be interrupted by the execution of any other operation body.

class EventBasedController
instance variables
  static public level : real;
  static public valve : bool := false
-- default is closed
operations
  static public async open: () ==> ()
  open () == duration(0.05) valve := true;

  static public async close: () ==> ()
  close () == cycles(1000) valve := false;

sync
  mutex(open, close);
  mutex(open);
  mutex(close)
end EventBasedController

Fig. 4. The event-based controller description in VDM++

The relation between the events generated by the plant simulation and the corresponding handlers in VDM is defined in a separate XML configuration file. For brevity, we use an informal description as presented in Fig. 5. Sensor output of the plant model is bound to the level variable in the VDM model. Similarly, VDM variable valve provides actuator input to the plant. Furthermore, the open and close operations are defined as the handlers for the High(level,3.0) and Low(level,2.0) events. In other words, these asynchronous operations will be called automatically whenever the corresponding event fires. This will cause the creation of a new thread which will die as soon as the operation is completed.

sensor[1]   = cpu1.EventBasedController‘level
actuator[1] = cpu1.EventBasedController‘valve
event[1]    = High(level,3.0) -> cpu1.EventBasedController‘open
event[2]    = Low(level,2.0)  -> cpu1.EventBasedController‘close

Fig. 5. The interface configuration file
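One way to picture the interface of Fig. 5 outside VDM is the following Java sketch (purely hypothetical; it is not how VDMTools or 20-SIM implement co-simulation): shared sensor/actuator variables plus a table that maps each plant event to an asynchronous handler which runs in a fresh thread.

    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    class CoSimInterfaceSketch {
        static volatile double level;            // sensor: written by the plant solver
        static volatile boolean valve = false;   // actuator: read by the plant solver

        static final ExecutorService pool = Executors.newCachedThreadPool();

        // event table mirroring Fig. 5: each fired event starts a new thread for its handler
        static final Map<String, Runnable> handlers = Map.of(
            "High(level,3.0)", () -> valve = true,    // stands in for the 'open' operation
            "Low(level,2.0)",  () -> valve = false);  // stands in for the 'close' operation

        static void fire(String event) {
            pool.submit(handlers.get(event));
        }
    }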
The relation between the events generated by the plant simulation and the corresponding handlers in VDM is defined in a separate XML configuration file. For brevity, we use an informal description as presented in Fig. 5. Sensor output of the plant model is bound to the level variable in the VDM model. Similarly, VDM variable valve provides actuator input to the plant. Furthermore, the open and close operations are defined as the handlers for the High(level,3.0) and Low(level,2.0) events. In other words, these asynchronous operations will be called automatically whenever the corresponding event fires. This will cause the creation of a new thread which will die as soon as the operation is completed. sensor[1] = cpu1.EventBasedController‘level actuator[1] = cpu1.EventBasedController‘valve event[1] = High(level,3.0) -> cpu1.EventBasedController‘open event[2] = Low(level,2.0) -> cpu1.EventBasedController‘close Fig. 5. The interface configuration file
Formal Semantics of a VDM Extension for Distributed Embedded Systems
151
4 Syntax and Informal Semantics To be able to highlight the formal semantics of the extensions proposed in the previous section, we define a syntax which abstracts from many aspects and constructs in VDM++. Our syntax does not contain class definitions and explicit definitions of synchronous and asynchronous operations. Assume given a set Operation of operations, with typical element op, and predicate syn?(op) which is true iff the operation is synchronous. We also assume that the body of operations is compiled into a given sequence of instructions. Let ObjectId be the set of object identities, with typical element oid. Assume given a set of variables Var = InVar ∪ OutVar ∪ LVar where InVar is the set of input/sensor variables, OutVar is the set of output/actuator variables, and LVar a set of local variables. The input and output variables (also called IO-variables) are global and shared between all threads and the continuous model. Hence, they can also be accessed by the solver of a continuous model, which may read the actuator variables and write the sensor variables. Let IOVar = InVar ∪ OutVar. Let Value be a domain of values, such as the integers. Our time domain is the nonnegative real numbers; Time = {t ∈ R | t ≥ 0}. We use d to denote a time value and duration (d) as an abbreviation of duration(d) skip. Assume that, for an instruction sequence IS, the statement duration(d) IS is translated into IS ˆduration(d), where internal durations inside IS have been removed and the “ˆ” operator concatenates the duration instruction to the end of a sequence. The concatenation operation is also used to concatenate sequences and to add an instruction to the front of the sequence. Functions head and tail yield the first element and the rest of the sequence, resp., and denotes the empty sequence. The cycles statement has been omitted here since it is equivalent to a duration statement, given a certain deployment. The periodic statement has been generalized to allow the periodic execution of an instruction sequence instead of an operation call only. The distributed architecture of an embedded control program can be represented by so-called nodes. Let Node be the set of node identities. Nodes are used to represent computation resources such as processors. On each node a number of concurrent threads are executed in an interleaved way. In addition, execution may be interleaved with steps of the solver. Function node : Thread → Node denotes on which node each thread is executing. Each thread executes a sequential program, that is, a statement expressed in the language of Table 1. Furthermore, assume given a set of links, defined as a relation between nodes, i.e., Link = Node × Node, to express that messages can be transmitted from one node to another via a link. In the semantics described here, we assume for simplicity that a direct link exists between each pair of communicating nodes. Note that CPU and BUS, as used in the radio navigation case study, are concrete examples of a node and a link. The solver may send events to the control program. Let Event be a set of events. Assume that an event handler has been defined for each event, i.e., an instruction sequence and a node on which this statement has to be executed (as a new thread), denoted by the function evhdlr : Event → Instr. Seq. × Node. The syntax of our sequential programming language is given in Table 1, with c ∈ Value, x ∈ Var, and d ∈ Time. 
These basic instructions have the following informal meaning: – skip represents a local statement which does not consume any time.
152
J. Hooman and M. Verhoef Table 1. Syntax of Instructions
Value Expr. e ::= c | x | e1 + e2 | e1 − e2 | e1 × e2 Bool Expr. b ::= e1 = e2 | e1 < e2 | ¬b | b1 ∨ b2 Instr. I ::= skip | x := e | call(oid, op) | duration(d) | periodic(d) IS | if b then IS1 else IS2 fi | while b do IS od Instr. Seq. IS ::= | IˆIS – x := e assigns the value of expression e to x. – call(oid, op) denotes a call to an operation op of object oid. Depending on the syn? predicate, the operation can be synchronous (i.e., the caller has to wait until the execution of the operation body has terminated) or asynchronous (the caller may continue with the next instruction and the operation body is executed independently). There are no restrictions on re-entrance here, but in general this can be restricted in VDM by so-called permission predicates. These are not considered here, also parameters are ignored. – duration(d) represents a time progress of d time units. When d time units have elapsed the next statement can be executed. – periodic(d) IS leads to the execution of instruction sequence IS each period of d time units. – if b then IS1 else IS2 fi executes instruction sequence IS1 if b evaluates to true and IS2 otherwise. – while b do IS od repeatedly executes instruction sequence IS as long as b evaluates to true. The formalization of the precise meaning of the language described above raises a number of questions that have to answered and on which a decision has to be taken. We list the main points: – How to deal with the combination of synchronous and asynchronous operations, e.g., does one has priority over the other, how are incoming call request recorded, is there a queue at the level of the node or for each object separately? We decided for an equal treatment of both concepts; each object has a single FIFO queue which contains both types of incoming call requests. – How to deal with synchronous operation calls; are the call and its acceptance combined into a single step and does it make a difference if caller and callee are on different nodes? In our semantics, we distinguish between a call within a single node and a call to an operation of an object on another node. For a call between different nodes, a call message is transferred via a link to the queue of the callee; when this call request is dequeued at the callee, the operation body is executed in a separate thread and, upon completion, a return message is transmitted via the link to the node of the caller. For a call within a single node, we have made the choice to avoid a context switch and execute the operation body directly in the thread of the caller. Instead, we could have placed the call request in the queue of the callee.
– Similar questions hold for asynchronous operations. On a single node, the call request is put in the queue of the callee, whereas for different nodes the call is transferred via a link. However, no return message is needed and the caller may continue immediately after issuing the call. – How are messages between nodes transferred by the links? In principle, many different communication mechanisms could be modeled. As a simple example, we model a link by a set of messages which include a lower and an upper bound on message delivery. For a link l, let δmin (l) and δmax (l) be the minimum and maximum transmission time. It is easy to extend this and, for instance, make the transmission time dependent on message size and link traffic. – How to deal with time, how is the progress of time modeled? In our semantics, there is only one global step which models progress of time on all nodes. All other steps do not change time; all assumptions on the duration of statements, context switches and communications have to be modeled explicitly by means of duration statements. – What is the effect of the interleaved execution of assignments to shared variables in different threads? As mentioned in the previous point, the execution of basic statements such as skip and assignment takes zero time. Hence, in our semantics any sequence of statements between two successive duration statements is executed atomically (in zero time). For instance, if we execute the instruction sequence duration(1) ˆ x := 1 ˆ x := x + 1 ˆ duration(1) in parallel with the sequence duration(1) ˆ x := 5 ˆ y := x ˆ duration(1) then there are two possible results; we might get x = 5 ∧ y = 5 or x = 2 ∧ y = 5. This in contrast with duration(1) ˆ x := 1 ˆ duration(1) ˆ x := x + 1 ˆ duration(1) in parallel with duration(1)ˆx := 5ˆduration(1)ˆy := xˆduration(1), where additionally x = 2 ∧ y = 1, x = 2 ∧ y = 2, x = 6 ∧ y = 5, and x = 6 ∧ y = 6 are possible. – What is the precise meaning of periodic(d) IS if the execution of IS takes more than d time units? We decided that after each d time units a new thread is started to ensure that every d time units the IS sequence can be executed. Of course, this might potentially lead to resource problems for particular applications, but this will become explicit during analysis.
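The decision above to give each object a single FIFO queue for both synchronous and asynchronous call requests can be sketched as follows (illustrative Java with invented names, not part of the semantics):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    class CallQueueSketch {
        // A call request: the calling thread (needed only for synchronous calls),
        // the operation name, and whether the call is synchronous.
        record CallRequest(long callerThread, String operation, boolean synchronous) {}

        // Each object owns one FIFO queue holding both kinds of requests.
        final BlockingQueue<CallRequest> incoming = new LinkedBlockingQueue<>();

        void enqueue(CallRequest r) throws InterruptedException { incoming.put(r); }

        CallRequest dequeue() throws InterruptedException { return incoming.take(); }
    }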
5 Formal Operational Semantics The operational semantics presented in this section defines the execution of the language given in Sect. 4 formally. To focus on the essential aspects, we assume that the set of objects is fixed and need not be recorded in the configuration. However, object creation can be added easily, see e.g. [11]. Threads can be created dynamically, e.g., to deal with asynchronous operations and events received from the solver. Let Thread be the set of thread identities, including dormant threads that can be made alive when a new thread is created. Each thread i is related to one object, denoted by oi . This is used to define a new node function which defines the deployment of threads by means of the node function on objects: node(i) = node(oi ). Recall that any sequence of statements between two successive duration statements is executed atomically in zero time. However, the execution of such a sequence might
be interleaved with statements of other threads or a step of the solver. Concerning the shared IO-variables in IOVar this means that we have to ensure atomicity explicitly. To this end, we introduce a kind of transaction mechanism to guarantee consistency in the presence of arbitrary interleaving of steps. Thread i is only allowed to modify IO-variable x if there is no transaction in progress by any other thread. The transaction is committed as soon as the thread performs a time step. Finally, we extend the set of instructions with an auxiliary statement return(i). This statement will be added during execution at the end of the instruction sequence of a synchronous operation which has been called by thread i.

To capture the state of affairs at a certain point during the execution, we introduce a configuration (Def. 1). Next we define the possible steps from one configuration to another, denoted by C −→ C′ where C and C′ are configurations (Def. 3). This finally leads to a set of runs of the form C0 −→ C1 −→ C2 −→ . . . (Def. 9).

Definition 1 (Configuration). A configuration C contains the following fields:
– instr : Thread → Instr. Seq., a function which assigns a sequence of instructions to each thread.
– curthr : Node → Thread yields for each node the currently executing thread.
– status : Thread → {dormant, alive, waiting} denotes the status of threads.
– lval : LVar × Thread → Value denotes the value of each local variable for each thread.
– ioval : IOVar → Value denotes the committed value of each sensor and actuator variable.
– modif : IOVar × Thread → Value ∪ {⊥} denotes the values of sensor and actuator variables that have been modified by a thread and for which the transaction has not yet been committed (by executing a duration statement). The symbol ⊥ denotes that the value is undefined, i.e., the thread did not modify the variable in a non-committed transaction.
– q : ObjectId → queue[Thread × Operation] records for each object a FIFO queue of incoming calls, together with the calling thread (needed for synchronous operations only).
– linkset : Link → set[Message × Time × Time] records the set of incoming messages for each link, together with lower and upper bounds on delivery. A message may denote a call of an operation (including calling thread and called object) or a return to a thread.
– now : Time denotes the current time.

For a FIFO queue, functions head and tail yield the head of the queue and the rest, respectively; insert is used to insert an element and ⟨⟩ denotes the empty queue. For sets we use add and remove to insert and remove elements. For a configuration C we use:
– C(f) to obtain its field f. For example, C(instr)(i) yields the instruction sequence of thread i in configuration C.
– exec(C, i) as an abbreviation for C(curthr)(node(i)) = i, which expresses that thread i is executing on its node.
– fresh(C, oid) to yield a fresh, not yet used, thread identity (so with status dormant) corresponding to object oid.

To express modifications of a configuration, we define the notion of a variant.

Definition 2 (Variant). The variant of a configuration C with respect to a field f and value v, denoted by C[ f → v ], is defined as

(C[ f → v ])(f′) = v       if f′ = f
(C[ f → v ])(f′) = C(f′)   if f′ ≠ f

Similarly for parts of the fields, such as instr(i).

We define the value of an expression e in a configuration C which is evaluated in the context of a thread i, denoted by [[ e ]](C, i). The main point is the evaluation of a variable, where for an IO-variable we use the modif field if there is an uncommitted change:

[[ x ]](C, i) = C(modif)(x, i)   if x ∈ IOVar and C(modif)(x, i) ≠ ⊥
[[ x ]](C, i) = C(ioval)(x)      if x ∈ IOVar and C(modif)(x, i) = ⊥
[[ x ]](C, i) = C(lval)(x, i)    if x ∈ LVar

The other cases are trivial, e.g., [[ e1 × e2 ]](C, i) = [[ e1 ]](C, i) × [[ e2 ]](C, i) and [[ c ]](C, i) = c. It is also straightforward to define when a Boolean expression b holds in the context of thread i in configuration C, denoted by [[ b ]](C, i). For instance, [[ e1 < e2 ]](C, i) iff [[ e1 ]](C, i) < [[ e2 ]](C, i), and [[ ¬b ]](C, i) iff not [[ b ]](C, i).

Definition 3 (Step). C −→ C′ is a step if and only if it corresponds to the execution of an instruction (Def. 4), a time step (Def. 5), a context switch (Def. 6), the delivery of a message by a link (Def. 7), or the processing of a message from a queue (Def. 8).

Definition 4 (Execute Instruction). A step C −→ C′ corresponds to the execution of an instruction if and only if there exists a thread i such that exec(C, i) and head(C(instr)(i)) is one of the following instructions:

skip: Then the new configuration equals the old one, except that the skip instruction is removed from the instruction sequence of i, that is, C′ = C[ instr(i) → tail(C(instr)(i)) ]

x := e: We distinguish two cases, depending on the type of variable x.
– If x ∈ IOVar we require that there is no transaction in progress by any other thread, that is, for all i′ with i′ ≠ i we have C(modif)(x, i′) = ⊥. Then the value of e is recorded in the modified field of i: C′ = C[ instr(i) → tail(C(instr)(i)), modif(x, i) → [[ e ]](C, i) ]. As we will see later, all values belonging to thread i in C(modif) are removed and bound to the variables in C(ioval) as soon as thread i completes a time step (Def. 5). This corresponds to the intuition that the result of a computation is available only at the end of the time step that reflects the execution of a piece of code.
– If x ∈ LVar then we change the value of x in the current thread: C′ = C[ instr(i) → tail(C(instr)(i)), lval(x, i) → [[ e ]](C, i) ]

call(oid, op): Let IS be the explicit definition of operation op of object oid. We consider four cases:
– Caller and callee are on the same node, i.e. node(i) = node(oid).
  • If syn?(op) then IS is executed directly in the thread of the caller: C′ = C[ instr(i) → IS ˆ tail(C(instr)(i)) ]
  • If not syn?(op), we add the pair (i, op) to the queue of oid: C′ = C[ instr(i) → tail(C(instr)(i)), q(oid) → insert((i, op), C(q)(oid)) ]
– Caller and callee are on different nodes, i.e. node(i) ≠ node(oid). Suppose link l connects these nodes. Then the call is transmitted via link l, which is represented by adding the message m = (call(i, oid, op), C(now) + δmin(l), C(now) + δmax(l)) to the linkset of l.
  • If syn?(op), thread i becomes waiting: C′ = C[ instr(i) → tail(C(instr)(i)), status(i) → waiting, linkset(l) → insert(m, C(linkset)(l)) ]
  • Similarly for asynchronous operations, when not syn?(op), except that then the status of i is not changed: C′ = C[ instr(i) → tail(C(instr)(i)), linkset(l) → insert(m, C(linkset)(l)) ]

duration(d): A duration statement leads to global progress of time, including a time step in the solver of the continuous model of the environment. This time step will be defined in Def. 5.

periodic(d) IS: In this case, IS is added to the instruction sequence of thread i and a new thread j = fresh(C, oi) is started which repeats the periodic instruction after a duration of d time units, i.e. C′ = C[ instr(i) → IS, instr(j) → duration(d) ˆ periodic(d) IS, status(j) → alive ]

if b then IS1 else IS2 fi:
– If [[ b ]](C, i) then C′ = C[ instr(i) → IS1 ˆ tail(C(instr)(i)) ]
– Otherwise, C′ = C[ instr(i) → IS2 ˆ tail(C(instr)(i)) ]

while b do IS od:
– If [[ b ]](C, i) then C′ = C[ instr(i) → IS ˆ while b do IS od ˆ tail(C(instr)(i)) ]
– Otherwise, C′ = C[ instr(i) → tail(C(instr)(i)) ]

return(j): In this case we have node(i) ≠ node(j). Let l be the link which connects these nodes. Then m = (return(j), C(now) + δmin(l), C(now) + δmax(l)) is transmitted via l: C′ = C[ instr(i) → tail(C(instr)(i)), linkset(l) → insert(m, C(linkset)(l)) ]
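The treatment of IO-variables described above (writes go into modif, reads consult modif before ioval, and pending values are committed at a time step, cf. Def. 5) resembles a small software-transaction scheme. The following Java sketch only restates that reading and committing order; it is illustrative and not part of the formal semantics.

    import java.util.HashMap;
    import java.util.Map;

    class IoVarSketch {
        // committed values of the IO-variables
        static final Map<String, Integer> ioval = new HashMap<>();
        // uncommitted (per-thread) modifications: variable -> (thread id -> value)
        static final Map<String, Map<Long, Integer>> modif = new HashMap<>();

        // evaluation of an IO-variable in the context of a thread:
        // an uncommitted modification by that thread takes precedence over the committed value
        static int read(String x, long thread) {
            Integer pending = modif.getOrDefault(x, Map.of()).get(thread);
            return (pending != null) ? pending : ioval.get(x);
        }

        // committing a thread's transaction moves its pending values into ioval
        static void commit(long thread) {
            for (Map.Entry<String, Map<Long, Integer>> e : modif.entrySet()) {
                Integer v = e.getValue().remove(thread);
                if (v != null) { ioval.put(e.getKey(), v); }
            }
        }
    }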
Definition 5 (Time Step). A step C −→ C′ is called a time step only if all current threads are ready to execute a duration instruction or have terminated. More formally, for all i with exec(C, i), C(instr)(i) is empty or of the form duration(d)ˆIS. The definition of a time step then consists of three parts: (1) the definition of the maximal duration of the time step as allowed by the VDM model, (2) the execution of a time step by the solver, leading to an intermediate configuration Cs, and (3) updating all durations of all current threads, committing all variables of the current threads, and dealing with events generated by the solver.

1. Time may progress with t time units if
– t is smaller than or equal to all durations that are at the head of an instruction sequence of an executing thread, and
– C(now) + t is smaller than or equal to all upper bounds of messages in link sets.
Define the maximal length of the time step tm as the largest t satisfying these conditions.

2. If tm > 0 the solver tries to execute a time step of length tm in configuration C. Concerning the variables, the solver will only use the ioval field, ignoring the lval and modif fields. It will only read the actuator variables in OutVar and it may write the sensor variables in InVar in field ioval. As soon as the solver generates one or more events, its execution is stopped. This leads to a new configuration Cs and a set of generated events EventSet. Since the solver takes a positive time step, we have C(now) < Cs(now) ≤ C(now) + tm. If Cs(now) < C(now) + tm then EventSet ≠ ∅. Moreover, Cs(f) = C(f) for every field f ∈ {instr, curthr, status, lval, modif}. If tm = 0 then the solver is not executed and Cs = C, EventSet = ∅. This case is possible because we allow duration(0) to commit variable changes, as shown in the next point.

3. Starting from configuration Cs and EventSet, next (a) the durations are decreased by the actual time step performed, leading to configuration Cd, (b) transactions are committed for threads with zero durations, leading to configuration Cm, and (c) new threads are created for the event handlers, leading to the final configuration C′. Let ts = Cs(now) − C(now) be the time step realized by the solver.

(a) Durations in instruction sequences are modified by the following definition, which yields a new function from threads to instruction sequences. For any thread i,
NewDuration(C, ts)(i) = duration(di − ts)ˆtail(C(instr)(i)) if head(C(instr)(i)) = duration(di)
NewDuration(C, ts)(i) = C(instr)(i) otherwise
Let Cd = Cs[ instr → NewDuration(C, ts) ].

(b) Let ThrDurZero(C) = {i | exec(C, i) and head(C(instr)(i)) = duration(0)} be the set of threads with a zero duration. For these threads the transactions are committed and the values of the modified variables are finalized. This is defined by two auxiliary functions:
NewIoval(C)(x) = v if there exists i ∈ ThrDurZero(C) with C(modif)(x, i) = v ≠ ⊥
NewIoval(C)(x) = C(ioval)(x) otherwise
Note that at any point in time at most one thread may modify the same global variable in a transaction. Hence, there exists at most one thread satisfying the first condition of the definition above, for a given variable x. The next function resets the modified field, for any x and i:
NewModif(C)(x, i) = ⊥ if i ∈ ThrDurZero(C)
NewModif(C)(x, i) = C(modif)(x, i) otherwise
Then Cm = Cd[ ioval → NewIoval(C), modif → NewModif(C) ].

(c) For each event e ∈ EventSet with evhdlr(e) = (ISe, ne), let ie be a fresh, not yet used, thread identity with status dormant and node(ie) = ne. Then we define an auxiliary function EventInstr(C) : Thread → Instr. Seq. which installs event handlers. For any thread i,
EventInstr(C)(i) = ISe if i = ie for some e ∈ EventSet
EventInstr(C)(i) = C(instr)(i) otherwise
In addition, we awake the threads of the event handlers by changing their status. Define, for any i,
NewStatus(C)(i) = alive if i = ie for some e ∈ EventSet
NewStatus(C)(i) = C(status)(i) otherwise
Then C′ = Cm[ instr → EventInstr(Cm), status → NewStatus(Cm) ].
Observe that C′(now) = Cs(now) = C(now) + ts with ts ≤ tm.

Definition 6 (Context Switch). A step C −→ C′ corresponds to a context switch iff there exists a thread i which is alive, not running, and has a non-empty program which does not start with a duration, i.e., ¬exec(C, i), C(status)(i) = alive, C(instr)(i) ≠ ∅, and head(C(instr)(i)) ≠ duration(d) for any d. Then i becomes the current thread and a duration of δcs time units is added to represent the context switching time:
C′ = C[ instr(i) → duration(δcs)ˆC(instr)(i), curthr(node(i)) → i ]
Note that more than one thread may be eligible as the current thread on a node at a certain point in time. In that case, a thread is chosen nondeterministically in our operational semantics. Fairness constraints or a scheduling strategy may be added to reduce the set of possible execution sequences and to enforce a particular type of node behavior, such as round-robin or priority-based pre-emptive scheduling.

Definition 7 (Deliver Link Message). A step C −→ C′ corresponds to the message delivery by a link iff there exists a link l and a triple (m, lb, ub) in C(linkset)(l) with lb ≤ C(now) ≤ ub. There are two possibilities for message m:
– call(i, oid, op): Insert the call in the queue of object oid: C′ = C[ q(oid) → insert((i, op), C(q)(oid)), linkset(l) → remove((m, lb, ub), C(linkset)(l)) ]
– return(i): Wake-up the caller, i.e. C′ = C[ status(i) → alive, linkset(l) → remove((m, lb, ub), C(linkset)(l)) ]

Definition 8 (Process Queue Message). A step C −→ C′ corresponds to the processing of a message from a queue iff there exists an object oid with head(C(q)(oid)) = (i, op). Let j = fresh(C, oid) be a fresh thread and IS be the explicit definition of op. If the operation is synchronous, i.e. syn?(op), then we start a new thread with IS followed by a return to the caller:
C′ = C[ instr(j) → ISˆreturn(i), status(j) → alive, q(oid) → tail(C(q)(oid)) ]
Similarly for an asynchronous call, where no return instruction is added:
C′ = C[ instr(j) → IS, status(j) → alive, q(oid) → tail(C(q)(oid)) ]

Definition 9 (Operational Semantics). The operational semantics of a specification in the language of Sect. 4 is a set of execution sequences of the form C0 −→ C1 −→ C2 −→ . . ., where each pair Ci −→ Ci+1 is a step (Def. 3) and the initial configuration C0 satisfies a number of constraints:
– no thread has status waiting,
– on each node, the currently executing thread is alive,
– a thread is dormant iff it has an empty execution sequence,
– the modif field is ⊥ everywhere,
– all queues and link sets are empty, and
– the auxiliary instruction return does not occur in any instruction sequence.
To avoid Zeno behaviour, we require that for any point of time t there exists a configuration Ci in the sequence with Ci (now) > t.
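The commit part of a time step (Definition 5(b)) can be illustrated in the same style. The sketch below is again our own Python rendering under the same invented representation: buffered modif entries of threads whose head instruction is duration(0) are written to ioval and then reset.

```python
# A companion sketch (same invented Python representation, our own, not the
# authors' model) of the commit part of a time step, Def. 5(b): threads whose
# head instruction is duration(0) get their buffered modif entries committed
# to ioval, after which the modif field is reset to ⊥.

BOT = None  # the undefined value ⊥

def thr_dur_zero(C):
    """ThrDurZero(C): executing threads whose next instruction is duration(0)."""
    return {i for i, seq in C["instr"].items()
            if seq and seq[0] == ("duration", 0.0)
            and C["curthr"][C["node"][i]] == i}

def commit(C):
    """NewIoval / NewModif: finalize transactions of zero-duration threads."""
    zero = thr_dur_zero(C)
    new_ioval = dict(C["ioval"])
    new_modif = dict(C["modif"])
    for (x, i), v in C["modif"].items():
        if i in zero and v is not BOT:
            new_ioval[x] = v          # at most one thread holds a transaction on x
            new_modif[(x, i)] = BOT   # reset the modified field
    return {**C, "ioval": new_ioval, "modif": new_modif}

C = {"instr": {1: [("duration", 0.0)]},
     "curthr": {"n1": 1}, "node": {1: "n1"},
     "ioval": {"x": 0}, "modif": {("x", 1): 5}}
assert commit(C)["ioval"]["x"] == 5
```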
6 Concluding Remarks

We have defined a formal operational semantics for an extension of VDM++ which supports the development of embedded systems. The semantics has been validated by formulating it in the typed higher-order logic of the verification system PVS¹ and verifying properties about it using the interactive theorem prover of PVS. In fact, the formal operational semantics presented in this chapter is based on a much larger constructive (and therefore executable) operational semantics of the extended language, which has been specified in VDM++ itself. This approach allows symbolic execution of models written in our extended language using the existing and unmodified tool set VDMTools. Besides the examples mentioned in Sect. 3.1, our extended VDM++ language has been applied to an experimental set-up that represents part of the paper path in a printer. This has been done in three phases:

1. In the first phase, the emphasis is on global system analysis by modeling and simulation using formal techniques. A model of the dynamic behavior of the physical system (with pinches, motors, and paper movement) was built using bond graphs,
¹ The PVS files are available at http://www.cs.ru.nl/~hooman/VDM4ES.html
supported by the 20-SIM tool. The controller software has been modeled using VDM++ and our extensions, supported by VDMTools. The co-simulation of both models, as defined in this paper, has been used for multi-disciplinary design space exploration. This revealed a number of problems that were due to misunderstandings between designers of different disciplines and that would in normal industrial practice only have been detected in the integration phase.

2. In the second phase, the emphasis is on elaborating the software model of the control application. The level of detail in the VDM++ model has been increased incrementally, until source code could be generated from it automatically. The generated code has been compiled using a standard C++ compiler running on the simulator host, in our case a normal personal computer running on the Windows platform. The resulting dynamic link library (DLL) has been used for so-called software-in-the-loop simulations against the unmodified model of the plant in 20-SIM.

3. In the third phase, the unmodified C++ code generated from the VDM++ models developed in the second phase is compiled for the target platform. The resulting application has been uploaded to the embedded controllers of the experimental set-up for testing. It showed that the continuous validation during the process leads to high-quality code that meets the control objectives with high accuracy. More details about this application can be found in [23].

Acknowledgments. The first author would like to thank Willem-Paul de Roever for numerous reasons: for the invitation to start as a researcher in the European project Descartes, for the opportunity to participate in subsequent international projects, for the collaboration on joint papers, and for many years of supervision, guidance and stimulation.
References

1. Hooman, J., Mulyar, N., Posta, L.: Coupling Simulink and UML models. In: Schnieder, B., Tarnai, G. (eds.) FORMS/FORMATS 2004, pp. 304–311 (2004)
2. The Mathworks: Matlab/Simulink (2008), http://www.mathworks.com/
3. Wandeler, E., Thiele, L., Verhoef, M., Lieverse, P.: System architecture evaluation using modular performance analysis: a case study. International Journal of Software Tools for Technology Transfer (STTT) 8(6), 649–667 (2006)
4. Verhoef, M.: On the use of VDM++ for specifying real-time systems. In: Fitzgerald, J., Larsen, P.G., Plat, N. (eds.) Towards Next Generation Tools for VDM: Contributions to the First International Overture Workshop, June 2006. CS-TR 969, pp. 26–43. School of Computing Science, Newcastle University (2006)
5. Verhoef, M., Larsen, P.G., Hooman, J.: Modeling and validating distributed embedded real-time systems with VDM++. In: Misra, J., Nipkow, T., Sekerinski, E. (eds.) FM 2006. LNCS, vol. 4085, pp. 147–162. Springer, Heidelberg (2006)
6. Verhoef, M., Visser, P., Hooman, J., Broenink, J.: Co-simulation of distributed embedded real-time control systems. In: Davies, J., Gibbons, J. (eds.) IFM 2007. LNCS, vol. 4591, pp. 639–658. Springer, Heidelberg (2007)
7. Owre, S., Rushby, J., Shankar, N.: PVS: A prototype verification system. In: Kapur, D. (ed.) CADE 1992. LNCS (LNAI), vol. 607, pp. 748–752. Springer, Heidelberg (1992)
8. SRI International: PVS (2008), http://pvs.csl.sri.com/
9. Controllab Products: 20-sim (2008), http://www.20sim.com/
10. Reggio, G., Astesiano, E., Choppy, C., Hussmann, H.: Analysing UML active classes and associated statecharts - a lightweight formal approach. In: Maibaum, T. (ed.) FASE 2000. LNCS, vol. 1783, pp. 127–146. Springer, Heidelberg (2000)
11. Hooman, J., van der Zwaag, M.: A semantics of communicating reactive objects with timing. International Journal of Software Tools for Technology Transfer (STTT) 8(4), 97–112 (2006)
12. Bennet, A., Field, A.J., Woodside, M.C.: Experimental Evaluation of the UML Profile for Schedulability, Performance and Time. In: Baar, T., Strohmeier, A., Moreira, A., Mellor, S.J. (eds.) UML 2004. LNCS, vol. 3273, pp. 143–157. Springer, Heidelberg (2004)
13. Nicolescu, G., Boucheneb, H., Gheorghe, L., Bouchhima, F.: Methodology for efficient design of continuous/discrete-events co-simulation tools. In: Anderson, J., Huntsinger, R. (eds.) High Level Simulation Languages and Applications - HLSLA. SCS, pp. 172–179 (2007)
14. Gheorghe, L., Bouchhima, F., Nicolescu, G., Boucheneb, H.: Formal definitions of simulation interfaces in a continuous/discrete co-simulation tool. In: Proc. IEEE Workshop on Rapid System Prototyping, pp. 186–192. IEEE Computer Society, Los Alamitos (2006)
15. Andrews, D., Larsen, P., Hansen, B., Brunn, H., Plat, N., Toetenel, H., Dawes, J., Parkin, G., et al.: Vienna Development Method Specification Language Part 1: Base Language (1996); ISO/IEC 13817-1
16. CSK Systems Corporation: VDMTools (2008). Free tool support can be obtained from http://www.vdmtools.jp/en/
17. van den Berg, M., Verhoef, M., Wigmans, M.: Formal Specification of an Auctioning System Using VDM++ and UML – an Industrial Usage Report. In: Fitzgerald, J., Larsen, P.G. (eds.) VDM in Practice – Proceedings of the VDM Workshop at FM 1999, pp. 85–93 (1999)
18. Hörl, J., Aichernig, B.K.: Validating voice communication requirements using lightweight formal methods. IEEE Software 13-3, 21–27 (2000)
19. Fitzgerald, J., Larsen, P.G., Mukherjee, P., Plat, N., Verhoef, M.: Validated Designs for Object-oriented Systems. Springer, New York (2005), http://www.vdmbook.com
20. Larsen, P.G., Lassen, P.B.: An Executable Subset of Meta-IV with Loose Specification. In: Prehn, S., Toetenel, H. (eds.) VDM 1991. LNCS, vol. 551, pp. 604–618. Springer, Heidelberg (1991)
21. Mukherjee, P., Bousquet, F., Delabre, J., Paynter, S., Larsen, P.G.: Exploring Timing Properties Using VDM++ on an Industrial Application. In: Bicarregui, J., Fitzgerald, J. (eds.) The Second VDM Workshop (2000)
22. Clarke, D., Johnsen, E.B., Owe, O.: Concurrent objects à la carte. In: Dams, D., Hannemann, U., Steffen, M. (eds.) de Roever Festschrift. LNCS, vol. 5930. Springer, Heidelberg (2010)
23. Verhoef, M.: Modeling and Validating Distributed Embedded Real-Time Control Systems. PhD thesis, Radboud University Nijmegen, The Netherlands (2008)
A Proof System for a PGAS Language

Shivali Agarwal¹ and R.K. Shyamasundar²

¹ Tata Institute of Fundamental Research, Mumbai
² IBM, India Research Lab, New Delhi
In honor of Willem-Paul de Roever: Mentor, Friend and Colleague

Abstract. Due to advances in hardware such as multi-core and multi-threaded architectures, various refinements of parallel programming models, such as distributed shared memory, global address space, and partitioned global address space (PGAS), are widely prevalent in programming languages designed for high-performance computing. In this paper, we discuss preliminary work on a proof system for such a language. The language, referred to as PGAS0, is essentially an object-oriented language with features such as a statically fixed set of places, asynchronous creation of activities, futures, and atomics for synchronization. Many of the features of PGAS0 are taken from the new experimental language X10 (built around Java) under design at IBM. The language distinguishes between local and remote data access with reference to threads. The atomic block is the only construct that can be used for synchronization in PGAS0, and it is executed in a mutually exclusive manner at a place. One of the main safety properties of a PGAS0 program is that a thread should not access a non-local data object directly by dereferencing, but should use the future construct to obtain remote data. We describe the semantics of PGAS0 and illustrate a proof system for it, with the motivation of establishing locality of data (extremely useful from a performance perspective). Further, we show how the same proof system can be used for establishing other concurrency properties.
1 Introduction
Advances in hardware architectures have led to the wide usage of multi-core/multi-threaded processors. Scaling performance through parallelism has attained a special status over frequency scaling as the means to enhance performance. Towards such a goal, parallel programming models have been revisited with a view to harnessing parallelism through data parallelism and non-interference. Some of the refined parallel programming models are: data parallel (HPF), message passing (MPI), shared memory (OpenMP), and distributed shared memory (DSM) (UPC [12], Titanium). The Distributed Shared Memory (DSM) model is designed to leverage the ease of programming of the shared memory paradigm (hence programmer productivity), while enabling high performance by expressing locality as in the
message passing model.

Fig. 1. Address Space

Figure 1 depicts a general global address space wherein the global address space is partitioned relative to threads and references are either local or remote (i.e., global). Experience has shown that DSM programming languages, such as UPC, may, however, be unable to deliver the expected high level of performance due to the costly overhead of translating from the UPC memory model to the target architecture's virtual address space [12]. With a view to harnessing the notion of locality much better, in X10 [4] (a programming language for high-performance computing under design at IBM from the perspective of performance and productivity), the global address space is partitioned with respect to an a priori fixed set of places, and access to local and remote data is achieved through synchronous and asynchronous accesses; note that at a place one can access the data at that place synchronously, while remote data access is achieved through spawning an explicit asynchronous computation on the remote location. Such a paradigm seems to inherently enhance true parallelism. Such languages are increasingly gaining significance with the demand for high-performance computing and the easy availability of multi-core architectures. In X10, threads, which are called activities, can share only those variables that are declared final. A future is an asynchronous method call that stores the return value until the activity that created the future asks for it. The future operation is complete when the return value is assigned to the designated variable. Operations on shared variables are protected through atomic blocks. Another novel feature is that an activity that generates an exception always has a parent to report to. One of the main safety properties of an X10 program is that an activity should not access a non-local data object directly by de-referencing, but could use features like future to obtain remote data. Thus, deriving the locality of data, or otherwise, plays a significant role in the performance of the program. Our aim in this paper is to analyze how such properties can be established in programming languages that demand them. With this in view, we consider an abstract programming language PGAS0 that is essentially an object-oriented language with some of the important features from X10, such as a statically fixed set of places, asynchronous creation of activities, futures, and atomics for synchronization. The PGAS0 language distinguishes between local and remote data access with reference to activities. The atomic block is the only construct that can be used for synchronization in PGAS0; it is executed in a mutually exclusive manner at a place, and there are no other explicit synchronization constructs as in Java. While these restrictions make it simpler for reasoning, it must be noted that the language is quite general. Note that there is no constraint as in Creol derivatives [5], such as that only one thread can be active within an object and rescheduling occurs only at specific release points. In this paper, we shall describe the semantics of PGAS0 and illustrate a proof system for reasoning about the locality of data. The proof system uses global invariance, local invariance, and interference freedom of threads of execution. Our primary concern is the specification of such properties in the proof system. After illustrating the derivation of such properties, we show how the same proof system can be used for establishing other useful concurrency properties as well.

The rest of the paper is organized as follows: Section 2 gives the syntax of our language, the semantics of which is discussed in Section 2.1. The proof system is developed in Section 3. Related work is given in Section 4, and we conclude in Section 5.
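Before turning to the language definition, the following toy sketch in Python captures the locality discipline just described. It is our own illustration, not part of PGAS0 or X10; the classes, the place names, and the lambda standing in for a future handle are all invented for the example.

```python
# Hypothetical toy model of place-bound objects and locality-safe access.

class Obj:
    def __init__(self, place, **fields):
        self.loc = place          # objects are bound to a place for their lifetime
        self.fields = fields

class Activity:
    def __init__(self, place):
        self.here = place         # activities cannot migrate either

    def read(self, obj, field):
        # direct dereferencing is only allowed on local objects
        if obj.loc != self.here:
            raise RuntimeError("place-safety violation: remote dereference")
        return obj.fields[field]

    def future_read(self, obj, field):
        # remote access goes through an asynchronous activity at the remote
        # place; calling the result stands in for force() on a future handle
        remote = Activity(obj.loc)
        return lambda: remote.read(obj, field)

a1 = Activity("P1")
o2 = Obj("P2", f=42)
fut = a1.future_read(o2, "f")   # allowed: value obtained via a future
assert fut() == 42
try:
    a1.read(o2, "f")            # direct remote dereference is rejected
except RuntimeError:
    pass
```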
2 Language PGAS0
A PGAS0 program, Prog, is a collection of Places, Classes and a main statement (the starting point of the program). A Place corresponds to a cluster with affinity to a part of memory in the PGAS setting. A class, denoted by Clj, is very similar to the notion of a class in Java. The syntax of PGAS0 is given in Figure 2. The main program starts a thread (referred to as an activity) with a statement finish s, where the keyword finish ensures that the statement s, and subsequent activities created within s, terminate before the statement finish s is considered terminated (cf. Section 2.1); it also provides a basis for robust exception handling in the context of asynchronous creation of activities. The statement async Placei { T x s } denotes the creation of an asynchronous activity at Placei. The statement new <Placei> C() denotes the creation of an object of type C at Placei. In case a place is not mentioned, the object is created wherever the activity that calls this statement is executing. In a typical future computation statement
!primT v = future Placei v1 .m( v ) the computation is bound to Placei , !primT (note “!”) denotes that v is a future variable that takes a value of type primT (primitive type). In other words, future → variable is the placeholder for the return value of the method v1 .m( v ) in f utr. The variable v1 is a subscripted version of v to denote that they are different. A future variable can be forced to return the value using a blocking call namely f orce(). Note that f orce() can be called only on a future variable. A variable if declared final in an activity becomes accessible to subsequent activities created by this activity. Note that the final object cannot be modified; → however, the fields of the final object can be modified. op( v ) denotes operations on field / local variables when they belong to a primitive type. The shared objects can be updated in a mutually exclusive manner using atomic blocks. The statements in atomic block have to be non-blocking
A Proof System for a PGAS Language
P rog ::= Place ::= Cl ::= T ::= primT ::=
Place Cl main Place1 , Place2 Cl1 , Cl2 C | primT | final C int | bool
→ →
class C extends C { T f ; M } Clj ::= main ::= finish s → → A ::= async Placei { T x s} M ::= v ::= se ::= sa ::= e ::= ss ::= s ::=
f utr ::= b ::=
→ →
→→
T m ( T x ){ T x s; return e} x.f | x → v | op(se) x = se|x.f = se → v | new < Placei > C() | e.m( e ) → |v.f orce() | op( v ) sa| if b then ss else ss| ss1; ss2 v = e | ss | f inish A | f utr | skip | while b do s | A | atomic ss | when b ss → !primT v = future Placei v1 .m( v ) boolExpr
Placei ::= i f ::= field variable x ::= local variable m ::= method name C ::= class name
165
Program Set of Places Set of Classes Types Primitive type Class structure Main program (implicit finish) Asynchronous object creation at place i Method Variable Simple expression Simple assignment Expression Simple statement
Statements Future Statement boolean expression i ∈ {1, 2 . . . , n}
Fig. 2. Syntax of P GAS0
and cannot start another asynchronous activity. A conditional wait on shared objects is made possible by using the classical when b ss construct. 2.1
Informal Interpretation of P GAS0
The execution of P = < main, P laces, Classes > can be interpreted as follows: 1. Places have unique identities (statically) and could map to a cluster of processors with memory distribution as per PGAS paradigm 2. The classes contain instance variables and method declarations. 3. Objects, which are instance of classes are created dynamically and communicate via method invocation. All objects are bound to a place (or location) and cannot change their location throughout their lifetime. – Instance variables hold the state of an object and exist throughout the object’s lifetime. Local variables are stack-allocated and only exist during the execution of the method to which they belong.
166
S. Agarwal and R.K. Shyamasundar
4. Statement main starts the execution of program at place P0 . That is, it creates a thread (often referred to as activity) corresponding to main (or the outermost finish (implicit/explicit)) to start with and subsequently creates threads taking into account spawning requests at various places with the progress of the computation. The place at which the activity has been created is the local place for the activity and all other places are considered remote w.r.t. this activity. Note that an activity cannot migrate to another remote place. – An activity can access synchronously only the local data (i.e., the objects that reside at the place where the activity is running). Remote data (objects at remote places) access can be done through the use of futures. A future is an asynchronous activity created at the desired location that returns a value which is the result of a method call embodied in the activity. Thus, it is an asynchronous method call that returns a value on demand. A demand can be made by invoking a method f orce() on the future variable as explained earlier. 5. Keyword finish creates a finish scope within an activity such that all the activities created within the scope have to terminate before the finish statement can be considered complete. – The scope can be modelled as a tree where the root node denotes the scope (signified by the finish enclosing the main) of the main activity, the internal nodes correspond to activities that create finish scope nested within the finish of its’ parent node. This is a dynamic tree and an activity at internal node has to wait till the size of the subtree rooted at that node becomes zero. The leaf node activities execute normally. It is easy to see that activities terminate in a bottom up manner. – The finish statement provides a synchronization point between the parent and its’ children. The language enforces the main activity to create a finish scope and subsequent activities may create a fresh finish scope that is nested in the earlier one. Activities that do not create a finish scope maintain a link to that activity within whose finish scope they had been created. The tree structure, thus induced on activities, provides a rooted exception model; thus the exceptions of asynchronous activities will always be captured at least by the closest finish if not captured by the activity that created it. 6. An atomic block is non-blocking, cannot create activities and can access only local data. There is no nesting in atomic blocks, thereby, atomic construct cannot induce deadlock. For the programmer, it would appear as if the atomic block was executed in a mutually exclusive way at a place. 2.2
States and Configurations
Before going in for the definition, we shall explore the structure of the states and context of a P GAS0 program due to new features that are in P GAS0 over and above Java.
A Proof System for a PGAS Language
167
Basic Notation: 1. Places: The set of places is denoted by {P1 , P2 , . . . , Pn }. 2. Classes: For any class name c ∈ C, the domain for type c is denoted by V alc ; α, β . . . denote (infinite) set of object identifiers. – A fresh element in V alc denotes one that has not been used before in the program to denote object instances. Note that objects cannot migrate between places. States and Contexts: First, let us explore the structure of the contexts that arise due to various features of the language. a. Variables and Values: – The of type t is denoted by V alt which includes null and V al = domain t V al denotes the disjoint union. t – Let V ar denote local/final variables in a method that is being executed by an activity; that is, they are stack allocated. The final variables are special types of stack variables that cannot be modified but can be passed on to the child activity created in that method. – A local state (often referred to as context )of an activity is denoted by τ which is a partial map defined by V ar V al. – The map τ is factored out as τ f and τ l for convenience. • We use τ f for those variables in τ that have been declared final. • Similarly, τ l is used for the map of local variables. – Let IV ar denote instance variables that hold the state of an object. An object is, therefore, characterized by an instance state σinst that is a partial map: IV ar ∪ {this} ∪ {loc} V al. Let Σinst denote the set of all such instance state maps. – For an α ∈ V alc , the initial object instance of type C is denoted by αC init where all the field variables are initialized to either null or default values and the special fields that are created are mapped as follows: 1) this is created which is a self reference, that is, it holds the value α. 2) loc is created which contains the place identifier. For example, if an object is created at Pi , then loc is assigned i. Note that the value of loc cannot get changed during the entire lifetime of the object, that is, objects cannot migrate between places. A global state at a place Pi is denoted by σi and is of type V alc Σinst . The set of global states at Pi is denoted Σi . We shall denote by τ [v → u] the local state which assigns value u to v and agrees on all other values. σinst [f → u] is defined analogously. σi [α.f → u] denotes that in the global state of σi , instance variable(also called field variable) f of object instance pointed to by α maps to u. b. Activities: Let the set of activities at a place Pi be Ai = a1 , a2 , . . . , a , a , . . .. An activity created can be in active, suspended (denoted asusp ) or complete status (ac ). A newly created activity a is denoted anew . An activity configuration is
168
S. Agarwal and R.K. Shyamasundar
denoted by a[s]τ where a represents the activity identifier, s represents the statements to be executed, τ denotes the local state for execution of s. Some of the special aspects needed for other structures are described below: c. Future: The value of a future computation is abstracted through a map, called destiny map, denoted des. The map des maps each activity a to null by default. It is not null only when the activity itself is a future computation in which case it maps to the placeholder that will store the return value of the computation. Each activity itself decides on its des value, therefore, the map des reduces to a local attribute of an activity that can be fixed at the time of activity creation. d. Statements: In general, – We denote the successful completion of a statement via the classical skip construct that does not do any operation. – In case an exception occurs in the execution of a statement s, the statement is reduced to error (essentially denotes the classical notion of Abort). e. Finish Statement: The finish statement plays an important role in the order of creation/ termination of activities and induces a tree structure on the activities as described below: – The tree be denoted by Δ = (V, E). – The domain of V is the set of all activities that can be created at any place which is denoted by ∪i Ai ∪ {a0 } ∪ main. Here, we have distinguish main which corresponds to main activity that starts the program. a0 is a special node used to initialize V and does not correspond to any activity. – The set of edges, E, is initially empty. A program is considered started when there is an edge from a0 to main. Whenever an activity a ∈ Ai is created in the scope of a finish statement belonging to the activity(i may/may not be equal to j). a ∈ Aj , node a is added to V and edge (a, a ) is added to E. This edge is removed when a terminates along with node a . – Note that a single Δ is used for the whole program and is not tied to a place. – An activity created in nested finish will have an edge from the closest finish. The activities that create a finish scope form internal nodes of the tree while others are at the leaf level. The semantics of finish demands that an activity corresponding to internal node in Δ can resume execution only when the subtree rooted at this node is empty. For an activity a that occurs as an internal node, the rooted subtree denoted Δa . The set of edges in Δ are given by E(Δ). Note that the subtree Δa is considered empty if and only if E(Δa ) = ∅ and the nodes in Δa denoted by V (Δa ) contain only a. – For an activity a, its’ parent is denoted apt and corresponds to the immediate predecessor node in Δ . Note that we consider main as special case and mainpt is the main itself. Definition 1: A configuration at a place Pi denoted, Pi = (σi , Ai {a[s]τ }) where σi denotes the global instance state of Pi and Ai {aj [sj ]τ } denotes the set of
A Proof System for a PGAS Language
169
activities aj ready to execute statement sj in state σi with context τ ; appropriate projection of the state with respect to the activity is understood. Definition 2: A Program configuration, say P C = (P1 , P2 , . . . , Pn , Δ). The initial program configuration is given by P C0 = (P10 , P20 , . . . , Pn0 , Δ0 ) where P1 is 0 the place where the program is loaded and P10 = σ10 , A1 {a[s]τ }; that is, the initial configuration at place P1 denotes configuration with initial state (includes the data distributed at this place), context, and the initial program counters indicating the ready activity statements (provided by τ 0 ). For places other than place P1 we have Pi0 = (σi0 , ∅}), i = 1, indicating the data distribution only; no activities exist initially (they need to be created after the program starts from place zero). Definition 3: At a place i with activity set Ai , transition of configuration with respect to an active activity a ready to execute with s1 is given by (σi , Ai {a[s1 ]τ }) → (σi , Ai {next(a[s1 ])τ }) where – next(a[s1 )]) indicates the statement to be executed after s1 in activity a. Note that a thread is associated with an activity. For instance, in the notation a[s1 ], activity a is active and has a thread associated with it and the current locus of execution of a is denoted by statement s1 (referred to as the active statement). – Essentially, next: activity × active statement → next active statement That is, next(a[s]) yields the next point of execution in the thread associated with a beginning from the body s. If next(a[s1 ]) is null 2 then the activity a terminates if no other active statements are there in activity a. – In an activity, it is possible to have several concurrent threads of execution. Further, due to asynchronous creation several activities or threads can be created by an activity. For the purpose of the definition of next, we use the textual body to determine the next locus of control. We shall not go into further details of the definition of next. Note that as a result of transitions, global state at a place may change due to change in object instance. Local state of an activity may also change. In the next section, we describe rules for various statements to show how effect on the global and local state. In the following, we describe the basic rule of parallel execution in an object oriented language. Definition 4 (Parallel Execution at a place): Parallel execution of statements s1 and s2 in activities a1 and a2 respectively at a place, say Pi , denoted by {a1 [s1 ], a2 [s2 ]}, is nothing but the interleaved execution of the statements and is given by ||1 , ||2 ( for local). (σi , Ai {a1 [s1 ]}) → (σi , Ai {a1 [s1 ]}) (σi , Ai {a1 [s1 ], a2 [s2 ]}) → (σi , Ai {a1 [s1 ], a2 [s2 ]}) 2
Sometimes in the rules this is also denoted by for brevity.
||1
S. Agarwal and R.K. Shyamasundar
(σi , Ai {a2 [s2 ]}) → (σi , Ai {a2 [s2 ]}) (σi , Ai {a1 [s1 ], a2 [s2 ]}) → (σi , Ai {a1 [s1 ], a2 [s2 ]})
||2
The statement transitions follow the rules discussed in the sequel. Note that we are not explicitly showing the merging of the states explicitly as it is a shared state space. Definition 5 (Parallel execution at different places: Transitions of program configurations denoted, P C → P C → . . ., depict transitions due to executing activities concurrently across places. The parallel execution of two activities at different places (say Pi , Pj with activity sets Ai , Aj respectively) results in disjoint global states provided the activities are not doing any remote operations. The notion and effect of remote operations will become clear when we describe rules that describe the effect of remote operations. (σi , Ak {ai [si ]}) → (σi , Ak {ai [si ]}), k = {i, j}, si , sj have local effects (σi , Ai {ai [si ]}), (σj , Aj {aj [sj ]}) → (σi , Ai {ai [si ]}), (σj , Aj {aj [sj ]})
||g
– The rule for parallel execution of statements (given by ||g - g for global) at two different places states that they execute independently under the assumption that they are not remote operations because the global instance state is disjoint. – An activity at place Pi can execute a statement that can affect the global state at another place Pj . So the affecting statement at Pi has to be interleaved with all possible global instance states at Pj which can get affected to observe all possible behaviours. – Activities from multiple places can update Δ simultaneously and the order of interleaving is immaterial. Definition 6: A Program State is denoted by Σ and is given by < σ1 , σ2 , . . . , σn , Δ > where σi corresponds to that of Pi . 2.3
Operational Semantics
In this section, we shall describe the semantic rules using the classical SOS rules. Notation: – The primed versions τ , σi , Ai , Δ , denote the change in τ , σi , Ai , Δ respectively due to statement execution. – Transition granularity: In a rule, the global instance state σi for place Pi changes to σi in single step denoted by →. Then, σi is the next global instance state in which another activity can execute a statement. – τ implicitly means τ l and τ f . – Expression evaluation is denoted by e; we shall not go into details. Assignment: The statement x = e results in x being assigned the value of e. Rules (1)-(4) describe the assignment rules taking into account local and
A Proof System for a PGAS Language
instance variables. The primed version of current local context τ l denotes the change in the value of a local variable due to assignments, α denotes the object identifier. Note that in the case of field access (as described in rules (2), (3)), the object being accessed needs to reside at the same place as the activity. If not, then a goes to error state as shown in rule (4). The effect of instance variable assignment that is remote causes error in a similar way. τ l [x → eτ,σi ]
(1)
(σi , Ai {a[s ≡ x = e]τ }) → (σi , Ai {next(a[s])τ })
(Expression to local variable)
α = eτ,σi
α.loc = i τ l [x → α.f ]
(2)
(σi , Ai {a[s ≡ x = e.f ]τ }) → (σi , Ai {next(a[s])τ })
(Instance variable to local variable)
α = τ (x) α.loc = i σi [α.f = eτ,σi ] (σi , Ai {a[s ≡ x.f = e]τ }) → (σi , Ai {next(a[s])τ })
(3)
(Expression to instance variable)
α = eτ,σi α.loc = i (σi , Ai {a[s ≡ x = e.f ]τ }) → (σi , Ai {a[error]τ })
(4)
(Remote data to local variable)
Conditional: The rule for the conditional3 is the classical rule given below: bτ,σi (σi , Ai {a1 [if b then s1 else s2 ]}) → (σi , Ai {a1 [s1 ]})
Condtt
¬bτ,σi (σi , Ai {a1 [if b then s1 else s2 ]}) → (σi , Ai {a1 [s2 ]})
Condf f
Object Creation: α is f resh
l σi [α → αC init ] τ [x → α]
(σi , Ai {a[s ≡ x = new C()]τ }) → (σi , Ai {next(a[s])τ })
(5) (new C)
3
Due to interleaving of objects, the classical sequential composition does not have significance except that it defines the next discussed earlier. However, in the atomic construct the classical sequential composition holds. Hence, the rule is given along with the atomic construct.
S. Agarwal and R.K. Shyamasundar f α is f resh σi [α → αC init ] τ [x → α]
(σi , Ai {a[s ≡ f inal C x = new C(); s]τ }) → (σi , Ai {next(a[s])τ })
(6)
(final new C)
i = j
l α is f resh σj [α → αC init ] τ [x → α]
(7)
(σi , Ai {a[s ≡ x = new Pj C]τ }) → (σi , Ai {next(a[s])τ })(σj , Aj )
(new Pj C)
As there is no explicit place specified, Rule (5) (rule for creating a new object) says that when a new C call is made, the object is created at the same place as that of a and added to the set of objects at that place. The object instance created by default constructor is αC init . As a result of this call x maps to α in the updated local state τ l . Any change in final variables is denoted by the primed version, τ f . In case x is a final variable as seen in rule (6) it is stored in τ f . In case an explicit place is specified as in rule (7), the object instance is created at that remote place and a at place Pi gets the object identifier and updates its current context τ l by mapping x to α as shown by τ l . The rule for creating remote object as a final variable can be deduced similarly. Activity Creation σ1 = ∅, τ = ∅ E(Δs0 ) = ∅ E(Δ ) = E(Δ) ∪ (s0 , main) (∅, ∅), Δ → (σ1 , A1 {main[s]τ }), Δ
(8)
(main → finish s) →→
f
des(a) = null anew Ai = Ai ∪ a [ T x s]τ E(Δ ) = E(Δ) ∪ (apt , a ) (σi , Ai {a[s ≡ A]τ }), Δ → (σi , Ai {next(a[s])τ }), Δ
(9)
→ →
(A → async Pi { T x s}) i = j des(a) = null anew
→→
body(A) = { T x s}
E(Δ ) = E(Δ) ∪ (apt , a ) f
(σi , Ai {a[s ≡ A]τ }), Δ → (σi , Ai {next(a[s])τ }), (σj , Aj ∪ a [body(A)]τ ), Δ → →
(10)
(A → async Pj { T x s}) Assuming that the main activity always starts at P1 , rule(8) defines the rule for creation of the main thread wherein the tree of activities denoted by Δ initially consists of only node s0 with the only edge (first) (s0 , main). The statement block s is the main program body which starts in place P1 and may create activities at other places. Rules (9) and (10) define the creation of local and remote activities respectively. In the case of creation of other activities denoted by A, the activity fragment of the code is denoted by body(A). The new activity, denoted by anew , is created at the place specified in the construct. The activity
A Proof System for a PGAS Language
is added to Ai in (9) and Aj in the case of rule (10). The initial context of the activity a is initialized with the mapping of final variables obtained from τ f of a. The parent of the new activity is the same as the parent of a if a is not a future activity (can be checked by finding whether des(a) is null or not); note that the rules for future activity are dealt separately later. Δ is updated to reflect the creation of new edge4 (apt , a ) that denotes that apt =apt . In rule (10), a denotes an activity currently scheduled on Pj . Note that, we had specified that mainpt is defined to be main itself. Therefore, we are always able to resolve the parent relationship in the tree Δ. Also, an activity and its ancestor need not lie in the same place. Activity Termination: Rule (11) depicts normal termination of an activity a when there is no statement to execute next (denoted ). In this case, it is removed from Δ and the activity is marked completed in A denoted ac . Rule (12) says that main terminates when there are no other activities across all places and statements in main have completed execution. Termination of the main activity amounts to termination of program. Rule (13) says that if an activity runs into an error state, then the error state is propagated to its ancestor in addition to its removal from Δ. The error statement error becomes the next statement to be executed for the parent. This ensures that the parent does not execute the rest of the statement block s in the presence of an error in the program. We denote the local state of parent activity with a subscripted τ , namely, τpt to avoid confusion. The error eventually reaches main and the program is aborted as indicated by abort in rule (14). s = Δa = ∅ E(Δ ) = E(Δ) \ (apt , a) (σi , Ai {a[s]τ }), Δ → (σi , Ai {ac }), Δ
(11)
(Activity a terminates normally) s=
Δmain = ∅
A1 = {main} Anj=2 = ∅
E(Δ ) = E(Δ) \ (s0 , main)
(σ1 , A1 {main[s]τ }), (σ2 , A2 ), .., (σn , An ), Δ → (∅, ∅), Δ (12) (main terminates normally) E(Δ ) = E(Δ) \ (apt , a) apt ∈ Aj (σi , Ai {a[error]τ }), Δ → (σi , Ai {ac }), (σj , Aj {apt [error]τpt }), Δ
(13)
(Activity a terminates with error) Δmain = ∅ E(Δ ) = E(Δ) \ (s0 , main) (σ1 , A1 {main[error]τ }), Δ → abort
(14)
(main terminates with error) 4
Creation/Deletion of an edge (a, a ) in Δ is assumed to be done properly w.r.t. to a . Hence, we don’t show the creation of a node explicitly in any rule.
S. Agarwal and R.K. Shyamasundar
Method Call: When an activity a calls a method m with method-body M , method parameters are evaluated in the current context, a special variable xret is used for storing the return value initialized to null. The body of m is executed in the state τ which has the mapping of actual parameters to formal parameters. This is formally shown in rule (15). Note that the object on which the method is called should reside at the same place as a; other wise, it results in an error as seen in rule (16). Rule (17) models the return of a method. The activity exits out of the context of m and the return value of xret is made available in the now current state τ . Note that when the method returns, Δ does not change. α = eτ,σi
→
→
α.loc = i τ [xret → null, x → e τ,σi ] →
(15)
(σi , Ai {a[s ≡ x = e.m( e )]τ }) → (σi , Ai {a[body(m)]τ })
(Method call)
α = e
τ,σi
α.loc = i
(16)
→ τ e
(σi , Ai {a[e.m( )] }) → (σi , Ai {a[error]τ })
(Method call on remote object results in error)
xret = eτ,σi
τ l [x → xret ]
(17)
(σi , Ai {a[return e]τ }) → (σi , Ai {next(a[s])τ })
(Method returns value)
Finish: A finish scope is created in an activity and all the activities within a finish scope have to be completed before the current activity can proceed with the next statement. In rule (18), we can see that the statement within finish, which is A that denotes new async activity, has to be executed first followed by which it will wait for the activity subtree at a to be empty. The activities created directly in this finish scope have activity a as their ancestor. mg = (E(Δa ) == ∅) (σi , Ai {a[s ≡ f inish A]τ }), Δ → (σi , Ai {next(a[A; await mg]τ )}), Δ
(18)
(finish A)
await mg: The blocking condition induced by a finish statement is handled through an auxiliary construct await mg which essentially denotes that the activity has to wait till the meta-guard mg becomes true. The condition mg in case of finish is E(Δa ) == ∅ means that the subtree of activities rooted at a is empty. That is, the descendants of a have terminated. Note that if A terminates then the next (...) would be null if all the descendants have terminated; other wise it will wait on mg. Rules (19) and (20) show that this auxiliary construct is complete when mg becomes true. mgΔ,τ = f alse (σi , Ai {a[s ≡ await mg]τ }), Δ → (σi , Ai {next(a[s])τ }), Δ
(19)
(mg evaluates to false)
A Proof System for a PGAS Language
mgΔ,τ = true (σi , Ai {a[s ≡ await mg]τ }), Δ → (σi , Ai {next(a[s])τ }), Δ
175
(20)
(mg evaluates to true)
Future-Force → Let f utr denote the statement !primT v = future Pj v1 .m( v ); this corresponds to the asynchronous method call denoted through the new activity a that needs → → to execute v1 .m( v ). Note that variables v1 , v are final variables. The future variable v is used to create a special object instance vf ut to track the return value from this activity. This special object is referred to as handle. The object instance vf ut is part of the global state σi . The instance variables for object state are: vf ut .val and vf ut .trm. The field vf ut .val denotes value of the handle and should be of type primT . The field vf ut .trm keeps track of whether or not the future activity has terminated. The initialized handle is denoted as vf0 ut where vf ut .val is initialized to null and vf ut .trm is initialized to 0 which denotes “non-termination”. In the case of an exception, vf ut .val can contain special term error. The special destiny mapping denoted by des maps the asynchronous future activity to the handle vf ut . Thus, vf ut can be considered as a communication channel between a and a to communicate the computed value and the termination status. Rule (21) captures all the above details in creating a future activity and implicitly assumes the conditions and effects of activity creation on Δ as shown in rules (9) and (10) respectively depending on whether the future activity is created locally or remotely. Note that the future activity denoted by a is shown created at place Pj . Analogous to rule (10), it shows that a is the activity currently scheduled at Pj . Creation of an activity inside a future activity is described in rule (22). Unlike rule (9), the parent of a is a itself as a is a future activity. Therefore, future also plays the role of finish implicitly. The rule corresponding to (22) for creating activity within future activity at a remote place follows on the same lines. Rule (23) states that the future activity a when completed will have the return value in the val field of the handle given by des map. In case an error occurs in the method that was called in future, the handle to future variable stores error as the value. This is shown in rule (24) which also says that an error within the body of future that will cause the program to be aborted. The completion of method m in future activity either normally or with exception does not mean completion of future activity. The rule for termination of future activity is given in (25). It requires that vf ut .val = null and all child activities of the future activity have terminated. The termination is indicated by setting vf ut .trm to 1. For the sake of brevity, we have omitted the update to Δ which is basically the removal of (apt , a) from Δ. The activity that started future computation can force it to return the value using v.f orce() construct as shown in rule (26). The statement v.f orce() is considered complete only when vf ut .trm is 1; that is, the future activity has completed. The auxiliary construct await mg (first introduced in rule (18) for the meta-guard) in rule (26) is used to capture this condition. Note that the meta-guard is actually expressible in
176
S. Agarwal and R.K. Shyamasundar
the language itself. Rule (27) says that when mg is true, the value assigned to the handle of future variable vf ut .val is assigned to the local variable. anew
des(a ) = vf ut
σi [vf ut → vf0 ut ]τ [v → vf ut ] →
f
(σi , Ai {a[s ≡ f utr]τ }), Δ → (σi , Ai {next(a[s])τ })(σj , Aj ∪ a [v1 .m( v )]τ ), Δ (21) →
(f utr →!primT v = future Pj v1 .m( v ))
des(a) = vf ut anew Ai = Ai ∪ a des(a ) = null E(Δ ) = E(Δ) ∪ (a, a ) (σi , Ai {a[s ≡ A]τ }), Δ → (σi , Ai {next(a[s])τ }), Δ (22) (Activity created within a future activity) des(a) = vf ut σi [vf ut .val = eτ,σi ] s =
(σi , Ai {a[return e]τ }) → (σi , Ai {next(a[s])τ })
(23)
(Method m called in future returns normally)
des(a) = vf ut σi [vf ut .val = error] s =
(σi , Ai {a[error]τ }) → abort
(24)
(Error occurs in method m called in future)
σi [vf ut .trm = 1] des(a) = vf ut s = vf ut .val = null (σi , Ai {a[s]τ }), Δ → (σi , Ai {ac }), Δ
Δa = ∅
(25)
(Termination of future activity)
vf ut = vτ mg = (vf ut .trmσi == 1) (σi , Ai {a[x = v.f orce()]τ }) → (σi , Ai {a[await mg]τ })
(26)
(Retrieve future value using v.force())
mgΔ,τ,σi = true stm = if (vf ut .val! = error) then x = vf ut .val else error (σi , Ai {a[await mg]τ }), Δ → (σi , Ai {a[stm]τ }), Δ (27) (g corresponding to force evaluates to true)
Atomic Atomic blocks are non-blocking, cannot create activities and can access only data residing at the same place as shown in the premise of rule (28). It can be seen from that the effect of ss at the place, say Pi , captured by
(σi , Ai {a[atomic ss]τ }) → (σi , skipτ })
A Proof System for a PGAS Language
177
corresponds to pure sequential execution of ss. This satisfies the property that atomic blocks at a place are executed serially. Thus, when an atomic statement executes, the global state σi may change only due to execution of atomic statement and nothing outside the atomic statement. Also no new object can be added to it. objects accessed in ss are atPi
∗
(σi , Ai {a[atomic ss]τ }) → (σi , skipτ })
(σi , Ai {a[s ≡ atomic ss]τ }) → (σi , Ai {next(a[s])τ })
(28)
(Atomic block)
In atomic, the classical sequential composition holds as given below: (σi , Ai {a1 [atomic s1 ]}) → (σi , Ai {a1 [skip]}) (σi , Ai {a1 [atomic s1 ; s2 ]}) → (σi , Ai {a1 [atomic s2 ]})
Seq
The skip on the rhs in the premise reflects termination of s1 ; thus, s2 starts in the state σi . when b ss: This is used for conditional wait. The statement ss should be executed atomically in the state where the condition b evaluates to true; otherwise the activity is suspended till b becomes true. Rule (29) corresponds to an activity waiting for b to be true upon reaching a when b ss statement. and rule (30) shows that upon b becoming true, activity starts execution in same state of objects in σi that b was evaluated to be true. bσi ,τ = f alse mg = (bσi ,τ == true) (σi , Ai {a[s ≡ when b ss]τ }) → (σi , Ai {next(a[await mg; ss)]τ })
(29)
(when b ss)
bσi ,τ = true
∗
(σi , Ai {a[ss]τ }) → (σi , Ai {a[skip]τ })
(σi , Ai {a[ss]τ }) → (σi , Ai {next(a[s])τ })
(30)
(b evaluates to true)
3
Proof System
We shall now develop a proof system for asserting the safety of programs in P GAS0 with respect to places. Definition 7: A P GAS0 program is safe with respect to places if all the objects are accessed at the same location at which they are created. 3.1
Assertion Language
Our assertion language is given below.
178
S. Agarwal and R.K. Shyamasundar
E ::= z (variables)| z.f (field access) →
| location(z) | here | place | op(E )(operation) where – Variable z can be a program variable of primitive type, or one that maps to an object, or a logical (or auxiliary) variable (implicitly universally quantified). – z.f denotes the value of the field of the object that the variable z maps to. – place ∈ {P1 , P2 , . . . , Pn } denotes a place from the set of places that the programs executes on. location(z) gives the place of the object which z maps to. here denotes the place where the statement under consideration is getting executed. – Assertions are formed by boolean combination of boolean expressions. Notation: 1. The preconditions and post-conditions are denoted by p, p , .., q, q , ... An assertion can range over variables created in any activity in any place. 2. I denotes an invariant. Some of the special invariants used in the system are given below: (a) An assertion is said to be a local invariant (LI) if it holds throughout an activity. Note that LI consists of only local variables of the activity. (b) An assertion is a global variant (GI) if it holds true in all activities at all places. (c) Typically monitor invariants are used for shared objects that are locked; as we don’t have explicit locks in P GAS0 , we are not using them here. 3.2
Proof Outline
An observation is an assignment to an auxiliary variable. An observation is → → written as < y := e > where e is some expression that evaluates to an appropriate value and y can take following form: y ::= z | z.f | location(y) | here Auxiliary variables are used only for observing the program and do not affect the control flow or the data flow. Auxiliary variables in each activity are distinct (appropriate renaming is applied). A proof outline is a program annotated with assertions and observations. Let Pi , Pj denote places, p, q, p , q .. denote preconditions and post-conditions. Now, we describe the observation (or location) assertions for the various constructs for ensuring place safety of the program. 1. Assignment: In x = e, expression e will get evaluated to an object; thus, the observation assertion corresponds to checking the location of e. {p} x=e {q} 2. new C: The creation of a new object checks the location of the object created. {p} x= new Pi C() {q} 3. Activity Creation: For an activity creation through async Pi {S} we assert {p} S {q}
A Proof System for a PGAS Language
179
→
4. Method call: Consider a method call {p} e.m(ē) {q} in an activity at Pi. Let the formal arguments x̄ be mapped to the actual arguments ē, and let e_ret be the return value. The observation assertions include the location information of the objects passed as arguments and the value of here:
      m(x̄) body(m) {q'} return e_ret; {q}
5. Future: For the statement v = future Pi {v1.m(v̄)}, the execution of method m takes place in a new activity at place Pi. The auxiliary variable here denotes the place where the future activity is supposed to run.
      {p} v1.m(v̄) {q}

An object is accessed either during a field access or a method call. For any field access (of the form z.f) or method call (z.m(ē)), the assertion to be satisfied for place safety is:
      {location(z) == here}

A proof outline for place safety is said to be correct if it satisfies the following verification conditions:
1. Local correctness: A proof outline is locally correct if the assertions hold true for all the statements in all the activities, assuming non-interleaved execution of activities. That is, every annotated statement {p1} stmt {p2} satisfies
      |= {p1} stmt {p2}
2. Interference freedom: When two activities run in parallel, their statements can interleave in any order. In the presence of shared variables, assertions of an activity that were locally correct can now possibly be falsified. Therefore, if we want the proof outline to remain true for an activity, we need to show that its assertions are interference free. That is, if p1 is a precondition of a statement S in activity a1 and R is a statement of another activity a2 such that S and R can interleave, then p1 is interference free if
      |= {p1 ∧ pre(R)} R {p1}
   Here pre(R) is the precondition of R in a2. Two proof outlines are interference free if the interference-freedom condition holds for all assertions. Note that if two statements cannot interleave, then they are trivially interference free.

The following example illustrates local correctness. Let the places be P1 and P2, and let X, Y and Z be the classes of which objects can be instances. Consider the following proof outline:

   L0:  finish
   L1:  async P1 {
   L2:    final Y y = new Y();
   L3:    async P2 {
   L4:      X x = new X();
   L5:      {location(x)==here} int v = x.f1;
   L6:      final Z z = new Z();
   L7:      async P1 {
   L8:        {location(z)==here} int u = z.f;
   L9:      }
   L10:   }
   L11: }
The precondition {location(x) == here} on line L5 for the activity created at L3 is true, because location(x) = P2 and here = P2. However, the assertion {location(z) == here} on line L8 does not hold, because location(z) = P2 and here = P1 at that program point for the activity created at L7. Hence, the above proof outline does not satisfy local correctness.
Let us now consider an example where the proof outlines are locally correct but not interference free; we only show the relevant observations.

   L0: async P1 {
   L1:   final A u0 = new A();
   L2:   async P1 {
   L3:     B u1 = new B();
   L4:     u0.f = u1;
   L5:     u0.f.m();
   L6:   }
         B u2 = new P2 B();
   L7:   u0.f = u2;
   L8: }
The above example has two activities, one created at L0 and the other at L2. The shared object between the two activities is u0. The proof outline is:

   L0: async P1 {
   L1:   final A u0 = new A();
   L2:   async P1 {
   L3:     B u1 = new B();
   p1:     {location(u0)==P1}
   L4:     u0.f = u1;
   p2:     {location(u0)==P1 /\ location(u0.f)==P1}
   L5:     u0.f.m();
   L6:   }
         B u2 = new P2 B();
   p3:   {location(u0)==P1}
   L7:   u0.f = u2;
   L8: }
The assertions to be satisfied are given by p1, p2, p3. The proof outlines of the individual activities are locally correct. To ensure correctness (safety) of the program, we need to check interference freedom of the proof outlines of the two activities. Consider the program statements L5 and L7, which can potentially interleave, and suppose L7 executes before L5. The correctness formula to prove is {p2 ∧ p3} L7 {p2}, which expands to:

   {location(u0) == P1 ∧ location(u0.f) == P1} u0.f = u2; {location(u0) == P1 ∧ location(u0.f) == P1}

It is easy to see that the postcondition does not hold: u2 is created at place P2, so after the assignment location(u0.f) = P2. This means that the proof outlines are not interference free. If the proof outline of each activity, with assertions consisting of location checks, is locally correct and the proof outlines are pairwise interference free, then the program is proven to be safe with respect to places.
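To make the intent of the place-safety assertion concrete, the following is a minimal sketch, in Java rather than PGAS0, of the dynamic place check that a verified proof outline renders unnecessary. The place representation, the checkHere helper, and the exception class are hypothetical illustrations; they are not taken from the paper or from the X10 runtime.

   // Minimal sketch (assumed names): the runtime test corresponding to the
   // place-safety assertion {location(z) == here}. A proof outline that is
   // locally correct and interference free discharges this check statically,
   // so a compiler could omit it.
   final class PlaceException extends RuntimeException {
       PlaceException(String msg) { super(msg); }
   }

   final class PlaceCheck {
       // locationOfZ: place of the object z; here: place of the executing activity.
       static void checkHere(Object z, int locationOfZ, int here) {
           if (locationOfZ != here) {
               throw new PlaceException("object " + z + " lives at place P" + locationOfZ
                       + " but is dereferenced at place P" + here);
           }
       }
   }

   // Usage sketch: guard a dereference of u0.f executed at place P1.
   //   PlaceCheck.checkHere(u1, 1, 1);   // passes
   //   PlaceCheck.checkHere(u2, 2, 1);   // throws: remote dereference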
3.3 Proving Other General Properties
A data race occurs when two statements write to a shared variable simultaneously, or when one statement reads and another writes the same shared variable simultaneously. Consider the following example, where r1 and r2 are local to activities a1 and a2 respectively, and x, y are shared.

   a1                        a2
   L1: r1 = x;               L4: r2 = y;
   L2: if (r1 != 0)          L5: if (r2 != 0)
   L3:   y = 42;             L6:   x = 42;
This example has a data race, as the shared variables x and y are read and written simultaneously. However, if we assert the precondition {x=0} for L1 and {y=0} for L4 and apply the interference-freedom test together with the local-correctness test, it is easy to see that L3 and L4 cannot execute together, and neither can L1 and L6. This proves the absence of data races.
Another property of interest in concurrent programs is deadlock/livelock. We use the distributed gcd example from [3] to demonstrate that a blocked state can be detected in the program; coded in our language, it reads as follows.

   P_1, P_2, .., P_k, .., P_m

   Class number {
     int num;
     set(int n) { atomic num = n; }
     int get()  { atomic int x = num; return x; }
   }

   main:
   finish {
     final number n_1 = new P_1 number();
     ...
     final number n_k = new P_k number();
     ...
     async P_k { n_k.set(mk); gcd(n_k, n_{k-1}, n_{k+1}); }   (* creates activity at P_k *)
     ...
   }

   gcd(number n_k, number n_{k-1}, number n_{k+1})   (* for place P_k *)
   {
     int x = n_k.get();
     !int yf = future P_{k-1} n_{k-1}.get();
     int y = yf.force();
     final number no_k     = n_k;
     final number no_{k-1} = n_{k-1};
     final number no_{k+1} = n_{k+1};
     if (y < x) then {
       if ((x mod y) == 0) then x = y;
       else x = x mod y;
     }
     !int zf = future P_{k+1} n_{k+1}.get();
     int z = zf.force();
     if (z < x) then {
       if ((z mod x) == 0) then x = z;
       else x = x mod z;
     }
     async P_k { gcd(no_k, no_{k-1}, no_{k+1}); } when (x
Let gcd(m_1, m_2, .., m_n) denote the gcd of the n numbers m_1, .., m_n. Let the variable x in the activity corresponding to place P_i be denoted by x_i, and let num_i denote the value of the number object n_i at place P_i. From the program it can be seen that, for every place, there is only one such num_i, corresponding to the most recent call. Initially, num_i = m_i. For a method call gcd(..) in an activity at P_i, let the formal arguments be denoted by no_i, no_{i-1}, no_{i+1}. The local invariant in the gcd example is given by

   LI : (x ≤ no_i)

The global invariant is given by

   GI : gcd(m_1, m_2, .., m_n) == gcd(num_1, num_2, .., num_n)

The assertion that holds in an activity a at place P_i when it finds itself blocked, due to the condition in the when-clause being false, is given by

   BL_a : LI ∧ (x ≥ num_i)

Let TRM_a denote termination of an activity a. Then,

   (GI ∧ (∀a : (BL_a ∨ TRM_a)))  →  ( ∧_{i=1}^{n} num_i = gcd(m_1, m_2, .., m_n) )
It is clear from the above implication that, eventually, when the right-hand side becomes true, the following holds:

   num_1 == num_2 ∧ num_2 == num_3 ∧ ... ∧ num_n == num_1

This implies that BL_a holds for the created activities, and hence the system is in a deadlock. For lack of space we do not go into the formal aspects of soundness and completeness, except for the following brief discussion of soundness.

Definition 8: Given a program prog with annotation ψ, prog |= ψ iff for all reachable program states Σ and for all statements stm in ∪_i A_i: Σ |= pre(stm), where pre(stm) denotes the precondition of the statement stm. Let prog ⊢ ψ iff prog with annotation ψ satisfies the verification conditions of the proof system.

Theorem (Soundness): Given a proof outline prog with annotation ψ_prog, if prog ⊢ ψ_prog then prog |= ψ_prog.

Proof: Soundness can be established by structural induction on the configurational transitions corresponding to the various constructs, starting from the initial configuration.
4 Related Work
The assertion-based proof system for Java by Ábrahám et al. [1] introduces an assertional proof method for a multi-threaded sub-language of Java, covering concurrency issues like dynamic thread creation and shared-variable concurrency as well as the object-based core of Java. Futures play an important role in carrying out asynchronous computations and communicating back the result. Flanagan and Felleisen [6] give semantics of futures at various levels of intention. de Boer et al. [5] present a semantics and a proof system for an object-oriented language with futures; their language allows only one thread to be active within an object at a given time, unlike Java or X10. In our work, we use the concurrency model advocated in X10 [4], [11], which uses a novel rooted-tree model ensuring that no exception generated by any activity is lost, provides futures, and introduces the notion of places explicitly. For reasoning about programs in this language, we use the method of proof outlines inspired by [1], [10]; the differences arise from the different concurrency models. Our reasoning focuses primarily on the safety of programs with respect to object accesses. The constraint for safety is that an object that is remote with respect to an activity cannot be dereferenced in that activity; an activity can obtain the values of fields of a remote object through futures. The notion of future is similar in nature to that in [5], [13], with the additional feature of a place associated with the future; that is, we treat futures in a distributed-memory setting. Unlike the concurrency model in [5], where each object has its own processor and multiple activities work on the object in an interleaved manner, we have a model where an activity can access multiple objects local to its place of execution and several activities may be active on an object. Liblit et al. [9] have formalized type systems for distributed-memory programs and presented type inference systems that complete a program with remote/global annotations. In our language, however, an object is considered local or remote based on the location of the activity that is accessing it.
5 Discussions
In this paper, we have discussed the semantics and proof system for PGAS0 and illustrated how the proof system can be used effectively for inferring exceptions due to non-local references. Our study shows that the approach is useful for deriving the set of constraints under which a place exception cannot arise. It would be of interest to capture the computational aspects of satisfying this set of constraints and to derive decidability results and algorithms. The latter would provide a handle for the compiler designer, who could then remove unnecessary location tests and thus generate more efficient programs. An initial algorithmic approach to place safety has been described in [2]. We plan to study reasoning about various safety and liveness properties, and also to study aspects of debugging [8] for PGAS languages with optimizing features. One of
the other important features of X10 has been the novel use of clocks for barrier synchronization. We hope to look at this aspect and make a comparison of the language with rendezvous-based languages such as Ada/CSP and other distributed computing languages [7]. Such a study would bring to light the power of X10 with respect to languages that use explicit nondeterminism, and would also show its capability for distributed programming.

Acknowledgements: The authors thank the X10 team for enthusiastic discussions on the language. Thanks go to the editors for valuable suggestions.
References

1. Ábrahám, E., de Boer, F.S., de Roever, W.-P., Steffen, M.: An assertion-based proof system for multithreaded Java. TCS 331 (2005)
2. Agarwal, S., Barik, R., Nandivada, V.K., Shyamasundar, R.K., Varma, P.: Static detection of place locality and elimination of runtime checks. In: Ramalingam, G. (ed.) APLAS 2008. LNCS, vol. 5356, pp. 53–74. Springer, Heidelberg (2008)
3. Apt, K.R., Francez, N., de Roever, W.P.: A proof system for communicating sequential processes. ACM TOPLAS 2(3) (1980)
4. Charles, P., Donawa, C., Ebcioglu, K., Grothoff, C., Kielstra, A., von Praun, C., Saraswat, V., Sarkar, V.: X10: An object-oriented approach to non-uniform cluster computing. In: OOPSLA (2005)
5. de Boer, F.S., Clarke, D., Johnsen, E.B.: A complete guide to the future. In: De Nicola, R. (ed.) ESOP 2007. LNCS, vol. 4421, pp. 316–330. Springer, Heidelberg (2007)
6. Flanagan, C., Felleisen, M.: The semantics of future and an application. J. Funct. Program. 9(1) (1999)
7. Koymans, R., Shyamasundar, R.K., Gerth, R., de Roever, W.P., Arun-Kumar, S.: Compositional semantics for real-time distributed programming language. Information and Computation 79(3) (December 1988)
8. Kundaji, R.N., Shyamasundar, R.K.: Refinement calculus: A basis for translation, validation, debugging and certification. TCS 354 (2006)
9. Liblit, B., Aiken, A.: Type system for distributed data structures. In: POPL (2000)
10. Owicki, S., Gries, D.: An axiomatic proof technique for parallel programs. Acta Informatica 6(4) (1976)
11. Saraswat, V., Jagadeesan, R.: Concurrent clustered programming. In: Abadi, M., de Alfaro, L. (eds.) CONCUR 2005. LNCS, vol. 3653, pp. 353–367. Springer, Heidelberg (2005)
12. UPC language specifications, v1.2. Technical Report LBNL-59208, Berkeley National Lab (2005)
13. Welc, A., Jagannathan, S., Hosking, A.: Safe futures for Java. In: OOPSLA (2005)
Concurrent Objects à la Carte

Dave Clarke¹, Einar Broch Johnsen², and Olaf Owe²

¹ CWI, Amsterdam, the Netherlands
  [email protected]
² Dept. of Informatics, University of Oslo, Norway
  {einarj,olaf}@ifi.uio.no
Abstract. Services are autonomous, self-describing, technology-neutral software units that can be described, published, discovered, and composed into software applications at run-time. Designing software services and composing services in order to form applications or composite services requires abstractions beyond those found in typical object-oriented programming languages. In this paper, we explore a number of the abstractions used in service-oriented computing and related Internet- and web-based programming models in the context of Creol, an executable concurrent object-oriented modeling language with active objects and futures; i.e., features capable of expressing and dealing with asynchronous actions. By adding various abstractions to the modeling language, we demonstrate how a concurrent object language may naturally address many of the requirements of service-oriented computing. The study of language extensions in the restricted setting of a small, high-level modeling language, such as Creol, suggests a cheap way of developing new abstractions for emerging application domains. In this paper, we explore abstractions in the context of service-oriented computing, particularly with regard to dynamic aspects such as service discovery and structuring mechanisms such as groups.
1 Introduction
Service-oriented computing is an emerging computational model in which services are autonomous, self-describing, technology-neutral software units that can be described, published, discovered, and composed into software applications at run-time. A service provides specific functionality to clients in its external environment. In particular, a client may detect new services or better service providers, and negotiate the use of services at run-time, depending on the quality and cost of the services currently available in the environment. This makes service-oriented computing an attractive model for distributed applications such as Web services, e-business, and e-government, but also for ad-hoc, self-configuring, and loosely-coupled networks, such as peer-to-peer networks, sensor-based applications, and mobile systems. Research on service-oriented computing typically focuses on three aspects: service and data interchangeability (or platform-neutrality), technologies for service detection, and mechanisms for service composition and orchestration. Interchangeability is achieved through XML-related technologies which support the
exchange of structured or semi-structured data through a shared data model, as well as the publication and description of services. Service discovery (or open-endedness) is typically achieved through events and event-handling architectures, and via standards such as UDDI and service repositories. Industrial proposals for the description of orchestration mechanisms include BPML, WSFL, XLANG, and BPEL. These languages typically combine communication primitives with work-flow constructs in order to describe the distributed flow of control in flexible ways. Service-oriented computing has been formally studied through a plethora of process algebras (e.g., [8, 9, 24]), which mostly address aspects of orchestration. However, two approaches to service discovery in this context have recently been developed, extending process algebras with Linda tuple spaces to represent service repositories [9] and with semantic subtyping of channels [11]. Object orientation has been criticized for its apparent mismatch with service-oriented computing, partly due to a lack of advanced data types, which makes new solutions necessary to elegantly handle XML documents [6], and partly due to its traditionally restrictive communication and concurrency abstractions. The advent of truly parallel (distributed or multicore) systems challenges the multithread concurrency model which is predominant in object-oriented systems today and provided by Java and C#. This has led to recent interest in concurrent objects and more flexible object interaction mechanisms [18, 13, 10, 5]. Concurrent objects are reminiscent of the concurrency model of Actors [2] and Erlang [3] and seem promising for deployment on multicore and distributed platforms, as well as for embedded systems (see Hooman and Verhoef [17]). Interestingly, concurrent objects have many of the properties described above for service-oriented computing. In particular, they interact via asynchronous communication and do not assume a tight coupling between caller and callee. This leads to a natural notion of asynchronous method call based on so-called futures [4, 29, 25, 22]. Creol is a concurrent object-oriented language which combines asynchronous method calls with controlled process release [18]. This decouples concurrency, communication, and synchronization in a very flexible way, while supporting a simple proof theory [13] compared to the proof theory of multithread concurrency [1]. The strict encapsulation of fields is enforced by using interfaces for typing objects; these interfaces include a so-called cointerface for callbacks between interacting objects. In this paper, we argue that this flexibility makes it easy to express orchestration patterns. Furthermore, we extend Creol with high-level primitives for service publishing and discovery, and for delegation and a hierarchical notion of service discovery based on groups. We give a type system and formal semantics for this extended language. Finally, we argue through a series of examples for its suitability for service-oriented computing. The paper is structured as follows. Section 2 introduces Creol with primitives for service-oriented computing, its syntax and type system. Section 3 gives a reduction semantics for the language, in the style of Felleisen and Hieb [15]. Section 4 presents service-oriented computing flavored examples. Section 5 introduces abstractions for groups and their use through another example. Section 6 discusses our approach and some possible extensions in relation to existing work.
2 Service-Oriented Concurrent Objects
In this section, we present an extension of the concurrent object-oriented language Creol to address dynamic service publication and discovery. A concurrent object represents a separate computational unit, conceptually encapsulating its own processor. In Creol, method calls are asynchronous and object variables (references) are typed by interfaces. We use different interfaces to represent the different services offered by a concurrent object. In particular, an interface may require that a cointerface is offered by the service demander.

2.1 Syntax
The language syntax is given in Fig. 1. A program P is a list of interface and class definitions, followed by a method body corresponding to the main method. An interface D may inherit other interfaces and specify a set of method signatures. An interface is a subtype of all the interfaces it inherits. A class L may implement a set of interfaces and provide local fields and method definitions. For simplicity, we ignore class inheritance in this paper. We emphasize the differences with Java. As usual, a method signature declares the return type of the method, its name, and the types of its formal parameters. In addition, the method signature specifies the cointerface for the method; i.e., the required type of the object calling the method. Expressions e are standard apart from the asynchronous method call e!m(ē), the (blocking) read operation e.get, the asynchronous bind operation bind I, and the service announcement operation announce I. Statements s are standard apart from the release points await g and release, the delegation statement forward e!m(ē), and the service removal statement retract e. Guards g are conjunctions of Boolean expressions bv and polling operations v? on futures v. When the guard in an await statement evaluates to false, the processor is released and the active process is suspended; otherwise computation proceeds in the active process. A release statement suspends the active process and another suspended process may be rescheduled, similar to a yield in Java. (Each object has an associated processor, so locks, signaling, or other forms of process control are not needed.) Non-deterministic choice s □ s′ allows either branch to be selected. The branches of a merge s ||| s′ are interleaved at release points, influencing the control flow within a process without allowing other processes to execute, unless neither branch is enabled. In addition, the intermediate statement s///s′ appears during reduction; it corresponds to the activation of statement s in the merge of statements s and s′, where statement s′ is delayed. The sequence t := e!m(ē); await t?; v := t.get, where t is a future variable, corresponds to a non-blocking method call, while t := e!m(ē); v := t.get (or simply v := e!m(ē).get) corresponds to a blocking method call.
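For readers more familiar with mainstream languages, the following is a rough Java analogue of these call patterns, using java.util.concurrent.CompletableFuture. It is only a sketch under the assumption that a plain method stands in for the Creol callee, and it merely approximates await: Creol releases the object's processor, whereas the Java version attaches a continuation.

   import java.util.concurrent.CompletableFuture;

   class AsyncCallSketch {
       // Hypothetical service method standing in for the Creol callee e.m(ē).
       static int compute(int x) { return x * x; }

       public static void main(String[] args) {
           // t := e!m(ē): issue the call asynchronously and obtain a future.
           CompletableFuture<Integer> t = CompletableFuture.supplyAsync(() -> compute(7));

           // await t?; v := t.get: do not block; react once the future is resolved.
           t.thenAccept(v -> System.out.println("non-blocking result: " + v));

           // v := e!m(ē).get: a blocking read on the future.
           int v = t.join();
           System.out.println("blocking result: " + v);
       }
   }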
2.2 Typing
The typing rules are given in Fig. 2. Let Γ be a typing context which binds names to types (e.g., x : T ) and write Γ (x) for the type bound to x in Γ , which
P  ::= D L {T x; s}
Ms ::= T m (T x) with I
M  ::= Ms {T x; s}
v  ::= f | x
b  ::= true | false
bv ::= b | v
T  ::= I | bool | fut(T)
T+ ::= T | ad
D ::= interface I extends I {Ms}
L ::= class C implements I {T f; M}
e ::= bv | new C() | e.get | e!m(e) | null | bind I | announce I
s ::= v := e | await g | if bv then s else s fi | release | forward e!m(e) | retract e | s; s | s □ s | s ||| s | s///s | return e | skip
g ::= bv | v? | g ∧ g
Fig. 1. The language syntax. Variables v are fields (f ) or local variables (x), C is a class name, and I an interface name. Lists are indicated by overlining, as in e. The type language is restricted to booleans, interfaces and futures. (Data types may be added.)
may be undefined. For simplicity, we elide the checking of interface declarations, and assume that all methods declared in an interface have distinct names and that interface extensions are non-circular. Futures have explicit types; i.e., fut(T ) is the type for futures holding values of type T . In addition, the type ad is used to denote the type of service announcements and ok is the type of well-typed statements. Furthermore, if a class C implements interfaces I, we conventionally use C as the name for the type interface C extends I { }. The subtype relation I J is induced by the reflexive and transitive closure of the interface extension relation. Data types as well as ad are only subtypes of themselves; i.e., bool bool, ad ad. In addition, if T T then fut(T ) fut(T ). In the typing rules for expressions, new C gets type C, null can get any interface type, and announce I gets type ad. We assume given an implicit table for interface declarations, so the auxiliary function lookup(I, m, T ) gives us the return and cointerface types of method m with formal parameters of types T in interface I, if such an m is declared in I. The typing of an asynchronous method call e!m(e), where e has type I and e have types T , can then be explained as follows: If method m is declared in the interface I of the callee e, with return type T and cointerface type J, then the asynchronous operation e!m(e) is typed by fut(T ) if the caller this can provide the required cointerface. Similarly, the asynchronous operation bind I is typed by fut(I). Similarly, e.get is of type T if e is a future of type fut(T ). In the typing of guards, the polling of a field f gets type bool provided that f is a future. For the typing of statements, note that the delegation statement forward e!m(e) is well-typed if the caller of the current method could have called e!m(e) instead of the call to the current method. It follows in typing rule Forward that forward e!m(e) is well-typed provided that m has an adequate return type and that the current caller has a type which satisfies the cointerface of m. The service revocation statement retract v is well-typed if v has type ad. Note that rule Assign allows expressions of type T + (in contrast to, e.g., Call). The typing of the remaining statements is standard. For the typing of methods, note that the typing context is extended with the two variables destiny, which provides the type of the method call’s associated future, and caller, which is typed by the method’s cointerface and
Concurrent Objects a ` la Carte
(Var)
(New)
Γ v : Γ (v)
Γ new C : C
(Null)
(Get)
(Await)
Γ null : I
Γ e : fut(T ) Γ e.get : T
Γ g : bool Γ await g : ok
(Call)
Γ e : I Γ e : T Γ (this) J lookup(I, m, T )) = T, J Γ e!m(e) : fut(T ) (Announce)
Γ (this) I Γ announce I : ad
(Assign)
(Subsumption)
T + Γ (v) Γ e : T + Γ v := e : ok
Γ e : T T T Γ e : T
(Bool)
(Cond)
Γ b : bool
Γ bv : bool Γ s1 : ok Γ s2 : ok Γ if bv then s1 else s2 fi : ok
(Forward)
(G-Conj)
Γ g : bool Γ g : bool Γ g ∧ g : bool
Γ e : I lookup(I, m, T )) = T, J Γ e : T Γ (caller) J T returnType(Γ ) Γ forward e!m(e) : ok (Retract)
Γ e : ad Γ retract e : ok (Bind)
Γ bind I : fut(I)
189
(Choice)
(Merge)
Γ s : ok Γ s : ok Γ s s : ok (Poll)
Γ v : fut(T ) Γ v? : bool
Γ s : ok Γ s : ok Γ s ||| s : ok
(Release)
(Skip)
Γ release : ok
Γ skip : ok
(Method)
Γ, destiny : fut(T ), caller : I, x1 : T1 , x2 : T2 s : ok Γ T m (T1 x1 ) with I{T2 x2 ; s} : ok (Class)
implements(C, I) for all M ∈ M · Γ, this : C, x : T M : ok Γ class C implements I {T x; M } : ok
(Return)
Γ e returnType(Γ ) Γ return e : ok (Comp)
Γ s : ok Γ s : ok Γ s; s : ok
Fig. 2. The typing rules. The predicate implements(C, I) compares the method signatures of C with those of each interface I in I with respect to co- and contravariance requirements for formal parameters and cointerfaces. Function returnType(Γ ) = T whenever Γ (destiny) = fut(T ).
provides support for callback, in addition to types for formal parameters and locally declared variables. As usual, the typing rule for classes extends the typing environment with a type for this and types for the fields declared in the class. In addition, rule Class includes a check for compatibility with the declared interfaces I of the class C by means of an auxiliary predicate implements(C, I) which ensures that for all method signatures T1 m (T2 x) with A declared in an interface I or a superinterface of I (for I ∈ I), there is a method declaration T3 m (T4 x) with B {T y; s} in C such that T3 T1 , T2 T1 , and A B.
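As an illustration of how the cointerface enables statically typed callbacks, the following is a minimal sketch in Java (not Creol): the caller is passed explicitly as a parameter typed by the cointerface, which is what the implicit caller variable provides in Creol. The interface names anticipate the restaurant example of Section 4; the Java rendering itself is an assumption made purely for illustration.

   // Minimal sketch: cointerfaces as explicit, typed caller parameters.
   interface Customer {                  // plays the role of the cointerface
       int whichVintage();
   }

   interface Sommelier {
       // Creol: "bool orderWine(string producer) with Customer"
       boolean orderWine(String producer, Customer caller);
   }

   class WaiterSketch implements Sommelier {
       public boolean orderWine(String producer, Customer caller) {
           int vintage = caller.whichVintage();   // well-typed callback
           return vintage > 1900;
       }
   }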
190
3
D. Clarke, E.B. Johnsen, and O. Owe
Operational Semantics
The semantics is a small-step reduction relation on configurations of objects, messages, and services (see Fig. 3). An object has an identifier, a class, a queue of suspended processes, fields, and an active process. The process idle indicates that no method is running in the object. We introduce the notion of a command, cmd, to model asynchronous actions. Commands are reduced using a single rule which spawns off a new thread, in effect, to execute the command. Initially, commands include asynchronous method calls and bind requests. A future (fut fid, cmd, oid, mode, d) captures the state of command: initially sleeping, the command may later becomes active, and finally, when completed, it stores its result in the future. The value mode ∈ {s, a, c} represents these three states. A bind request does not use the mode a. Default values for types are given by a function default (e.g., default(I) = null, default(bool) = false, and default(fut(T )) = null). The initial configuration of a program L {T x; s} contains one object (obj o, ∅, , (T x default(T ), s)). Reduction takes the form of a relation config → config . Rules apply to partial configurations and may be applied in parallel. This differs from the semantics of object-oriented languages with a global store [16], but is consistent with Creol’s [19] executable Maude semantics [12] and allows true concurrency in the distributed or multicore setting. The main rules are given in Figs. 4, 5, and 6. The context reduction semantics decomposes a statement into a reduction context and a redex, and reduces the redex [15]. Reduction contexts are statements S, expressions E, and guards G with a single hole denoted by •: S ::= • | v := E | S; s | S///s | if G then s1 else s2 fi | return E | forward E | retract E | await G E ::= • | E.get | E!m(e) | oid!m(d, E, e) G ::= • | E? | G ∧ g | b ∧ G
Redexes reduce in their respective contexts; i.e., stat-redexes in S, expr-redexes in E, and guard-redexes in G. Redexes are defined as follows: ::= x := d | f := d | await g | skip; s | if b then s1 else s2 fi | release | forward oid!m(d) | retract aid | return d expr-redexes ::= x | f | fid.get | oid!m(d) | new C() | bind I | announce I guard-redexes ::= fid? | b ∧ g stat-redexes
Filling the hole of a context S with an expression r is denoted S[r]. config object processQ msg d
::= ::= ::= ::= ::=
| object | msg | service | config config (obj oid, C, processQ, fds, active) | process | processQ processQ (fut fid, cmd, oid, mode, d) oid | fid | adid | null | b
fds active service process cmd
::= ::= ::= ::= ::=
fd process | idle (ad adid, I, oid) (T x d, s) oid!m(d) | bind I
Fig. 3. Syntax for runtime configurations. Here, d denotes data, including Boolean values (b) and identifiers for objects (oid ), futures (fid ), and service announcements (adid ). Processes include code and local state, i.e. local variables with types and values.
Concurrent Objects a ` la Carte
191
(Red-Cmd)
fid is fresh (obj oid, C, pq, fds, (l, S[cmd)])) → (obj oid, C, pq, fds, (l, S[fid])) (fut fid, cmd, oid, s, null) (Red-Get)
(obj oid, C, pq, fds, (l, S[fid.get])) (fut fid, cmd, oid , c, d) → (obj oid, C, pq, fds, (l, S[d])) (fut fid, cmd, oid , c, d) (Red-New)
oid is fresh fds = defaults(C ) (obj oid, C, pq, fds, (l, S[new C ()])) → (obj oid, C, pq, fds, (l, S[oid ]))(obj oid , C , , fds , proc(C, run, null, oid , ε)) (Red-Poll)
b = (mode ≡ c) (obj oid, C, pq, fds, (l, S[fid?])) (fut fid, cmd, oid , mode, d) → (obj oid, C, pq, fds, (l, S[b])) (fut fid, cmd, oid , mode, d) (Red-Await)
(obj oid, C, pq, fds, (l, S[await g])) → (obj oid, C, pq, fds, (l, S[if g then skip else release; await g fi])) (Red-Release)
S[release] = S [release; s///s ] (obj oid, C, pq, fds, (l, S[release])) → (obj oid, C, pq :: (l, S[skip]), fds, idle) (Red-Reschedule)
(obj oid, C, pq :: p :: pq , fds, idle) → (obj oid, C, pq :: pq , fds, p) Fig. 4. The context reduction semantics (1/4). In the last rule, pq and pq may be empty. In Red-New, the proc function gives an activation of run (with oid as caller).
Expressions and guards. Red-Cmd spawns asynchronous actions by adding a sleeping future to the configuration, returning its identifier to the caller. In RedGet, a read on a future variable blocks the active process until the future is in completed mode (since no other rule applies to this redex). Blocking does not reschedule a suspended process. Object creation in Red-New introduces a new instance of a class C into the configuration (with fields collected from C), and the run method is called. In Red-Poll, a future variable is polled to see if the asynchronous action has completed. Release and rescheduling. Guards determine whether a process should be released. In Red-Await, a process at a release point proceeds if its guard is true and otherwise releases. When a process is released, its guard is reused to reschedule the process. When an active process is released in Red-Release or terminates, it is replaced by the idle process, which allows a process from the process queue to be scheduled for execution in Red-Reschedule. The rule given here is nondeterministic with respect to which process to choose. Active behavior is initiated
192
D. Clarke, E.B. Johnsen, and O. Owe
(Red-Method-Bind)
(obj oid, C, pq, fds, p) (fut fid, oid!m(d), oid , s, null) → (obj oid, C, pq :: proc(C, m, fid, oid , d), fds, p) (fut fid, oid!m(d), oid , a, null) (Red-Return)
l(destiny) = fid (obj oid, C, pq, fds, (l, S[return d])) (fut fid, oid !m(d), oid , a, null) → (obj oid, C, pq, fds, idle) (fut fid, oid !m(d), oid , c, d) (Red-Forward)
l(destiny) = fid l(caller) = oid (obj oid, C, pq, fds, (l, S[forward oid !m(d)])) (fut fid, , , , ) → (obj oid, C, pq, fds, idle) (fut fid, oid !m(d), oid , s, null) (Red-Announce)
adid is fresh (obj oid, C, pq, fds, (l, S[announce I])) → (obj oid, C, pq, fds, (l, S[adid])) (ad adid, I, oid)) (Red-Retract)
(obj oid, C, pq, fds, (l, S[retract adid])) (ad adid, I, oid) → (obj oid, C, pq, fds, (l, S[skip])) (Red-Service-Bind)
I I (fut fid, bind I, o, s, null) (ad adid, I , oid) → (fut fid, bind I, o, c, oid) (ad adid, I , oid) Fig. 5. The context reduction semantics (2/4). The function proc returns the process corresponding to method m of class C (or a superclass) with default values for the local variables, and destiny, caller, and the formal parameters, initialized with the actual parameter values (here, fid, oid , and d, respectively).
by a special method, named run, which is activated when the object is created. The interleaving of active and reactive behavior is controlled by means of release points. Method invocation, return, and delegation. A method call results in an activation on the callee’s process queue. As the call is asynchronous, there is a delay between the call and its activation, represented by the sleeping mode of a future. After the call, Red-Method-Bind creates a new process reflecting the activation of the called method, in which the destiny variable stores the return address to the call’s future. Thus when the process terminates, the result is stored by Red-Return in the future identified by destiny. This future changes its mode to completed and the active process becomes idle. The delegation of a call to a method of another object in Red-Forward modifies the future associated with the current call; the new method call is inserted and the mode of the future is reset to s, but the fid and caller values of the previous call are kept. Service discovery. In Red-Announce, an object makes public that it provides a given service, and in Red-Retract an object removes a service announcement.
Concurrent Objects a ` la Carte
(Red-Choice1)
193
(Red-Choice2)
enabled(s , (fds, l), μ) (obj oid, C, pq, fds, (l, S[s s ])) μ → (obj oid, C, pq, fds, (l, S[s ])) μ
enabled(s, (fds, l), μ) (obj oid, C, pq, fds, (l, S[s s ])) μ → (obj oid, C, pq, fds, (l, S[s])) μ (Red-Merge1)
(Red-Merge2)
(obj oid, C, pq, fds, (l, S[s ||| s ])) → (obj oid, C, pq, fds, (l, S[s///s ]))
(obj oid, C, pq, fds, (l, S[s ||| s ])) → (obj oid, C, pq, fds, (l, S[s ///s])) (Red-Merge-Release1)
(Red-Merge-Skip)
enabled(s , (fds, l), μ) (obj oid, C, pq, fds, (l, S[release; s///s ])) μ → (obj oid, C, pq, fds, (l, S[s ///s])) μ
(obj oid, C, pq, fds, (l, S[skip///s])) → (obj oid, C, pq, fds, (l, S[s])) (Red-Merge-Release2)
(Red-Var-Local)
¬enabled(s , (fds, l), μ) (obj oid, C, pq, fds, (l, S[release; s///s ])) μ → (obj oid, C, pq, fds, (l, S[release; (s ||| s )])) μ (Red-Assign-Local)
(obj oid, C, pq, fds, (l, S[x])) → (obj oid, C, pq, fds, (l, S[l(x)])) (Red-Field)
(obj oid, C, pq, fds, (l, S[x := d])) → (obj oid, C, pq, fds, (l[x → d], S[skip]))
(obj oid, C, pq, fds, (l, S[f ])) → (obj oid, C, pq, fds, (l, S[fds(f )]))
(Red-Assign-Field)
(Red-Skip)
(obj oid, C, pq, fds, (l, S[f := d])) → (obj oid, C, pq, fds[f → d], (l, S[skip]))
(obj oid, C, pq, fds, (l, S[skip; s])) → (obj oid, C, pq, fds, (l, S[s]))
(Red-Cond1)
(obj oid, C, pq, fds, (l, S[if true then s1 else s2 fi])) → (obj oid, C, pq, fds, (l, S[s1 ])) (Red-Cond2)
(obj oid, C, pq, fds, (l, S[if false then s1 else s2 fi])) → (obj oid, C, pq, fds, (l, S[s2 ])) (Red-Context)
config → config config config → config config
(Red-empty)
(obj oid, C, pq, fds, (l, skip)) → (obj oid, C, pq, fds, idle)
(Red-Parallel)
config μ → config μ config μ → config μ dom(μ) = dom(μ ) = dom(μ ) dom(config ) ∩ dom(config ) = ∅ config config μ → config config μ μ Fig. 6. The context reduction semantics (3/4). Here, μ denotes a configuration of futures. The rule Red-Merge-Release2 assumes that μ includes all futures in the configuration. The notation σ[v → d] is used to update the binding of v in a state σ. The obvious rule for conjunction is omitted. The function dom(config) extracts the set of identities of the objects, services, and futures in the configuration config.
194
D. Clarke, E.B. Johnsen, and O. Owe
If a service has been announced by some object, the discovery of that service is done by initiating an asynchronous command bind I via Red-Cmd. This request is later bound to an object providing the service in Red-Service-Bind, which stores the identity of an object announcing the service in the future. Choice, merge, and release. Either branch of a choice or merge may be selected for reduction, captured by Red-Choice1, Red-Choice2, Red-Merge1, and Red-Merge2. When a branch of a merge statement completes, Red-Merge-Skip schedules the other branch. If a release occurs inside a merge, the other branch of the merge is the first candidate for rescheduling — rescheduling is local to a process whenever possible. If both branches release, then the process is released. Let σ map fields and local variables to their values. Process release is based on the predicate enabled defined on guards, futures, and states which determines whether a guard will not directly release: enabled(b, σ, μ) = b enabled(v, σ, μ) = enabled(σ(v), σ, μ) enabled(fid?, σ, μ) = mode ≡ c, where (fid, , , mode, ) ∈ μ enabled(g ∧ g , σ, μ) = enabled(g, σ, μ) ∧ enabled(g , σ, μ) The predicate is lifted to statements; enabled(await g, σ, μ) = enabled(g, σ, μ) is the crucial case. In Red-Merge-Release1, Red-Merge-Release2 and RedRelease, the contexts and redexes do not factor expressions involving release uniquely: these may be factored as both S[release] and S [release; s///s ]. A clause is added to Red-Release to ensure that release; s///s is preferred. Context and parallel reductions. A reduction applies to a subconfiguration by rule Red-Context. In Red-Parallel futures may be shared between parallel reductions. As the futures inspected by one process may be changed by another, they need to be recomposed in a consistent way. This is handled by a function μ μ which collects futures from μ and μ and resolves conflicting futures with the same fid (i.e., keeping the completed ones) (See [13]). New futures are located in config and config . Sample reduction. The following is a short reduction sequence illustrating how bind works, in combination with the get operation for accessing a future. The overline indicates the expression/command being evaluated. The underline indicates the context in which this evaluation occurs. (obj oid, C, pq, fds, (l, (bind I).get)) (ad aid, I , oid ) → (obj oid, C, pq, fds, (l, fid.get)) (fut fid, bind I, oid, s, null) (ad aid, I , oid ) → (obj oid, C, pq, fds, (l, fid.get)) (fut fid, bind I, oid, c, oid ) (ad aid, I , oid ) → (obj oid, C, pq, fds, (l, oid )) (fut fid, bind I, oid, c, oid ) (ad aid, I , oid )
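To make the enabled predicate concrete, here is a small sketch in Java (not the Maude implementation) that evaluates enabledness over a toy guard representation. The class names and the representation of states and futures are assumptions made only for illustration.

   import java.util.Map;
   import java.util.Set;

   // Toy guard syntax: b | v | v? | g ∧ g′. Boolean variables hold their value in
   // sigma; a poll v? looks up the future id stored in v.
   sealed interface Guard permits BoolLit, BoolVar, Poll, And {}
   record BoolLit(boolean value) implements Guard {}
   record BoolVar(String name) implements Guard {}
   record Poll(String futureVar) implements Guard {}
   record And(Guard left, Guard right) implements Guard {}

   class EnabledSketch {
       // sigma: variable bindings; completed: ids of futures in completed mode (c).
       static boolean enabled(Guard g, Map<String, Object> sigma, Set<String> completed) {
           return switch (g) {
               case BoolLit b -> b.value();                                      // enabled(b) = b
               case BoolVar v -> (Boolean) sigma.get(v.name());                  // enabled(v) = enabled(σ(v))
               case Poll p    -> completed.contains((String) sigma.get(p.futureVar())); // mode ≡ c
               case And a     -> enabled(a.left(), sigma, completed)
                              && enabled(a.right(), sigma, completed);
           };
       }
   }

A process suspended on await g would then be rescheduled only when enabled returns true for its guard.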
4
Examples
We first consider how Creol’s high-level concurrency structuring primitives can be applied to orchestration patterns in wide-area computing, by adapting some
Concurrent Objects a ` la Carte
195
examples from Misra and Cook [24]. We then show how the proposed constructs for service discovery and delegation may be used in the context of a restaurant example. To simplify the notation in the examples, we allow expressions o!m(e) as program statements if m has return type void in the interface of o (and omit the return statement in the body of such methods). Furthermore, we let Any be a minimal interface implemented by all classes (thus, the supertype of all other interfaces) and omit specifying the cointerface if it is Any. Example 1. Consider services which provide news on request. These may be modeled by a class NewsSite offering an interface News to the environment with a method news, which for a given date returns a document with the news of that day from a “site”. Let CNN and BBC be two such sites; i.e., two variables typed by the News interface. By calling CNN!news(d) or BBC!news(d) the news for a specified date d will be (asynchronously) downloaded in XML format from that site. (We do not here consider how XML can be integrated as a data type in Creol, see [28].) Another site distributes emails to clients at request. This site may be modeled by a class EmailSite which implements an interface Email with a method send(m, a), where m is some message content in XML format and a is the address of a Client object. Let NewsService be an interface for sites combining news from different sites, as defined by the following class: class NewsServiceSite implements NewsService { News CNN; Email email; void run() { CNN := new News(); email := new Email() } bool newsRelay(Date d, Client a) { XML v, fut(XML) t; t:= CNN!news(d); await t?; v:= t.get; email!send(v,a); return true } }
In this class, a method relays news from CNN for a given date d to a given client a by invoking the send service of the email object with the result returned from the call to CNN!news, if CNN responds. For simplicity, the method returns true upon termination. Note that a NewsServiceSite object executing a newsRelay process is not blocked while waiting for the CNN or the email object to respond. Instead the process is suspended until the response has arrived. Once the object responds, the process becomes enabled and may resume its execution, receiving the news and eventually forwarding the news by email. If either of these objects never responds the NewsServiceSite object may proceed with its other activity, only the suspended process will never become enabled. Now consider the case where a client wants news from both objects CNN and BBC. Because there may be significant delays until an object responds, the client desires to have the news from each object relayed whenever it arrives. This is naturally modeled by the merge operator, as in the following method: bool newsRelay2(Date d, Client a) with Any { XML v; fut(XML) t1; fut(XML) t2;
196
D. Clarke, E.B. Johnsen, and O. Owe t1 := CNN!news(d); t2 := BBC!news(d); (await t1?; v:= t1.get; email!send(v,a)) ||| (await t2?; v:= t2.get; email!send(v,a)) }
The merge operator allows the news pages from BBC and CNN to be forwarded to the email!send service once they are received. If neither service responds, the whole process is suspended and can be activated again when a response is received. By executing the method, at most two emails are sent. If the first news page available from either CNN or BBC is desired, the nondeterministic choice operator may be used to invoke the email!send service with the result received from either CNN or BBC, as in the following method: bool newsRelay3(Date d, Client a) { XML v; fut(XML) t1; fut(XML) t2; t1 := CNN!news(d); t2 := BBC!news(d); ((await t1?; v:= t1.get) (await t2?; v:= t2.get)); email!send(v,a) }
Here news is requested from both news objects, but only the first response will be relayed. The latest arriving response is ignored. Example 2. At a restaurant, the host of a dinner party is in charge of ordering wine. For this purpose, the host needs to attract the attention of some passing waiter. If busy, the waiter delegates the order to another waiter. The waiter who eventually deals with the order needs to get a more precise specification of the wine by querying the host; e.g., the desired vintage. The waiter then brings the new bottle to the host of the dinner party. We model this scenario by two interfaces Sommelier and Customer where the latter is the cointerface of the former, thus enabling callback to the customer. We shall conventionally annotate the interface itself with the cointerface information; in this case, the same cointerface applies to all methods declared in that interface. interface Customer with Waiter { int whichVintage() } interface Sommelier with Customer { bool orderWine(string producer) }
The implementation is given in Fig. 7 and demonstrates the dynamic aspects of the proposed primitives for service-oriented computing. We let a class Host implement the interface Customer and another class Waiter implement Sommelier. Waiters initially publish their interface, and retract this service if they have too many customers. Hosts dynamically find waiters when they need to make orders. When too busy, a waiter dynamically finds another waiter and forwards the order. Furthermore, the Waiter class demonstrates active behavior, represented by management of service announcements given in the run method, interleaved with reactive behavior, represented by the orderWine
Concurrent Objects a ` la Carte
197
class Host implements Customer, HostDuties { bool enoughWine; void run() { this!entertainGuests() } void surveyWineBottle() { Waiter waiter; fut(bool) ack; await not(enoughWine); waiter := (bind Waiter).get; ack := waiter!orderWine("MyFavoriteWine"); await ack?; enoughWine := true; release; // enjoy the wine this!entertainGuests() } void entertainGuests() { enoughWine := false; this!surveyWineBottle() } int whichVintage() { return 1970 } } class Waiter(int maxLoad) implements Sommelier { int noOfCustomers; void run() { ad service; await noOfCustomers < maxLoad; service := announce Waiter; await noOfCustomers ≥ maxLoad; retract service; this!run() } bool orderWine(string producer) with Customer { fut(int) order; int vintage; Sommelier colleague; if (noOfCustomers ≥ maxLoad) then colleague := bind Waiter.get; forward colleague!orderWine(producer) else noOfCustomers := noOfCustomers + 1; order := caller!whichWine(); await order?; vintage := order.get; // serve bottle noOfCustomers := noOfCustomers - 1; return true fi } } Fig. 7. The restaurant example. In class Waiter, the maxLoad attribute is declared as a class parameter, to be instantiated upon object creation.
method. The Host class additionally implements an interface HostDuties, which specifies methods entertainGuests and surveyWineBottle. Note that in the orderWine method of the Waiter class, the callback to whichWine is type correct since Customer is the cointerface.
198
5
D. Clarke, E.B. Johnsen, and O. Owe
Groups
Groups [7,21,23] provide a way of structuring advertisements and hence objects into virtual organisations. For example, groups are use in the JXTA peer-topeer library [20] to structure peers, advertisements, and channels in order to represent the parties involved in (il)legally downloading a particular film. Groups also offer the possibility of providing access control to service discovery, such as restricting members of the group to only those with legal right to the film. For simplicity, we consider only a very rudimentary access control mechanism, whereby an object has to be a member of a group in order to interact with it. Technically, the operations for service discovery and announcement and their semantics (Sections 2 and 3), will be generalised to the scope of particular groups. Groups may be created and dissolved, and objects may join and leave groups. Finally, advertisement and discovery happens in the context of a group. For uniformity we denote by glob the top-level group in which to advertise and search for advertisements by default. Syntactically groups are treated like objects, and operations on groups like asynchronous method calls. In order to enable groups to be advertised, along with objects, we need to extend the notion of announcement. In Section 2, announcements were purely subjective; i.e., an object can only announce itself. In contrast objective announcements have the form e!announce(e , I), where the result of expression e is advertised using interface I—again assuming that e has type I—into the group resulting from expression e. In order to discover a service advertised in a group, the object must be a member of that group. Syntax. The syntax introduced in Fig. 1 is extended as follows. Let group be the type of group identifiers, ranged over by gid, and let I include group in order to transparently enable the advertising of groups. Let the expressions e include the following operations on groups: new group, e!announce(e , I), and e!bind(I). Thus, the previous advertisement and discovery operations are redundant: subjective, groupless announcements announce I may be encoded as glob!announce(this, I) and bind(I) as glob!bind(I). Finally, let the statements s include the following operations on groups: e!dissolve, e!join, and e!leave. All operations involving groups are asynchronous. Typing. Fig. 8 presents the type rules for the new operations, which simply ensure that these operations apply to groups. Rule Announce-in-Group requires that the announced value conforms with the announced type. Rule Bind-inGroup is the obvious extension to the rule Bind of Fig. 2. Runtime Syntax. The runtime syntax introduced in Fig. 3 is extended as follows. Let id be either oid or gid. We extend the notion of command as follows: cmd ::= . . . | new group | gid!dissolve | gid!join | gid!leave | gid!announce(id, I) | gid!bind(I)
Configurations config are extended with a representation (gr gid, oids, aids) for a group with identifier gid, whose members are objects represented by the set oids and containing the advertisements aids. The reduction contexts become:
Concurrent Objects a ` la Carte (Group-Create)
(Group-Dissolve)
Γ new group : fut(group) (Group-Leave)
Γ e : group Γ e!dissolve : ok
(Announce-in-Group)
Γ e : I Γ e : group Γ e !announce(e, I) : fut(ad)
Γ e : group Γ e!leave : ok
199
(Group-Join)
Γ e : group Γ e!join : ok (Bind-in-Group)
Γ e : group Γ e!bind(I) : fut(I)
Fig. 8. Typing for group-related operations S ::= . . . | E!dissolve | E!join | E!leave | E!announce(e, I) | d!announce(E, I) | E!bind(I)
The redexes become: stat-redexes ::= . . . | gid!dissolve | gid!join | gid!leave expr-redexes ::= . . . | gid!bind(I) | new group | gid!announce(id, I)
Reduction Rules. In Fig. 5, Red-Cmd applies to the asynchronous evaluation of commands. The reduction rules for the new operations are given in Fig. 9. As all new operations are asynchronous, the reduction rules operate in the context of some future data structure, generally depending upon the fact that the caller is stored within this data structure in order to perform a primitive access control check. Apart from Red-Create, Red-Join and Red-Leave an object needs to be a member of the group in order to interact with it. Rule RedCreate creates a new group which contains no advertisements or members and Red-Dissolve removes an entire group. Rule Red-Join and Red-Leave allow an object to join and leave a group, by modifying the list of group members. Rule Red-Announce creates a new announcement and places it in the group and Red-Retract removes an announcement from a group, generalising the earlier version of this rule. Rule Red-Bind binds to an announcement in the scope of a given group. Example 3. Groups may be used to structure the services of auction houses. The underlying services are managed by the ItemAuction class, corresponding to an item for sale in an auction. The items are grouped into auctions, presumably the collection of items to be auctioned during a particular day or perhaps of a specific kind of item (e.g., guitars). Auctions are organised into auction houses, which again are groups. To simplify the presentation, variable initialization may occur in declarations, with the syntax T v = e. Furthermore, we use a simple form of multicast; i.e., for each object in e, e!m(e ) produces one future and e !announce(e, I) makes announcements, adding the objects to the group e . Some basic functions on the data type Set[T] are assumed to be defined, such as add and remove. The following class illustrates how some auctions could be created and populated from an external source.
200
D. Clarke, E.B. Johnsen, and O. Owe class Main(ItemDatabase db) { void run() { group ah1 = (new group).get; this!populate("Christie’s", ah1); group ah2 = (new group).get; this!populate("Dave’s", ah2); group as3 = (new group).get; this!populate("Einar and Olaf’s", ah3); } void populate(String name, group house) { Set[Item] items; group auction = (new group).get; house!join; house!announce(auction, group);
// populate auction from external source items := db!itemsFor(name).get; auction!announce(items, Item) // multicast } }
The class ItemAuction manages the auction of an individual item, including a set of the bidders for that item, and notifications to those bidders when the auction status changes. The method bid handles individual bids, relying on the mutual exclusion provided by Creol objects, and notifies other bidders. Class ItemAuction implements an interface Item with Bidder as cointerface and methods showInterest, loseInterest, and bid. (The Bidder interface and its implementation are given further below.) class ItemAuction implements Item { int currentBid = 0; Bidder holder = null; bool sold = false; Set[Bidder] bidders = new Set(); bool bid(int amount) with Bidder { if (sold or amount <= current) then return false fi; bidders := add(bidders, caller); currentBid := amount; holder := caller; remove(bidders,caller)!higherBid(amount); // multicast return true }
// allow a bidder to (un)subscribe to an item auction void showInterest() with Bidder { bidders := add(bidders,caller) } void loseInterest() with Bidder { bidders := remove(bidders,caller) }
Concurrent Objects a ` la Carte
201
(Red-Create)
gid is fresh (fut fid, new group, oid, s, null) → (fut fid, new group, oid, c, gid) (gr gid, , ) (Red-Dissolve)
oid ∈ oids (fut fid, gid!dissolve, oid, s, null) (gr gid, , oids) → (fut fid, gid!dissolve, oid, c, null) (Red-Join)
(fut fid, gid!join, oid, s, null) (gr gid, oids, aids) → (fut fid, gid!join, oid, c, null) (gr gid, oids ∪ {oid}, aids) (Red-Leave)
(fut fid, gid!leave, oid, s, null) (gr gid, oids, aids) → (fut fid, gid!leave, oid, c, null) (gr gid, oids \ {oid}, aids) (Red-Announce)
oid ∈ oids aid is fresh (fut fid, gid!announce(id, I), oid, s, null) (gr gid, oids, aids) → (fut fid, gid!announce (id, I), oid, c, aid) (ad aid, I, id) (gr gid, oids, aids ∪ {aid}) (Red-Retract)
oid ∈ oids aid ∈ aids (fut fid, retract aid, oid, s, null) (ad aid, I, id) (gr gid, oids, aids) → (fut fid, retract aid, oid, c, null) (gr gid, oids, aids \ {aid}) (Red-Bind)
oid ∈ oids aid ∈ aids (fut fid, gid!bind(I), oid, s, null) (ad aid, I, id) (gr gid, oids, aids) → (fut fid, gid!bind(I), oid, c, id) (ad aid, I, id) (gr gid, oids, aids) Fig. 9. The context reduction semantics (4/4) for group-related operations
void run() { await currentBid > 0; this!watchBids() } void watchBids() { int previousBid = currentBid; // going once, twice, three times: release; release; release; if currentBid > previousBid then this!watchBids() else sold := true; // accept current bid holder!success(); bidders!auctionClosed() fi // multicast } }
Specific items can be implemented by extending Item, giving a more specific description (in lieu of meta-data).
202
D. Clarke, E.B. Johnsen, and O. Owe interface interface interface interface interface
Guitar extends Item {} Gibson extends Guitar {} Firebird extends Gibson {} FirebirdI extends Firebird {} NonReverseFirebird extends Firebird {}
Bidder is an interface implemented by potential buyers in order to receive notifications from the auctions they follow. The notifications include that a higher bid has been made, that the auction has closed, and that a bid was successful. In each case, the caller is the item to which the notifications refer. interface Bidder with Item { void higherBid(int bid); void auctionClosed(); void success() }
Buyers use service discovery to find groups denoting auction houses, and within an auction house, an auction will be found, within which the specific items of interest can be found. class Buyer implements Bidder { void run() { // find an auction house -- needs meta-data group ah = glob!bind(group).get; ah!join; group auction = ah!bind(group).get; auction!join; Item item = auction!bind(Firebird).get; item!showInterest() } // Implementation of the Bidder interface void higherBid(int bid) with Item { if (bid < myMax) then caller!bid(bid + 10) fi } void auctionClosed() with Item {...} // quit, go home sad void success() with Item {...} // quit, go home happy }
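To summarize the group mechanics used above, the following is a small, self-contained sketch in Java of the bookkeeping that Fig. 9 formalizes: a group records its members and advertisements, and announce and bind are only allowed for members. All names here are assumptions for illustration; interface subtyping, which Red-Bind also takes into account, is simplified to name equality.

   import java.util.HashMap;
   import java.util.HashSet;
   import java.util.Map;
   import java.util.Optional;
   import java.util.Set;

   class GroupSketch {
       record Ad(String interfaceName, String provider) {}

       private final Set<String> members = new HashSet<>();
       private final Map<String, Ad> ads = new HashMap<>();

       void join(String oid)  { members.add(oid); }      // Red-Join
       void leave(String oid) { members.remove(oid); }   // Red-Leave

       // Red-Announce: only members may advertise; returns the advertisement id.
       String announce(String caller, String interfaceName, String provider) {
           if (!members.contains(caller)) throw new IllegalStateException("not a member");
           String aid = "ad" + ads.size();
           ads.put(aid, new Ad(interfaceName, provider));
           return aid;
       }

       // Red-Bind: only members may discover; returns some provider of the interface.
       Optional<String> bind(String caller, String interfaceName) {
           if (!members.contains(caller)) throw new IllegalStateException("not a member");
           return ads.values().stream()
                     .filter(ad -> ad.interfaceName().equals(interfaceName))
                     .map(Ad::provider)
                     .findAny();
       }
   }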
6
Discussion
This paper explored a number of abstractions for service-oriented computing and related internet- and web-based programming models in the context of the concurrent object-oriented (modeling) language Creol. In particular, we have considered abstractions of dynamic aspects such as service discovery and structuring mechanisms such as groups. By adding abstractions to the modeling language, we showed how Creol satisfies many of the requirements of service-oriented computing. Directly introducing abstractions into the language and implementing them in Maude [12], is a quick way of enabling model development for emerging application domains, simultaneously providing a rapid way of evaluating alternative design decisions. There are many ways to extend Creol to enhance its ability to accurately model real world systems. We now explore a few of these.
Open system and failure models. New services can come on-line at any time. This can be modeled by external interaction with a running Creol expression. External interaction may be captured by a labelled transition, and the existing set of rewrite rules would need to be modified to propagate such transitions:

(Meta-Inject)
        inject object
    ─────────────────────→  object

The initiation of Creol's class update mechanism [30] may be modeled this way, and may be used to provide runtime upgrade of services. Dually, it is possible to model when services go off-line, either temporarily or permanently. The following rule considers an explicit remove command:

(Red-Remove)
  (fut fid, remove oid, oid′, s, null) (obj oid, C, processQ, fds, active)
      →  (fut fid, remove oid, oid′, c, null)

A similar meta-rule interacting with the environment could model the operation. In general, operations in distributed systems can only be considered as partial due to (intermittent) failures of the network. A partial solution in the context of Creol is to modify asynchronous calls to allow the possibility of timing out. A uniform way to achieve this is to add time-outs to future objects, as follows: (fut fid, cmd, caller, timer, mode, result), where timer has one of the following values: timer infinity or timer n, for n ≥ 0. An additional mode to is introduced to model the occurrence of a time-out. The new, under-the-hood, reduction rules are:

(Red-Timer)
  timer (n + 1)  →  timer n

(Red-Timeout)
  status ≠ c
  (fut fid, cmd, caller, timer 0, status, result)  →  (fut fid, cmd, caller, timer 0, to, result)
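As an illustration of the intended timer discipline, the following is a small executable sketch in Python (our own illustration, not part of the Creol or Maude implementation; the field names simply mirror the extended future tuple above):

  from dataclasses import dataclass
  from typing import Optional, Union

  INFINITY = "infinity"          # stands for "timer infinity"

  @dataclass
  class Fut:
      fid: str                   # future identity
      cmd: str                   # the call carried by the future
      caller: str
      timer: Union[int, str]     # a natural number, or INFINITY
      mode: str                  # "s" (not yet completed), "c" (completed), "to" (timed out)
      result: Optional[object] = None

  def red_timer(f: Fut) -> None:
      # (Red-Timer): timer (n + 1) -> timer n
      if isinstance(f.timer, int) and f.timer > 0:
          f.timer -= 1

  def red_timeout(f: Fut) -> None:
      # (Red-Timeout): an expired timer on an uncompleted call switches the mode to "to"
      if f.timer == 0 and f.mode != "c":
          f.mode = "to"

  # A call that is given three scheduling rounds before it may time out:
  f = Fut(fid="f1", cmd="gid!bind(I)", caller="o1", timer=3, mode="s")
  for _ in range(3):
      red_timer(f)
  red_timeout(f)
  assert f.mode == "to"

At the language level, a get on such a future would then have to distinguish the mode to from c, which is where the exception mechanism mentioned next would come in.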
Changes at the language level, such as the interaction between get and await and time-outs, would need to be made, perhaps via an exception mechanism. Maude has a real-time extension [26] to help implement these features.

Enhancing Groups. Although we used subinterfaces (of Guitar) to enable more precise service discovery, group discovery was rather weak, as there were no means for distinguishing one group from another. More precise groups can be specified by applying inheritance to groups, as follows:

  group QualityGroup;
  group GuitarGroup;
  group GibsonGroup extends GuitarGroup, QualityGroup;
Group discovery would be based on a supergroup; e.g., QualityGroup. Group creation would then be extended to create a specific kind of group: new GibsonGroup;
Further possibilities include limiting the types that can be advertised in a group: group GuitarGroup of Guitar;
Additional meta-data such as tags and certificates (for access control) [20] could be supplied to enhance the quality of service discovery. Additional group operations would certainly increase Creol’s modeling power; e.g., broadcast to a group [21]. Likewise, a stream-based approach to asynchronous responses, as in JXTA [20], to model multiple potential candidate bindings, would enable a client to better select the most appropriate service. Enhancing the security model underlying our groups would make them more appropriate for modeling real-world applications. Such measures are necessary since one would not generally want everyone to be able to dissolve the group, and some members may be able to advertise, whereas others may not. Other directions for future work include adding behavioural types (e.g., session types [14]) to place more static control on the use of objects or services, and to add a coordination layer (channels, connectors, or tuple space(s) [27]) between the objects or services, rather than allowing direct communication.
Acknowledgment The authors are indebted to Willem-Paul de Roever for encouragement, inspiration, and interaction over a number of years. In particular, he has given substantial support to the Creol and Credo projects, which form the basis of this paper. We are also grateful to Frank de Boer for interesting discussions on service binding in the context of object orientation.
References 1. Ábrahám-Mumm, E., de Boer, F.S., de Roever, W.-P., Steffen, M.: Verification for Java's reentrant multithreading concept. In: Nielsen, M., Engberg, U. (eds.) FOSSACS 2002. LNCS, vol. 2303, pp. 5–20. Springer, Heidelberg (2002) 2. Agha, G.A.: ACTORS: A Model of Concurrent Computations in Distributed Systems. The MIT Press, Cambridge (1986) 3. Armstrong, J.: Programming Erlang: Software for a Concurrent World. Pragmatic Bookshelf (2007) 4. Baker, H.G., Hewitt, C.E.: The incremental garbage collection of processes. ACM SIGPLAN Notices 12(8), 55–59 (1977) 5. Benton, N., Cardelli, L., Fournet, C.: Modern concurrency abstractions for C♯. ACM Transactions on Programming Languages and Systems 26(5), 769–804 (2004) 6. Bierman, G.M., Meijer, E., Schulte, W.: The essence of data access in Cω. In: Black, A.P. (ed.) ECOOP 2005. LNCS, vol. 3586, pp. 287–311. Springer, Heidelberg (2005) 7. Birman, K.P.: The process group approach to reliable distributed computing. Communications of the ACM (December 1993)
8. Boreale, M., Bruni, R., Caires, L., De Nicola, R., Lanese, I., Loreti, M., Martins, F., Montanari, U., Ravara, A., Sangiorgi, D., Vasconcelos, V., Zavattaro, G.: SCC: A service centered calculus. In: Bravetti, M., N´ un ˜ez, M., Zavattaro, G. (eds.) WS-FM 2006. LNCS, vol. 4184, pp. 38–57. Springer, Heidelberg (2006) 9. Bravetti, M., Zavattaro, G.: Service oriented computing from a process algebraic perspective. Journal of Logic and Algebraic Programming 70(1), 3–14 (2007) 10. Caromel, D., Henrio, L.: A Theory of Distributed Object. Springer, Heidelberg (2005) 11. Castagna, G., Nicola, R.D., Varacca, D.: Semantic subtyping for the pi-calculus. In: Proc. 20th IEEE Symp. on Logic in Computer Science (LICS 2005), pp. 92–101. IEEE Computer Society Press, Los Alamitos (2005) 12. Clavel, M., Dur´ an, F., Eker, S., Lincoln, P., Mart´ı-Oliet, N., Meseguer, J., Quesada, J.F.: Maude: Specification and programming in rewriting logic. Theoretical Computer Science 285, 187–243 (2002) 13. de Boer, F.S., Clarke, D., Johnsen, E.B.: A complete guide to the future. In: De Nicola, R. (ed.) ESOP 2007. LNCS, vol. 4421, pp. 316–330. Springer, Heidelberg (2007) 14. Dezani-Ciancaglini, M., Mostrous, D., Yoshida, N., Drossopoulou, S.: Session types for object-oriented languages. In: Thomas, D. (ed.) ECOOP 2006. LNCS, vol. 4067, pp. 328–352. Springer, Heidelberg (2006) 15. Felleisen, M., Hieb, R.: The revised report on the syntactic theories of sequential control and state. Theoretical Computer Science 103(2), 235–271 (1992) 16. Flatt, M., Krishnamurthi, S., Felleisen, M.: A programmer’s reduction semantics for classes and mixins. In: Alves-Foss, J. (ed.) Formal Syntax and Semantics of Java. LNCS, vol. 1523, pp. 241–269. Springer, Heidelberg (1999) 17. Hooman, J., Verhoef, M.: Formal semantics of a VDM extension for distributed embedded systems. In: Dams, D., Hannemann, U., Steffen, M. (eds.) Correctness, Concurrency, and Components. Festschrift for Willem-Paul de Roever. LNCS, vol. 5930, pp. 142–161. Springer, Heidelberg (2010) 18. Johnsen, E.B., Owe, O.: An asynchronous communication model for distributed concurrent objects. Software and Systems Modeling 6(1), 35–58 (2007) 19. Johnsen, E.B., Owe, O., Yu, I.C.: Creol: A type-safe object-oriented model for distributed concurrent systems. Theoretical Computer Science 365(1-2), 23–66 (2006) 20. JXTA Project, http://jxta.org 21. Lea, D.: Objects in groups (December 1993), http://www.gee.cs.oswego.edu/dl/groups/groups.html 22. Liskov, B.H., Shrira, L.: Promises: Linguistic support for efficient asynchronous procedure calls in distributed systems. In: Wise, D.S. (ed.) Proc. SIGPLAN Conf. on Programming Language Design and Implementation (PLDI 1988), pp. 260–267. ACM Press, New York (1988) 23. Maffeis, S.: Electra: Making distributed programs object-oriented. In: Proc. USENIX Symp. on Experiences with Distributed and Multiprocessor Systems (September 1993) 24. Misra, J., Cook, W.R.: Computation orchestration: A basis for wide-area computing. Software and Systems Modeling 6(1), 83–110 (2007) 25. Niehren, J., Schwinghammer, J., Smolka, G.: A concurrent lambda calculus with futures. Theoretical Computer Science 364, 338–356 (2006) ¨ 26. Olveczky, P.C., Meseguer, J.: Semantics and pragmatics of Real-Time Maude. Higher-Order and Symbolic Computation 20(1-2), 161–196 (2007)
27. Papadopoulos, G.A., Arbab, F.: Coordination models and languages. In: Zelkowitz, M. (ed.) The Engineering of Large Systems. Advances in Computers, vol. 46, pp. 329–400. Academic Press, London (1998) 28. Torjusen, A., Owe, O., Schneider, G.: Towards integration of XML in the Creol object-oriented language. In: Sandnes, F.E. (ed.) Proc. Norsk Informatikkonferase (NIK 2007), pp. 107–111. Tapir Akademisk Forlag (2007) 29. Welc, A., Jagannathan, S., Hosking, A.: Safe futures for Java. In: Proc. 20th Conf. on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA 2005), pp. 439–453. ACM Press, New York (2005) 30. Yu, I.C., Johnsen, E.B., Owe, O.: Type-safe runtime class upgrades in Creol. In: Gorrieri, R., Wehrheim, H. (eds.) FMOODS 2006. LNCS, vol. 4037, pp. 202–217. Springer, Heidelberg (2006)
On the Power of Play-Out for Scenario-Based Programs

David Harel, Amir Kantor, and Shahar Maoz

The Weizmann Institute of Science, Rehovot, Israel
{dharel,amir.kantor,shahar.maoz}@weizmann.ac.il
Abstract. We investigate the power of play-out, the execution mechanism associated with scenario-based programming, which was defined as the operational semantics of live sequence charts (LSC). We compare some of the play-out strategies and mechanisms suggested in the literature, and discuss their strengths and limitations. Specifically, we define a simple infinite hierarchy of LSC programs, and use it to show that smart play-out, the lookahead version of play-out guided by model-checking, is strictly weaker than full synthesis from LSC.
This paper is dedicated to Prof. Willem de Roever, friend and colleague, with admiration and respect for his scientific achievements, his impact on the community, and his boundless energy.
1 Introduction
Live Sequence Charts (LSC) [3] is a visual formalism for inter-object scenario-based specification, and is especially useful for programming reactive systems. The language extends classical Message Sequence Charts (MSC) [14], mainly by being multi-modal, i.e., incorporating universal and existential modalities. Thus, LSC distinguishes between behaviors that may happen in the system (existential, cold) and those that must happen (universal, hot). It can also naturally express a variety of constraints, such as behaviors that are forbidden. Most importantly in the context of this paper, an executable (operational) semantics for LSC, termed play-out, was defined in [11]. Thus, LSC can be viewed not only as a specification language but also as a high-level programming language for reactive systems. The play-out idea can be applied to any scenario-based formalism that concentrates on inter-object behavior with multi-modal constraints; thus, in principle it can be used also to execute certain variants of temporal logic.
The research was supported in part by The John von Neumann Minerva Center for the Development of Reactive Systems at the Weizmann Institute of Science and by a Grant from the G.I.F., the German-Israeli Foundation for Scientific Research and Development.
In this paper we describe some of the play-out strategies and mechanisms suggested since the publication of [3], and discuss their strengths and limitations. These include the original (naïve) play-out of [11] and two stronger strategies, smart play-out [7,11] and planned play-out [13]. We are particularly interested here in smart play-out, due to its novel nature, whereby a verification technique (specifically, model-checking) is used to run a program rather than to prove properties thereof. Thus, we define a simple infinite hierarchy of LSC programs, and use it to show that smart play-out is strictly weaker than full synthesis from LSC.
2 Naïve Play-Out of Scenario-Based Programs
Finding ways to construct executable systems based on inter-object, scenario-based specifications appears to be an interesting challenge [4]. Many researchers have approached the issue as a synthesis problem; see, e.g., [1,6,15,20], where inter-object specifications, given in variants of Message Sequence Charts (MSC) [14], are translated into intra-object state-based executable specifications for each of the participating objects or components. Play-out, defined in [12,11], is a recent example of a different approach. Instead of synthesizing intra-object state-based specifications for each of the components, the play-out algorithm executes the scenarios directly, keeping track of all user (or environment) events, as well as system events, for all objects or components simultaneously, and causing other events and actions to occur as dictated by the specified scenarios. No intra-object model for any of the participating components needs to be built in the process. To date, two implementations of this basic play-out idea are available. The original one is part of the Play-Engine tool [11], which works as an LSC interpreter and drives the simulation of an application execution, provided it implements certain custom interfaces. A more recent implementation is part of the S2A compiler [5], which is based on a compilation scheme for translating LSCs into AspectJ [16]. This second implementation uses a UML2-compliant and slightly generalized variant of the LSC language, defined in [10]. Here now is a brief and simplified description of the basic play-out technique for LSC specifications consisting of universal charts, which is sufficient for our needs in this paper. The full LSC language of [11] is much richer than the simplified fragment we use here. It includes conditions, variables, structural constructs (e.g., if-then-else), symbolic instance lifelines, symbolic methods, time, etc. Play-out handles all of these. However, in this paper we concentrate on a basic version of the language. We assume the reader is familiar with elementary LSC notions, such as the partial order of events defined by an LSC, an LSC cut, etc. A thorough treatment may be found in [11]. Roughly, a run of an LSC program may be seen as a sequence of execution configurations, each representing a global cut (a tuple of all LSC current cuts). A cut induces a set of enabled and violating events; enabled events are the ones immediately after the current cut in the partial order defined by
the chart; violating events are all events that appear in the chart but are not currently enabled (see [11]). Whenever a minimal event of a chart occurs, a live copy of the chart is created. We say that a live copy of a chart is preactive (active) if its cut is in its prechart (main chart). A configuration is stable if it includes no active live copy. The play-out execution mechanism works in phases of a step followed by a valid superstep. A step is the execution of a single external event, performed by the environment or the user (both will be referred to here as the environment). A superstep is a finite (possibly empty) sequence of events executed by the system as a reaction to a single external event. A valid superstep handles iteratively all enabled main chart events in a non-violating way (a violation of a preactive LSC is allowed), in order to reach a stable configuration. More live copies may be created and activated during the execution of a superstep, but all copies must be completed or preactive when a superstep ends. Note that, in general, LSC is a nondeterministic language: after an external event occurs, many possible supersteps may exist. There are two possible causes for this. One is the partial ordering of events, which can lead to more than one enabled event at a given cut. The second is the specification of pieces of behavior in more than one chart. Events that do not appear in a chart do not violate it, but when a number of charts are active, even if they are all totally ordered, there may be more than one valid choice for the next event. Note, however, that once an event is executed in a configuration, the next configuration is uniquely determined, though some steps may yield a configuration for which no valid superstep exists. Different play-out algorithms differ in their approach to finding a valid superstep. If a valid superstep is not realized, we say the execution violates the specification. The naïve play-out strategy [11] executes supersteps in a greedy manner. It iteratively seeks and executes some enabled main chart event that does not immediately violate an active LSC, until no such events are available. When several such events are available, naïve play-out selects one arbitrarily. Consider a specification that contains the single LSC of Figure 1. It is a universal LSC, with three instances, one of which is the environment. If the environment sends message m1 to obj1, a live copy of LSCA is created, the prechart completes and as the cut enters the main chart the LSC becomes active. The following superstep starts with two enabled events, m2 and m3, waiting to be executed in some order. Naïve play-out picks any one of these, say m3, executes it (in this case obj2 sends message m3 to itself) and advances the cut. Self messages may represent internal actions of an object. The superstep is still not over, as m2 has yet to be sent. Being the only enabled event, m2 is promptly executed. Now m4 is enabled, so it is sent, and the chart is completed. The configuration reached at this point is stable, since there are no active live copies, and the superstep ends. In this example, the specification contains only one LSC. Scenario-based programs typically include many LSCs acting together, as we shall see in the next section.
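To make the greedy strategy concrete, here is a small Python sketch of ours (it is not the Play-Engine or S2A implementation, and it simplifies charts to linear sequences of events, so the unordered m2/m3 of Figure 1 would have to be given in some fixed order):

  import random

  # A chart is a pair (prechart, main), each a tuple of events; a live copy is a
  # pair (chart, cut), where cut counts how many events of the chart have occurred.
  def events_of(chart):
      pre, main = chart
      return pre + main

  def is_active(chart, cut):
      pre, _ = chart
      return len(pre) <= cut < len(events_of(chart))

  def violates(event, chart, cut):
      # An event violates an active copy if it occurs in the chart
      # but is not the event enabled at the current cut.
      evs = events_of(chart)
      return is_active(chart, cut) and event in evs and evs[cut] != event

  def advance(copies, charts, event):
      # Advance live copies on `event`; a minimal event spawns a new live copy.
      new = []
      for chart, cut in copies:
          evs = events_of(chart)
          if cut < len(evs) and evs[cut] == event:
              cut += 1
          if cut < len(evs):                 # completed copies are dropped
              new.append((chart, cut))
      for chart in charts:
          if events_of(chart)[0] == event:
              new.append((chart, 1))
      return new

  def naive_superstep(copies, charts):
      # Greedily execute enabled main-chart events until a stable configuration.
      executed = []
      while any(is_active(c, k) for c, k in copies):
          enabled = {events_of(c)[k] for c, k in copies if is_active(c, k)}
          ok = [e for e in enabled
                if not any(violates(e, c, k) for c, k in copies)]
          if not ok:
              return None, executed          # deadlock: no valid superstep found
          e = random.choice(ok)              # naive play-out picks arbitrarily
          executed.append(e)
          copies = advance(copies, charts, e)
      return copies, executed

  # The environment performs m1; the system then runs a superstep:
  LSC_A = (("m1",), ("m2", "m3", "m4"))      # a linearised reading of Figure 1
  copies = advance([], [LSC_A], "m1")
  copies, superstep = naive_superstep(copies, [LSC_A])
  print(superstep)                           # ['m2', 'm3', 'm4']

The linearisation deliberately removes the m2/m3 choice, so the deadlock discussed in the next section does not arise here; handling true partial orders would require tracking a set of executed events per copy rather than a single cut.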
Fig. 1. LSCA (a universal LSC over lifelines Env, Obj1, and Obj2; prechart message m1, main-chart messages m2, m3, m4)
3 More Powerful Play-Out Strategies
The original play-out [11] process is indeed naïve. Some of the sequences of events possible as a response to an external (user or environment) event may eventually lead to violations, and these cannot be avoided by the non-backtracking, non-looking-ahead play-out process. Moreover, the partial order semantics among events in each chart and the ability to separate scenarios into different charts without having to say explicitly how they are to be composed are very useful in early requirement stages, but can cause under-specification and nondeterminism when one attempts to execute them. Consider a specification consisting of LSCA from Figure 1 and LSCB from Figure 2. We first demonstrate how naïve play-out may fail to find a valid superstep when executing this specification. Again, assume that the environment sends message m1 to obj1. This causes activation of a live copy of LSCA, and at the beginning of the following superstep events m2 and m3 are enabled. Naïve play-out again may choose to execute m3, but in the presence of LSCB this leads to a violation of the specification, as follows: After executing m3 and advancing the cut of LSCA, m2 is still enabled, so it is now executed. After the execution of m2, being the minimal event of LSCB, a live copy of LSCB is created and activated. The enabled events are currently m4 of LSCA and m3 of LSCB. Note, however, that m4 violates LSCB and m3 violates LSCA, so the execution of the superstep is in deadlock, and a valid superstep cannot be realized. After the environment performed m1, the play-out algorithm could have chosen to execute m2 before m3. This, however, would have conformed with the order induced by LSCB, yielding a valid superstep, consisting of m2, m3 and m4 (in that order), after which both LSCs are completed. This rather simple example demonstrates the need for a 'smarter' play-out strategy.
Fig. 2. LSCB (over lifelines Obj1 and Obj2; prechart message m2, main-chart messages m3, m4)
3.1 Smart Play-Out
The strategy of smart play-out (SPO) [7,11] partially addresses these limitations, by using model-checking techniques to avoid violations within a superstep. The approach taken is to formulate the play-out task as a verification problem, and to use a counterexample provided by a model-checking algorithm as the desired superstep. The system on which we perform model-checking is constructed from the universal charts in the specification. The transition relation is defined so that it allows progress of active universal charts but prevents any violations. The system is initialized to reflect the status of the execution just after the last external event occurred, including the current values of object properties, information about the universal charts that were activated as a result of the most recent external events, and the progress in all precharts. The model-checker is then given a property claiming that always at least one of the universal charts is active. In order to falsify the property, the model-checker searches for a run in which eventually none of the universal charts is active; i.e., all active universal charts complete successfully, and by the definition of the transition relation no violations occurred. Such a counter-example is exactly the desired superstep, and it is then fed into the Play-Engine for execution. If the model-checker verifies the property then no correct superstep exists and execution terminates in failure. Smart play-out is sound and complete for a single superstep. In [8] smart play-out was extended to support LSC time constructs and forbidden elements.
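Schematically, the search performed by smart play-out can be pictured as a breadth-first search for a stable configuration, with the path found playing the role of the counterexample. The following Python sketch is ours and abstracts away the actual translation to a model-checking problem; step(config, e) is assumed to return the successor configuration (or None if e violates an active chart), is_stable(config) is assumed to hold when no universal chart is active, and configurations are assumed to be hashable:

  from collections import deque

  def smart_superstep(config0, system_events, step, is_stable):
      # Returns a sequence of system events leading from config0 to a stable
      # configuration without violations, or None if no valid superstep exists.
      queue = deque([(config0, [])])
      seen = {config0}
      while queue:
          config, path = queue.popleft()
          if is_stable(config):
              return path                    # the "counterexample" is the superstep
          for e in system_events:
              succ = step(config, e)
              if succ is not None and succ not in seen:
                  seen.add(succ)
                  queue.append((succ, path + [e]))
      return None                            # the model checker "verifies" the property

Returning None corresponds to the model checker verifying the property that always at least one universal chart is active, i.e., to the absence of a valid superstep.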
3.2 Planned Play-Out
Often it is useful to discover more than a single valid superstep, but model checkers are usually unable to provide more than one counter-example. In [13], a
variant of smart play-out was proposed, and termed planned play-out. Planned play-out uses AI planning algorithms to find many valid supersteps in a single run. Technically, the problem of finding a valid superstep is translated into a planning problem, and a planner is employed in order to solve it. The resulting plans are then translated back into supersteps. An interesting feature of this approach, and the way it was implemented in the Play-Engine, is that it supports interactive play-out: taking advantage of the interpreter/simulation nature of the Play-Engine, the user is allowed to backtrack during execution and to choose between possible steps in the quest for an acceptable superstep. This interactive, user-guided process was called traversable play-out in [13].
4 Smart Play-Out Is Not Enough
We now show that smart play-out is strictly weaker than full synthesis from LSC, as defined in [6]. A superstep from a given configuration is k-valid (for k ≥ 1) if the environment may not force the execution into a violation in the following k − 1 play-out iterations. A superstep is thus 1-valid iff it is valid. A superstep is 2-valid iff it is 1-valid and from the configuration reached, any external event performed by the environment leads to a configuration from which another valid superstep exists. A superstep from a given configuration is ω-valid iff from the configuration reached, the environment cannot force the execution into a violation at all. Note that if during the execution, a superstep that is not ω-valid is taken, the environment may eventually force the execution into a violation. For any k ≥ 1 we define SPOk to be a generalization of smart play-out that is intended to find a k-valid superstep, if one exists. This may be realized by looking k supersteps ahead, and checking all possible events carried out by the environment between them. SPO1 is the original smart play-out, which looks one superstep ahead. Intuitively, looking a finite number of supersteps ahead is myopic. If the specification is ‘too deep’, smart play-out, however often repeated, may not be able to distinguish between a choice that will allow the specification to continue playing (an ω-valid superstep), and a choice that will allow the environment to force the execution into a violation of the specification. In LSC synthesis, however, the synthesized controller allows only ω-valid supersteps [6]. We now construct a sequence of LSC specifications. For any k ≥ 1, Speck = {LSC1 , LSC2 , . . . , LSCk+3 } is a set of universal LSCs, presented in Figures 3 and 4. In each chart there are two instances: the system and the environment.
Fig. 3. Speck = {LSC1, LSC2, . . . , LSCk+3} (the charts LSC1–LSC5, each drawn over lifelines Env and Sys)
Fig. 4. Speck — continued (the charts LSCk+2 and LSCk+3, over lifelines Env and Sys)
All the charts are linearly ordered, therefore we may describe Speck more concisely by:

  LSC1 :    e0              ⇒  a
  LSC2 :    e0              ⇒  b
  LSC3 :    a, b, e1        ⇒  c1
  LSC4 :    c1, e2          ⇒  c2
  LSC5 :    c2, e3          ⇒  c3
  ...
  LSCk+2 :  ck−1, ek        ⇒  α, β, γ
  LSCk+3 :  α               ⇒  γ, β

In this notation, each row represents an LSC. Time advances from left to right, and '⇒' separates the prechart from the main chart. Eenv = {e0, e1, . . . , ek} is the
Fig. 5. Starting from the initial configuration, the environment may not force the execution into a violation
set of external events controlled by the environment. Esys = {a, b, c1 , . . . , ck−1 , α, β, γ} is the set of all other events controlled by the system. Theorem 1. SPOk is not enough to determine a non-violating execution of Speck (k ≥ 1). That is, SPOk cannot decide for two possible supersteps which one is ω-valid and which is not. Proof sketch. Consider Speck . Note that in each chart the events are linearly ordered. Moreover, the minimal event does not reoccur in the chart, so at any point during execution there exists at most one live copy of each chart. We denote the global state of execution as a configuration (l1 , l2 , . . . , lk+3 ), where each li is a natural number representing the current cut of the ith chart, starting from zero. The initial configuration is (0, . . . , 0), in which all charts are not active. Claim 1. Starting from the initial configuration, the environment may not force the execution into violation; i.e., for any future action of the environment there exists a valid superstep as a reaction, ad infinitum. Figure 5 presents an execution graph for Speck , starting from the initial configuration. The nodes represent configurations, edges represent environment controlled events or system controlled supersteps. For each action of the environment (an environment controlled event from Eenv ), the graph shows a possible superstep as a reaction. When formulated as a game between the system and the environment, this may be seen as describing a winning strategy for the system (which in fact does not depend on k). Assume now that the environment starts by performing e0 (‘step zero’). LSC1 and LSC2 are activated and their precharts are completed, yielding the
configuration (1, 1, 0, . . . , 0), so a and b are enabled and must be carried out (in any order) in the following superstep ('superstep zero'). First, consider the possibility of executing b and a, in this order (denoted ⟨b, a⟩) in superstep zero. As shown in Figure 5, ⟨b, a⟩ is a valid superstep (yielding the configuration (0, 0, 1, 0, . . . , 0)), where for any future action of the environment there exists a valid superstep as a reaction. Thus, it is ω-valid for superstep zero and, in particular, it is k-valid. Second, consider the possibility of executing ⟨a, b⟩ in superstep zero.

Claim 2. ⟨a, b⟩ is k-valid for superstep zero, but is not ω-valid.

After e0, ⟨a, b⟩ leads to configuration (0, 0, 2, 0, . . . , 0). To show that ⟨a, b⟩ is k-valid for superstep zero, we show that the environment cannot force a violation in (k − 1) steps starting from (0, 0, 2, 0, . . . , 0). Observe that in Speck, in order for a violation to occur in any run starting from the initial configuration, LSCk+2 must proceed to its main chart. In the following we show that if a superstep begins with a configuration in which LSCk+2 is not active, a valid superstep exists. During the execution of the superstep, LSCk+2 will not be activated, because its prechart ends with an external event. Thus, α will not be executed during the superstep, and a live copy of LSCk+3 will not be created. Any live copy created during the superstep has an external event in its prechart, so it may not become active. Therefore, the superstep must terminate. It is now sufficient to notice that as long as there is an enabled main chart event during the execution of the superstep, there exists one that does not violate any active LSC. Only LSC1, . . . , LSCk+1 may be active, so the event enabled in the active LSC of the largest index among these does not violate any active live copy. LSCk+2 may proceed to its main chart only after the events ek, ck−1, ek−1, ck−2, . . . , e1 occur. Thus, at least k external events occurred after ⟨a, b⟩ was executed in superstep zero. Therefore, the environment may not violate the specification in (k − 1) actions, and ⟨a, b⟩ is k-valid. However, ⟨a, b⟩ is not ω-valid for superstep zero. If it is chosen, the environment may force the system into a violation of the specification (i.e., there is a winning strategy for the environment). This, of course, cannot take place in (k − 1) steps (as shown above), but k steps are enough. Figure 6 shows an execution graph starting from (0, 0, 2, 0, . . . , 0). For any valid superstep, the graph shows a possible action of the environment that eventually leads to a violation; that is, it shows how the environment can force the system into a violation in k steps.
On the Power of Play-Out for Scenario-Based Programs
(0,0,2,0,…,0) 〈ck-1〉
e1
〈c1〉
(0,0,3,0,…,0)
(0,…,0,1,0)
(0,0,0,1,0,…,0)
ek
e2
217
(0,0,0,2,0,…,0)
(0,…,0,2,0)
Fig. 6. If ⟨a, b⟩ is chosen for superstep zero, the environment may force the execution into a violation. In the last configuration presented, (0, . . . , 0, 2, 0), a valid superstep does not exist due to a contradiction between LSCk+2 and LSCk+3.
We use our concise notation to describe Spec̃k = {LSC1, LSC2, . . . , LSCk+3}:

  LSC1 :    e0                   ⇒  a
  LSC2 :    e0                   ⇒  b
  LSC3 :    a, b, e1             ⇒  c
  LSC4 :    c, e1                ⇒  c, c
  LSC5 :    c, c, e1             ⇒  c, c, c
  ...
  LSCk+2 :  c, c, . . . , c, e1  ⇒  α, β, γ      (with k − 1 occurrences of c in the prechart)
  LSCk+3 :  α                    ⇒  γ, β

These can be easily transformed into LSCs. Ẽenv = {e0, e1} is the set of external events controlled by the environment, while Ẽsys = {a, b, c, α, β, γ} is the set of all other events controlled by the system.

The proof of Theorem 1 for the sequence Spec̃k is similar to the proof for Speck. More specifically, proving that ⟨b, a⟩ is ω-valid for superstep zero but that ⟨a, b⟩ is not ω-valid, is roughly the same as in the proof of Theorem 1 above (although the proof of the second of these claims is slightly more complicated, because there are multiple live copies of an LSC). Showing that ⟨a, b⟩ is k-valid for superstep zero turns out to be somewhat more difficult, as it requires formulating a certain property of execution configurations, and proving that this property may be enforced by the system in the next (k − 1) iterations.

Now, although SPOk does not determine a non-violating execution in all LSC specifications, we show that for a given specification S there is k̃ such that SPOk̃ is 'good enough' for S.

Theorem 2. Given an LSC specification S, one may compute a number k such that SPOk is enough to determine a non-violating execution of S. That is, SPOk, when executing S, always returns an ω-valid superstep if one exists.

Proof sketch. Consider an LSC specification S = {L1, . . . , Ln}. We define k to be larger than the number of execution configurations of S, as follows. Each LSC Li has a bounded number of live copies during execution; e.g., there are no more live copies than the number of cuts in Li (a much tighter bound will
often exist). Let bi and ci be the number of cuts in Li and an upper bound on the number of live copies of Li, respectively. There are at most c1^b1 · . . . · cn^bn execution configurations for S. Let k = c1^b1 · . . . · cn^bn + 1. Assume that after some external event, an ω-valid superstep exists. In particular, a k-valid superstep exists, so SPOk returns a k-valid superstep. We show that any k-valid superstep is, in fact, ω-valid, as required. Assume, to the contrary, that the environment may force the execution into a violation. Let t be the smallest number of steps in which the environment may thus force a violation. Since there are at most (k − 1) configurations, if t ≥ k, then there are two steps after which the execution is in the same configuration, thus contradicting the minimality of t. Consequently, t < k, and the superstep returned by SPOk is not k-valid, a contradiction.
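As a small illustration, the bound can be computed directly from the per-chart data (a trivial helper of ours, following the formula above):

  def lookahead_bound(charts):
      # charts: list of pairs (b_i, c_i), the number of cuts of L_i and a bound
      # on its number of live copies.  Returns k = c_1**b_1 * ... * c_n**b_n + 1.
      k = 1
      for b, c in charts:
          k *= c ** b
      return k + 1

  # e.g. three charts with 4 cuts each and at most 2 live copies apiece:
  print(lookahead_bound([(4, 2), (4, 2), (4, 2)]))   # 4097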
5 Related and Future Work
There have been several efforts to generate executable programs from LSC specifications. Synthesis from LSC specifications was investigated in [6,9]. Bontemps et al. [2] investigate different variants of synthesis from LSC. In [19], Wang et al. describe the synthesis of SystemVerilog programs from an LSC specification. In [17], Sun and Dong present a translation from an LSC specification to a CSP specification, for the purpose of synthesizing executable distributed processes. In [18], the same authors investigate also the generation of a distributed design from a combination of interaction-based and state-based modeling, given in LSC and Z. In other work in our group (in progress), we suggest optimizations for smart play-out. These are based on an approximation of the set of LSCs that may participate in the current superstep, and on other methods, inspired by standard compiler optimizations. All this is aimed at reducing the size of the model without affecting the soundness and completeness of finding a correct superstep. Finally, finding effective and efficient ways to compute a reasonably small k such that SPOk is good enough for a given specification as a whole, or for certain possible execution configurations, might very well be of theoretical and practical interest.
References 1. Alur, R., Etessami, K., Yannakakis, M.: Inference of Message Sequence Charts. In: Proc. 22nd Int. Conf. on Soft. Eng. (ICSE 2000), pp. 304–313. ACM Press, New York (2000) 2. Bontemps, Y., Schobbens, P.-Y., Löding, C.: Synthesis of Open Reactive Systems from Scenario-Based Specifications. Fundam. Inform. 62(2), 139–169 (2004)
3. Damm, W., Harel, D.: LSCs: Breathing Life into Message Sequence Charts. J. on Formal Methods in System Design 19(1), 45–80 (2001); Preliminary version in: Ciancarini, P., Fantechi, A., Gorrieri, R.(eds.) Proc. 3rd IFIP Int. Conf. on Formal Methods for Open Object-Based Distributed Systems (FMOODS 1999), pp. 293–312. Kluwer Academic Publishers, Dordrecht (1999) 4. Harel, D.: From Play-In Scenarios To Code: An Achievable Dream. IEEE Computer 34(1), 53–60 (2001); Also in: Maibaum, T. (ed.) FASE 2000. LNCS, vol. 1783, pp. 22–60. Springer, Heidelberg (2000) 5. Harel, D., Kleinbort, A., Maoz, S.: S2A: A Compiler for Multi-Modal UML Sequence Diagrams. In: Dwyer, M.B., Lopes, A. (eds.) FASE 2007. LNCS, vol. 4422, pp. 121–124. Springer, Heidelberg (2007) 6. Harel, D., Kugler, H.-J.: Synthesizing State-Based Object Systems from LSC Specifications. In: Yu, S., P˘ aun, A. (eds.) CIAA 2000. LNCS, vol. 2088, pp. 1– 51. Springer, Heidelberg (2001); Preliminary version appeared as technical report MCS99-20, Weizmann Institute of Science (1999) 7. Harel, D., Kugler, H., Marelly, R., Pnueli, A.: Smart Play-out of Behavioral Requirements. In: Aagaard, M.D., O’Leary, J.W. (eds.) FMCAD 2002. LNCS, vol. 2517, pp. 378–398. Springer, Heidelberg (2002) 8. Harel, D., Kugler, H., Pnueli, A.: Smart Play-Out Extended: Time and Forbidden Elements. In: Proc. 4th Int. Conf. on Quality Software (QSIC 2004), pp. 2–10. IEEE Computer Society, Los Alamitos (2004) 9. Harel, D., Kugler, H., Pnueli, A.: Synthesis Revisited: Generating Statechart Models from Scenario-Based Requirements. In: Kreowski, H.-J., Montanari, U., Orejas, F., Rozenberg, G., Taentzer, G. (eds.) Formal Methods in Software and Systems Modeling. LNCS, vol. 3393, pp. 309–324. Springer, Heidelberg (2005) 10. Harel, D., Maoz, S.: Assert and Negate Revisited: Modal Semantics for UML Sequence Diagrams. Software and Systems Modeling (SoSyM) 7(2), 237–252 (2008); Preliminary version appeared in Proc. 5th Int. Workshop on Scenarios and StateMachines (SCESM 2006) at the 28th Int. Conf. on Soft. Eng. (ICSE 2006), pp. 13–19. ACM Press, New York (2006) 11. Harel, D., Marelly, R.: Come, Let’s Play: Scenario-Based Programming Using LSCs and the Play-Engine. Springer, Heidelberg (2003) 12. Harel, D., Marelly, R.: Specifying and Executing Behavioral Requirements: The Play-In/Play-Out Approach. Software and Systems Modeling (SoSyM) 2(2), 82– 107 (2003) 13. Harel, D., Segall, I.: Planned and Traversable Play-Out: A Flexible Method for Executing Scenario-Based Programs. In: Grumberg, O., Huth, M. (eds.) TACAS 2007. LNCS, vol. 4424, pp. 485–499. Springer, Heidelberg (2007) 14. ITU. International Telecommunication Union Recommendation Z.120: Message Sequence Charts. Technical report (1996) 15. Kr¨ uger, I., Grosu, R., Scholz, P., Broy, M.: From MSCs to Statecharts. In: Proc. Int. Workshop on Distributed and Parallel Embedded Systems (DIPES 1998), IFIP WG10.3/WG10.5. IFIP Proc., vol. 155, pp. 61–72. Kluwer, Dordrecht (1998) 16. Maoz, S., Harel, D.: From Multi-Modal Scenarios to Code: Compiling LSCs into AspectJ. In: Young, M., Devanbu, P.T. (eds.) Proc. 14th ACM SIGSOFT Int. Symp. Foundations of Software Engineering (FSE 2006), pp. 219–229. ACM, New York (2006) 17. Sun, J., Dong, J.S.: Synthesis of Distributed Processes from Scenario-Based Specifications. In: Fitzgerald, J.S., Hayes, I.J., Tarlecki, A. (eds.) FM 2005. LNCS, vol. 3582, pp. 415–431. Springer, Heidelberg (2005)
18. Sun, J., Dong, J.S.: Design Synthesis from Interaction and State-Based Specifications. IEEE Transactions on Software Engineering 32(6), 349–364 (2006) 19. Wang, H.H., Qin, S., Sun, J., Dong, J.S.: Realizing Live Sequence Charts in SystemVerilog. In: Proc. 1st Joint IEEE/IFIP Symposium on Theoretical Aspects of Software Engineering (TASE 2007), Washington, DC, USA, pp. 379–388. IEEE Computer Society, Los Alamitos (2007) 20. Whittle, J., Kwan, R., Saboo, J.: From Scenarios to Code: an Air Traffic Control Case Study. Software and Systems Modeling (SoSyM) 4(1), 71–93 (2005)
Proving the Refuted: Symbolic Model Checkers as Proof Generators

Ittai Balaban¹, Amir Pnueli²,³, and Lenore D. Zuck⁴

¹ WorldEvolved Services, New York, [email protected]
² New York University, New York, [email protected]
³ Weizmann Institute of Science (Emeritus)
⁴ University of Illinois at Chicago, [email protected]
Abstract. The paper presents an automatic method to derive a deductive proof of response properties from symbolic model checking. The method is based on a new proof rule for response properties that deals directly with compassion (strong fairness). The method can be applied to infinite-state systems. In particular, model checking of response of (predicate- and ranking-) abstracted heap programs is automatically transformed into a deductive proof for the concrete heap system. All examples presented in the paper were run in TLV.
1 Introduction

Model checking [3,4] is an automatic process for verifying temporal properties of finite-state systems. It was first applied to Linear Time Temporal Logic properties in [9,10]. Explicit-state model checking constructs a finite model representing the joint computations of the system and the negation of the property to be verified, and applies graph algorithmic techniques to it, checking that no computation of the system satisfies the negation of the property (and violates the property). In the equivalent and independently developed automata-theoretic approach ([19,8]) both system and negation of property are explicitly represented as automata, and their intersection is checked for emptiness. Since the number of states grows exponentially with the number of processes, explicit-state model checking techniques are applicable only to systems with few processes. In [12], McMillan showed how a BDD representation of systems allows for a much more efficient model checking, known as "symbolic" (or "implicit state") model checking. In explicit-state and symbolic model checking, when executions violating the property exist, at least one is reported and serves as a counterexample. When the search for counterexamples fails one may conclude that the system satisfies its specification. However, our confidence in such a positive conclusion is tarnished by two possible factors: • Due to complexity and decidability, the system being checked is often an oversimplification of the actual system. Hence, failure to find counterexamples for it does not necessarily imply that the actual system is fault free.
This research was supported in part by ONR grant N00014-99-1-0131, and SRC grant 2004TJ-1256.
• There always exists the possibility that the model checker itself contains a bug which causes it to report success, while the system is faulty.

Both these risks may cause us to treat with diffidence a result which purely claims success without providing some supporting evidence, a "witness" or "certificate" that the property does indeed hold over the considered system. This 'proof by lack of counterexample' is the main drawback of the model checking approach; some would even say that model checking is a tool for falsification (or debugging – bug finding) rather than a tool for verification.

An alternative approach to model checking is deductive verification that incrementally constructs proofs until the desired conclusion, a proof that the system meets its specification, is obtained. Deductive verification is often manual, and, like all deductive proofs, requires considerable human skill and time. One of its main benefits is that it often explains why the system satisfies its specification. In [16,15] we enhanced the LTL explicit model checking process with the ability to automatically generate a deductive proof that the system meets its LTL specification. Thus, we emphasized the point of view that model checking can also be used to justify why the system actually works. We showed that, by exploiting the information in the graph that is generated during the search for counterexamples, we generate a fully deductive proof that the system meets its specification. The generated deductive proof can then be sent to a theorem prover to verify it is indeed valid. The work there assumes a finite-state system whose only fairness requirements are "justice" (weak fairness). In [13], Namjoshi expanded these results to prove μ-calculus properties. None of these papers deals with symbolic model checking, handles systems with compassion (strong fairness) assumptions, or extends the work to infinite-state systems. Our work on Invisible Invariants (for overview see [20]) can be viewed as addressing the first and third points above for the case of invariance properties. The extension of this work to liveness properties ([5]) assumes only justice-like fairness, and the liveness proofs often require some human guidance.

In this paper we show how to obtain deductive proofs of response properties, from symbolic model checking, even in the presence of compassion (strong fairness), in a fully automatic manner. Response properties are the most common liveness properties. Formally, they are properties of the form □(p → ◇ q) where p and q are past formulas (i.e., formulas that contain no future operators). In Section 3 we quote a new proof rule ([17]) for response properties for systems with compassion. As is usually the case with deductive rules, the proof rule requires some auxiliary constructs. In Section 4 we show two main methods that automatically obtain these constructs from symbolic model checking. To simplify the presentation, we illustrate the methods on examples in which p and q are restricted to state formulas, i.e., formulas that contain no temporal operators. The generalization from state formulas to past formulas is straightforward and can be performed by the introduction of temporal testers as shown in [6] and [18]. The first algorithm we present produces a "flat" ranking, that is, each predicate (set of states) is assigned a natural number. Flat ranking suffices for finite-state systems. However, it is not obvious how to utilize flat ranking for the analysis of infinite-state systems.
The second algorithm we present produces lexicographical ranking, which, as experience has shown, is easier to generalize to parameterized and heap-manipulating
systems. Both algorithms, obviously, produce "concrete" ranking. In Section 5 we show how to apply this second algorithm that produces lexicographical ranking to systems that are both predicate- and ranking-abstracted. Once auxiliary constructs are obtained, we may still wish to confirm that the premises of the deductive proof rule are met, which is achievable by a theorem prover, or a sufficiently powerful constraint solver. Our work applies to systems whose progress can be achieved only by fairness assumptions¹.

Related Work: [16] introduces the concept of proof generation as a complementary stage of model checking using Generalized Büchi automata and general proof rules. Independently, Namjoshi ([13]) has shown a proof system for the μ-calculus, based on alternating tree automata and parity games. There, LTL is treated as a special case of CTL∗ using ∀-automata, and fairness is recommended to be incorporated into the property. In [7] Kupferman and Vardi present a novel automata emptiness checking algorithm, which is subsequently exploited to produce a verification certificate that is checkable by external proof checkers. All of these methods are applied to propositional properties of finite-state systems and produce proofs (or certificates) that are propositional in nature. None of them can produce a proof that includes a ranking function over an unbounded domain. Namjoshi in [14] realizes the need to translate a proof of a finite-state abstraction of an infinite-state system. He refers to this proof concretization process as lifting of the proof. In this process, he takes into account the need to deal with ranking functions. The limitation of his method is that the lifting of ranking functions necessarily preserves the range of the functions. Starting from a finite-state system, the range of the abstract ranking functions and, therefore, the resulting range of the concrete ranking functions is necessarily bounded. In comparison, the methods presented here extract ranking functions over unbounded domains from their finitary representation as compassion requirements. The work closest in spirit to ours, which motivated the current work, is our work in [1,2], where we use a two-pass process: one to verify an abstract system, and another to derive ranking for a concrete system. However, the work there suffers not only from some redundancy, and, therefore, inefficiency, but also assumes that the only compassion requirements are introduced by abstraction, thus disallowing any "native" compassion. Both the inefficiency and restriction on native fairness are removed in this current work.
2 The Formal Framework

As a computational model, we take a fair discrete system (FDS) D = ⟨V, Θ, ρ, C⟩, where
Since we allow an idle step from every state, progress requires explicit fairness.
• ρ(V, V ) — The transition relation: An assertion, relating the values V of the variables in state s ∈ Σ to the valuesV in an D-successor state s ∈ Σ. For a subset U ⊆ V we define pres(U ) as v∈U (v = v). We assume that the system can always idle, that is, we assume that ρ has the disjunct pres(V ). • C — A set of compassion (strong fairness) requirements: Each compassion requirement is a pair p, q of state assertions; A computation should include either only finitely many p-states, or infinitely many q-states. Note that we do not include “justice” in the system. A justice requirement q, that indicates that a computation must include infinitely many q-states, can be expressed by the compassion requirement 1, q (where “1” denotes true). In previous treatment of FDS ’s, we dealt directly only with justice properties. E.g., in [16,15] we allowed only for justice requirements, and in [5] we translated compassion into justice. Since here we deal with compassion directly, and since justice is a special case of compassion, we chose not to include justice explicitly in the system. For an assertion ψ, we say that s ∈ Σ is a ψ-state if s |= ψ. A computation of an FDS D is an infinite sequence of states σ : s0 , s1 , s2 , ..., satisfying the requirements: • Initiality: s0 is initial, i.e., s0 |= Θ. • Consecution: For each = 0, 1, ..., state s+1 is a D-successor of s . I.e., s , s+1 |= ρ(V, V ) where, for each v ∈ V , we interpret v as s [v] and v as s+1 [v]. • Compassion: for every p, q ∈ C, either σ contains only finitely many occurrences of p-states, or σ contains infinitely many occurrences of q-states. Example 1. Consider the Program MUX - SEM in Fig. 1, which is a trivial mutual exclusion protocol for N processes using a (single) semaphore.
n i=1
local y : boolean init y = 1 ⎤ ⎡ loop ⎡ forever do ⎤ ⎢ : NonCritical ⎥ ⎢ ⎢ 1 ⎥⎥ 2 : request y P [i] :: ⎢ ⎥⎥ ⎢ ⎢ ⎦⎥ ⎦ ⎣ ⎣ 3 : Critical 4 : release y
Fig. 1. Mutual Exclusion by Semaphores. Program MUX - SEM
For a fixed N , we define the following FDS: • V = {y} ∪ {πi : i = 1, . . . , N }, where πi is the control location of process i; N • Θ is the assertion y = 1 ∧ i=1 πi = 1; • The transition relation consists of a disjunction of 4N transitions, one for each process and location, as well asthe idle transition that leaves all variables intact, N i.e., the transition y = y ∧ i=1 πi = πi . Forexample, the 2nd transition of process i is πi = 2 ∧ y = 1 ∧ πi = 3 ∧ y = 0 ∧ j=i πj = πj ; • There are 3N compassion requirements, one for each process i and locations 2..4. The compassion requirement for location 2 is C2i : πi = 2 ∧ y, πi = 2, which
Proving the Refuted: Symbolic Model Checkers as Proof Generators
225
states that if process i is infinitely often in location 2 and the semaphore is available, then process i should move away from location 2 infinitely many times. The other compassion requirements are C3i : 1, πi = 3 (indicating the process i cannot stay forever in the critical section) and C4i : 1, πi = 4, indicating that process i cannot stay forever in location 4. Note that there is no fairness assumption associated with location 1, since a process can remain non-critical forever.
3 A Deductive Rule for Proving Response Properties In Fig. 2 we present proof rule FAIR -R ESPONSE of [17] that establishes the response property p =⇒ ½ q for an FDS D. Previous versions of this rule, such as the one presented in [11] were recursive in the sense that some of the premises of the rule were themselves response properties. This is the first version of a flat rule in which all the premises are first order and only the conclusion is temporal.
Rule FAIR -R ESPONSE For a well-founded domain A : (W, ), assertions p, q, ϕ1 , . . . , ϕn , compassion requirements (p1 , q1 ), . . . , (pn , qn ) ∈ C, and ranking functions Δ1 , . . . , Δn where each Δi : Σ → W R1. p =⇒ q ∨ n j=1 (pj ∧ ϕj ) For each i = 1, . . . , n, R2. pi ∧ ϕi ∧ ρ =⇒ q ∨ n j=1 (pj ∧ ϕj ) =⇒ q ∨ (ϕi ∧ Δi = Δi ) ∨ n R3. ϕi ∧ ρ j=1 (pj ∧ ϕj ∧ Δi Δj ) =⇒ ¬qi R4. ϕi p =⇒ ½ q Fig. 2. Deductive rule FAIR -R ESPONSE
The soundness of the rule is established in [17]. The rule calls for a well founded domain (W, ) (so that there are no infinitely decreasing -chains over W ), a list of compassion requirements p1 , q1 , . . . , pn , qn ∈ C, and for each i ∈ [1..n], an auxiliary assertion ϕi and a mapping (ranking) Δi from the states into W . Premise R1 establishes that any source p-state, either already satisfies q, or satisfies one of the ϕj ’s (together with its corresponding pj ). Premises R2–R4 establish that, for every i ∈ [1..n], the following all hold: 1. every successor of a (ϕi ∧pi )-state leads either into q, or into a (pj ∧ϕj )-state (R2), for some j ∈ [1..n]; 2. every ϕi successor leads either to a q-state, or to a ϕi state with the same Δi rank, or to some (pj ∧ ϕj )-state whose Δj -rank is lower than the Δi -rank of the originating state (R3); 3. Every ϕi -state violates q.
Example 2. Consider Program MUX-SEM of Example 1. Suppose we wish to verify the response property ψ : p =⇒ ◇ q, where p : π1 = 2 and q : π1 = 3. Assume we have
at hand the invariant ϕ : (y + Σi=1..N (πi ∈ {3, 4})) = 1 (i.e., the mutual exclusion property, where πi ∈ {3, 4} is interpreted as 0 if it is false, and 1 otherwise). To prove ψ for a fixed N using rule FAIR-RESPONSE, we can choose the following constructs, where (ℓ, i) identifies the relevant compassion requirement, and is used to subscript its corresponding assertion ϕℓ,i and ranking Δℓ,i.

  (ℓ, i)                        ϕℓ,i                  Δℓ,i
  (2, 1)                        π1 = 2                0
  (3, i)  (i = 2, . . . , N)    π1 = 2 ∧ πi = 3       2
  (4, i)  (i = 2, . . . , N)    π1 = 2 ∧ πi = 4       1
4 A Symbolic Computation of the Auxiliary Constructs

Rule FAIR-RESPONSE calls for auxiliary constructs: a well-founded domain, a list of compassion requirements, and for each requirement in the list an assertion and a ranking function. Next we present an algorithm that extracts a deductive proof according to rule FAIR-RESPONSE from a successful model checking verification of a response property p =⇒ ◇ q. Model checking of the property can be described by the following algorithm:

  Let X := reachable_D ∧ (p ∧ ¬q) ◦ (ρ ∧ ¬q′)*
  Fix(X)
      Let X := X ∧ ⋀⟨r,t⟩∈C (¬r ∨ (X ∧ ρ)* ◦ (X ∧ t))
  end – Fix(X)
  Return (X = ∅)
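Read over explicit sets of states rather than BDDs, the computation looks as follows (a sketch of ours; reachable is the set of reachable states, succ(s) the set of ρ-successors of s, compassion a list of pairs of state predicates, and p, q state predicates):

  def check_response(reachable, succ, compassion, p, q):
      # Returns True iff p => <> q holds, i.e. iff the final X is empty.
      # X := states reachable from a reachable (p /\ ~q)-state via q-free steps
      X = {s for s in reachable if p(s) and not q(s)}
      frontier = set(X)
      while frontier:
          new = {t for s in frontier for t in succ(s) if not q(t)} - X
          X |= new
          frontier = new

      # Fix(X): for every <r, t> in C, prune the r-states of X that have
      # no X-path leading to a t-state.
      changed = True
      while changed:
          changed = False
          for (r, t) in compassion:
              good = {s for s in X if t(s)}        # backward closure within X
              grew = True
              while grew:
                  grew = False
                  for s in X - good:
                      if any(u in X and u in good for u in succ(s)):
                          good.add(s)
                          grew = True
              bad = {s for s in X if r(s) and s not in good}
              if bad:
                  X -= bad
                  changed = True
      return not X

With the explicit MUX-SEM encoding sketched after Example 1, calling check_response with p = (π1 = 2) and q = (π1 = 3) should return True, mirroring Example 3 below.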
Where reachable D is the set of reachable states of the FDS D. Thus, the algorithm places in X the set of all states that are reachable from a reachable p-state via a q-free path. Then it successively removes from X all r-states that do not initiate an X-path leading to a t-state, for some compassion requirement r, t ∈ C. Finally, the property p =⇒ ½ q is D-valid iff the final value of X is empty. 4.1 A Symbolic Extraction of “Flat Ranking” Fig. 3 presents an algorithm for extracting the necessary proof constructs for a successful application of FAIR -R ESPONSE. The algorithm computes a value d and for every i ∈ [0..d], it identifies a compassion requirement Fi ∈ C and constructs an assertion ϕi and a ranking function Δi : Σ → [0..d]. Line 4 computes in ψ the set of all X-states from which there is no X-path leading to a t-state. Line 5 checks whether there exists an r-state belonging to ψ. If at least one such state is found then we have discovered a new assertion ϕd . In line 7 we define ϕd to consist of all states reachable from an rstate by a ψ-path. In lines 8 and 9 we define Fd and Δd to be r, t and d, respectively. In
Proving the Refuted: Symbolic Model Checkers as Proof Generators
227
Let X := reachable D ∧ (p ∧ ¬q) ◦(ρ ∧ ¬q )∗ Let d := −1 Fix(X) ⎤ ⎡ 3 : Forall r, t ∈ C do ⎤ ⎡ ⎢ 4 : Let ψ := X ∧ ¬((X ∧ ρ)∗ ◦(X ∧ t)) ⎥ ⎥ ⎢ ⎢ ⎥⎥ ⎢ 5 : If ψ ∧ r = 0 then ⎢ ⎢ ⎤ ⎥⎥ ⎡ ⎢ ⎥⎥ ⎢ 6 : Let d := d + 1 ⎥⎥ ⎢ ⎢ ⎢ ⎢ ⎢ 7 : Let ϕd := (ψ ∧ r) ◦(ρ ∧ ψ )∗ ⎥ ⎥ ⎥ ⎢ ⎢ ⎥ ⎥⎥ ⎢ ⎢ ⎢ ⎥ ⎥⎥ ⎢ 8 : Let Fd := r, t ⎢ ⎢ ⎥ ⎥⎥ ⎢ ⎣ ⎣ ⎦ ⎦⎦ ⎣ 9 : Let Δd := d 10 : Let X := X ∧ ¬(ψ ∧ r) 11 : If X = ∅ then report “fail” 0: 1: 2:
Fig. 3. Symbolic Extraction: Flat Ranking
In line 10, all states satisfying ψ ∧ r are removed from X. As before, if at the end X is not empty, then the property is not satisfied.

Example 3. Consider Program MUX-SEM of Example 1 and the response property (π1 = 2) =⇒ ◇ (π1 = 3). Initially, X is the set of pending states, i.e., the reachable states where π1 = 2, denoted by pend. Suppose that the first compassion requirement considered (in the loop of lines 3..10) is ⟨π1 = 2 ∧ y, π1 = 3⟩. Then, when ψ is first computed (in line 4), it is pend, and after the first iteration of the innermost loop, ϕ1 is pend ∧ y = 0. Suppose that the next compassion requirements are C4,2, . . . , C4,N. Then, for each i ∈ [2..N], the ith iteration of the innermost loop computes in ψ the states where π1 = 2 and at most one process indexed by i ∈ [2..N] is at location 4, or at most one process indexed by i ∈ [2..N] is at location 3. Similarly, at the end of the iteration, the set of states where πi = 4 is removed from X. Thus, at the end of the Nth iteration, X is the set of states where π1 = 2, no process is at location 4, and a single process is at location 3. Note that the order in which the compassion requirements appear is of no relevance (though the order assumed simplifies the representation) – at the end of each iteration, the states removed from X are those where πi = 4 (for some i > 1), and, since there is at most one process at location 4, the states removed are mutually disjoint. In fact, after the first iteration, X consists of N − 1 trees, in each of which a process i, i = 2, . . . , N, is at location 3 or 4, while process 1 is at location 2, and all other processes are at locations 1..2. In Example 2 this was succinctly captured by assigning all such states rank 1, while here their ranks are 2, . . . , N. Assume that the next N − 1 compassion requirements are C3,2, . . . , C3,N. Then, for each i = 2, . . . , N, the N + (i − 1)st iteration of the innermost loop computes in ψ the states where π1 = 2 and πi = 3. Similarly, at the end of the iteration, the set of states where πi = 3 is removed from X. Thus, at the end of the N + (i − 1)st iteration, X is the set of states where π1 = 2, no process is at location 4, and a single process whose index is in i + 1, . . . , N is at location 3. By the end of the (2N − 1)st iteration, X is empty.
The ranking obtained is summarized in:

  (ℓ, i)                           ϕℓ,i                     Δℓ,i
  (2, 1)                           π1 = 2                   0
  (3, i)  (i = 2, . . . , N)       π1 = 2 ∧ πi = 3          N + i − 1
  (4, i)  (i = 2, . . . , N)       π1 = 2 ∧ πi = 4          i − 1
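For illustration, the extraction of Fig. 3 can be prototyped in the same explicit-state style as before (a toy rendering of our own, using sets in place of BDDs; post and reach_within are as in the earlier sketch and are repeated so the snippet stands alone):

  def post(S, succ):
      return {t for s in S for t in succ.get(s, ())}

  def reach_within(X, succ, targets):
      reach = set(targets) & X
      while True:
          new = {s for s in X if succ.get(s, set()) & reach} - reach
          if not new:
              return reach
          reach |= new

  def extract_flat_ranking(reachable, succ, p, q, compassion):
      # pending states: reachable p-states and their q-free successors (line 0)
      X = (reachable & p) - q
      frontier = X
      while frontier:
          frontier = (post(frontier, succ) & reachable) - q - X
          X |= frontier
      d, F, phi, delta = -1, [], [], []
      changed = True
      while changed:                                   # the Fix(X) loop (line 2)
          changed = False
          for r, t in compassion:                      # line 3
              psi = X - reach_within(X, succ, t)       # line 4
              if psi & r:                              # line 5
                  d += 1                               # line 6
                  block = set(psi & r)                 # line 7: psi-closure of (psi & r)
                  grow = block
                  while grow:
                      grow = (post(grow, succ) & psi) - block
                      block |= grow
                  phi.append(block); F.append((r, t)); delta.append(d)   # lines 7-9
                  X -= psi & r                         # line 10
                  changed = True
      return (F, phi, delta) if not X else None        # line 11: None signals "fail"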
The “flat ranking” obtained is less clear than the one obtained in Example 2, since the algorithm of Fig. 3 iterates over the compassion requirements, assigning a different rank at each step. The result is both harder to read than the manual ranking and hard to generalize to parameterized systems. We remedy this in the next subsection.

4.2 Extraction of a Lexicographic Ranking

The ranking produced in Fig. 3 suffices for “real” finite-state systems. Often, however, we wish to find rankings for infinite-state systems. Two obvious applications are heap systems and parameterized systems. For the former, we can combine predicate
abstraction and ranking abstraction (see [2]) to obtain a finite-state system that can be model-checked. However, it is not a “flat” ranking that is of interest, but rather a concrete ranking, which may include some variables whose range is infinite. Parameterized systems are just like our ongoing example, except that, instead of fixing N, we wish to verify a response property for every instantiation of N. It is our experience that for systems of both types, a lexicographic ranking is called for. Fig. 4 presents such an algorithm.

  0 :  m := 0
  1 :  pend := reachable_D ∧ ((p ∧ ¬q) ∘ (ρ ∧ ¬q′)*)
  2 :  rank_lex(pend, ())

  where procedure rank_lex is defined by:

  procedure rank_lex(subg, prefix)
    d : integer
    X : assertion
    3 :  d := −1
    4 :  X := subg
    5 :  Fix(X)
    6 :    Forall ⟨r, t⟩ ∈ C do
    7 :      ψ := X ∧ ¬((X ∧ ρ)* ∘ (X ∧ t))
    8 :      If ψ ∧ r ≠ 0 then
    9 :        Let d := d + 1
    10 :       m := m + 1
    11 :       ϕm := (ψ ∧ r) ∘ (ρ ∧ ψ′)*
    12 :       Fm := ⟨r, t⟩
    13 :       Δm := prefix ∗ (d)
    14 :       X := X ∧ ¬ϕm
    15 :       rem := ϕm ∧ ¬r
    16 :       If rem ≠ 0 then
    17 :         rank_lex(rem, prefix ∗ (d))
    18 : If X ≠ ∅ then report “fail”

Fig. 4. A ranking process, yielding a lexicographic ranking

The main difference between the algorithms in Fig. 4 and in Fig. 3 is in lines 15–17. In these lines we check whether ϕm contains any (¬r)-states and, if so, we add these states to rem. If all states in ϕm satisfy r, then we continue with the flat ranking. Note that this is the case for all justice requirements since, for these, r = 1. On the other hand, if ϕm contains any (¬r)-states, we invoke rank_lex recursively, asking it to rank all the states in ϕm ∧ ¬r, using prefix ∗ (d) as a prefix to all of the subsequent rankings. Note that the recursive call in line 17 only executes when Fm is a “real” compassion (rather than a justice) requirement. We give examples for application of the algorithm in the next section.

Note that the algorithm in Fig. 4 increments d (the last component of the ranking) whenever ϕm is “removed” from X. An alternative is to update X, and d, only after all the possible ϕC's, for every compassion requirement C, are computed. Fig. 5 presents such an algorithm. The advantage of such an algorithm is for parameterized systems, where similar compassion requirements that belong to different processes may have equal ranking. We can thus replace procedure rank_lex with that of Fig. 5.

  procedure rank_lex(subg, prefix)
    d : integer
    X : assertion
    0 :  d := −1
    1 :  X := subg
    2 :  Fix(X)
    3 :    Let accu_φ := 0
    4 :    Let rem := 0
    5 :    Let d := d + 1
    6 :    Forall ⟨r, t⟩ ∈ C do
    7 :      ψ := X ∧ ¬((X ∧ ρ)* ∘ (X ∧ t))
    8 :      If ψ ∧ r ≠ 0 then
    9 :        m := m + 1
    10 :       ϕm := (ψ ∧ r) ∘ (ρ ∧ ψ′)*
    11 :       Fm := ⟨r, t⟩
    12 :       Δm := prefix ∗ (d)
    13 :       rem := rem ∨ (ϕm ∧ ¬r)
    14 :     accu_φ := accu_φ ∨ ¬ϕm
    15 :   X := X ∧ ¬accu_φ
    16 :   If rem ≠ 0 then
    17 :     rank_lex(rem, prefix ∗ (d))
    18 : If X ≠ ∅ then report “fail”

Fig. 5. An alternative ranking process, yielding a lexicographic ranking
Example 4. Consider Program MUX-SEM of Example 1 and the response property (π1 = 2) =⇒ ◇ (π1 = 3). Applying the algorithm of Fig. 5, we obtain assertions ϕ(ℓ,i) and rankings Δ(ℓ,i) similar to those of Example 2. However, the rankings are now:

  Δ(2,1) = 0,    Δ(3,i) = (0, 2),    Δ(4,i) = (0, 1)
The difference in the ranking is having the prefix “0” as the first component of the second and third case: In the first execution of the loop that starts in line 6, only the compassion requirement C2,1 yields a non-0 value for ψ (computed in line 7). Consequently, upon exiting the loop, rem = pend ∧ y = 0 and accu_φ = ¬pend. Thus, there is a recursive call with (pend ∧ y = 0, (0)). In this recursive call, each compassion requirement C4,i gets a ranking (0, 1), contributes πi = 4 to accu_φ, and leaves rem empty. Thus, upon exit, X = pend ∧ y = 0 ∧ ⋀_{i=1}^{N} (πi ≠ 4). Since a fixpoint is not reached, the loop in line 6 is executed again with d = 2. Then, each compassion requirement C3,i gets ranked (0, 2), contributes nothing to rem, and contributes πi = 3 to accu_φ. A fixpoint is then reached and the procedure terminates with the ranking above.
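A recursive toy prototype along the lines of Fig. 4 (again an explicit-state illustration of our own, not the symbolic implementation) shows how each recursive call extends the rank prefix, producing lexicographic ranks as tuples:

  # Toy explicit-state rendering of procedure rank_lex (Fig. 4). Ranks are tuples;
  # results are appended to the lists F, phi, delta supplied by the caller.
  def rank_lex(subg, succ, compassion, F, phi, delta, prefix=()):
      X = set(subg)
      d = -1
      changed = True
      while changed:                                   # Fix(X)
          changed = False
          for r, t in compassion:
              reach = X & t                            # states of X with an X-path to t
              while True:
                  new = {s for s in X if succ.get(s, set()) & reach} - reach
                  if not new:
                      break
                  reach |= new
              psi = X - reach                          # line 7
              if psi & r:                              # line 8
                  d += 1                               # line 9
                  block = set(psi & r)                 # line 11: psi-closure of (psi & r)
                  grow = block
                  while grow:
                      grow = {u for s in grow for u in succ.get(s, ()) if u in psi} - block
                      block |= grow
                  phi.append(block)                    # line 11
                  F.append((r, t))                     # line 12
                  delta.append(prefix + (d,))          # line 13: prefix * (d)
                  X -= block                           # line 14
                  rem = block - r                      # line 15
                  if rem:                              # lines 16-17: recurse on the (not r)-states
                      rank_lex(rem, succ, compassion, F, phi, delta, prefix + (d,))
                  changed = True
      if X:
          raise ValueError("property fails")           # line 18

Calling rank_lex on the pending set, computed as in the earlier sketches, corresponds to lines 0–2 of Fig. 4.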
5 Generalization from Finite Ranking

In [2] we introduced “ranking abstraction” as a companion to predicate abstraction. The motivation was to prove termination properties (which can be viewed as response properties) of programs that manipulate heaps. Roughly speaking, ranking abstraction is used to prove liveness properties of (not necessarily concurrent) systems by avoiding comprehensive ranking functions and, instead, instrumenting the system with simple monitors, each recording the increase/decrease of some quantity in a well-founded domain. The most obvious example is monitoring a variable whose range is the naturals. However, it is also possible to monitor less obvious entities, such as the length of the shortest route between the root and a certain node in a heap. Associated with each monitor is a monitor variable that records, at each step of the system, the increase or decrease in the monitored quantity. Furthermore, a monitor introduces a compassion requirement into the system, stating that if the monitored quantity is decreased infinitely many times, it should also be increased infinitely many times. This constraint captures the well-foundedness of the monitored domain. The monitors are non-interfering in that they do not alter the system behavior, or, alternatively, the projection of a computation of a monitored system onto the non-monitoring variables is a computation of the non-monitored system. They are synchronously composed with the original system. For space reasons, we omit the formal description of the systems as well as the monitor instrumentation process. We consider a heap-less example, taken from [2], that demonstrates the main idea:

Example 5. Consider program NESTED-LOOPS in Fig. 6(a). In this program, the statements “x := ?” and “y := ?,” in lines 0 and 2 respectively, denote random assignments of arbitrary positive integers to variables x and y. We omit explicit mention of the obvious fairness properties ⟨1, π ≠ ℓ⟩, for ℓ = 0..5. Fig. 6(b) demonstrates a possible choice of monitors, by showing the system after it has been synchronously composed with monitors for the quantities x and y, with monitor variables decx and
  x, y : natural init x = 0, y = 0

  0 : x := ?
  1 : while x > 0 do
        2 : y := ?
        3 : while y > 0 do
              4 : y := y − 1
        5 : x := x − 1
  6 :

(a) Program NESTED-LOOPS
  x, y : natural init x = 0, y = 0
  decx, decy : {−1, 0, 1}
  compassion (decx > 0, decx < 0), (decy > 0, decy < 0)

  0 : (x, decx, decy) := (?, sign(x − x′), 0)
  1 : while x > 0 do
        2 : (y, decx, decy) := (?, 0, sign(y − y′))
        3 : while y > 0 do
              4 : (y, decx, decy) := (y − 1, 0, 1)
        5 : (x, decx, decy) := (x − 1, 1, 0)
  6 :

(b) Program AUGMENTED-NESTED-LOOPS
Fig. 6. Program NESTED-LOOPS and Its Augmentation
decy, respectively. The multiple assignment statements in the program encode the synchronous composition. The associated compassion requirements (decx > 0, decx < 0) and (decy > 0, decy < 0) capture the fact that variables x and y cannot decrease infinitely often in a computation (without violating type correctness), unless they increase infinitely often.

In order to verify response properties of infinite-state systems, in [2] we advocate composing the system with monitors, then predicate-abstracting the monitored system, and finally model checking the resulting system for termination. Since the abstraction preserves the monitoring variables (decx and decy in Example 5) and compassion requirements, the notion of well-foundedness of the concrete data domain is preserved in the resulting finite-state system. The precise choice of monitors, as well as abstraction predicates, is left to an independent process of abstraction refinement, also described in [2].

Example 6. Consider program NESTED-LOOPS in Example 5. Suppose we wish to prove that the program always terminates, specified by the response property at_0 =⇒ ◇ at_6, where the assertion at_k stands for π = k. Since the decrements of x and y are obvious candidates for monitoring, we use the monitor composition in Fig. 6(b), in which NESTED-LOOPS is synchronously composed with two monitors, one for each of x and y. Using the predicate base X : x > 0, Y : y > 0, we obtain the program in Fig. 7. Note that, since locations 1 and 3 are while statements, they implicitly set Decx and Decy to 0; thus, in locations 2, 4, and 5, Decx = Decy = 0. Once model-checked, the program can be shown to satisfy the property ◇ (π = 6). Fig. 8(a) shows the program states. In Fig. 8(b) we present a lexicographic ranking, auxiliary assertions, and their associated compassion requirements obtained for the program by application of the ranking process of Fig. 4. There, we use Cv to denote the compassion requirement (Decv > 0, Decv < 0), and Cℓ the compassion (rather, justice) requirement (1, π ≠ ℓ). To emphasize the nesting structure of the assertions, we label them by lexicographic indices corresponding to their ranks.
  X, Y : {0, 1} init Y = 0, X = 0
  Decx : {−1, 0, 1}
  Decy : {−1, 0, 1}
  compassion (Decx > 0, Decx < 0), (Decy > 0, Decy < 0)

  0 : (X, Decx, Decy) := (1, −1, 0)
  1 : while X do
        2 : (Y, Decx, Decy) := (1, 0, −1)
        3 : while Y do
              4 : (Y, Decx, Decy) := ({0, 1}, 0, 1)
        5 : (X, Decx, Decy) := ({0, 1}, 1, 0)
  6 :

Fig. 7. Program NESTED-LOOPS-ABSTRACT
(a) States of Abstract Nested Loops

  Index    (π : X, Y, Decx, Decy)
  1        (1 : 0 0  1  0)
  2        (5 : 1 0  0  0)
  3        (3 : 1 1  0  1)
  4        (4 : 1 1  0  0)
  5        (3 : 1 0  0  1)
  6        (3 : 1 1  0 −1)
  7        (2 : 1 0  0  0)
  8        (1 : 1 0  1  0)
  9        (1 : 1 0 −1  0)
  10       (0 : 0 0  0  0)

(b) Concrete Ranking, Assertions, and Compassion for Abstract Nested Loops

  ϕi               Δi          Compassion
  ϕ0 : 1           0           C1
  ϕ1 : 2..8        1           Cx
  ϕ1.0 : 2         (1, 0)      C5
  ϕ1.1 : 3..5      (1, 1)      Cy
  ϕ1.1.0 : 4       (1, 1, 0)   C4
  ϕ1.2 : 6         (1, 2)      C3
  ϕ1.3 : 7         (1, 3)      C2
  ϕ2 : 9           2           C1
  ϕ3 : 10          3           C0

Fig. 8. Results from Ranking Abstract Nested Loops
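As a concrete illustration of the augmentation in Fig. 6(b) (a sketch of our own, not the paper's instrumentation machinery), the following Python program executes NESTED-LOOPS once and updates the two monitor variables exactly as the augmented program does; the random bound, the seed, and the trace format are arbitrary choices made only so that a run can be printed.

  import random

  # One run of AUGMENTED-NESTED-LOOPS: dec_x / dec_y record, at every step, whether
  # x / y just decreased (+1), increased (-1), or stayed unchanged (0).
  def sign(v):
      return (v > 0) - (v < 0)

  def run_augmented_nested_loops(seed=0, bound=5):
      rng = random.Random(seed)
      x = y = 0
      trace = []
      new_x = rng.randint(1, bound)                 # location 0: x := ?
      x, dec_x, dec_y = new_x, sign(x - new_x), 0
      trace.append((0, x, y, dec_x, dec_y))
      while x > 0:                                  # location 1
          new_y = rng.randint(1, bound)             # location 2: y := ?
          y, dec_x, dec_y = new_y, 0, sign(y - new_y)
          trace.append((2, x, y, dec_x, dec_y))
          while y > 0:                              # location 3
              y, dec_x, dec_y = y - 1, 0, 1         # location 4
              trace.append((4, x, y, dec_x, dec_y))
          x, dec_x, dec_y = x - 1, 1, 0             # location 5
          trace.append((5, x, y, dec_x, dec_y))
      return trace                                  # location 6: termination

  # The compassion requirements (dec_x > 0, dec_x < 0) and (dec_y > 0, dec_y < 0)
  # exclude infinite runs in which a quantity is recorded as decreasing infinitely
  # often without also being recorded as increasing infinitely often.
  print(run_augmented_nested_loops())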
Once the model checker verifies that the system satisfies its liveness property, [2] presents an algorithm to produce a concrete ranking for the original program and the response property. The approach there, however, suffers from two weaknesses – the ranking is obtained in two passes, one that verifies the liveness property, and another that performs the extraction². The second pass does not obtain information from the model checking pass other than confirmation that it is successful. Rather, it generates a ranking on its own. Another weakness of the approach is that extraction of the concrete ranking heavily depends on the monitors as imposing the only “real” (non-justice) compassion requirements. Here we propose an approach that can deal with native compassion, as well as compassion introduced by the monitors. The symbolic algorithm for extracting auxiliary constructs for FAIR-RESPONSE can be extended to produce a concrete ranking. This solves the two issues mentioned above – the concrete ranking is obtained in a single pass, and native compassion properties are treated as well.
² Dennis Dams has pointed out this inefficiency, which motivated the current work.
To obtain a concrete ranking, we perform the following transformation for each assertion ϕα corresponding to a compassion requirement (Decv > 0, Decv < 0):

• Let D be the maximal d such that there exists a rank of the form (α, d, β) for some β. Define D1 = D + 1;
• Remove from the set of assertions the assertion ϕα, and replace it by the assertion ϕα.D1 : ϕα ∧ Decv > 0, with the rank (α, D1), and associate with it the compassion requirement (1, ¬∃Vπ : ϕα ∧ Decv > 0), where Vπ = V − {π}. This compassion requirement, which is a justice requirement, requires that the system does not remain in states satisfying ϕα ∧ Decv > 0;
• Transform any rank of the form (α, β) to become (α, v, β).

Example 7. Consider again the nested loop program with the ranking in Fig. 8(b) and the compassion requirement Cx. Hence, we have α = 1 and D1 = 4. Thus, we replace ϕ1 with the assertion ϕ1.4 : 8. Similarly, we replace the assertion ϕ1.1 with the assertion ϕ1.1.1 : 3, 5. We then replace any “1” prefix of a rank with “1, x”, and any “1, x, 1” prefix with “1, x, 1, y”. The new associated compassion requirements and the new concrete ranking are shown in Fig. 9.

  ϕα                Δα                 Cα
  ϕ0 : 1            0                  C1
  ϕ1.0 : 2          (1, x, 0)          C5
  ϕ1.1.0 : 4        (1, x, 1, y, 0)    C4
  ϕ1.1.1 : 3, 5     (1, x, 1, y, 1)    C3
  ϕ1.2 : 6          (1, x, 2)          C3
  ϕ1.3 : 7          (1, x, 3)          C2
  ϕ1.4 : 8          (1, x, 4)          C1
  ϕ2 : 9            2                  C1
  ϕ3 : 10           3                  C0
Fig. 9. Concrete constructs
6 Conclusion and Future Work

We have presented symbolic algorithms that extract the auxiliary constructs – assertions and ranking functions – necessary to obtain deductive proofs of response properties. The first algorithm provides a “flat” ranking and its main use is for (concrete) finite-state systems. The others produce lexicographic rankings and are more suitable for parameterized or infinite-state systems, where a “flat” ranking gives little information about the general case. There are two algorithms to generate lexicographic rankings, one being more “sequential,” assigning separate rankings to each assertion/compassion pair, and the other being more “concurrent,” attempting to assign the same ranking to several assertion/compassion pairs. The former seems to be more suitable for obtaining concrete rankings for infinite-state systems that are obtained from finite-state abstractions (using both predicate- and ranking-abstraction), while the latter seems to be more suitable for obtaining “parameterized” rankings for parameterized systems, obtained from small instantiations.
Once the auxiliary constructs for the application of the deductive rules are obtained, a theorem prover can be used to validate the premises of the proof rule. Since the number of premises dictated by the rule is only linear in the number of compassion requirements in the system (see Fig. 2), the complexity of the validation effort is completely determined by the assertional language used to define abstraction predicates and monitors. When the system is “truly” finite-state, independent validation by a theorem prover is not necessary, since the algorithm that produces the constructs also checks for the correctness of the system. We described the symbolic algorithm for obtaining concrete constructs from abstraction of (concrete) infinite-state systems. We are currently working on heuristics for generating parameterized rankings from small instantiations of parameterized systems. We suspect that some variant of the “project & generalize” heuristics we used in our work on invisible constructs will allow us to obtain such rankings.
References

1. Balaban, I.: Shape Analysis by Augmentation, Abstraction, and Transformation. PhD thesis, New York University, New York (May 1987)
2. Balaban, I., Pnueli, A., Zuck, L.D.: Modular ranking abstraction. Int. J. Found. Comput. Sci. 18(1), 5–44 (2007)
3. Clarke, E., Emerson, E.: Design and synthesis of synchronization skeletons using branching time temporal logic. In: Kozen, D. (ed.) Logic of Programs 1981. LNCS, vol. 131, pp. 52–71. Springer, Heidelberg (1982)
4. Emerson, E., Clarke, E.: Characterizing correctness properties of parallel programs using fixpoints. In: de Bakker, J.W., van Leeuwen, J. (eds.) ICALP 1980. LNCS, vol. 85, pp. 169–181. Springer, Heidelberg (1980)
5. Fang, Y., Piterman, N., Pnueli, A., Zuck, L.D.: Liveness with invisible ranking. Software Tools for Technology Transfer 8(3), 261–279 (2006)
6. Kesten, Y., Pnueli, A.: A Compositional Approach to CTL∗ Verification. Theor. Comp. Sci. 331(2-3), 397–428 (2005)
7. Kupferman, O., Vardi, M.: From complementation to certification. Theor. Comp. Sci. 345, 83–100 (2005)
8. Kurshan, R.: Computer Aided Verification of Coordinating Processes. Princeton University Press, Princeton (1995)
9. Lichtenstein, O., Pnueli, A.: Checking that finite-state concurrent programs satisfy their linear specification. In: Proc. 12th ACM Symp. Princ. of Prog. Lang., pp. 97–107 (1985)
10. Lichtenstein, O., Pnueli, A., Zuck, L.: The glory of the past. In: Parikh, R. (ed.) Logic of Programs 1985. LNCS, vol. 193, pp. 196–218. Springer, Heidelberg (1985)
11. Manna, Z., Pnueli, A.: Completing the temporal picture. Theor. Comp. Sci. 83(1), 97–130 (1991)
12. McMillan, K.: Symbolic Model Checking. Kluwer Academic Publishers, Boston (1993)
13. Namjoshi, K.S.: Certifying model checkers. In: Berry, G., Comon, H., Finkel, A. (eds.) CAV 2001. LNCS, vol. 2102, pp. 2–13. Springer, Heidelberg (2001)
14. Namjoshi, K.: Lifting temporal proofs through abstractions. In: Zuck, L.D., Attie, P.C., Cortesi, A., Mukhopadhyay, S. (eds.) VMCAI 2003. LNCS, vol. 2575, pp. 174–188. Springer, Heidelberg (2002)
15. Peled, D., Pnueli, A., Zuck, L.: From falsification to verification. In: Hariharan, R., Mukund, M., Vinay, V. (eds.) FSTTCS 2001. LNCS, vol. 2245, pp. 292–304. Springer, Heidelberg (2001)
16. Peled, D., Zuck, L.: From model checking to a temporal proof. In: Dwyer, M.B. (ed.) SPIN 2001. LNCS, vol. 2057, pp. 1–14. Springer, Heidelberg (2001)
17. Pnueli, A., Sa'ar, Y.: All you need is compassion. In: Logozzo, F., Peled, D.A., Zuck, L.D. (eds.) VMCAI 2008. LNCS, vol. 4905, pp. 233–247. Springer, Heidelberg (2008)
18. Pnueli, A., Zaks, A.: PSL model checking and run-time verification via testers. In: Misra, J., Nipkow, T., Sekerinski, E. (eds.) FM 2006. LNCS, vol. 4085, pp. 573–586. Springer, Heidelberg (2006)
19. Vardi, M., Wolper, P.: An automata-theoretic approach to automatic program verification. In: Proc. First IEEE Symp. Logic in Comp. Sci., pp. 332–344 (1986)
20. Zuck, L., Pnueli, A.: Model checking and abstraction to the aid of parameterized systems (a survey). Computer Languages, Systems & Structures 30(3-4), 139–169 (2004)
Dedication Dear Willem-Paul, It is impossible to summarize in few paragraphs the strong impact that you have had on my scientific work, career, and life in general, during our long and intensive acquaintance so far. This includes your high scientific standards and strong principles of full abstraction and compositionality. Unfortunately, I could not always achieve in my work these absolute ideals, but your criticism has always been constructive and inspiring. You were one of the first Computer Scientists who realized the significance of Statecharts as a specification tool for reactive systems, both pragmatically and theoretically. This led to a long, fruitful, and highly enjoyable collaboration between us which spanned more than a decade in a sequence of European Projects, and yielded many exciting scientific results. This collaboration led to the formation of extensive groups of young researchers, both at Utrecht and Kiel, which grew to become mature contributors to the application of formal methods both in academy and industry, due to your inimitable charisma as an educator and a leader. I am also indebted to you for organizing the series of Dutch REX conferences and summer schools, in which I have always been a very welcome guest and contributor. These events always represented the frontiers of research in the important subjects of concurrent verification, refinement and abstraction, real-time, and compositionality. But most importantly, I am very proud to have earned the honor and pleasure of being your friend, a relation that easily and naturally extended to friendship between our families. Another very beneficial thing I learned from you is that, as scientists, we should never take ourselves too seriously and that, sometimes, a fresh fish with crisp white wine is more important than seven theorems – of course, if we can have both, all the better. Sincerely Yours
Amir Pnueli
Meanings of Model Checking

E. Allen Emerson¹,²

¹ Department of Computer Sciences
² Computer Engineering Research Center
The University of Texas at Austin, Austin, TX 78712, USA
[email protected]
www.cs.utexas.edu/~emerson/
Abstract. Model checking was introduced in the early 1980's to provide a practical automated method for verifying concurrent systems. Model checking has had substantive impact on program verification. For the first time industrial strength systems are being verified on a routine basis. As time has progressed, the term model checking has acquired slightly different shades of meaning. In this paper we consider these variant aspects of model checking, elucidating some often overlooked and subtle distinctions. Keywords: model checking, alternative definitions, special applications.
1 Introduction
It is an understatement to note that computing systems of all kinds can and do behave incorrectly. Computer software programs, computer hardware designs, embedded systems, as well as computing systems in general are well known to exhibit errors. Working programmers may devote more than half of their time on testing and debugging in order to increase reliability. A great deal of research effort has been and is devoted to developing improved testing and simulation methods. Testing can successfully identify significant errors. Yet, remaining serious errors still afflict many computer systems including systems that are safety critical, mission critical, or economically vital. The US National Institute of Standards and Technology has estimated that programming errors cost the US economy $60B annually [Ni02]. Ensuring correct behavior of computing systems is thus a fundamental task. Given the inadequacy of testing, alternative mathematical approaches have been sought. The most promising approach depends on the fact that programs and more generally computer systems may be viewed as mathematical objects with behavior that is in principle well-determined. This makes it possible to specify using mathematical logic what constitutes the intended (correct) behavior. Then one can try to give a formal proof or otherwise establish that the program
This work was supported in part by National Science Foundation grants CCR-0098141 & CCR-020-5483 and funding from Fujitsu Labs of America.
meets its specification. This line of study has been active for about four decades now. It is often referred to as formal methods. Even formal methods are not a panacea; they cannot guarantee that we get what we really want. In principle, it is difficult or impossible to establish correctness in all aspects because such a “comprehensive correctness” specification is tantamount to what Dijkstra called pleasantness, the property of a computing system behaving as the designer desires. As the latter is an inherently pre-formal, intuitive, indeed, nebulous property, it is not well-defined mathematically. Nonetheless, practical experience has shown that the task of formalizing the requirements for a system helps clarify our intuitive preferences. As commonly understood, the model checking problem is an instance of the general program verification problem. Model checking provides an automated method for verifying concurrent (nominally) finite state systems that uses an efficient and flexible graph search, to determine whether or not the ongoing behavior described by a temporal property holds of the system's state graph. The method is algorithmic and often efficient because the system is finite state, despite reasoning about infinite behavior. If the answer is yes then the system meets its specification. If the answer is no then the system violates its specification; in practice, the model checker can usually produce a counterexample for debugging purposes. At the time of the introduction of model checking in the early 1980s, the prevailing paradigm for verification was a manual one of proof-theoretic reasoning using formal axioms and inference rules oriented towards sequential programs [Ho69]. The need to encompass concurrent programs, and the desire to avoid the difficulties with manual deductive proofs, motivated the development of model checking. In my experience, constructing proofs was sufficiently difficult that it did seem there ought to be an easier alternative. The alternative was suggested by temporal logic. Temporal logic possessed a nice combination of expressiveness and decidability. It could naturally capture a variety of correctness properties, yet was decidable on account of the “Small” Finite Model Theorem which ensured that any satisfiable formula was true in some finite model that was small. It should be stressed that the Small Finite Model Theorem concerns the satisfiability problem of (propositional) temporal logic, i.e., truth in some state graph. This ultimately led to the conception of model checking, i.e., truth in a given state graph. The remainder of the paper is organized as follows. In section 2 major variant meanings of the term model checking are described, compared, and contrasted. Specialized formulations of model checking tailored toward various classes of applications are discussed in section 3. Some general concluding remarks are given in section 4, while a special acknowledgement to Willem-Paul de Roever is given in section 5.
2 What Is Model Checking?
The field of model checking has evolved over time to encompass several related but distinct meanings, foundations, and applications. We will trace this development and discuss some of these alternatives.
2.1 Automatic Verification of Finite State Concurrent Programs
By far the most common meaning of the term model checking is the original one. It refers to an automatic method for verifying programs. Here, for specificity, let us refer to it as model checking0. Both the concept and the term were introduced by Clarke and Emerson [CE81]. Model checking0 is an automatic method for verifying whether finite-state concurrent programs meet a correctness specification in temporal logic. The idea was introduced independently by Queille and Sifakis [QS82]. Since the systems to be checked are finite state, model checking0 is not only automatic but, in principle, algorithmic; returning “Yes, the program satisfies the specification”, or “No, the program has an error”, typically with information to help identify the error. In practice, state explosion may prevent the implementation from completing its run (due to time-out or memory-overflow). Such verification by model checking0 has come to be referred to as temporal logic model checking. We should describe the original motivation for the words model and checking. The model checking0 problem is: Given the finite Kripke structure or state graph M representing a finite state concurrent program and a temporal logic formula f, determine whether M |= f. We can read M |= f as “M satisfies f”, or “in structure M specification f is true”, or simply “M models f”. We can then use a model checking0 algorithm to determine whether M meets specification f. The algorithm operates, in short, by checking whether M, viewed as a logical interpretation of f, is in fact a model of f. Assuming f is a formula of (branching) temporal logic, the algorithm can readily check it over M using the Tarski-Knaster theorem to evaluate by iteration the fixpoint characterizations of the primitive temporal operators, while compound formulas can be evaluated by recursive descent; a small illustrative sketch follows at the end of this subsection. A more recently suggested, reasonable, but historically inaccurate etymology is that one takes a program and abstracts away inessential detail. The result is a model in the sense of a model railroad car or a model in applied mathematics. One then mathematically checks the model. This sense is helpful as it provides a good perspective on current practice, but as an etymology fails to capture the original sense that the program state graph is a model, as understood in mathematical logic, of the specification formula. Intuitively, model checking0 is a method to establish that a given program meets a given specification where:
– The program defines a finite state graph M.
– M is searched for elaborate patterns to determine if the specification f holds.
– Pattern specification is flexible.
– The method is efficient in the sizes of M and, ideally, f.
– The method is algorithmic.
– The method is practical.
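For illustration (a toy sketch of our own, not drawn from any particular tool): over an explicit Kripke structure, EX is a single pre-image step and EF p is the least fixpoint of Z = p ∪ EX Z, computed by straightforward iteration in the Tarski-Knaster style; the state encoding, labels, and function names below are assumptions of the example.

  # A miniature fixpoint-style checker over an explicit Kripke structure.
  # succ: dict state -> set of successor states; label: dict state -> set of atomic props.
  def states_satisfying(prop, label):
      return {s for s, props in label.items() if prop in props}

  def ex(target, succ):
      # EX target: states with some successor in target
      return {s for s, ts in succ.items() if ts & target}

  def ef(target, succ, states):
      # EF target: least fixpoint of Z = target ∪ EX Z, by iteration
      z = set(target)
      while True:
          z_next = z | (ex(z, succ) & states)
          if z_next == z:
              return z
          z = z_next

  # Tiny example: a 3-state structure in which only state 2 is labeled "goal".
  succ = {0: {0, 1}, 1: {2}, 2: {2}}
  label = {0: set(), 1: set(), 2: {"goal"}}
  goal = states_satisfying("goal", label)
  print(ef(goal, succ, set(succ)))   # -> {0, 1, 2}: every state can reach the goal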
2.2 Why Could Model Checking be Useful?
At this point, it is worthwhile asking why anyone might believe model checking0 could possibly be useful. After all, how many programs are finite state? A bit
of thought reveals that there are quite a lot of interesting programs that are finite state, or can be viewed as finite state. Concurrent programs figure prominently here. At the least, the portion of a concurrent program that coordinates and synchronizes the various subcomponents is typically finite state. For instance, in a solution to the mutual exclusion problem the sequential code in the critical section is irrelevant to the synchronization behavior and we can cleanly factor it out. The resulting synchronization skeleton is an abstraction of the original program that captures the essence of its concurrent behavior [CE81]. Many solutions to classical synchronization problems including mutex, the readers-writers problem, and the dining philosophers problem have finite state synchronization skeletons. Additional examples include many protocols and sequential circuits. We thus have a sort of empirical justification for the potential utility of model checking: we can find many examples of finite state concurrent programs and circuits. There is also a theoretical justification. If a program can be specified using (propositional) temporal logic, then it can be realized by a finite state machine.
2.3 The Truth Problem
We have the generalized term, which we shall denote model checking1, in use to refer to the truth problem for a given logic: given a logical formula f and an interpretation I of appropriate signature for f, one writes I |= f when f is true in I, or, as we can also say, when I is a model of f. The truth relation |= for the logic is mathematically well-defined via the Tarskian definition of truth. Note that truth may not be effectively computable for a particular logic. The original model checking0 problem as defined above is a strict special case of this model checking1 problem, where the interpretations are finite graphs and the logic is temporal, thereby ensuring decidability. However, for more general logics over more general interpretations one often sees queries amounting to “is the model checking problem decidable?”. The answer may be No if model checking1 is intended.
2.4 Who Can Handle the Truth?
This model checking1 problem, while implicit in the Tarskian definition of truth, seems not to have been distilled nor much studied along with its algorithmic connotations. However, examples of model checking1 are not uncommon: For a propositional logic formula f and truth assignment I, certainly we can quickly check whether I |= f. But this instance of model checking1 does not conventionally seem to have been considered an interesting or even particularly worthwhile problem. Rather, the propositional satisfiability problem – given f, does there exist an I such that I |= f? – is the key consideration. The dual validity problem is also key. The enormous interest in propositional satisfiability is well-warranted given its archetypal role as an NP-complete problem. From about 1975 to 1985 there was tremendous, mostly theoretical, interest in devising decision algorithms for the satisfiability of various modal (e.g., S4),
dynamic (e.g., PDL, PDL+repeat), and temporal logics (CTL, LTL, etc.). The satisfiability problems were all inherently NP-hard and, in practice, at least exponential time. The problems also seemed rather difficult technically. The model checking0 problem, and its subsuming generalization, the model checking1 problem, seemed technically much simpler to begin with than the corresponding satisfiability problems. At the first couple of conferences where we introduced model checking ([CE81], [CES83]), a number of accomplished theoretically-oriented computer scientists were confused as to just what model checking0 was. It wasn't satisfiability. It wasn't validity. It was, in their view, a somewhat disconcerting novelty. The more practically-oriented computer scientists dismissed its utility rather promptly. In their view, most programs are infinite state and even for finite state systems state-explosion would surely render the method useless. So for quite a long time, model checking was viewed as something of an eccentric or quixotic quest.
2.5 The General Verification Problem
The model checking2 problem is in some instances identified with the general verification problem, which is: given a program M and a specification h, determine whether or not the behavior of M meets the specification h. Here the type of program is unrestricted, as is the specification. Formulated in terms of Turing Machines, the verification problem was considered by Turing [Tu36]. Given a Turing Machine M and the specification h that it should eventually halt (say on blank input tape), one has the halting problem, which is algorithmically unsolvable. We have that the finite state model checking0 problem is a special case of the general verification, or model checking2, problem. It is not uncommon to read, say, that “the model checking problem is undecidable for ZZZ parallel processes”.
2.6 Algorithmic Verification
We have the notion denoted by model checking3 that is often used synonymously with algorithmic verification. Obviously, there is a potential inconsistency between the potentially undecidable model checking2 and the always decidable model checking3. On the other hand, model checking0 in the original sense of finite state verification is an instance of algorithmic verification. In practice, the most common meaning of algorithmic verification appears to be model checking. But the notions of model checking0 vs. model checking3 do not necessarily coincide.
2.7 Linear Temporal Truth via Validity
Fifth, we use the term model checking4 for correctness w.r.t. linear temporal formalisms such as LTL (cf. [LP85], [VW86]). In the linear framework the meaning of a program is presented as a set of sequences of program states that define legitimate program behaviors, known as paths; the program is denoted M and the
specification is h, an LTL formula defining another set of paths. We write M |= h here precisely when paths(M) ⊆ paths(h). If we let fM be an LTL formula true of path x exactly when x is in M, then M |= h iff fM ⇒ h is a valid LTL formula. Thus, contrary to some expectations, linear temporal model checking4, being a validity problem, is not an instance of model checking1, the truth problem. There is another peculiarity about linear temporal model checking4. Most logics are closed under semantic negation (cf. [EH86]): not (M |= h) iff M |= ¬h. However, linear temporal logics are not. If it is not so that M |= h, then there exists some path x satisfying ¬h. But this cannot be expressed in terms of the linear temporal |=; we might try M |= ¬h, but this just asserts that ¬h holds along all paths x of M. In any case, this linear temporal model checking4 appears to be the only refinement of model checking that is not naturally representable in terms of model checking1.
3 The Field of Model Checking
Model checking has diversified into a number of directions including theoretical aspects, application domains, and specialized formulations. This whole space of related ideas concerning model checking comprises the field of model checking [ACM08]. First, let us consider still another description of model checking. Wikipedia [Wik08] describes model checking as “the process of checking whether a given structure is a model of a given logical formula”. Given the potential ambiguity associated with the term process, this can be sharpened in several alternative ways. If one replaces “the process” by “the problem” then one obtains the definition for the original model checking1 problem. If we restrict ourselves to finite structures and temporal logic, a “process” should be an “algorithm”. More generally, we could refer to a “method” of checking. Still more generally, the “process of model checking” might be understood as referring to the entire process of using model checking to verify a system. This includes the following aspects.
– selection of appropriate formal machinery: logics, algorithms, data structures;
– developing a formal specification encompassing the correctness properties of interest for the system;
– modeling the system itself in a suitable modeling language (e.g., SMV, Verilog, or C);
– decisions regarding what abstractions of the system are to be deployed to eliminate irrelevant information while retaining crucial information;
– choice of mechanical reasoning tools.
We can distinguish model checking algorithms on the basis of the data structures they use. Early model checkers (cf. [CES86]) as well as many current ones (cf. [Ho96]) employ an enumerative, also called explicit state, representation of the system state graph (Kripke structure). Traditional methods such as adjacency
lists or matrices are used. Essentially each state is represented by a node in the adjacency list and each transition between states is represented by a link; for the somewhat less space-efficient matrix representation of a system with n states we use an n × n matrix. The net effect is that the size of the computer representation is proportional to the size of the state graph. Assuming the number of transitions is proportional to the number of states, a system with a few thousand states would have a representation involving a few thousand memory locations. Enumerative model checking can be useful on a range of applications from artificial intelligence to business processes to software verification. The enumerative approach can be useful for systems up to a certain size limit. Presently, up to several tens or perhaps hundreds of millions of states can be handled in many cases. Significantly, the limit is growing over time due to technological advances associated with Moore's Law, inducing the doubling of RAM capacity every couple of years or so. An important advantage of enumerative model checking is that it is well-suited to systems that have a more or less irregular organization. Such systems may not be amenable to succinct symbolic representation (see below). Explicit state representation can be advantageous in model checking hardware designs where, say, the abstracted design contains relatively few states, each requiring a long description.

In contrast to explicit state model checking, we also have symbolic model checking (cf. [BCL+94]). Essentially the same algorithms are used as for explicit state model checking. However, sets of states (and the sets of transitions comprising the transition relation) are described compactly using constraints and processed in bulk. Most commonly, the constraints are captured using BDDs (Binary Decision Diagrams) (cf. [Bry86]), which are deterministic finite automata accepting bit strings of length n, corresponding to states over a state space of size 2^n. As a relatively small automaton state diagram with about n states can have exponentially many (in n) paths through the diagram, a small BDD, i.e., automaton, can represent an immensely large set of states. Not every set of states has a small BDD representation, but those encountered in practical efforts at model checking hardware designs typically do. BDDs work extremely well with hardware designs having up to a few hundred state variables. Variant notions of BDDs can perhaps handle a few thousand state variables. SAT-based bounded model checking is an alternative approach to symbolic model checking [B+99]. The SAT approach can accommodate larger designs than the BDD approach. However, it only explores for “close” errors at depth bounded by k, where typically k ranges from a few tens to hundreds of steps. In general it cannot find “deep” errors or provide verification of correctness. Still other forms of symbolic model checking use inequalities and polyhedra to more compactly represent hybrid systems.
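By way of illustration only (our own sketch, not the code of any model checker mentioned here): an enumerative checker visits states one at a time, so its memory footprint grows with the number of reachable states, whereas a symbolic checker would instead manipulate a constraint representation, such as a BDD, of entire sets of states at once. The toy example and names below are assumptions.

  from collections import deque

  # Enumerative (explicit-state) exploration: states are visited one by one and an
  # invariant is checked on each; memory grows with the number of reachable states.
  def check_invariant(initial, successors, invariant):
      seen = set(initial)
      queue = deque(initial)
      while queue:
          s = queue.popleft()
          if not invariant(s):
              return False, s            # counterexample state
          for t in successors(s):
              if t not in seen:
                  seen.add(t)
                  queue.append(t)
      return True, None

  # Toy example: a modulo-8 counter; the invariant "counter < 8" holds in every
  # reachable state, and the search touches exactly 8 explicit states.
  ok, bad = check_invariant({0}, lambda s: {(s + 1) % 8}, lambda s: s < 8)
  print(ok, bad)   # -> True None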
There are numerous other notions of model checking, as well as specialized application domains. Abstract model checking [Co00] is a formulation based on abstract interpretation. Real-time model checking (cf. [EMSS90]) addresses correctness involving, e.g., promptness requirements such as “the goal is achieved in 15 time units”. There are various forms of stochastic model checking. Probabilistic model checking (cf. [HJ94]) evaluates assertions such as “p occurs with probability greater than 1/2”; the underlying structures are typically Markov chains. Statistical model checking uses sampling to estimate the likelihood of assertions being true or false (cf. [GS05]). Hardware model checking is a broad term that refers to the model checking of computer hardware, microprocessor designs, sequential circuits and so forth. Hardware model checking has been quite successful, owing to the static format of system states and to the fact that the relatively small size and regularity of hardware designs make feasible the verification of reasonably large (a few hundred state variables) portions of “real estate” from the design using symbolic representation. Hardware model checking is used routinely by leading hardware vendors, microprocessor manufacturers, and design automation companies. Of course, there is quite a ways to go. A modern microprocessor design may involve 100,000 state variables or more. Surely, additional advances in scaling “monolithic” model checking will follow. However, it seems likely that to handle entire microprocessors, model checking together with compositional reasoning may be needed. Software model checking refers to the verification of software. This is a much harder task than hardware verification. The format of individual states in software is dynamic due to growing and shrinking stacks, heaps, and other data structures. The syntactic organization of the source code and corresponding state graph may exhibit a high degree of irregularity. For these reasons compaction techniques based on symbolic representations tend not to be useful for software. Moreover, software programs can be truly enormous. Windows Vista involves 50 million lines of code. Nonetheless, there has been some success in model checking software using static analysis techniques to facilitate abstraction. Device drivers with over 100,000 lines of code have been verified. General software verification, however, remains a Grand Challenge. The term parallel and distributed model checking refers to the development of model checking algorithms exploiting concurrency to improve their time efficiency and space capacity (cf. [PDMC08]). Incremental model checking refers to methods for successively checking a sequence M1, M2, M3, . . . , Mk of revised systems, each obtained as a revision of the previous one, without starting over on each, but instead using information developed in the prior instances (cf. [SS94]). This is related to the area of dynamic graph algorithms, but arguably more challenging. Directed or guided model checking refers to the use of hints or heuristics to guide the direction of state space exploration in an effort to detect errors more cheaply (cf. [BRS00]). Regular model checking is a term that has been used to describe certain forms of infinite state model checking where the representation of sets of states/strings uses automata.
4 Concluding Discussion
We have argued that model checking refers to several closely related but distinct concepts. In the vast majority of cases, we refer to model checking0 which
is an automatic, indeed algorithmic, method of verifying finite state concurrent programs against temporal logic specifications [CE81] (cf. [QS82]). This model checking0 is a special case of model checking1, which amounts to the truth problem for a logic: given interpretation I and formula f, is I a model of f? The former is always decidable by virtue of finite-state-ness; the latter may very well be undecidable. The program verification problem, given program M determine whether it satisfies specification f, is also known as the model checking2 problem and is in general undecidable and subsumes model checking0. Algorithmic verification is often used as a synonym for model checking3, even though the former is arguably more general than the latter. The method of checking truth of a linear temporal LTL assertion f over structure M, often referred to as model checking4, is somewhat surprisingly not an instance of model checking1; rather it is a special case of validity checking. Numerous other specialized formulations from hardware to software model checking abound. Model checking in the original sense of model checking0 today finds broad applicability. Each of the other senses has also proved a fruitful topic for study and often applications. The practical significance of model checking is that, instead of just talking about program verification, today actual industrial designs and programs are being verified using model checking. The conceptual significance of model checking is that the role of manual proofs in establishing correctness of programs has been diminished, while the role of exhaustive search is crucial.
Acknowledgement

The paper is dedicated to Willem-Paul de Roever on the occasion of his 65th birthday. It has been my pleasure and honor to know Willem-Paul since 1981. It was also my privilege and pleasure to visit Willem-Paul for summer 1987 and again during spring 1988. Willem-Paul set the golden standard, epitomizing all that it means to be a scientist. First, uncompromising pursuit of the truth is needed. Creativity, vision and imagination are vital. Crucially, one needs to be clear-headed as well. All these traits are exemplified in enormous abundance by Willem-Paul. The recognition of a surprising lack of clarity surrounding the term model checking inspired me to try to cast matters with a bit more precision.
References

[ACM08] Association for Computing Machinery: 2007 Turing Award awarded to E.M. Clarke, E.A. Emerson, J. Sifakis; Full Citation (for founding .... the field of Model Checking) (2007), http://awards.acm.org/homepage.cfm?awd=140
[Ak78] Akers, S.B.: Binary Decision Diagrams. IEEE Trans. on Computers C-27(6), 509–516 (1978)
[AENT01] Amla, N., Emerson, E.A., Namjoshi, K.S., Trefler, R.J.: Assume-Guarantee Based Compositional Reasoning for Synchronous Timing Diagrams. In: Margaria, T., Yi, W. (eds.) TACAS 2001. LNCS, vol. 2031, pp. 465–479. Springer, Heidelberg (2001)
[B+90] Burch, J., Clarke, E., McMillan, K., Dill, D., Hwang, L.: Symbolic Model Checking: 10^20 States and Beyond. In: Logic in Computer Science, LICS, pp. 428–439 (1990)
[B+99] Biere, A., Cimatti, A., Clarke, E., Zhu, Y.: Symbolic Model Checking without BDDs. In: Cleaveland, W.R. (ed.) TACAS 1999. LNCS, vol. 1579, pp. 193–207. Springer, Heidelberg (1999)
[BMP81] Ben-Ari, M., Manna, Z., Pnueli, A.: The Temporal Logic of Branching Time. In: Principles of Programming Languages, POPL, pp. 164–176 (1981)
[Br86] Bryant, R.: Graph-Based Algorithms for Boolean Function Manipulation. IEEE Trans. Computers 35(8), 677–691 (1986)
[BY75] Basu, S.K., Yeh, R.T.: Strong Verification of Programs. IEEE Trans. on Software Engineering SE-1(3), 339–345 (1975)
[BRS00] Bloem, R., Ravi, K., Somenzi, F.: Symbolic guided search for CTL model checking. In: DAC 2000, pp. 29–34 (2000)
[Bry86] Bryant, R.E.: Graph-Based Algorithms for Boolean Function Manipulation. IEEE Transactions on Computers C-35(8), 677–691 (1986)
[BCL+94] Burch, J.R., Clarke, E.M., Long, D.E., McMillan, K.L., Dill, D.L.: Symbolic model checking for sequential circuit verification. IEEE Trans. on CAD of Integrated Circuits and Systems 13(4), 401–424 (1993/1994)
[Bu62] Büchi, J.R.: On a Decision Method in Restricted Second Order Arithmetic. In: Proc. of Int'l. Congress on Logic, Methodology, and Philosophy of Science, pp. 1–12. Stanford Univ. Press (1960/1962)
[Bu74] Burstall, R.M.: Program Proving as Hand Simulation with a Little Induction. In: IFIP Congress, pp. 308–312 (1974)
[CE81] Clarke, E.M., Emerson, E.A.: The Design and Synthesis of Synchronization Skeletons Using Temporal Logic. In: Proceedings of the Workshop on Logics of Programs. LNCS, vol. 131, pp. 52–71. Springer, New York (1981)
[CES86] Clarke, E.M., Emerson, E.A., Sistla, A.P.: Automatic Verification of Finite State Concurrent Systems Using Temporal Logic Specifications. ACM Trans. Prog. Lang. and Sys. 8(2), 244–263 (1986)
[Cl79] Clarke, E.M.: Program Invariants as Fixpoints. Computing 21(4), 273–294 (1979)
[CC99] Cousot, P., Cousot, R.: Refining Model Checking by Abstract Interpretation. Automated Software Engineering: An International Journal 6(1), 69–95 (1999)
[Co00] Cousot, P.: On Completeness in Abstract Model Checking from the Viewpoint of Abstract Interpretation. In: Reunion Workshop on Implementations of Logic, Saint Gilles, Reunion Island, November 11 (2000)
[Da00] Daskalopulu, A.: Model Checking Contractual Protocols. In: Breuker, Leenes, Winkels (eds.) JURIX 2000: The 13th Annual Conference, pp. 35–47. IOS Press, Amsterdam (2000)
[DEG06] Deshmukh, J., Emerson, E.A., Gupta, P.: Automatic verification of parameterized data structures. In: Hermanns, H., Palsberg, J. (eds.) TACAS 2006. LNCS, vol. 3920, pp. 27–41. Springer, Heidelberg (2006)
[deBS69] de Bakker, J.W., Scott, D.: A Theory of Programs (1969) (unpublished manuscript)
[deRo+01] de Roever, W.-P., de Boer, F.S., Hannemann, U., Hooman, J., Lakhnech, Y., Poel, M., Zwiers, J.: Concurrency Verification: Introduction to Compositional and Noncompositional Methods, xxiv+776 pages. Cambridge University Press, Cambridge (2001)
[Di76] Dijkstra, E.W.: A Discipline of Programming. Prentice-Hall, Englewood Cliffs (1976)
[Di89] Dijkstra, E.W.: In Reply to Comments. EWD1058 (1989)
[EC80] Emerson, E.A., Clarke, E.M.: Characterizing Correctness Properties of Parallel Programs Using Fixpoints. In: de Bakker, J.W., van Leeuwen, J. (eds.) ICALP 1980. LNCS, vol. 85, pp. 169–181. Springer, Heidelberg (1980)
[EH86] Emerson, E.A., Halpern, J.Y.: “Sometimes” and “Not Never” revisited: on branching versus linear time temporal logic. J. ACM 33(1), 151–178 (1986)
[EJ91] Emerson, E.A., Jutla, C.S.: Tree Automata, Mu-calculus, and Determinacy. In: FOCS 1991, pp. 368–377 (1991)
[EL86] Emerson, E.A., Lei, C.-L.: Efficient Model Checking in Fragments of the Propositional Mu-Calculus. In: Logic in Computer Science, LICS, pp. 267–278 (1986)
[EL87] Emerson, E.A., Lei, C.-L.: Modalities for Model Checking: Branching Time Strikes Back. Sci. of Comp. Prog. 8(3), 275–306 (1987)
[Em90] Emerson, E.A.: Temporal and Modal Logic. Handbook of Theoretical Computer Science, vol. B. North-Holland, Amsterdam (1990)
[EMSS90] Emerson, E.A., Mok, A.K., Sistla, A.P., Srinivasan, J.: Quantitative Temporal Reasoning. In: Clarke, E., Kurshan, R.P. (eds.) CAV 1990. LNCS, vol. 531, pp. 136–145. Springer, Heidelberg (1991)
[EN96] Emerson, E.A., Namjoshi, K.S.: Automatic Verification of Parameterized Synchronous Systems. In: Alur, R., Henzinger, T.A. (eds.) CAV 1996. LNCS, vol. 1102, pp. 87–98. Springer, Heidelberg (1996)
[EN98] Emerson, E.A., Namjoshi, K.S.: Verification of a Parameterized Bus Arbitration Protocol. In: Vardi, M.Y. (ed.) CAV 1998. LNCS, vol. 1427, pp. 452–463. Springer, Heidelberg (1998)
[ES97] Emerson, E.A., Sistla, A.P.: Utilizing Symmetry when Model-Checking under Fairness Assumptions: An Automata-Theoretic Approach. ACM Trans. Program. Lang. Syst. 19(4), 617–638 (1997)
[FG99] Giunchiglia, F., Traverso, P.: Planning as Model Checking. In: Biundo, S., Fox, M. (eds.) ECP 1999. LNCS, vol. 1809, pp. 1–20. Springer, Heidelberg (2000)
[Fl67] Floyd, R.W.: Assigning meanings to programs. In: Schwartz, J.T. (ed.) Proceedings of a Symposium in Applied Mathematics. Mathematical Aspects of Computer Science, vol. 19, pp. 19–32 (1967)
[FSS83] Fernandez, J.-C., Schwartz, J.P., Sifakis, J.: An Example of Specification and Verification in Cesar. The Analysis of Concurrent Systems, 199–210 (1983)
[GT99] Giunchiglia, F., Traverso, P.: Planning as Model Checking. In: Biundo, S., Fox, M. (eds.) ECP 1999. LNCS (LNAI), vol. 1809. Springer, Heidelberg (2000)
[GS05] Grosu, R., Smolka, S.A.: Monte Carlo Model Checking. In: Halbwachs, N., Zuck, L.D. (eds.) TACAS 2005. LNCS, vol. 3440, pp. 271–286. Springer, Heidelberg (2005)
248
E.A. Emerson
[HJ94] [H+06]
[HvdP06] [Ho69] [Ho96] [IEEE05] [Ja97] [JPZ06]
[Ko83] [Kl56]
[Kn28] [KS92] [Ku94]
[La80]
[Le59] [LP85]
[Lo+94]
[NASA97]
[NK00]
[Ni02]
Hansson, H., Jonsson, B.: A Logic for Reasoning about Time and Reliability. Formal Asp. Comput. 6(5), 512–535 (1994) Heath, J., Kwiatowska, M., Norman, G., Parker, D., Tymchysyn, O.: Probabilistic Model Checking of Complex Biological Pathways. In: Priami, C. (ed.) CMSB 2006. LNCS (LNBI), vol. 4210, pp. 32–47. Springer, Heidelberg (2006) Brim, L., Haverkort, B.R., Leucker, M., van de Pol, J. (eds.): FMICS 2006 and PDMC 2006. LNCS, vol. 4346. Springer, Heidelberg (2007) Hoare, C.A.R.: An Axiomatic Basis for Computer Programming. Commun. ACM 12(10), 576–580 (1969) Holzmann, G.J.: On-The-Fly Model Checking. ACM Comput. Surv. 28(4es), 120 (1996) IEEE-P1850-2005 Standard for Property Specification Language (PSL) Jackson, D.: Mini-tutorial on Model Checking. In: Third IEEE Intl. Symp. on Requirements Engineering, Annapolis, Maryland, January 6-10 (1997) Jurdenski, M., Paterson, M., Zwick, U.: A Deterministic Subexponential Algorithm for Parity Games. In: ACM-SIAM Symp. on Algorthms for Discrete Systems, January 2006, pp. 117–123 (2006) Kozen, D.: Results on the Propositional Mu-Calculus. Theor. Comput. Sci. 27, 333–354 (1983) Kleene, S.C.: Representation of Events in Nerve Nets and Finite Automata. In: McCarthy, J., Shannon, C. (eds.) Automata Studies, pp. 3–42. Princeton Univ. Press, Princeton (1956) Knaster, B.: Un th´eor`eme sur les fonctions d’ensembles. Ann. Soc. Polon. Math. 6, 133˘ 2013134 (1928) Kautz, H., Selman, B.: Planning as Satisfiability. In: Proceedings European Conference on Artificial Intelligence, ECAI 1992 (1992) Kurshan, R.P.: Computer Aided Verification of Coordinating Processes: An Automata-theoretic Approach. Princeton Univ. Press, Princeton (1994) Lamport, L.: “’Sometimes’ is Sometimes ’Not Never’ ”- On the Temporal Logic of Programs. In: Principles of Programming Languages, POPL, pp. 174–185 (1980) Lee, C.Y.: Representation of Switching Circuits by Binary-Decision Programs. Bell Systems Technical Journal 38, 985–999 (1959) Lichtenstein, O., Pnueli, A.: Checking that Finite State Programs meet their Linear Specification. In: Principles of Programming Languages, POPL, pp. 97–107 (1985) Long, D.E., Browne, A., Clarke, E.M., Jha, S., Marero, W.: An improved Algorithm for the Evaluation of Fixpoint Expressions. In: Dill, D.L. (ed.) CAV 1994. LNCS, vol. 818, pp. 338–350. Springer, Heidelberg (1994) Formal Methods Specification and Analysis Guidebook for the Verification of Software and Computer Systems, vol. II, A Practioners Companion, [NASA-GB-01-97], 245 p. (1997) Namjoshi, K.S., Kurshan, R.P.: Syntactic Program Transformations for Automatic Abstraction. In: Emerson, E.A., Sistla, A.P. (eds.) CAV 2000. LNCS, vol. 1855, pp. 435–449. Springer, Heidelberg (2000) National Institute of Standards and Technology, US Department of Commerce, Software Errors Cost U.S. Economy $59.5 Billion Annually, NIST News Release, June 28 (2002), http://www.nist.gov/public_affairs/releases/n02-10.htm
Meanings of Model Checking [Pa69]
[Pa81] [PDMC08]
[Pn77] [Pn79] [Pr67] [QS82]
[SS94]
[Su78] [Ta55] [Tu36]
[Tu49]
[Va01]
[VW86]
[vB78] [Wa+00] [Wik08]
249
Park, D.: Fixpoint induction and proofs of program properties. In: Meltzer, B., Michie, D. (eds.) Machine Intelligence, vol. 5. Edinburgh University Press, Edinburgh (1969) Park, D.: Concurrency and Automata on Infinite Sequences. Theoretical Computer Science, 167–183 (1981) 7th International Workshop on Parallel and Distributed Methods in Verification, PDMC 2008, Affiliated to ETAPS 2008, http://pdmc.informatik.tu-muenchen.de/PDMC08/ Pnueli, A.: The Temporal Logic of Programs. In: Foundations of Computer Science, FOCS, pp. 46–57 (1977) Pnueli, A.: The Temporal Semantics of Concurrent Programs. Semantics of Concurrent Computation, 1–20 (1979) Prior, A.: Past, Present, and Future. Oxford University Press, Oxford (1967) Queille, J.-P., Sifakis, J.: Specification and verification of concurrent systems in CESAR. In: Dezani-Ciancaglini, M., Montanari, U. (eds.) Programming 1982. LNCS, vol. 137, pp. 337–351. Springer, Heidelberg (1982) Sokolsky, O., Smolka, S.: Incremental model checking in the modal calculus. In: Dill, D. (ed.) CAV 1994. LNCS, vol. 818, pp. 352–363. Springer, Heidelberg (1994) Sunshine, C.A.: Survey of protocol definition and verification techniques. ACM SIGCOMM Computer Communication Review 8(3), 35–41 (1978) Tarski, A.: A lattice-theoretical fixpoint theorem and its applications. Pac. J. Math. 5, 285–309 (1955) Turing, A.M.: On Computable Numbers, with an Application to the Entscheidungproblem. Proc. London Math. Society 2(42), 230–265 (1936); A Correction, ibid 43, 544–546 Turing, A.M.: Checking a Large Routine. In: Paper for the EDSAC Inaugural Conference. Typescript published in Report of a Conference on High Speed Automatic Calculating Machines, June 24, pp. 67–69 (1949) Vardi, M.Y.: Branching vs. Linear Time: Final Showdown. In: Margaria, T., Yi, W. (eds.) TACAS 2001. LNCS, vol. 2031, pp. 1–22. Springer, Heidelberg (2001) Vardi, M.Y., Wolper, P.: An Automata-Theoretic Approach to Automatic Program Verification (Preliminary Report). In: Logic in Computer Science, LICS, pp. 332–344 (1986) von Bochmann, G.: Finite State Description of Communication Protocols. Computer Networks 2, 361–372 (1978) Wang, W., Hidvegi, Z., Bailey, A., Whinston, A.: E-Process Design and Assurance Using Model Checking. IEEE Computer 33(10), 48–53 (2000) Wikipedia, “Model Checking” (July 30, 2008), http://en.wikipedia.org/wiki/Model_checking
Smaller Abstractions for ∀CTL∗ without Next

Kai Engelhardt¹,² and Ralf Huuck²,¹

¹ CSE, UNSW, Sydney, NSW 2052, Australia
² National ICT Australia Ltd. (NICTA), Locked Bag 6016, NSW 1466, Australia
Abstract. The success of applying model-checking to large systems depends crucially on the choice of good abstractions. In this work we present an approach for constructing abstractions when checking Next-free universal CTL∗ properties. It is known that functional abstractions are safe and that Next-free universal CTL∗ is insensitive to finite stuttering. We exploit these results by introducing a safe Next-free abstraction that is typically smaller than the usual functional one while at the same time more precise, i.e., it has fewer spurious counter-examples.
1 Introduction
Model-checking [8,29] has matured into perhaps the most important industrial-strength formal method for system verification [26,27,3,20,11,21]. Only relatively small systems are, however, amenable to standard model-checking. The size of systems is up to exponential in the number of their concurrent components; this is referred to as the state explosion problem [32]. This obstacle can often be overcome by working with a suitable abstraction instead of the considered system itself [10,25,13,16,4,2,12,18]. To a hypothetical user of a model-checker T, an abstraction A is suitable for a system C and a property ϕ if it is both safe and small enough. It is safe whenever proving that A satisfies ϕ establishes that C also satisfies ϕ. It is small enough if T manages to establish whether A satisfies ϕ. Heuristics have been proposed to automatically construct suitable abstractions [17,2]. Not all suitable abstractions are necessarily appropriate for checking ϕ. Abstraction A might expose spurious counter-examples to ϕ and prove itself to be too coarse for proper reasoning. One popular approach to deal with spurious counter-examples is called CEGAR (for counter-example-guided abstraction refinement). In this approach the abstraction is iteratively refined based on analyses of spurious counter-examples. The process stops when the model checker either finds a proper counter-example or establishes that ϕ holds. The advantage of CEGAR is that it can still be fully automated [9,31,7]. How to find good abstractions and how to improve and refine them has been an
Work was partially supported by ARC Discovery Grants RM00384 and RM02036. National ICT Australia is funded by the Australian Government’s Department of Communications, Information Technology and the Arts and the Australian Research Council through Backing Australia’s Ability and the ICT Research Centre of Excellence programs.
active area of research. The standard approach to refining abstractions is to split abstract states. We suggest a simple method to make the abstraction's transition relation even smaller. This paper is concerned with a particular class of abstractions only. We call them functional abstractions because our abstract systems are the images of concrete systems under some function from concrete to abstract states.¹ Functional abstractions are known to be safe for properties expressed in ∀CTL∗, which encompasses LTL. In contrast to our approach, Ranzato and Tapparo propose to construct strongly preserving abstractions in order to avoid the loss of temporal properties [30] when abstracting. Their construction does not allow the removal of transitions from an abstraction as we propose here. An earlier related approach is presented in [14], where the authors compute property preserving abstractions for CTL∗ based on abstract interpretation. Their abstractions are data dependent and do not deal with loops as proposed here. A different approach is taken by Kesten and Pnueli in [22], and also in [28], where the abstraction is paired with a progress monitor to enforce certain liveness properties. While this technique preserves full LTL including Next and is more general than our solution, it is also orthogonal to ours and more complicated. The two techniques can and should be combined. We present our work in the simplest setting, that is, Kripke structures, because we have no new insights related to explicit liveness constraints such as the justice and compassion sets of the fair discrete systems used by Pnueli et al. Ball, Kupferman, and Sagiv also tackle the loop abstraction problem for large finite state systems [1]. They avoid Kesten and Pnueli's progress monitors by using must transitions where possible to ensure progress in the abstraction. In their setting this requires identifying entry and exit ports within the abstract state representing a loop. In contrast to ours, their approach does not exploit the absence of Next in the liveness formula to be verified. Their method also leads to less abstract models than ours. The approaches described above all tackle the same problem: abstractions may introduce spurious counter-examples to progress properties whenever a terminating code fragment (e.g. a finite loop) in the concrete system is abstracted to a potentially diverging code fragment (e.g. a self loop). The contribution of this paper is the introduction of a Next-free abstraction which is based on a functional abstraction that exhibits such a spurious counter-example. The Next-free abstraction is even smaller than the functional one while at the same time more precise. The key idea is to investigate self-loops on the abstract level. The functional abstraction of a Kripke structure contains a self-loop for an abstract state s if, and only if, there is a transition in the pre-image of s. We propose to omit such a self-loop from the abstraction whenever the pre-image of s does not allow infinite paths. Self-loops are the natural enemy of liveness properties. Thus, by eliminating as many of these self-loops as possible, abstractions become more likely to be useful for checking liveness properties.
¹ Functional abstractions are also known as partition refinements or quotient systems [13].
The price to pay is twofold.
1. We lose the next-operator when stating temporal properties. Dropping the next-operator is often considered natural in a refinement setting [24]. Moreover, liveness properties are typically formulated without next.
2. We have to decide for each pre-image of an abstract state with a self-loop whether it allows an infinite path. This is not necessarily feasible. Many concrete systems have conceptually infinite state spaces. When abstracting to finite state spaces most pre-images of abstract states tend to be infinite themselves. Depth-bounded search could be used to approximate an answer to the decision problem. If that search is inconclusive it is always safe to keep the self-loop.
We show that Next-free abstractions are sound for the Next-free universal fragment ∀CTL∗-X of CTL∗, but generally unsound for formulae containing the next-operator X. The remainder of this work is organized as follows: In Section 2 we introduce basic notations for model-checking universal CTL∗ and for abstractions. The subsequent Section 3 introduces our novel abstractions for Next-free ∀CTL∗. We prove that these abstractions are sound, and generally smaller and more precise than the standard abstraction. We give an example of this in Section 4 before drawing final conclusions and pointing out future work in Section 5.
2 Model-Checking for ∀CTL∗-X
To increase the accessibility of the paper we recall some of the standard definitions, among others for Kripke structures, paths, execution sequences, (minimal) functional abstractions, and the syntax and semantics of ∀CTL∗. Readers familiar with these notions can skip to Section 3.

2.1 Kripke Structures and Abstractions
Let P be a set of atomic propositions. A Kripke structure (over P) [23] is characterized by a tuple (S, S_0, R, μ) such that: S is a set of states; S_0 ⊆ S is a set of initial states; R ⊆ S × S is a transition relation, which is required to be total, i.e., for every state s ∈ S there exists an s′ ∈ S such that (s, s′) ∈ R; and μ : S → 2^P is a labeling function which assigns a set of propositions to every state. Let K = (S, S_0, R, μ) be a Kripke structure. Let I be a non-void and possibly infinite segment of N. An I-indexed state sequence s = (s_i)_{i∈I} ∈ S^I is a path of K if, for all i ∈ I such that also i + 1 ∈ I, we have that (s_i, s_{i+1}) ∈ R. Whenever I is infinite we say that s is a full path (of K). If the first element of a path s is an initial state, we call s an initial path (of K). Full initial paths are usually called execution sequences. The property O_K of K is the set of all its execution sequences. Let S′ ⊆ S. The set of all full paths in K containing only states of S′ is denoted by O_K(S′). For a path π = (s_i)_{i∈N}, we write π^i for the suffix of π starting at s_i.
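For readers who want to experiment with these definitions, the following Python sketch (ours, not part of the original development; all identifiers are invented for illustration) represents a finite Kripke structure and enumerates finite prefixes of its initial paths:

from dataclasses import dataclass

@dataclass
class Kripke:
    """A finite Kripke structure (S, S_0, R, mu) over atomic propositions P."""
    states: frozenset    # S
    initial: frozenset   # S_0, a subset of S
    trans: frozenset     # R, a subset of S x S (assumed total)
    label: dict          # mu: maps each state to its set of propositions

    def successors(self, s):
        return {t for (u, t) in self.trans if u == s}

    def initial_prefixes(self, length):
        """All finite prefixes of initial paths consisting of `length` states."""
        paths = [[s] for s in self.initial]
        for _ in range(length - 1):
            paths = [p + [t] for p in paths for t in self.successors(p[-1])]
        return paths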
Given two sets of states S and S′, we call h ⊆ S × S′ an abstraction relation iff it is the graph of a total function onto S′. Let h ⊆ S × S′ be an abstraction relation. Let K′ = (S′, S′_0, R′, μ′) be another Kripke structure. Say that K′ is an abstraction (of K, with respect to h) if h(S_0) ⊆ S′_0 (concrete initial states are mapped to abstract initial states), h^{-1}; R; h ⊆ R′ (concrete transitions are mapped to abstract transitions), and μ′(s′) = μ(h^{-1}(s′)) for all s′ ∈ S′ (propositions are preserved). We say that h is faithful if, for all s′ ∈ S′ and s_1, s_2 ∈ h^{-1}(s′), we have μ(s_1) = μ(s_2). From now on, we shall only consider faithful abstraction relations. The smallest such abstraction, (S′, h(S_0), h^{-1}; R; h, μ ∘ h^{-1}), is called the minimal abstraction (of K with respect to h) and referred to by K^h. Its components are referred to by S^h, S^h_0, R^h, and μ^h, respectively.
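Continuing the sketch above (again our own code, not the authors'), the minimal abstraction K^h can be computed pointwise from an abstraction function h, given as a dictionary from concrete to abstract states; faithfulness of h is assumed, so the abstract labeling may be read off any representative:

def minimal_abstraction(k, h):
    """Build K^h = (h(S), h(S_0), h^-1;R;h, mu^h) for a faithful abstraction function h."""
    abs_states = frozenset(h[s] for s in k.states)
    abs_initial = frozenset(h[s] for s in k.initial)
    abs_trans = frozenset((h[s], h[t]) for (s, t) in k.trans)
    # Faithfulness: all concrete states mapped to the same abstract state carry the same labels.
    abs_label = {h[s]: k.label[s] for s in k.states}
    return Kripke(abs_states, abs_initial, abs_trans, abs_label)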
2.2 ∀CTL∗
Syntax. Next we define the logic ∀CTL∗ (over P). The syntax of state and path formulae is given in BNF by:

state formulae: ϕ ::= p | ¬p | ϕ ∨ ϕ | ϕ ∧ ϕ | ∀ Φ
path formulae:  Φ ::= ϕ | Φ ∨ Φ | Φ ∧ Φ | X Φ | Φ U Φ | Φ V Φ

where p ∈ P. The slightly uncommon release operator V denotes the dual of the until operator U, i.e., it expresses the CTL∗ formula ¬(¬ϕ U ¬ψ). Boolean constants (true = p ∨ ¬p, false = p ∧ ¬p for some p ∈ P) and further common temporal modalities (e.g., F Φ = true U Φ, G Φ = Φ V false) are defined as abbreviations (cf. also [14,13]).

Semantics. The semantics of ∀CTL∗ is given by inductive definitions of satisfaction relations |= for state and path formulae over Kripke structures. Consider a Kripke structure K = (S, S_0, R, μ), a state s ∈ S, a proposition p ∈ P, a path π ∈ O_K, state formulae ϕ and ψ, and path formulae Φ and Ψ.

– K, s |= p iff p ∈ μ(s);
– K, s |= ¬p iff p ∉ μ(s);
– K, s |= ϕ ∨ ψ iff K, s |= ϕ or K, s |= ψ;
– K, s |= ϕ ∧ ψ iff K, s |= ϕ and K, s |= ψ;
– K, s |= ∀ Φ iff for all paths π ∈ O_K beginning with s, we have that K, π |= Φ;
– K, π |= ϕ iff K, π_0 |= ϕ;
– K, π |= Φ ∨ Ψ iff K, π |= Φ or K, π |= Ψ;
– K, π |= Φ ∧ Ψ iff K, π |= Φ and K, π |= Ψ;
– K, π |= X Φ iff K, π^1 |= Φ;
– K, π |= Φ U Ψ iff there exists k ∈ N such that K, π^k |= Ψ and K, π^j |= Φ for all 0 ≤ j < k;
– K, π |= Φ V Ψ iff for all k ∈ N, if for all 0 ≤ j < k we have K, π^j |= Φ, then K, π^k |= Ψ.
If Q is a set of states or paths we write K, Q |= ϕ to abbreviate ∀q ∈ Q (K, q |= ϕ). We abbreviate K, S_0 |= ϕ to K |= ϕ. Recall that we use ∀CTL∗-X to denote the Next-free fragment of ∀CTL∗. Among others, Browne et al. showed that minimal abstractions are safe for ∀CTL∗, i.e., every ∀CTL∗ formula which holds for a minimal abstraction also holds for the original system [6,19,14]. Dams' thesis contains a thorough exposition of abstraction techniques for CTL∗ and related logics [13].

Theorem 1 ([6]). Let K be a Kripke structure, let h be a faithful abstraction relation, and let ϕ be a formula of ∀CTL∗. Then K^h |= ϕ implies K |= ϕ.

Let π be a full path. A stutter step is a pair of consecutive equal states π_i = π_{i+1}. Call π stutter-free either if it has no stutter steps (i.e., ∀i ∈ N (π_i ≠ π_{i+1})) or if the only stutter steps are infinite consecutive repetitions of a single state, i.e., ∃k ∈ N (∀i < k (π_i ≠ π_{i+1}) ∧ ∀i > k (π_i = π_k)). We denote by π̄ the stutter-free equivalent of π, which is the stutter-free path obtained from π by contracting each maximal finite sequence of stutter steps to a single state. We call two paths π and π′ stutter equivalent if π̄ = π̄′ and denote this by π ≡ π′. Denote the ≡-equivalence class of π by [π]_≡. De Nicola and Vaandrager showed that CTL∗-X, the Next-free fragment of CTL∗, is insensitive to stuttering [15]. Thus, so is ∀CTL∗-X.

Theorem 2. Let K be a Kripke structure, let Φ be a path formula of ∀CTL∗-X, and let π, π′ ∈ O_K be stutter equivalent. Then K, π |= Φ iff K, π′ |= Φ.
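As a concrete illustration of the contraction π ↦ π̄, here is a small helper of ours, defined only for finite path prefixes (an infinite path would need, e.g., a lasso representation); it collapses consecutive repetitions of a state to a single occurrence:

def contract_stuttering(prefix):
    """Collapse each maximal run of equal consecutive states to a single state."""
    contracted = []
    for s in prefix:
        if not contracted or contracted[-1] != s:
            contracted.append(s)
    return contracted

# Two prefixes are stutter equivalent (up to the finite horizon) iff they contract equally:
# contract_stuttering(['r', 'go', 'go', 'r']) == contract_stuttering(['r', 'go', 'r', 'r'])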
3 Minimal Abstractions for ∀CTL∗-X
Definition 1 (Next-free abstraction). Let K = (S, S_0, R, μ) be a Kripke structure and h ⊆ S × S^h be a faithful abstraction relation. The Kripke structure K^h_X = (S^h, S^h_0, R^h_X, μ^h) given by

1. S^h_0 = h(S_0),
2. R^h_X = (h^{-1}; R; h) \ { (s, s) | O_K(h^{-1}(s)) = ∅ }, and
3. μ^h(s′) = μ(h^{-1}(s′)) for all s′ ∈ S^h

is called K's Next-free abstraction with respect to h. Observe that this is well-defined because R^h_X is necessarily total. Self-loops (stutter steps) in R are preserved since they give rise to full paths. Next we investigate how Next-free abstractions relate to stuttering [24].

Lemma 1. Let K = (S, S_0, R, μ) be a Kripke structure, S^h a non-empty set, and h ⊆ S × S^h an abstraction relation. Then, h(O_K) ⊆ [O_{K^h_X}]_≡.

Proof. Let π = (s_i)_{i∈N} ∈ O_K. We need to show that h(π) ∈ [O_{K^h_X}]_≡. Observe that, by the definition of S^h_0, we have that h(s_0) is an initial state of the Next-free abstraction. It remains to be shown that if the image (h(s_i), h(s_{i+1}))
of a step (s_i, s_{i+1}) in π is missing from R^h_X, then (a) the missing step must be a stutter step, and (b) the length of this stuttering is finite. In other words, for all i ∈ N, should (h(s_i), h(s_{i+1})) ∉ R^h_X, then h(s_i) = h(s_{i+1}) and there exists k > i such that h(s_k) ≠ h(s_i).
For (a), observe that (h(s_i), h(s_{i+1})) ∉ R^h_X implies that (h(s_i), h(s_{i+1})) ∈ R^h \ R^h_X. Thus, by the definition of R^h_X and R^h, it follows that h(s_i) = h(s_{i+1}).
For (b), let i ∈ N be such that h(s_i) = h(s_k) for all k > i. Note that the full path π^i is in O_K(h^{-1}(h(s_i))). Consequently, (h(s_i), h(s_i)) ∈ R^h_X by the definition of R^h_X.

Theorem 3. Let K = (S, S_0, R, μ) be a Kripke structure, S^h a non-empty set, h ⊆ S × S^h a faithful abstraction relation, and ϕ ∈ ∀CTL∗-X. Then K^h_X |= ϕ implies K |= ϕ.

Proof. We show by simultaneous induction over the structure of state and path formulae that K^h_X, s |= ϕ ⇒ K, h^{-1}(s) |= ϕ for all state formulae ϕ and that K^h_X, π |= Φ ⇒ K, h^{-1}(π) |= Φ for all path formulae Φ. We only show the interesting cases.

Case: the state formula ϕ is a proposition p.
  K^h_X, s |= p
  ⇒ p ∈ μ^h(s)
  ⇒ p ∈ ⋃_{s′ ∈ h^{-1}(s)} μ(s′)
  ⇒ ∀s′ ∈ h^{-1}(s) (p ∈ μ(s′)), as h is faithful
  ⇒ ∀s′ ∈ h^{-1}(s) (K, s′ |= p)
  ⇒ K, h^{-1}(s) |= p

Case: ϕ is ∀ Φ.
  K^h_X, s |= ∀ Φ
  ⇒ ∀π ∈ O_{K^h_X} (π_0 = s ⇒ K^h_X, π |= Φ)
  ⇒ ∀π ∈ [O_{K^h_X}]_≡ (π_0 = s ⇒ K^h_X, π |= Φ), by Theorem 2
  ⇒ ∀π ∈ [O_{K^h_X}]_≡ (π_0 = s ⇒ K, h^{-1}(π) |= Φ), by the induction hypothesis
  ⇒ ∀π ∈ h(O_K) (π_0 = s ⇒ K, h^{-1}(π) |= Φ), by Lemma 1
  ⇒ ∀π ∈ O_K (h(π_0) = s ⇒ K, π |= Φ)
  ⇒ K, h^{-1}(s) |= ∀ Φ
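To make Definition 1 executable in the small, the following sketch (our code, finite-state only, continuing the earlier helpers) drops an abstract self-loop exactly when the pre-image of its state admits no infinite path, which for a finite pre-image means that the induced subgraph is acyclic. For very large or infinite pre-images, the depth-bounded approximation mentioned earlier would replace the exact cycle check, keeping the self-loop when in doubt.

def has_infinite_path(states, trans):
    """True iff the transition relation restricted to `states` contains a cycle."""
    graph = {s: [t for (u, t) in trans if u == s and t in states] for s in states}
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {s: WHITE for s in states}

    def dfs(s):
        colour[s] = GREY
        for t in graph[s]:
            if colour[t] == GREY or (colour[t] == WHITE and dfs(t)):
                return True
        colour[s] = BLACK
        return False

    return any(colour[s] == WHITE and dfs(s) for s in states)

def nextfree_abstraction(k, h):
    """Build K^h_X: the minimal abstraction minus self-loops over pre-images without infinite paths."""
    kh = minimal_abstraction(k, h)
    dropped = set()
    for s_abs in kh.states:
        pre = {s for s in k.states if h[s] == s_abs}
        if (s_abs, s_abs) in kh.trans and not has_infinite_path(pre, k.trans):
            dropped.add((s_abs, s_abs))
    return Kripke(kh.states, kh.initial, frozenset(set(kh.trans) - dropped), kh.label)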
4 Example
To illustrate that Next-free abstractions can indeed generate smaller and more precise abstractions, we use a simple traffic light controller example taken from [9]. Consider a traffic light as depicted in Fig. 1(a). There are three states: red, yellow, and green ({r, y, g}), where r is the initial state. We consider them to be labeled with the propositions go and stop as follows: μ(r) = stop and μ(g) = μ(y) = go.
Fig. 1. Traffic light controller: (a) the concrete system with states r, y, g; (b) its minimal abstraction; (c) its Next-free abstraction
We want to prove the liveness property that the traffic light is infinitely often red, i.e., ϕ = ∀G ∀F stop. We do admit that no model-checker struggles with this example. For the sake of argument let us, however, compare the minimal abstraction with the Next-free abstraction for the abstraction relation h = {(r, r), (g, go), (y, go)}. The minimal abstraction is shown in Fig. 1(b). Note that ϕ no longer holds, since the self-loop introduced by contracting the two states g and y allows control to remain in go forever. This counter-example is, of course, spurious. On the other hand, the Next-free abstraction shown in Fig. 1(c) eliminates the self-loop, because there is no infinite path in h^{-1}(go). This means that ϕ is still valid and there is no spurious counter-example. Moreover, the transition relation is smaller than that of the minimal abstraction. Most known techniques of automatic abstraction refinement based on spurious counter-example detection amount to splitting the go state, leading to the original, concrete system. Kesten and Pnueli's construction would result in the parallel composition of Fig. 1(b) with a progress monitor. Ball, Kupferman, and Sagiv would identify g as entry port and y as exit port of go in Fig. 1(b) and conclude that the self-loop on go in that figure cannot diverge. The additional effort for Next-free abstraction lies in checking the pre-images of abstract states for infinite paths. If the size of the pre-images allows, this can be done efficiently by checking them for self-loops and non-trivial strongly connected components. Otherwise, depth-bounded search can be used to approximate. The same idea as in the original CEGAR approach can be adapted to keep increasing the search depth in the pre-image of the abstract state with the self-loop responsible for the spurious counter-example.
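The example can be replayed with the sketches from Sections 2 and 3 (our code; the state and proposition names follow Fig. 1, and the transition order r → g → y → r is our reading of a standard controller, the argument only needing that g and y do not form a cycle):

# Concrete traffic light of Fig. 1(a): r -> g -> y -> r, with r initial.
k = Kripke(
    states=frozenset({'r', 'g', 'y'}),
    initial=frozenset({'r'}),
    trans=frozenset({('r', 'g'), ('g', 'y'), ('y', 'r')}),
    label={'r': {'stop'}, 'g': {'go'}, 'y': {'go'}},
)
h = {'r': 'r', 'g': 'go', 'y': 'go'}

kh = minimal_abstraction(k, h)     # Fig. 1(b): contains the spurious self-loop on go
khx = nextfree_abstraction(k, h)   # Fig. 1(c): drops it, since h^-1(go) = {g, y} has no cycle

assert ('go', 'go') in kh.trans
assert ('go', 'go') not in khx.trans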
5 Conclusion and Future Work
Tailoring abstractions according to the property to check is not a new idea. We suggest in this paper to look at effectively computable improvements of
simple abstractions. We have illustrated this idea by one particular improvement, namely, the omission of certain self-loops in abstractions. This omission is shown to be sound for ∀CTL∗-X, the Next-free fragment of ∀CTL∗. The proposed elimination of self-loops appears to be optimal in the sense that generalizing it to other loops does not promise any gain over existing abstraction refinement techniques such as the ones proposed by Clarke et al. and by Saïdi [9,31]. In the worst case, removing a transition that closes a loop of length n requires introducing a state component to the abstraction, in other words splitting abstract states, to memorize the last n − 1 states visited. One future task is to implement the proposed abstraction technique in existing tools and to compare its performance to that of its precursors. Our attempts have so far been frustrated by the perceived lack of an open source CEGAR framework that actually works. The much-needed comparison should give an indication of the potential speed-up with respect to the usual functional abstraction as well as the relative number of abstractions that do not need further refinement.
Acknowledgment

The authors would like to thank Ed Clarke for his talk on abstractions and counter-examples, given in Sydney in July 2003, during which the core idea of this paper was born. We would also like to thank the anonymous referees who saw an earlier version of this paper. One remark is in order: we do consider μ(y) = go correct because we never stop at yellow.
References

1. Ball, T., Kupferman, O., Sagiv, M.: Leaping loops in the presence of abstraction. In: Damm, W., Hermanns, H. (eds.) CAV 2007. LNCS, vol. 4590, pp. 491–503. Springer, Heidelberg (2007)
2. Ball, T., Millstein, T., Rajamani, S.K.: Polymorphic predicate abstraction. ACM Transactions on Programming Languages and Systems 27(2), 314–343 (2005)
3. Bengtsson, J., Larsen, K.G., Larsson, F., Pettersson, P., Yi, W.: UPPAAL — a tool suite for automatic verification of real-time systems. In: Alur, R., Sontag, E.D., Henzinger, T.A. (eds.) HS 1995. LNCS, vol. 1066, pp. 232–243. Springer, Heidelberg (1996)
4. Bensalem, S., Lakhnech, Y., Owre, S.: Computing abstractions of infinite state systems automatically and compositionally. In: Hu, A.J., Vardi, M.Y. (eds.) CAV 1998. LNCS, vol. 1427, pp. 319–331. Springer, Heidelberg (1998)
5. Brinksma, E., Larsen, K.G. (eds.): CAV 2002. LNCS, vol. 2404. Springer, Heidelberg (2002)
6. Browne, M.C., Clarke, E.M., Grumberg, O.: Characterizing finite Kripke structures in propositional temporal logic. Theoretical Computer Science 59, 115–131 (1988)
7. Chaki, S., Clarke, E., Groce, A., Jha, S., Veith, H.: Modular verification of software components in C. In: International Conference on Software Engineering, May 2003, pp. 385–395 (2003)
8. Clarke, E.M., Emerson, E.A.: Synthesis of synchronisation skeletons for branching time temporal logic. In: Kozen, D. (ed.) Logic of Programs 1981. LNCS, vol. 131. Springer, Heidelberg (1982)
9. Clarke, E.M., Grumberg, O., Jha, S., Lu, Y., Veith, H.: Counterexample-guided abstraction refinement. In: Emerson, E.A., Sistla, A.P. (eds.) CAV 2000. LNCS, vol. 1855, pp. 154–169. Springer, Heidelberg (2000)
10. Clarke, E.M., Grumberg, O., Long, D.E.: Model checking and abstraction. In: Proceedings of the 19th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 343–354 (1992)
11. Clarke, E.M., Grumberg, O., Peled, D.A.: Model Checking. MIT Press, Cambridge (2000)
12. Cousot, P.: On abstraction in software verification. In: Brinksma, Larsen [5], pp. 37–56
13. Dams, D.: Abstract Interpretation and Partition Refinement for Model Checking. PhD thesis, Technical University of Eindhoven (July 1996)
14. Dams, D., Gerth, R., Grumberg, O.: Abstract interpretation of reactive systems. ACM Transactions on Programming Languages and Systems 19(2), 253–291 (1997)
15. de Nicola, R., Vaandrager, F.: Three logics for branching bisimulation. Journal of the ACM 42(2), 458–487 (1995)
16. Dingel, J., Filkorn, T.: Model checking for infinite state systems using data abstraction, assumption-commitment style reasoning and theorem proving. In: Wolper, P. (ed.) CAV 1995. LNCS, vol. 939, pp. 54–69. Springer, Heidelberg (1995)
17. Dwyer, M.B., Hatcliff, J., Joehanes, R., Laubach, S., Pasareanu, C.S.: Tool-supported program abstraction for finite-state verification. In: Proceedings of the 23rd International Conference on Software Engineering (May 2001)
18. Godefroid, P., Jagadeesan, R.: Automatic abstraction using generalized model checking. In: Brinksma, Larsen [5], pp. 137–150
19. Groote, J.F., Vaandrager, F.: An efficient algorithm for branching bisimulation and stuttering equivalence. In: Paterson, M. (ed.) ICALP 1990. LNCS, vol. 443, pp. 626–638. Springer, Heidelberg (1990)
20. Henzinger, T.A., Ho, P.-H., Wong-Toi, H.: HyTech: a model checker for hybrid systems. Software Tools for Technology Transfer 1, 110–122 (1997)
21. Holzmann, G.J.: The SPIN Model Checker. Pearson Educational, London (2003)
22. Kesten, Y., Pnueli, A.: Verification by augmented finitary abstraction. Information and Computation 163(1), 203–243 (2000)
23. Kripke, S.A.: Semantical considerations on modal logic. Acta Philosophica Fennica 16, 83–94 (1963)
24. Lamport, L.: What good is temporal logic? In: Mason, R.E.A. (ed.) Information Processing 1983, Proceedings of the IFIP 9th World Computer Congress, Paris, France, September 19-23, pp. 657–668. North-Holland, Amsterdam (1983)
25. Loiseaux, C., Graf, S., Sifakis, J., Bouajjani, A., Bensalem, S.: Property preserving abstractions for the verification of concurrent systems. Formal Methods in System Design 6(1), 11–44 (1995)
26. McMillan, K.L.: Symbolic Model Checking. Kluwer Academic Publishers, Dordrecht (1993)
27. Olivero, A., Yovine, S.: KRONOS: A Tool for Verifying Real-Time Systems. User's Guide and Reference Manual. VERIMAG, Grenoble (1993)
28. Pnueli, A., Xu, J., Zuck, L.D.: Liveness with (0, 1, ∞)-counter abstraction. In: Brinksma, Larsen [5], pp. 107–122
29. Queille, J.-P., Sifakis, J.: Specification and verification of concurrent systems in CESAR. In: Dezani-Ciancaglini, M., Montanari, U. (eds.) Programming 1982. LNCS, vol. 137, pp. 337–351. Springer, Heidelberg (1982)
30. Ranzato, F., Tapparo, F.: Generalized strong preservation by abstract interpretation. Journal of Logic and Computation 17(1), 157–197 (2007)
31. Saïdi, H.: Model checking guided abstraction and analysis. In: Palsberg, J. (ed.) SAS 2000. LNCS, vol. 1824, pp. 377–396. Springer, Heidelberg (2000)
32. Valmari, A.: The state explosion problem. In: Reisig, W., Rozenberg, G. (eds.) APN 1998. LNCS, vol. 1491, pp. 429–528. Springer, Heidelberg (1998)
Timing Verification of GasP Asynchronous Circuits: Predicted Delay Variations Observed by Experiment

Prasad Joshi¹, Peter A. Beerel¹, Marly Roncken², and Ivan Sutherland³

¹ Ming Hsieh Department of Electrical Engineering, University of Southern California, Los Angeles, CA 90089, USA, {prasadjo,pabeerel}@usc.edu
² Strategic CAD Labs, Intel Corporation, 5200 NE Elam Young Parkway, Hillsboro, OR 97124, USA, [email protected]
³ VLSI Research Group, Sun Microsystems, 16 Network Circle, Menlo Park, CA 94025, USA, [email protected]
Abstract. This paper reports spreadsheet calculations intended to verify the timing of 6-4 GasP asynchronous Network on Chip (NoC) control circuits. The Logical Effort model used in the spreadsheet estimates the delays of each logic gate in the GasP control. The calculations show how these delays vary in response to differing environmental conditions. The important environmental variable is the physical distance from one GasP module to adjacent modules because longer wires present greater capacitance that retards the operation of their drivers. Remarkably, the calculations predict correct operation over a large range of distances provided the difference in the distances to predecessor and successor modules is limited, and predict failure if the distances differ by too much. Experimental support for this view comes from the measured behavior of a test chip called “Infinity” built by Sun Microsystems in 90 nanometer CMOS circuits fabricated at TSMC. Keywords: Network-on-chip, relative timing verification, GasP circuits.
1 Introduction – Network on a Chip (NoC)

Advances in integrated circuit technology have produced chips of amazing speed containing hundreds of millions of transistors. These same advances have forced designers to put more emphasis on communication systems within such chips for four reasons. First, it costs energy to send signals on wires, and an increasing fraction of the energy budget is spent on communication. Second, the wires cost chip area. Modern chips have about a dozen layers of wiring to provide the wiring space required. Third, finding routes for the maze of wires, though highly automated, is becoming an increasingly daunting task. And fourth, the sheer complexity of modern designs requires their partition into more manageable sub-pieces. For these reasons, there has been much recent discussion of system on a chip (SoC) designs constructed from separate parts linked together by a network on the chip (NoC). Such a design approach can partition the system into separate parts that can be
designed concurrently or even re-used from previous products. Concentrating the communication needs of such a design into a network with pre-specified properties provides simple interfaces to which each separate part can attach. Moreover, it greatly simplifies the final assembly of the separate designs into the final SoC. By now there is a diverse family of NoC designs. Many of these communicate the presence or absence of data as well as the data itself, using one of a variety of handshake protocols for the purpose, see e.g. [1]. Such handshake protocols require signals that travel both forward and backward along the network; forward signals to indicate the presence of new data that could progress through the network and backward signals to indicate the presence of space into which such data may fit. Families of NoC circuits generally include special modules that can branch and merge arms of the network. Such modules may, for example, send an incoming message to several outputs in a “broadcast” mode, or may send the message to only one of several outputs in an “addressable” mode. Other modules may join messages from two or more inputs, again either by concatenation of simultaneously-arriving messages, or on a “first-come-first-served” basis. Such modules form the basis for a wide variety of network topologies. The experimental chip described in this paper uses a two-way addressable Branch module and a two-way first-come-first-served Merge module.
2 Single Track A particularly interesting form of forward and back NoC handshake protocol is called “single track” signaling [3][12][2]. In a single track protocol a single wire carries both the forward and backward handshake signals. A sender briefly drives the wire to one logic level, signaling the presence of data on adjacent data wires. The receiver, noticing that change in the wire’s state, copies the corresponding data and then briefly drives the wire to the other logic state to indicate that it has absorbed the previous data value. Single track signaling is attractive not only because a single wire occupies less space, but also because single track signaling consumes minimum energy per cycle. A transition must pass in each direction, and the single wire does exactly that with automatic return to the initial state after each handshake. These two advantages are offset by a timing issue inherent in the word “briefly.” Because sender and receiver share the signaling wire and drive it in opposite directions, each must take care to cease its drive promptly so that the other has free use of the wire. Proper operation of single-track systems depends on the proper behavior of each participant to drive only briefly. The GasP family of asynchronous control circuits discussed in this paper is such a “single track” system [9]. The 6-4 GasP family gets its name from the six logic gates in the forward direction between each stage and its successor, gates A B C D E F shown in Fig. 1, and the four logic gates in the backward direction, gates A B C X in the figure. The longer delay appears in the forward direction because copying data forward requires action on the part of data latches, whereas moving a “bubble” backwards to indicate that a register has become empty requires none. Note also that the 6-4 GasP circuit has sets of five inverting gates that form closed loops. One such loop, called the successor loop, involves gates A B C D E. Whenever the inputs to the AND function of gate A cause gate A to act, the successor loop will
change the state of the successor state wire in such a way as to discontinue that action. Similarly, the five gates A B C X F form a predecessor loop that serves a similar purpose. The five gates in each loop form, in effect, a pair of five-inverter ringoscillators coupled to neighbors by the AND logic function inside the GasP module and the state wires through which the module communicates with its neighbors. 2.1 Timing in Single Track Circuits Each participant in a single-track signaling protocol must cease driving the state wire soon enough to make room for the action of the other participant. Were a participant to drive the wire for too long a time, both might drive it concurrently in opposite directions, consuming unnecessary energy and producing an indeterminate logic signal. How is this to be avoided? Some single-track systems [3][2] make use of the analog properties of the state wire. Each participant drives the wire “long enough” for it to pass some threshold voltage that will alert the other participant. Other singletrack systems, including GasP, depend on the relative timing of logic gates to avoid drive conflict at the state wire. Careful choice of the transistor widths in GasP circuits enables the two participants to operate quickly while avoiding conflict. The transistors in the 6-4 GasP circuits reported here are chosen to be strong enough so that each logic gate has approximately the same delay. This is possible because all but two of the logic gates drive fixed loads. The AND function, called A in Fig. 1, drives only its own output capacitance, the capacitance of the relatively short wire to the inverter called B, and the input capacitance of inverter B. Likewise, B drives only itself, a short wire, and inverter C. In addition to driving both inverter D and the NMOS transistor X and the wires to them, inverter C must drive the rather large load presented by the long control wire to the many latches that will capture the data. The figure labels this load L1. Thus inverter C tends to be rather large, but its load is the same in every module.
Fig. 1. Two stages of 6-4 GasP. From AND to AND, the forward path is 6 gates: ABCDEF; the backward path is 4 gates: ABCX; the successor loop is 5 gates: ABCDE; and the predecessor loop is 5 gates: ABCXF.
Only the two lone transistors, E and X, drive loads that vary from module to module. Their major load is the capacitance of the state wire between modules, labeled L2 in the figure. If the neighbor module happens to be nearby, L2 will be small, but if it happens to be far away, L2 may be much larger. The module designer cannot know the exact length of the state wire until the module has been placed in the system. Thus the designer of a system of GasP modules faces a choice; either opt for identical modules with variable delay or opt for custom modules with constant delay. This is an unpleasant choice because both options present problems. The choice of constant delay requires specialized transistor sizing and thus unique layout for each module after establishing the distance to its neighbors. The choice of identical modules causes variation in performance with distance. Although some variation in performance is acceptable in an asynchronous system, excessive variation may cause failures as we shall describe below.
3 The Problem Faced with the choice between identical modules and constant delay, the VLSI Research Group at Sun Microsystems chose identical modules. The Sun group’s earlier efforts with 4-2 GasP [12] had shown them how hard it is to tailor the transistors in each module to the length of the wires between modules. First, such customization requires very late binding of transistor widths after layout is complete, and second, it requires calculation of wire capacitance. Accurate calculation of wire capacitance is very difficult because such calculation requires complete knowledge of what structures lie adjacent to the wire throughout its length. The Sun group now uses 6-4 GasP rather than the faster but more demanding 4-2 GasP precisely to relax the tight constraints on delay presented by the faster 4-2 GasP circuits. The problem remains, however, to understand the conditions under which an identical set of 6-4 GasP modules will operate correctly in the face of variable distance between modules. This paper addresses only the handshake operations, omitting any discussion of data validity. The Sun group did electrical simulations of the behavior of modules in environments with long and short state wires. These simulations used the SPICE electrical simulation tool to simulate the behavior of each and every transistor, wire capacitance, and logic function, a fairly laborious and compute-intensive process. The results of these simulations showed proper operation over the range of state wire lengths actually needed in the test chip. Now that the chip has been tested, we can happily report that it does work. However, to use the 6-4 GasP modules in a more complex design requires a deeper understanding of the conditions under which such a design will be valid. Because the GasP modules in a more complex design will be placed automatically, such understanding could easily be converted into constraints on the automatic placement software. With this in mind, Sun was eager to enlist the help of the authors of this paper. Together we sought to identify the failure mechanisms possible to the 6-4 GasP design in a much wider range of environments. What can be said about the conditions under which GasP circuits will or will not operate properly? It is always useful to have an outside group, the “red team,” identify flaws in a design. A red team has no vested
interest in showing that the design is correct and every motive to show how it will fail. Sun felt that the value contributed by a red team would outweigh any loss of proprietary information that might result.
4 Relative Timing Our timing verification approach is based on relative timing constraints in which we explicitly identify timing constraints in the form of ordering of signal transitions within the circuit [8]. In addition to providing insight into the circuit behavior, we are exploring whether we can verify asynchronous circuits using standard commercial static timing analysis tools, capturing the margins necessary to cover all sources of delay variability that asynchronous circuits may exhibit. Using this relative timing approach, we identified and formally modeled the intended behavior and circuit implementation of the GasP circuits and identified two important failure modes. Both these failure modes occur because the two loops of logic gates in each GasP module meet at the AND function. Completion of either of the loops shuts off both of them. Failure can result if either of these two loops completes before the other is adequately underway. The timing constraints involve the difference in delay of the two loops. The constraints are illustrated in Fig. 2 in which the behavior of the GasP design illustrated in Fig. 1 is represented by up- and down-going transitions (+ or -) of key signals to the latches and neighboring GasP stages: FIRE, PRED, and SUCC. The constraints are illustrated by the pair of paths in Fig. 2(a1)-(a2) and Fig. 2(b1)-(b2); the delay of the dotted path must be smaller than the delay of the solid path. These timing constraints help ensure a full rail-to-rail swing on the state wires between stages by turning the drivers on long enough to drive the state wire high or low before turning the drivers off. The first timing constraint, illustrated in Fig. 2(a1)-(a2), states that PRED of the next GasP stage should go high before PMOS gate E of the current stage stops driving it; see Fig. 1. Presence of a large state wire load from SUCC of stage (a1) to PRED of stage (a2) and a small state wire load on PRED of stage (a1) can lead to a violation of this constraint. Moreover, if PMOS gate E in stage (a1) fails to drive the state wire all the way up before the predecessor loop shuts E off, data moving forward may be lost.
Fig. 2. Relative Timing Constraints on GasP
The second timing constraint, illustrated in Fig. 2(b1)-(b2), states that SUCC of the previous GasP stage should go low before NMOS gate X of the current stage stops driving it; see Fig. 1. Presence of a large state wire load from PRED of stage (b2) to SUCC of stage (b1) and a small state wire load on SUCC of stage (b2) can lead to a violation of this constraint. Moreover, if NMOS gate X in stage (b2) fails to drive the state wire all the way down before the successor loop shuts X off, bubbles moving backward may be lost, resulting in duplication of data. Notice that each constraint involves the relative ordering of two actions on the state wire that leads to an adjacent module. When the first of the two actions completes, it turns off the driving action of both loops. Thus a very fast predecessor loop may prematurely terminate the drive of a slower successor loop, and vice versa. Armed with this understanding the question remains: “How much faster?”
5 Applying Logical Effort

The Theory of Logical Effort [13] offers a way to estimate the delay in logic gates. According to the Logical Effort model, the delay in a logic gate depends on the logic function it implements, and is directly proportional to the load that it drives and inversely proportional to the width of its transistors. In general, a logic gate can be made faster by making its transistors wider or by reducing the load that it drives. One result of this analysis is that the fastest overall operation of a three-gate string of logic is obtained by making the “size” of the central gate the geometric mean of the “sizes” of the first and last gate. If the central gate is too small, the first gate will operate faster, but the central gate will be too slow. If the central gate is too large, the central gate will operate faster, but the first gate will be too slow. Logical Effort teaches that the “size” of a gate depends not only on the width of its transistors, but also on its logic function. According to the Logical Effort model, the delay in a logic gate is given by:

d = gh + p    (1)
where d is the delay in the gate in normalized time units, g is the Logical Effort of the gate, h is the ratio of the load it drives to the input load it presents to its driver, and p is a constant delay that arises from the diffusion capacitance connected to the gate’s output. The Logical Effort, g, depends on the logic function implemented by the gate. Table 1 shows the logical effort of some logic gates. The Theory of Logical Effort ignores the time it takes for signals to change from one logic level to another. More accurate models can take this "rise time," or its reciprocal, "slew rate," into account at the cost of computational complexity. More accurate models are particularly important for estimating the delay introduced by the resistance and capacitance of long wires. A long wire not only adds capacitive load to its driver, retarding the signal at its source, but also delivers any change in logic level with degraded rise time. The available tools that model the effects of rise time degradation in long wires are suitable for modeling only synchronous systems. Peter Beerel and his students at USC seek to adapt such tools to asynchronous systems. The analysis offered in this paper models only how the capacitance of the wires retards the gate that drives them and ignores the delay inherent in the wire itself. The simpler Logical
Effort model used in this paper is useful because none of the wires considered here is long enough to present significant electrical resistance.

Table 1. The Logical Effort for some simple logic gates

Gate type        | Number and width of NMOS | Number and width of PMOS | Total transistor width | Width seen per input | Logical Effort per input
Inverter         | 1 @ W                    | 1 @ 2W                   | 3W                     | 3W                   | 1 (norm.)
2 input NAND     | 2 @ 2W                   | 2 @ 2W                   | 8W                     | 4W                   | 4/3
2 input NOR      | 2 @ W                    | 2 @ 4W                   | 10W                    | 5W                   | 5/3
2 input XOR      | 4 @ 2W                   | 4 @ 4W                   | 24W                    | 12W                  | 4
NMOS transistor  | 1 @ W                    | none                     | W                      | W                    | 1/3
PMOS transistor  | none                     | 1 @ 2W                   | 2W                     | 2W                   | 2/3
The logical efforts in Table 1 are normalized to the inverter whose logical effort is arbitrarily set to one. A NAND gate is worse than an inverter at providing electrical amplification by the ratio 4 to 3, and a NOR gate is worse by the ratio 5 to 3. The difference between NAND and NOR comes from the fact that in most CMOS processes, a PMOS transistor conducts about half as much current as an NMOS transistor of the same width. Except for the single transistor gates, these logical effort numbers are at least one because, unlike inverters, logic gates must use transistors in series or parallel combinations. Transistors in series conduct current less well than individual transistors, and transistors in parallel require more input charge than does a single transistor. XOR is a particularly nasty function that involves parallel combinations of series transistors. Another way of thinking about this is that the output of an XOR logic gate will change in response to any single input change. This makes XOR a “more complex” logic function than either NAND or NOR and gives it correspondingly higher logical effort. In both the NAND and the NOR logic gates, the AND function comes from two series transistors. Both must conduct before current can flow through the series pair to the output. In the NAND logic gate, the AND function appears in NMOS transistors, whereas in the NOR logic gate, the AND function appears in PMOS transistors. The symbol used for logic gate A in Fig. 1 indicates that PMOS transistors provide its AND function and therefore the logical effort of gate A is 5/3.
One of us, Prasad Joshi, applied the Logical Effort model to the 6-4 GasP circuits. He built a spreadsheet much like that shown in Table 2 to calculate the delays earlier identified as critical to correct operation. Table 2 greatly simplifies the sizes and wire lengths in order to offer simple numbers that are, nevertheless, approximately correct. The size S of a gate is a measure of its drive strength and depends on the width of its transistors. Table 2 normalizes wire lengths as the size of an inverter that would present an equal load. Logical effort enters Table 2 as an increase in input load rather than as increased delay. The longer wire that makes L2 = 150 was the design target. Notice that for this case the gates all have approximately the same total delay, in the range 4 to 5.
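Before turning to the spreadsheet of Table 2, Equation (1) and Table 1 can be restated executably. This Python fragment is our own illustration, the function and dictionary names are invented, and the example numbers are not taken from the chip:

LOGICAL_EFFORT = {       # per-input logical effort g, from Table 1
    'inverter': 1.0,
    'nand2': 4.0 / 3.0,
    'nor2': 5.0 / 3.0,
    'xor2': 4.0,
    'nmos': 1.0 / 3.0,
    'pmos': 2.0 / 3.0,
}

def gate_delay(gate, load, input_load, parasitic=1.0):
    """Normalized delay d = g*h + p, with electrical effort h = load / input_load."""
    g = LOGICAL_EFFORT[gate]
    return g * (load / input_load) + parasitic

# An inverter driving four copies of its own input capacitance, with one unit
# of parasitic delay: d = 1*4 + 1 = 5 normalized delay units.
print(gate_delay('inverter', load=4.0, input_load=1.0))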
Table 2. Spreadsheet calculations of delays in the 6-4 GasP module of Fig. 1. The three columns with bold numbers are fixed by design: size S and self-delay P follow from the gate implementation and the gate complexity, and wire load WL follows from the wire dimensions. Note that in the calculation of the self delay for E and X we count the diffusion output capacitance of both E and X, because both their outputs connect to the state wire. Using Table 1 for the logical efforts, IL = (S / logical effort of the gate). The formulas for the delays are as follows: D1 = GL/S, D2 = WL/S, Total delay = D1 + D2 + P.
Gate name  | Drives (next gates and wire) | Size (S) | Load per input (IL) | Next gate load (GL) | Delay from GL (D1) | Self delay (P) | Next wire load (WL) | Delay from wire (D2) | Total delay
A          | B + wire                     | 30       | 18                  | 40                  | 2.2                | 2              | 9                   | 0.5                  | 4.7
B          | C + wire                     | 40       | 40                  | 100                 | 2.5                | 1              | 20                  | 0.5                  | 4
C          | D + X + wire                 | 100      | 100                 | 20+20+200           | 2.4                | 1              | 100                 | 1                    | 4.4
D          | E + wire                     | 20       | 20                  | 40                  | 2                  | 1              | 20                  | 1                    | 4
E, L2=15   | A + F + L2                   | 60       | 40                  | 30+10               | 0.67               | 1 (with X)     | 15                  | 0.25                 | 1.9
E, L2=150  | A + F + L2                   | 60       | 40                  | 30+10               | 0.67               | 1 (with X)     | 150                 | 2.5                  | 4.2
F          | A + wire                     | 10       | 10                  | 30                  | 3                  | 1              | 5                   | 0.5                  | 4.5
X, L2=15   | A + F + L2                   | 60       | 20                  | 30+10               | 0.67               | 1 (with E)     | 15                  | 0.25                 | 1.9
X, L2=150  | A + F + L2                   | 60       | 20                  | 30+10               | 0.67               | 1 (with E)     | 150                 | 2.5                  | 4.2
Unlike Table 2, which shows only two values of L2, Prasad's spreadsheet shows all delays for a wide variety of lengths of the predecessor and successor state wires. Although we initially thought that the ratio of the lengths of these wires might be important, Prasad's spreadsheet revealed instead that the difference in the lengths of these wires matters. Table 3 summarizes the total delay for several paths of interest. A tenfold change in L2 results in a change in these path delays of only about 10% to 15% because of the fixed delays of gates A B C D and F.

Table 3. Total delay for various paths

Path name        | Path   | Delay, L2=15 | Delay, L2=150 | Difference
Forward latency  | ABCDEF | 23.5         | 25.8          | 9.6 %
Backward latency | ABCX   | 15           | 17.3          | 15 %
Successor loop   | ABCDE  | 19           | 21.3          | 12 %
Predecessor loop | ABCXF  | 19.5         | 21.8          | 12 %
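The spreadsheet calculation condenses into a few lines of Python (ours). The fixed per-gate totals and the drive parameters of E and X are copied from Table 2; only the wire-load term of gates E and X varies with the state-wire load L2. The skew function at the end is our own illustrative reading of the Section 4 constraints (trouble starts when the successor and predecessor loops finish at very different times), not the authors' exact verification condition:

# Per-gate total delays from Table 2 for the gates whose loads do not depend on L2.
FIXED = {'A': 4.7, 'B': 4.0, 'C': 4.4, 'D': 4.0, 'F': 4.5}

def drive_delay(l2, size=60.0, d1=0.67, p=1.0):
    """Delay of state-wire drivers E and X: gate-load term + self delay + wire term L2/size."""
    return d1 + p + l2 / size

def path_delay(path, l2_succ, l2_pred):
    """Sum gate delays along a path; E drives the successor wire, X the predecessor wire."""
    total = 0.0
    for gate in path:
        if gate == 'E':
            total += drive_delay(l2_succ)
        elif gate == 'X':
            total += drive_delay(l2_pred)
        else:
            total += FIXED[gate]
    return total

# Reproduces Table 3 when both state wires have the same length:
for l2 in (15, 150):
    print(l2, [round(path_delay(p, l2, l2), 1)
               for p in ('ABCDEF', 'ABCX', 'ABCDE', 'ABCXF')])

def loop_skew(l2_succ, l2_pred):
    """How far apart the two five-gate loops finish; grows with the wire-length difference."""
    return abs(path_delay('ABCDE', l2_succ, l2_pred) - path_delay('ABCXF', l2_succ, l2_pred))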
6 The Infinity Chip The VLSI group at Sun Microsystems designed a test chip called “Infinity” to demonstrate 6-4 GasP modules suitable for use in a network on a chip [10]. Infinity tests the operation, speed, and power consumption of the various types of modules required, with special focus on the modules for branching and merging networks. The Branch and Merge modules are electrically more complex than the simple 6-4 GasP circuit shown in Fig. 1, but have many similar elements. Like the circuit of Fig. 1, the Branch and Merge modules each provide 6 gates in the forward direction and 4 in the backward direction. Moreover, each logic gate used in the Branch and Merge modules has a delay in the range of 4 to 5 just like the logic gates in the simple GasP module1. Thus we expect performance from the Branch and Merge modules similar to that of the simpler circuits. Logically, the Infinity chip consists of two rings of 100 GasP modules each, as shown in Fig. 3. Each module passes on a word of 52 bits of data whose latches and the wire to reach them form the large load L1 in Fig. 1. The rings share a common center section from Merge to Branch inclusive of 50 modules. At the start of the center section a Merge module combines input from the two rings upon demand, using an arbiter to decide which input was presented first. At the end of the center section, a Branch module separates the data elements according to one of the bits that they carry. This bit marks each data element as “belonging” to either the left or the right ring. Geometrically, the Infinity chip is arranged as three columns of modules; see Fig. 3. Each module is 288 lambda2 high and about 5000 lambda wide. Their width requires that the center-to-center distance of the columns exceed 5000 lambda. The center column is the shared part of the two rings; data flows down through the center column from the Merge module near its top to the Branch module near its bottom. Data flows back upwards through the outer columns, thus completing each of the two logical rings. Each outer column includes a counter to measure how many data elements pass through the column. Each outer column also includes two “proper stoppers” to properly fill and drain data elements from each ring at the beginning and end of each experiment. Low speed scan techniques load data into the rings prior to each experiment and unload the data afterwards to check for errors. Infinity’s designer, Ivan Sutherland, anticipated that the task of copying data elements from one column to another was “harder” than passing data within the columns because it requires state wires that are approximately 16 times longer, namely about 5000 rather than about 300 lambda long. To avoid placing a large electrical burden on the Branch module in addition to its inherently larger logical burden, he chose to isolate the Branch from the column-crossing section. Thus it is, as Fig. 3 shows, that the pair of buffer stages immediately below the Branch drive data to the adjacent column. A similar pair of buffer stages above the Merge module offers the Merge module similar protection from excess electrical load. 1 2
The delay in the Merge module will be longer in the rare case of contention at its arbiter. Lambda is a distance measure characteristic of the manufacturing process.
Fig. 3. Layout view of the Infinity experiment. It occupies an area of about 0.5 mm².
7 Experimental Results

Performance experiments with asynchronous rings typically measure throughput versus occupancy [14][7][5]. It is easy to see that plots from such experiments show a roughly triangular shape. If the ring contains no data, there can be no throughput. Similarly, if every stage in the ring contains a data element, none can move, and again there's no throughput.

Let us first consider the low-occupancy side of such a plot. A single data element passes once around the ring by passing through each and every module once. Two data elements make the same trip in about the same time, achieving twice the throughput. Three data elements offer three times the throughput, and so forth. Thus for small occupancy, throughput increases linearly with occupancy.

Now let us consider the high-occupancy part of such a plot. If the ring is completely full, there's no throughput. If all but one module is occupied, the blank space, or bubble, can circulate backwards, just as space for your automobile moves backward when you advance one car length on a very crowded highway. Two bubbles make a complete circuit of the ring backwards in about the same time as one, and so throughput increases roughly linearly with the number of bubbles. The bubble side of the graph is steeper than the data side because 6-4 GasP circuits move bubbles faster than data. A 6-4 GasP circuit moves data forward with a latency of 6 logic gates and moves bubbles backward with a latency of 4 logic gates. Somewhere in the middle these two linear regions meet, giving the roughly triangular shape often seen in such plots.

One might expect a sharply defined maximum at an occupancy defined by the slopes of the two linear regions, but such a sharp top is rarely observed. The shape near maximum throughput is defined by the details of the delay in the AND function when both its inputs change at the same time. Real AND gates suffer more delay when both inputs change together than when one input changes well before the other [5]. This effect gives the plot a smooth curved top. It also sometimes happens that a single module in the ring or a few modules are inherently slower than the others. A slow stage places a maximum limit on throughput, giving the plot a flat top, as can be seen in Fig. 4, which shows throughput versus occupancy for each of Infinity's rings operated separately. We notice from the figure that each of Infinity's rings is limited to a throughput of about 3.8 * 10^9 data items per second. Extrapolation of the two linear portions of the plot, however, suggests that speeds of about 4.2 * 10^9 data items per second might be possible, a 10% increase.

Fig. 4. Throughput versus occupancy for Infinity's rings operated separately

7.1 Single-Column Experiment

A separate and simpler experiment on the same chip as the Infinity experiment, and using the same 6-4 GasP modules, contains only a single FIFO ring, with all its modules arranged in a single column. Upward bound and downward bound stages alternate in the column, so that no stage is more than two modules away from its logical neighbors. This simpler experiment avoids the column change required in Infinity. The measured peak throughput of this simpler experiment is about 4.2 * 10^9 data items per second, about 10% higher than either of Infinity's two rings operating alone. Naturally, we want to know which module in the Infinity design limits its performance. The first suspects, of course, are the Branch and Merge modules because of their greater complexity. Neither of these modules appears in the simpler experiment.

7.2 The Cross-Column Hypothesis

Alternatively, the speed limit might be the result of the long state wire required to pass from one column to the adjacent column. Indeed, Infinity includes special buffer modules to drive this wire, but their PMOS and NMOS drive transistors, E and X in Fig. 1, are the same width as those in other modules that drive shorter wires. Moreover, the spreadsheet results summarized in Table 3 suggest that the greater wire length involved might result in a 10% to 15% decrease in speed. Thus we offer the hypothesis that the long state wires for the cross-column transfer limit Infinity's performance. This hypothesis renders harmless the greater logical complexity of the Branch and Merge modules. Is there an experiment we can do with Infinity to support this Cross-Column Hypothesis?

7.3 The Supporting Experiment

Indeed there is. Consider the case where each ring contains data elements that mingle as they pass through the center column. The Branch module delivers data to the two side columns via the two buffer stages immediately below it. If it should happen that
the data elements alternate as they pass through the center section, the Branch module will deliver them alternately to the two sides. In this case, the data rate needed to move from the center column to either side would be only one half of the data rate sustained in the center column. Such a reduction in data rate would leave twice as much time for crossing to the adjacent column, much more than the task requires. Any small excess delay of 10% to 15% for crossing from one column to the next will thus be masked. Of course, to achieve its maximum throughput the Branch module must receive a sufficiently fast supply of elements of alternate types.

Similarly, the Merge module at the top of the center column receives data elements from each ring via the two buffer stages immediately above it; see Fig. 3. The Merge module contains an arbiter to decide which element arrives first and dispatch it down the center column. An element observed slightly later at the other input to the Merge must wait for the next cycle of the Merge module. If there is always a data element waiting at the beginning of each cycle of the Merge module, we say that the Merge module is "saturated". In saturation it happens that the electrical properties of its arbiter cause the Merge module to send forward data elements alternately from its two inputs. When the Merge module is saturated, it will accept data elements from an individual side column at only half of its maximum data rate. This reduction in data rate leaves twice as much time for crossing from the side column, much more than the task requires. Thus, just as alternate delivery by the Branch module masks the cross-column delay at the bottom, so alternate receipt by the Merge module likewise masks the cross-column delay at the top of the center column. Moreover, because the Merge module forwards data alternately from the two sides, the center stream meets the alternating condition assumed for the Branch module.

7.4 Confirming the Hypothesis

How many data elements must there be in the side column to achieve at least half the maximum throughput? We can discover the answer by examining Fig. 4 which shows the throughput for any given occupancy. If we need half the maximum throughput, we must have occupancy of at least 24% and not more than 80%. Why is this? With less occupancy there aren't enough data elements to sustain the throughput, and with greater occupancy there is too much congestion. Applying the 24% to 80% numbers to the 50 modules of the side column and its buffers requires them to contain at least 12 data elements and not more than 40, a maximum that leaves enough bubbles, namely 10, to achieve the required throughput. In addition, of course, the 50 modules of the shared center column at maximum throughput contain a total of about 60% * 50 = 30 data elements. Because these alternate from the two rings, half of them must belong to each ring. The total number of elements in each ring is the sum of its unshared data elements and its shared data elements, a number that lies between 12 + 15 = 27 and 40 + 15 = 55. We expect occupancy in each of the two rings in this range to mask any cross-column delay.

Unfortunately, Infinity lacks a counter able to measure directly the throughput of the center column. However, the throughput of the center column must be the sum of the throughputs of the two side columns. Thus we can use the sum of the two side throughputs to test the hypothesis that the cross-column delay limits the single-ring throughput of Infinity.
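The shape of these throughput arguments can be reproduced with a small back-of-the-envelope model (our own sketch, not part of the original experiments): per stage, data advances with a forward latency of 6 logic gates and bubbles move backward with a latency of 4, so an N-stage ring holding k items runs at the smaller of the data-limited and the bubble-limited rate. The gate delay tau below is an arbitrary placeholder; only the relative shape of the curve matters.

    #include <algorithm>
    #include <cstdio>

    // Idealized throughput of an N-stage 6-4 GasP ring holding k data items.
    // Forward latency per stage: 6 gate delays; backward (bubble) latency: 4.
    double ring_throughput(int N, int k, double tau) {
        if (k <= 0 || k >= N) return 0.0;                  // empty or full: nothing moves
        double data_limited   = k / (N * 6.0 * tau);       // items circulating forward
        double bubble_limited = (N - k) / (N * 4.0 * tau); // bubbles circulating backward
        return std::min(data_limited, bubble_limited);
    }

    int main() {
        const int    N   = 100;   // one of Infinity's rings
        const double tau = 1.0;   // placeholder gate delay; relative values only
        for (int k = 0; k <= N; k += 10)
            std::printf("occupancy %3d%%  throughput %.4f (arbitrary units)\n",
                        k, ring_throughput(N, k, tau));
        return 0;
    }

In this idealization the two linear regions meet at an occupancy of 60%, consistent with the 60% center-column occupancy used in the calculation above; the measured curve of Fig. 4, with its rounded and flattened top, is what yields the 24% to 80% band quoted in the text.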
Experimental confirmation of the Cross-Column Hypothesis comes from an experiment in which each ring holds 45 data elements. In this condition we observe a combined throughput of about 4.2 * 10^9 data elements per second. This higher combined throughput is very similar to that of the simpler single-column experiment. Analytic confirmation of the Cross-Column Hypothesis comes from Table 3 which suggests that the longer state wires required for crossing from one column to another will retard the buffer modules about 10% to 15%. The near agreement between analysis and experiment gives us confidence that the flat top of Fig. 4 comes from delay in the cross-column buffers and that the Branch and Merge modules can operate at their full design speed.
8 Conclusions

One might be tempted to conclude that the cross-column drivers in Infinity are too small. It would have been possible to use wider drive transistors in the buffer stages to speed up the cross-column transfer. However, examination of Table 2 illustrates otherwise. Notice that for the long wire case in Table 2 the delays of transistors E and X are about the same as the delays of all the other logic gates. In fact, the width of transistors E and X was chosen for the long wire case, but the same E and X widths appear in all stages. The close proximity of stages in a single column makes the state wires between them too short for the chosen width of their drive transistors. Thus, rather than concluding that Infinity's cross-column drivers operate too slowly, we should conclude that the single-column modules, including the Branch and Merge, operate too quickly. We can improve the design by providing a few additional module types with smaller transistors for driving short state wires. Such a design would have reduced transistor width, and thus would consume less energy. All its stages would operate with about the same delay, albeit with the longer delay of the cross-column stages.

The most important result from this work is that a limited set of module types can provide reliable operation for an extended range of state wire lengths. GasP circuits fail when the delays in the predecessor and successor loops differ by too much. Notice, however, that each loop includes the delay of gates A, B, and C in Fig. 1. Moreover, the delays of gates D and F can be made approximately the same. Thus for such a timing failure to occur, the delay in gate E or X must exceed the combined delay of the other logic gates, A, B, C, and D or F. It would require an excessively long state wire to delay E or X so much, a wire long enough to span many of Infinity's columns. Moreover, such a wire would have too much electrical resistance to be useful. The analysis offered here bolsters our confidence that the 6-4 GasP family of control circuits may prove applicable to a wide variety of NoC conditions.

Acknowledgements. The authors wish to thank Intel and Sun and the sponsors of SRC for their support of this work. Our thanks go also to the members of the VLSI Research group at Sun Microsystems for design, layout, and test of the Infinity chip. Special thanks to Russell Kao and Hesam Fathi Moghadam who provided test results for Infinity, and to Igor Benko and Jon Lexau who helped with its final layout and fabrication. The authors would also like to thank Pankaj Golani, a PhD student at USC, for very insightful discussions on relative timing constraints for more general single-track handshaking.
References

1. Beerel, P.A., Roncken, M.E.: Low Power and Energy Efficient Asynchronous Design. Journal of Low Power Electronics 3(3), 234–253 (2007)
2. Beerel, P.A., Ferretti, M.: High Performance Asynchronous Design Using Single-Track Full-Buffer Standard Cells. IEEE Journal of Solid-State Circuits 41(6), 1444–1454 (2006)
3. van Berkel, K., Bink, A.: Single-Track Handshake Signaling with Application to Micropipelines and Handshake Circuits. In: 2nd IEEE International Symposium on Advanced Research in Asynchronous Circuits and Systems, pp. 122–133 (1996)
4. Coates, W.S., Lexau, J.K., Jones, I.W., Fairbanks, S.M., Sutherland, I.E.: FLEETzero: An Asynchronous Switching Experiment. In: 7th IEEE International Symposium on Asynchronous Circuits and Systems, pp. 173–182 (2001)
5. Ebergen, J.C., Fairbanks, S., Sutherland, I.E.: Predicting Performance of Micropipelines Using Charlie Diagrams. In: 4th IEEE International Symposium on Advanced Research in Asynchronous Circuits and Systems, pp. 238–246 (1998)
6. Golani, P., Beerel, P.A.: Back-Annotation in High-Speed Asynchronous Design. Journal of Low Power Electronics 2(1), 37–44 (2006)
7. Lines, A.M.: Pipelined Asynchronous Circuits. Master's thesis, Caltech CSTR:1988.cs-tr95-21, California Institute of Technology (1996)
8. Stevens, K.S., Ginosar, R., Rotem, S.: Relative Timing. IEEE Transactions on Very Large Scale Integration (VLSI) Systems 11(1), 129–140 (2003)
9. Sutherland, I.: A Six-Four GasP Tutorial. Technical Report, UCIES#2007-is49, see FLEET web site [15] (2007)
10. Sutherland, I.: Infinity: A Proposed Test Chip. Technical Report, UCIES#2007-is46, see FLEET web site [15] (2007)
11. Sutherland, I.: FLEET – A One-Instruction Computer. Technical Report, UCIES#2006-is30, see FLEET web site [15] (2006)
12. Sutherland, I., Fairbanks, S.: GasP: A Minimal FIFO Control. In: 7th IEEE International Symposium on Asynchronous Circuits and Systems, pp. 46–53 (2001)
13. Sutherland, I., Sproull, B., Harris, D.: Logical Effort: Designing Fast CMOS Circuits. Morgan Kaufmann, San Francisco (1999)
14. Williams, T.E.: Analyzing and Improving the Latency and Throughput Performance of Self-Timed Pipelines and Rings. In: IEEE International Symposium on Circuits and Systems, vol. 2, pp. 665–668 (1992)
15. Fleet web site, http://research.cs.berkeley.edu/class/fleet/
About the Authors

Prasad Joshi is a graduate student at the University of Southern California. He came to USC after completing a B.E. degree in 2006 at the University of Mumbai, India.

Peter Beerel is an Associate Professor of Electrical Engineering at the University of Southern California and is Prasad's advisor. Peter completed his PhD degree in Electrical Engineering at Stanford University in 1994.

Marly Roncken is employed by Intel Corporation, and assigned as Researcher in Residence at the University of California, Berkeley. She has been involved in asynchronous systems since shortly after receiving her master's degree in Mathematics from the University of Utrecht in 1985. Before joining Intel in 1997 she worked for Philips Research in Eindhoven, The Netherlands.
Ivan Sutherland is a Fellow at Sun Microsystems, assigned by Sun to work at the University of California, Berkeley. He received his PhD degree in Electrical Engineering at MIT in 1963.
The Project

The asynchronous VLSI/CAD group at the University of Southern California (USC) led by Prof. Peter Beerel does research in various aspects of making asynchronous circuits a viable alternative to synchronous design. One of its current goals is to extend the application of standard static timing analysis techniques to non-traditional circuits, including high-speed asynchronous pipelines, see e.g. [6]. Heretofore nearly all timing verification tools have been built to validate the operation of synchronous circuits in which a global clock provides a common rhythm of operation to an entire system. In asynchronous designs, however, separate parts of the system each use their own timing signals, thus making it very hard to apply the standard validation tools. The USC group seeks ways to simplify timing verification for asynchronous systems.

The USC group is sponsored in part by the Global Research Collaboration (GRC) program of the Semiconductor Research Consortium (SRC). GRC provides a global forum for pre-competitive collaboration among all segments of the semiconductor industry, universities and government agencies. Marly Roncken mentors the
SRC work at USC for Intel Corporation, and was seeking additional asynchronous design examples to which the USC timing validation tools and understanding might apply. She found these at the University of California, Berkeley (UCB), where Ivan Sutherland has an ongoing collaborative research project between Sun Microsystems and UCB, called Fleet [4][11][15]. Meanwhile, the VLSI Research Group at Sun Microsystems had fabricated an asynchronous test chip for Fleet, called Infinity, in 90 nanometer TSMC technology. The fortuitous juxtaposition of these activities brought the four authors together. Marly’s earlier association with Prof. Willem-Paul de Roever, as his master’s student at the University of Utrecht, leads us to offer this paper in honor of Willem-Paul’s “Declaration of Independence” through retirement from the University of Kiel on 4 July 2008. July 4th is also a very important holiday in the United States, being the anniversary of the date when the United States declared independence from England.
Integrated and Automated Abstract Interpretation, Verification and Testing of C/C++ Modules

Jan Peleska

Department of Mathematics and Computer Science
University of Bremen, Germany
[email protected]
Abstract. Starting from the perspective of safety-critical systems development in avionics, railways and the automotive domain, we advocate an integrated verification approach for C/C++ modules combining abstract interpretation, formal verification and conventional testing. It is illustrated how testing and formal verification can benefit from abstract interpretation results and, vice versa, how test automation techniques may help to reduce the well known problem of false alarms frequently encountered in abstract interpretations. As a consequence, verification tools integrating these different methodologies can provide a wider variety of useful results to their users and facilitate the bug localisation processes involved. When applied to C/C++ software, the problems of aliasing, type casts and mixed arithmetic and bit operations have to be handled on the level of constraint generation. We cope with this problem by using a symbolic interpretation method operating on an abstracted memory model. We describe the available tool support developed by the author, his research group and industrial partners.
1 Introduction

1.1 Objectives
In this contribution an integrated approach to static analysis by abstract interpretation, formal verification by model checking and testing is described. The focus of our contribution is on the verification of C/C++ functions and methods (we use the general term modules to denote both functions and methods). Module verification¹ has its well-defined place in the development life cycle, and static analysis, testing and – though less frequently used in today's industrial practice – formal verification are recommended techniques for this purpose. From the verification specialists' point of view it is advisable to perform these techniques in an integrated manner:

– Test cases can serve as useful counter examples for violated assertions, thereby supporting the formal verification and static analysis processes,
– static analyses frequently uncover non-functional defects² which are unlikely to be detected during functional testing,
– formal verification is the "last resort" when algorithms are too complex to be tested and analysed in an exhaustive way.

To the best of our knowledge, however, tools supporting these activities in an integrated manner do not exist today. It is therefore the purpose of this paper to outline how such an integration can be performed and to present the associated tool support developed by the author's research group in cooperation with industrial partners. Indeed, it will become apparent that such an integration is also beneficial from the tool builders' point of view:

– As will be outlined in Section 2, both functional and structural testing can be regarded as reachability problems as they are typically explored in formal verification by (bounded) model checking.
– Static analysis by abstract interpretation is a powerful means to reduce the state space to be explored for the purpose of test case generation or formal verification.
– Test case generation and the under-approximation techniques used in the constraint solving activities involved are useful for verifying potential errors uncovered by (over-approximating) static analyses.

¹ Following [21], we use the term verification for all activities where a development artefact is checked for compliance with respect to a given specification. In particular, reviews, inspections, formal verifications, static analyses and testing are verification activities.
Partially supported by the BIG Bremer Investitions-Gesellschaft under research grant 2INNO1015B.

1.2 Background and Motivation: Industrial Safety-Critical Systems Development and the Deployment of Formal Methods
According to the standards [21,8,1] the generation of 100% correct software code is not a primary objective in the development of safety-critical systems. This attitude is not unjustified, since code correctness will certainly not automatically imply system safety. Indeed, safety is an emergent property [14, p. 138], resulting from a suitable combination of (potentially failing) hardware and software layers. As a consequence, the standards require that

– the contribution of software components to system safety (or, conversely, the hazards that may be caused by faulty software) shall be clearly identified, and
– the software shall be developed and verified with state-of-the-art techniques and with an effort proportional to the component's criticality.

² In particular, the so-called runtime errors, such as division by zero, array bounds violations, out-of-bounds pointers, unintended endless loops etc.
Based on the criticality, the standards define clearly which techniques are considered as appropriate and which effort is sufficient. The effort to be spent on verification is defined most precisely with respect to testing techniques: Tests should (1) exercise each functional requirement at least once, (2) cover the code completely, the applicable coverage criteria (statement, branch, modified condition/decision coverage) again depending on the criticality, (3) show the proper integration of software on target hardware. Task (3) is of particular importance, since analyses and formal verifications on source code level cannot prove that the module will execute correctly on a specific hardware component.

These considerations motivate the main objectives for the tool support we wish to provide:

1. Application of the tool and the results it provides have to be associated clearly with the development phases and artifacts to be produced by each activity specified in the applicable standards.
2. Application of the tool should help to produce the required results – tests, analysis and formal verifications – faster and at least with the same quality as could be achieved in a manual way.

Requirement 1 is obviously fulfilled, since the tool functionality described here has been explicitly designed for the module verification phase, as defined by the standards mentioned above. Requirement 2 motivates our bug finder approach with respect to formal verification and static analysis: These techniques should help to find errors more quickly than would be possible with manual inspections and tests alone – finding all errors of a certain class is not an issue. As a consequence the tool can be designed in such a way that state explosions, long computation times, false alarms and other aspects of conventional model checkers and static analysis tools, usually leading to user frustration and rejection of an otherwise promising method, simply do not happen: Instead, partial verification results are delivered, and these – in combination with the obligatory tests – are usually much better than what a manual verification could produce within affordable time.

1.3 Related Work
The work presented here summarises and extends results previously published by the author and his research team in cooperation with Verified Systems International GmbH [3,18,19,17]. Many authors point out that the syntactic richness and the semantic ambiguities of C/C++ present considerable stumbling blocks when developing analysis tools for software written in these languages. Our approach is similar to that of [12] in that we consider a simplified syntactic variant – the GIMPLE code – with the same expressive power but far more restrictive syntax than the original language: GIMPLE [11] is a control flow graph representation using 3-address code in assignments and guard conditions. Since the gcc compiler transforms every C/C++ function or method into a GIMPLE representation, this seems to be
an appropriate choice: If tools can handle the full range of GIMPLE code, they can implicitly handle all C/C++ programs accepted by gcc. Therefore we extract type information and GIMPLE code from the gcc compiler; this technique has been described in [15]. In contrast to [12], where a more abstract memory model is used, our approach can handle type casts. The full consideration of C/C++ aliasing situations with pointers, casts and unions is achieved at the price of lower performance. In [7,5], for example, it is pointed out how more restrictive programming styles, in particular, the avoidance of pointer arithmetics, can result in highly effective static analyses with very low rates of false alarms. Conversely, it is pointed out in [25] that efficient checks of pointer arithmetics can be realised if only some aspects of correctness (absence of out-of-bounds array access) are investigated. As another alternative, efficient static analysis results for large general C-programs can be achieved if a higher number of false alarms (or alternatively, a suppression of potential failures) is acceptable [9], so that paths leading to potential failures can be identified more often on a syntactic basis without having to fall back on constraint solving methods.

On the level of binary program code verification impressive results have been achieved for certain real-world controller platforms, using explicit representation models [22]. These are, however, not transferable to the framework underlying our work, since the necessity to handle floating point and wide integer types (64 or 128 bit) forbids the explicit enumeration of potential input values and program variable states.

All techniques described in this paper are implemented in the RT-Tester tool developed by the author and his research group at the University of Bremen in cooperation with Verified Systems International GmbH [26]. The approach pursued with the RT-Tester tool differs from the strategies of other authors [7,5,25]: We advocate an approach where verification activities focus on small program units (a few functions or methods) and should be guided by the expertise of the development or verification specialists. Therefore the RT-Tester tool provides mechanisms for specifying preconditions about the expected or admissible input data for the unit under inspection as well as for semi-automated stub ("mock-object") generation showing user-defined behaviour whenever invoked by the unit to be analysed. As a consequence, programmed units can be verified immediately – this may be appealing to developers in favour of the test-driven development paradigm [4] – and interactive support for bug-localisation and further investigation of potential failures is provided: A debugger supports various abstract interpretation modes (in particular, interval analysis) and the test case generator can be invoked for generating explicit input data for reaching certain code locations indicating the failure of assertions.

With the recent progress made in the field of Satisfiability Modulo Theory [20] powerful constraint solvers are available which can handle different data types, including floating point values and associated non-linear constraints involving transcendent functions. The interplay between path generator, interpreters and solver as handled within the RT-Tester tool has been described in [3]. The solver
implemented in the tool relies on ideas developed in [10] as far as Boolean and floating point constraints are involved, but uses additional techniques and underlying theories for handling linear inequations, bit vectors, strings and algebraic reasoning, see, e.g., [23]. Most methods for solving constraints on interval lattices used in our tool are based on the interval analysis techniques described in [13].

1.4 Overview
In Section 2 an overview of the tool architecture and the methods involved is given. The next two sections describe two of the main techniques that are prerequisites for abstract interpretation, property checking and testing: Symbolic interpretation techniques (Section 3) are used to create memory models, symbolically describing the state transitions performed by the UUT along a single path or a whole portion of the code. The constraint generator (Section 4) evaluates the memory model in order to resolve expressions in such a way that the resulting reachability constraints are suitable for the tool's solver component. Section 5 presents a conclusion.
2 Abstract Interpretation, Formal Verification and Testing – An Integrated Approach

2.1 Specification of Analysis, Verification and Test Objectives
In our approach functional requirements of C/C++ modules are specified by means of pre- and post-conditions (Fig. 1). Optionally, additional assertions can be inserted into an "inspection copy" of the module code. The Unit Under Test (UUT)³ is registered by means of its prototype specification preceded by the @uut keyword and extended by a {@pre: ... @post}; block. Pre- and post-conditions are specified as Boolean expressions or C/C++ functions, so that – apart from a few macros like @pre, @post, @assert and the utilisation of the method name as placeholder for return values – no additional assertion language syntax is required.

The pre-condition in Fig. 1, for example, states that the specified module behaviour is only granted if input i is in range 0 ≤ i ≤ 9 and inputs x, y satisfy exp(y) < x. The post-condition specifies assertions whose applicability may depend on the input data: The first assertion globx == globx@pre states that the global variable globx should always remain unchanged by an execution of f(). The second assertion (line 9) only applies if the input data satisfies −10.0 < y ∧ exp(y) < x. Alternatively (line 12), the return value of f() shall be negative.

It is well-known that pre-/post-condition specifications are considerably facilitated by the optional utilisation of auxiliary variables [2, p. 192]: These variables are characterised by the fact that they are never read in control conditions or assignments to non-auxiliary variables. As a consequence, the existence of auxiliary variables and their associated assignments does not change the (untimed) behaviour of the UUT.

³ We use this term in general for any module to be analysed, verified and/or tested.
 1  double globx;
 2  ...
 3  @uut double f(double x, double y, int i) {
 4  @pre:
 5      0 <= i and i <= 9 and exp(y) < x;
 6  @post:
 7      @assert( globx == globx@pre );
 8      if ( -10.0 < y and exp(y) < x ) {
 9          @assert( f == 1.0/(x - exp(y)) );
10      }
11      else {
12          @assert( f < 0 );
13      }
14  };

Fig. 1. Example: Module specification by pre- and post-conditions
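To make the contract of Fig. 1 concrete, the following function is one hypothetical C/C++ implementation that would satisfy the stated pre- and post-conditions; it is our own illustration and is not taken from the tool or the case study.

    #include <cmath>

    double globx;   // the post-condition requires f() to leave globx unchanged

    // Assumes the pre-condition 0 <= i <= 9 and exp(y) < x already holds.
    double f(double x, double y, int i) {
        (void)i;                              // i only restricts the admissible inputs here
        if (-10.0 < y && std::exp(y) < x) {
            return 1.0 / (x - std::exp(y));   // matches @assert( f == 1.0/(x - exp(y)) )
        }
        return -1.0;                          // any negative value satisfies @assert( f < 0 )
    }

Since the pre-condition already guarantees exp(y) < x, the second branch is reached only for inputs with y ≤ −10.0.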
Assignments can either be directly inserted into the UUT code (so-called code instrumentation) or into the UUT specification by way of pre- and post-processing statements: Figure 2 shows an example of the latter variant, where the two previous return values of function g() are related to the actual function return: The @aux section is used to declare auxiliary variables. The @preprocess statements are executed before each call to the UUT, the @postprocess statements are executed after the UUT has terminated and the post-condition has been evaluated.

Since module behaviour is not only defined by its input-output relation but also by the sequence of sub-function and method invocations, it is necessary to specify

– the expected number and sequence of sub-function invocations,
– the expected input data to be passed by the UUT to its sub-functions,
– constraints about the sub-function behaviour, depending on the input data it receives.

Sub-functions are specified in the same way as the UUT itself. Using auxiliary variables and associated assignments recording the calls and their parameters, the assertions related to sequencing of sub-function calls can be expressed by means of predicates referring to these auxiliary variables. For test purposes, our system automatically generates test stubs (also called mock objects in object-oriented settings): These are functions replacing the original sub-functions invoked by the UUT, and showing the specified sub-function behaviour. The utilisation of stubs has the advantage that exceptional behaviour which rarely occurs in the original sub-function (e. g. report of an arithmetic exception or a hardware error) can easily be simulated in the stub, so that execution of the associated code sections in the UUT can be triggered in a simple way.
@uut double g(double x) {
@aux:
    double z0;
    double z1 = 0;
@preprocess:
    z0 = z1;
@pre:
    0 < x;
@post:
    @assert( fabs( 1.0 - (g + z1 + z0)/3.0 ) < 0.1 );
@postprocess:
    z1 = g;
};
Fig. 2. Module specification using auxiliary variables
For structural testing, the desired coverage can be specified. Currently, we support the coverage criteria required in the standards [21,8]:

– Statement coverage (C0): Every statement is executed at least once.
– Decision coverage (C1): C0 coverage plus the requirement that every decision is evaluated at least once with result true and at least once with result false. This is required, for example, for testing avionic software of criticality level B (A = highest criticality level).
– Multiple condition/decision coverage (MC/DC): C1 coverage plus the requirement that every condition in a decision in the module has taken all possible outcomes at least once, and each condition in a decision has been shown to independently affect that decision's outcome. A condition is shown independently to affect a decision's outcome by varying just that condition while holding fixed all other possible conditions. This is required, for example, for testing avionic software of criticality level A.

The specification of pre-/post-conditions and internal assertions, in combination with the optional utilisation of auxiliary variables, makes it possible to specify safety conditions about the module behaviour. As a consequence, the verification goals are represented by reachability problems which are very similar to the structural coverage test goals: If we consider augmented module versions where each safety condition ψ is represented by an auxiliary code branch if ¬ψ then { raiseError(); } located at the appropriate place in the code, a test reaching the raiseError(); statement would uncover the violation of ψ and at the same time provide a counter example. Conversely, if this statement can be proven to be "dead code", this proves validity of ψ.

Furthermore, the objective to achieve functional test coverage can also be reduced to the problem of achieving structural test coverage, that is, it can also
be transformed into a set of reachability problems. To illustrate this we consider a typical post-condition pattern

    Q ≡ ⋀_i ( C_i(v, v′) ⇒ Q_i(v, v′) )

Given variable vector pre-states v and post-states v′, this post-condition states a number of conditions C_i(v, v′) about the situations to be distinguished. Depending on the applicable situation C_i(v, v′), additional assertions Q_i(v, v′) shall also hold. Functional test coverage would now require creating each of the situations C_i(v, v′), so that the expected outcome Q_i(v, v′) can be checked. Instead of UUT f(), we now consider the augmented function f_aug() shown in Fig. 3. Obviously, statement coverage of f_aug() implies functional coverage of f() in the sense exemplified above.
void f_aug(t1 x1, ..., tn xn) {
    t r;
    if ( P(v) ) {
        // This branch is entered when input data
        // satisfied pre-condition P(v)
        v0 = v;             // Create copy of pre-states
        r = f(x1, ..., xn); // Call the UUT
        // Post-state has changed variable vector v,
        // pre-state is saved in auxiliary variable v0.
        if ( C_1(v0,v) ) {
            assert( Q_1(v0,v) );
        }
        ...
        if ( C_k(v0,v) ) {
            assert( Q_k(v0,v) );
        }
    }
}

Fig. 3. Branch coverage of f_aug() implies functional test coverage of f()
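The same reduction applies to the safety conditions mentioned in Section 2.1: a condition ψ is compiled into an auxiliary branch whose reachability is then analysed. The fragment below is our own illustration of that transformation; raiseError() and the concrete condition are placeholders rather than part of the tool's interface.

    // Safety condition psi for the division below: divisor != 0.
    void raiseError() { /* marker location; reaching it violates psi */ }

    double safe_divide(double total, double divisor) {
        if (!(divisor != 0.0)) {   // if ( not psi ) ...
            raiseError();          // a test reaching this call is a counter example;
        }                          // proving the call dead code proves psi
        return total / divisor;
    }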
For the static analysis objective “Absence of run-time errors” no user-defined specifications are required, since the analysis obligations can be directly extracted from the code. It is possible, however, to choose between bug finder mode and proof mode: The former mode only uncovers run-time errors along the module paths which have been investigated in order to reach the specified test
coverage and verification goals. Each uncovered run-time error is associated with a test case uncovering the erroneous module state; potential runtime errors for which no test cases could be constructed are not reported. The proof mode tries to prove the absence of any runtime error within the module, provided that the specified pre-conditions are met.

2.2 Building Blocks of the Tool Platform
Figure 4 shows the major building blocks of the tool platform which are described in the subsequent paragraphs.

Fig. 4. Building blocks of test automation, static analysis and property verification tool platform: SUT code/model parsers (C++ modules with specifications, UML2.0 statecharts), intermediate model representation, path selector, concrete/symbolic/abstract interpreters operating on the SUT memory model and SUT abstract model, constraint generator, and a constraint solver (Boolean, interval analysis, linear arithmetic, bit-vector, string) producing test data as input assignments and solution set approximations
Intermediate Model Representation and Parser Front-Ends. In order to support various specification formalisms and test paradigms, such as code-based white-box and model-based black-box testing, the system under test and/or its specification are first transformed into an intermediate model representation (IMR) which is independent of the concrete SUT code or specification syntax: The IMR consists of a class library for encoding hierarchic hybrid transition systems. For testing C/C++ modules, a compilation front-end derived from gcc [15] parses the UUT code and generates (1) a control flow graph (CFG) representation in 3-address code of the UUT and (2) a detailed information base supporting queries about types, variables/objects and their sizes. Based on this information, the IMR model of the UUT is instantiated. For model-based test and verification, parsers for UML2.0 and domain-specific languages (railways and automotive) are available. The CFG representation of C/C++ modules uses GIMPLE syntax [11] which has the same expressiveness as C/C++ but uses a very restricted syntax for expressions. This facilitates the parsing and the IMR generation process, as well as the constraint generation process.
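As a rough illustration of what the 3-address restriction means (our own schematic example, not actual gcc output), a compound C/C++ expression is decomposed into single-operator assignments on compiler-generated temporaries before it reaches the IMR:

    // Source statement:  r = a * b + c[i];
    int example(int a, int b, const int* c, int i) {
        int D_1 = a * b;     // GIMPLE-style temporaries: one operator per assignment
        int D_2 = c[i];
        int r   = D_1 + D_2;
        return r;
    }

Guard conditions are reduced in the same way, which is what is meant above by 3-address code in assignments and guard conditions.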
Path Selector. The path selector controls the lazy "on-demand" expansion of the symbolic models described in more detail in Section 3: For a given edge e in the CFG, it constructs paths from the initial CFG node to e and submits them to the constraint generator as sequences or trees of CFG nodes and edges. CFG cycles representing unbounded loops in the UUT code are gradually expanded. Cycles representing simple for-loops with fixed numbers of cycles are identified by means of the interpreters (see below) and directly expanded to their specified limit. With information provided by the abstract interpreters or the solver, the path selector learns about infeasible paths, so that they are never extended. Further details are described in [3].

Interpreters and the Generation of Models. As depicted in Fig. 4, three types of interpreters are used to support the testing, verification and abstract interpretation activities.

(1) The concrete interpreter evaluates the module with concrete input data, following the rules of the GIMPLE operational semantics, as described in [17]. This interpreter is used to investigate the paths through the GIMPLE module which are covered by given concrete sets of data. If, for example, concrete test data has been generated in order to reach a CFG edge p −g→CFG q, then the concrete interpreter is applied to determine the consecutive transitions q −→CFG ... to be performed with these data until the module's exit point is reached.

(2) The symbolic interpreter performs symbolic computations on the CFG, recording the effect of GIMPLE statements by means of predicates in a symbolic memory model, whose details are described in Section 3. This interpreter is the core component for generating the constraints to be solved for the inputs to the module in order to reach a given edge or location in the CFG. Observe that, since the symbolic interpreter does not check whether the constraints recorded along an "interpretation route" through the CFG are satisfiable, the generated computations are a superset of the ones really possible.

(3) The abstract interpreters evaluate one or more abstractions of the memory model. Starting with (lattice) abstractions of the module's input data, they operate on abstractions of the symbolic memory model. The purpose of this activity is threefold:

– Identification of runtime errors.
– Using over-approximation, an abstract interpreter can find sufficient conditions to prove that a computation "suggested" by path generator and symbolic interpreter is infeasible. Since abstract interpretation can be performed at comparably low cost this is more effective than waiting for the constraint solver to find out that a path cannot be covered.
– Using under-approximation, the abstract interpreters speed up the solution process for non-linear constraints involving floating point variables and transcendent functions.

In order to prove the correctness of interpreters – this is an ongoing activity in the author's research group, but beyond the scope of this paper – it is helpful to analyse the formal rôle of these interpreters: The concrete interpreter obviously
generates the concrete transition system representing the behaviour of the GIMPLE module. Hypothetically it could be used to generate a global model for a concrete model checker, but in practice these models would be far too large for any module of realistic complexity. As a consequence, the concrete interpreter unfolds the model in a lazy manner, only along the paths through the CFG where concrete investigations (e. g. for determining the effect of a concrete data set on the module portion to be covered) are required.

The symbolic interpreter unfolds a memory model which implicitly specifies the power set lattice over the concrete model. Each state of this power set lattice is represented by a set of pairs (l, s) where l is a location in the module's CFG and s is an interpretation of symbols (variables, pointers, function pointers) in location l. As will become apparent in Section 3, a symbolic memory state mem generated during a symbolic computation for a location l defines a constraint φ over symbols x_i ∈ V such that the set of all possible concrete symbol valuations in this state is given by {s : V → D | φ[s(x_1)/x_1, . . . , s(x_n)/x_n]}, that is, the set of valuations where φ becomes true.

Each abstract interpreter is instantiated according to the following recipe:

1. For every datatype t in the concrete program component choose a suitable abstraction lattice (L(t), ⊑), so that a Galois connection (see, for example, [6, p. 201]) between the powerset lattice (P(t), ⊆) and the abstraction lattice (L(t), ⊑) exists.
2. Lift each operation ♦ defined on t to L(t) by means of the canonic construction ♦_L : L(t) × L(t) → L(t), which applies ♦ element-wise to the concrete values represented by its arguments and abstracts the result back into L(t). Here ♦_P denotes the canonic lifting of ♦ to the powerset lattice over t: a_1 ♦_P a_2 =def {x_1 ♦ x_2 | x_i ∈ a_i, i = 1, 2}.
3. Having defined all abstraction lattices L(t), lift all Boolean operators ⊙ : t × t′ → B to [⊙] : L(t) × L(t′) → L(B) by

       p_1 [⊙] p_2 =  ⊤      if {x_1 ⊙ x_2 | x_1 ∈ p_1, x_2 ∈ p_2} = {false, true}
                      false  if {x_1 ⊙ x_2 | x_1 ∈ p_1, x_2 ∈ p_2} = {false}
                      true   if {x_1 ⊙ x_2 | x_1 ∈ p_1, x_2 ∈ p_2} = {true}

   where ⊤ denotes the top element of L(B).
4. Based on steps 1 – 3, lift the symbolic state space SS defined in Section 3 to a lattice representation SL, together with a Galois connection between SS and SL.
5. Introduce an abstract transition relation −→L ⊆ SL × SL by requiring (−→P denotes the transition relation on the powerset lattice over SS)

       (a)  p −→P p′            (b)  a −→P p
            ──────────               ─────────
            p −→L p′                 a −→L p

This transition relation −→L satisfies the consistency condition

       (c)  ∀a, a′, b ∈ L : (a −→L a′ ∧ b ⊒ a ⇒ ∃b′ ∈ L : b −→L b′ ∧ b′ ⊒ a′)

Properties (a) – (c) can be used to identify infeasible transitions on the symbolic interpretation level, without having to consult the constraint solver: if abstract interpretation can show that the abstract transition p −→L p′ is impossible, this implies that p −→P p′ is impossible as well. (For further details, see the lecture notes [16].)
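As a small concrete instance of steps 1 – 3 (our own sketch; the names are illustrative and not the RT-Tester API), take the interval lattice over machine integers as L(t) and a three-valued type as L(B):

    // Interval abstraction of int values: a value v is represented by any
    // interval with lo <= v <= hi (overflow is ignored in this sketch).
    struct Interval { long lo, hi; };

    // Step 2: lifted operation; interval addition over-approximates
    // { x1 + x2 | x1 in a, x2 in b }.
    Interval add(Interval a, Interval b) {
        return { a.lo + b.lo, a.hi + b.hi };
    }

    // L(B): lifted Boolean values (step 3); Top means both outcomes are possible.
    enum class LBool { False, True, Top };

    // Lifted comparison [<]: returns Top when {x1 < x2 | ...} = {false, true}.
    LBool less(Interval a, Interval b) {
        if (a.hi < b.lo)  return LBool::True;
        if (b.hi <= a.lo) return LBool::False;
        return LBool::Top;
    }

A guard whose lifted evaluation yields LBool::False on the abstract inputs shows, via the properties above, that the corresponding concrete transition is infeasible.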
Constraint Generation. The partial symbolic model expanded by the symbolic interpreter according to guidelines of the path selector already contains the constraints to be fulfilled in order to reach a given edge in the CFG on a (set of) pre-defined paths. These constraints, however, still refer to pointer and array expressions which have to be resolved before passing the constraints to the solver. This resolution process is performed by the constraint generator. The resulting constraint φ only refers to atomic variables which are derived from the program variables, address offsets and length expressions. The representation of φ in conjunctive normal form is structured in such a way that each atomic condition is represented in 3-address code (such as x < y + z), as long as it does not involve calls to n-ary functions with n > 2 (e. g., x = f(u, v, w, y)).

Constraint Solver. The solver handling conditions prepared by the constraint generator has been developed according to the Satisfiability Modulo Theory (SMT) paradigm [20]. It uses a combination of techniques for solving partial problems of specific type (e. g., constraints involving bit vector arithmetic, strings, or floating point calculations). For the solution of constraints involving floating point expressions and transcendent functions the solver applies interval analysis with sub-paving, bi-partitioning and forward-backward constraint propagation as main techniques [13]. Given a constraint Φ and its solution set S, the solver constructs a sub-paving P, that is, a collection of interval vectors whose union C is a subset of S, that is, an under-approximation. As a consequence, any vector of variable values taken from P is a solution of Φ. Alternatively, the solver can construct sub-pavings as over-approximations P′, such that the union C′ of P′-interval vectors satisfies S ⊆ C′. This is used for the generation of boundary value and robustness tests, where Φ represents a pre-condition, and we are interested in finding values close to the boundary (inside or outside) of S. To speed up the process of finding P′, forward-backward constraint propagation is used to contract interval vectors possessing non-empty intersection with both S and its complement. Additionally, we use learning strategies as described in [10,3], where also more details on the solver can be found.
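The forward-backward propagation mentioned above can be illustrated on a single atomic constraint (a minimal sketch of the interval technique from [13], not the tool's implementation): for x + y = z over interval domains, the forward step narrows z and the backward steps narrow x and y.

    #include <algorithm>

    struct Box { double lo, hi; };

    static Box meet(Box a, Box b) {   // interval intersection
        return { std::max(a.lo, b.lo), std::min(a.hi, b.hi) };
    }

    // One forward-backward contraction pass for the constraint x + y == z.
    void contract_sum(Box& x, Box& y, Box& z) {
        z = meet(z, { x.lo + y.lo, x.hi + y.hi });   // forward:  z within x + y
        x = meet(x, { z.lo - y.hi, z.hi - y.lo });   // backward: x within z - y
        y = meet(y, { z.lo - x.hi, z.hi - x.lo });   // backward: y within z - x
    }

Iterating such passes over the atomic conditions of φ is what contracts the interval vectors used to build the sub-pavings described above.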
3 Memory Model and Symbolic Interpretation

3.1 Memory Model
As a consequence of the aliasing problems of C/C++ it may be quite complex to determine the valuation of a variable in a given module state: the memory location associated with the variable may have been changed not only by direct assignments referring to the variable name, but also indirectly by assignments to de-referenced pointers and memory copies to areas containing the variable. Therefore we introduce a memory model that allows us to identify the presence of such aliasing effects with acceptable effort. Computations are defined as sequences of memory configurations, and the memory areas affected by assignments or function/method executions are specified by means of base addresses, offsets and physical length of the affected area. Moreover, the values written
to these memory areas are only specified symbolically by recording the value-defining expression (e. g. right-hand side of an assignment or output parameter of a procedure call) without resolving them to concrete or abstract valuations. This motivates the term symbolic interpretation.

Global, static and stack variables x induce base addresses &x in the data and stack segment, respectively. Dynamic memory allocation (malloc(), new ...) creates new base addresses on the heap. A memory configuration mem consists of a collection of memory items, each item m specified by base address, offset, length and value expression (Fig. 5). Since some statements will only conditionally affect a memory area, it is necessary to associate memory items with constraints specifying the conditions for the item's existence.
m.v0 | m.v1 | m.a | m.t | m.o | m.l | m.val | m.c

m.v0    First computation step number where m is valid
m.v1    Last computation step number where m is valid, or ∞ for items valid beyond the actual computation step
m.a     Symbolic base address
m.t     Type of specified value m.val
m.o     Start offset from base address in bits, where value is stored
m.l     Offset from base address to first bit following the stored value, so m.l − m.o specifies the bit-length of the memory location represented by the item
m.val   Value specification
m.c     Validity constraint
Fig. 5. Structure of a memory item m
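A direct transliteration of Fig. 5 into a record type might look as follows (our own sketch; field types are illustrative, and value and constraint expressions are kept as opaque expression handles):

    #include <cstdint>
    #include <string>

    struct Expr;                   // symbolic value / constraint expression (opaque here)

    struct MemItem {
        std::uint64_t v0;          // first computation step where the item is valid
        std::uint64_t v1;          // last such step; a sentinel value encodes "infinity"
        std::string   a;           // symbolic base address, e.g. "&x"
        std::string   t;           // type of the specified value
        const Expr*   o;           // start offset from the base address, in bits
        const Expr*   l;           // offset of the first bit following the stored value
        const Expr*   val;         // symbolic value specification
        const Expr*   c;           // validity constraint
    };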
Symbolic computations – that is, sequences of memory configurations related by transition relations – are recorded as histories, in order to reduce the required storage space: Memory items are associated with a validity interval [m.v0, m.v1] whose boundaries specify the first and last computation step where the item was a member of the configuration.

Example 1. Suppose that variables float x, y, z; are defined in the stack frame of the UUT on a 32-bit architecture, and the current computation step n performs an assignment x = y + z. This leads to the creation of a new memory item

    m =def n | ∞ | &x | float | 0 | 32 | y_n + z_n | true

Item m is first valid from step n on, and has not yet been invalidated by other writes affecting the memory area from start address &x to &x+31, so m.v1 = ∞. (Example 3 below shows the effect of C/C++ statements on the invalidation of memory items.) The value depends on the valuation of y and z, taken in step n. This is denoted by the version index n in the value expression y_n + z_n.
For the representation of large memory areas carrying identical or inter-dependent values it is useful to admit additional bound parameters in the offset, value and constraint specifications:

    m_{p0,...,pk} = v0 | v1 | a | t | o(p0, . . . , pk) | l(p0, . . . , pk) | val(p0, . . . , pk) | c(p0, . . . , pk)

defines a family of memory items by means of the definition

    m_{p0,...,pk} =def {m′ | m′.v0 = v0 ∧ m′.v1 = v1 ∧ m′.a = a ∧ m′.t = t ∧
                        (∃ p0′, . . . , pk′ : m′.o = o[p0′/p0, . . . , pk′/pk] ∧ m′.l = l[p0′/p0, . . . , pk′/pk] ∧
                         m′.val = val[p0′/p0, . . . , pk′/pk] ∧ m′.c = c[p0′/p0, . . . , pk′/pk])}

Example 2. Suppose that array float a[10]; is defined in the stack frame of the UUT on a 32-bit architecture, and is currently represented by a family of memory items

    m_p =def n | ∞ | &a[0] | float | 32 · p | 32 · p + 32 | sinf((float)p) | 0 ≤ p ∧ p < 10

Family m_p specifies one memory item for each p ∈ {0, . . . , 9}, each item located at a p-dependent offset from the base address &a[0] and carrying a p-dependent value.

More formally, the symbolic interpretation state space SS of a module P is defined as

    SS             =def N(P) × N0 × M
    N(P)           =def Nodes of P's GIMPLE control flow graph
    M              =def dataSegment × heapSegment × stackSegment
    dataSegment    =def M-Item*
    heapSegment    =def M-Item*
    stackSegment   =def stackFrame*
    stackFrame     =def M-Item*
    M-Item         =def N0 × (N0 ∪ {∞}) × BaseAddress × Types × Offset × OffsetPlusLength × Value × Constraint
    BaseAddress    =def String
    Offset         =def OffsetPlusLength =def Value =def Constraint =def Expr(Sym × N0)
    Sym            =def Symbols of P plus parameters for families of memory items
Each symbolic state consists of a triple (node, n, mem) where node is a node in the GIMPLE control flow graph representing the current "program counter state" of the symbolic execution, n serves as a computation step counter and mem is the current memory configuration. The collection of memory items generated so far is structured according to their allocation in the data segment, heap or stack, respectively. The stack is further sub-divided into frames, so that the validity of stack variables during their associated function executions can be clearly specified. For the symbolic specification of offsets, values and constraints, GIMPLE expressions over symbols from the module P – that is, program variables or object attributes – are used. In addition to that, these expressions may refer to parameters defining families of memory items, as illustrated in Example 2. Each variable symbol is associated with a version identifier: If x has current version n and the next computation step processes an assignment x = x + 1; this generates a new memory item for version x_{n+1} with value expression x_n + 1.

3.2 Symbolic Interpretation
Symbolic interpretation (denoted below by the transition relation −→G, "G" standing for "GIMPLE operational semantics") is performed according to rules of the pattern

             n1 −g→CFG n2
    ─────────────────────────────────
    (n1, n, mem) −→G (n2, n + 1, mem′)

so a transition can be performed on the symbolic level whenever a corresponding edge exists in the control flow graph (−g→CFG denotes the edge relation in the module's CFG, with guard condition g as label). It may turn out, however, on the abstract or concrete interpretation level, that such a transition is infeasible in the sense that no valuation of inputs exists where the constraints of all memory items involved evaluate to true.
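Read operationally, following a CFG edge advances the step counter and extends the memory configuration under the edge's guard. The sketch below is our own schematic rendering of one such step, not the RT-Tester data structures; in the actual tool the guard is recorded inside the constraints of the memory items created, as the transition rules below show, whereas here it is simply appended to a list of path constraints.

    #include <string>
    #include <vector>

    // Schematic symbolic state: CFG node, step counter, and the guards recorded
    // along the path taken so far (kept as uninterpreted symbolic expressions).
    struct SymState {
        int node;
        int step;
        std::vector<std::string> pathConstraints;
    };

    struct CfgEdge { int from, to; std::string guard; };

    // One application of the rule above: follow edge e, advance the step counter,
    // and record the guard instead of evaluating it.
    SymState symbolicStep(SymState s, const CfgEdge& e) {
        s.node = e.to;
        s.step += 1;
        s.pathConstraints.push_back(e.guard);
        return s;
    }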
σ : Symbols × M → M-Item*
    Depending on the current memory state mem, σ(x, mem) maps a symbol x to the stack frame, global data or heap section which x is associated with.
τ : Symbols × M → Symbols
    Depending on the current memory state m, τ(x, m) maps a variable symbol x to its type.
β : Selectors → BaseAddress
    Maps a selector to its base address.
ω : Selectors → Expr
    Maps a selector to its offset expression.
bitsizeof : Symbols → N
    Takes a type symbol and returns its length in bits.
bit : Expr × N0 → {0, 1}
    bit(e, b) returns the value of the b-th bit of expression e.

Fig. 6. Auxiliary functions used in the symbolic interpretation algorithms in Section 3.2
Informally speaking, a statement changing the memory configuration is processed according to the following steps: (1) For every base address and offset possibly affected by the statement, create a new memory item m′, to be added to the resulting configuration. (2) For each new item m′ check which existing items m may be invalidated: Invalidation occurs if m′ refers to the same base address as m and the data area of m′ covers (i. e., has a nonempty intersection with) that of m. (3) For each invalidated item m create new ones m″ specifying what may still remain visible of m: m″ equals m if m′ does not exist at all (i. e., constraint m′.c evaluates to false), or m and m′ do not overlap. Moreover, m″ specifies the resulting value representation of m in memory for the situation where m and m′ only partially overlap.

To explain the effect of symbolic transitions on the state space SS more formally, we present three transition rules explaining stack variable definition, assignment to a variable and assignment to a de-referenced pointer. In the definitions and algorithms involved some auxiliary functions are used which are defined in Fig. 6.

(1) A stack variable definition, n1 =def typex x;, performed in computation step n, only affects the current stack frame. Value expression Undef marks that the value is still undefined. The new memory item is only valid if the guard condition g, evaluated according to the memory configuration of computation step n − 1, is true.

    m′ := (n, ∞, &x, typex, 0, bitsizeof(typex), Undef, (g, n − 1));
    mem′ := (mem.data, mem.heap, front(mem.stack) ⌢ ⟨last(mem.stack) ⌢ ⟨m′⟩⟩);
procedure up=(sel : Selectors; expr : Expr; n : N0; g : Expr; inout mem : M)
    m′.v0 := n; m′.v1 := ∞;
    m′.a := &β(sel);
    m′.t := τ(sel, mem);
    m′.o := (ω(sel), n − 1);
    m′.l := (ω(sel) + bitsizeof(sel), n − 1);
    m′.val := (expr, n − 1);
    m′.c := (g, n − 1);
    up(m′, n, mem);
end
Fig. 7. Effect of normal assignments on history of memory items
(2) An assignment to a stack or global variable, n1 =def sel = expr;, affects the current stack frame or the global data segment. In this assignment, expression sel denotes an arbitrary selector, that is, an identifier of an atomic variable, structure component, array element or mixed structure/array identifier, such as a.b[i][j].c.d[k]. The new memory state mem′ is specified by the procedure call
    mem′ := mem; up=(sel, expr, n + 1, g, mem′);

and its in-out parameter mem′. Procedure up=() (Fig. 7) specifies (a) how a new memory item m′ is created, carrying the right-hand side expression as its value and the CFG guard condition as validity constraint, and (b) which memory items m have to be invalidated due to the new assignment, possibly leading to the creation of "replacements" for these m involving new constraints. The details of this invalidation/creation process are specified in procedure up() (Fig. 8).
procedure up(m′ : M-Item; n : N0; inout mem : M)
begin
    h := σ(m′.a, mem); u := ⟨⟩;
    for m = last(h) downto head(h) do
        if (m.v1 = ∞ ∧ m′.a = m.a) then
            m.v1 := n − 1;
            c′ := m.c ∧ (¬m′.c ∨ m′.l ≤ m.o ∨ m.l ≤ m′.o);
            m′′ := (n, ∞, m.a, m.t, m.o, m.l, m.val, c′);
            c′′ := m.c ∧ m′.c ∧ 0 ≤ b ∧ b < bitsizeof(m.t) ∧ (b < m′.o − m.o ∨ m′.l − m.o ≤ b);
            m′′_b := (n, ∞, m.a, Bit, m.o + b, m.o + b + 1, bit(m.val, b), c′′);
            u := ⟨m′′, m′′_b⟩ ⌢ u;
        endif
    enddo
    h := h ⌢ u ⌢ ⟨m′⟩;
end
Fig. 8. Effect of a new memory item m′ on memory items m ∈ M
In the loop processed in procedure up(), the new memory item m′′ captures the situation where either m′ is infeasible (i.e., ¬m′.c) or the address range of m′ does not affect m. The definition of the memory item family m′′_b in this specification handles the "worst case" of an assignment, where only one or more, but not all, bits of an existing memory item m are overwritten. This happens, for example, when working with C/C++ unions or with bit-vector operations where selected bits of an integer variable can be manipulated. The item family m′′_b specifies which of the original bits from m are left unchanged by such an operation. Fortunately, it can be decided in many situations that m′′_b does not exist, so that the invocation of bit-vector decision procedures can be avoided. Assume, for example, that all array indexes involved in expressions a.b[i] and a′.b′[i′] are within range. Then an assignment to a.b[i] is in conflict with a′.b′[i′] if and only if a = a′ ∧ b = b′ ∧ i = i′. In this case, a′.b′[i′] is completely overwritten. Further observe that the offset expressions ω(sel) used in procedure up=() are constants if the selector does not involve array indexes, or involves only indexes that are constant.
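When all offsets are constant, the invalidation test used by up() degenerates to a plain interval check. A minimal sketch of that check (illustrative Python, not the tool's code):

def overlaps(base1: str, o1: int, l1: int, base2: str, o2: int, l2: int) -> bool:
    """Two data areas [o1, l1) and [o2, l2) conflict iff they share a base address and
    neither lies entirely before the other - the negation of (l2 <= o1 or l1 <= o2)
    appearing in the constraint built by up()."""
    return base1 == base2 and not (l2 <= o1 or l1 <= o2)

# a[3] occupies bits [96, 128); a write to a[3] conflicts with it, a write to a[4] does not.
print(overlaps('&a[0]', 96, 128, '&a[0]', 96, 128))    # True
print(overlaps('&a[0]', 96, 128, '&a[0]', 128, 160))   # False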
Example 3. A stack declaration int a[10]; followed by assignments a[i] = m + n; a[j] = 0; is represented in GIMPLE as
1   int a[10];
2   i_0 = i;
3   D_4151 = m + n;
4   a[i_0] = D_4151;
5   j_1 = j;
6   a[j_1] = 0;
After having processed lines 1 – 6, the associated computation results in the following history of memory items:

m^1_p = (1, 3, &a[0], 32·p, 32·p + 32, int, Undef, 0 ≤ p ∧ p < 10)
m^2 = (2, ∞, &i_0, 0, 32, int, i₁, true)
m^3 = (3, ∞, &D_4151, 0, 32, int, m₂ + n₂, true)
m^4_p = (4, 5, &a[0], 32·p, 32·p + 32, int, Undef, 0 ≤ p ∧ p < 10 ∧ p ≠ i_0₂)
m^5 = (4, 5, &a[0], 32·i_0₂, 32·i_0₂ + 32, int, D_4151₃, 0 ≤ i_0₂ ∧ i_0₂ < 10)
m^6 = (5, ∞, &j_1, 0, 32, int, j₄, true)
m^7_p = (6, ∞, &a[0], 32·p, 32·p + 32, int, Undef, 0 ≤ p ∧ p < 10 ∧ p ≠ i_0₂ ∧ p ≠ j_1₅)
m^8 = (6, ∞, &a[0], 32·i_0₂, 32·i_0₂ + 32, int, D_4151₃, 0 ≤ i_0₂ ∧ i_0₂ < 10 ∧ i_0₂ ≠ j_1₅)
m^9 = (6, ∞, &a[0], 32·j_1₅, 32·j_1₅ + 32, int, 0, 0 ≤ j_1₅ ∧ j_1₅ < 10)
This example illustrates how memory items are invalidated by consecutive writes to related memory areas: for example, the execution of statements 1 – 4 results in m^1_p.v1 = 3, since the write in statement 4 also affects the memory area starting at &a[0]. The items m^4_p specify "what is left of" m^1_p after execution of statement 4.
(3) An assignment to a de-referenced pointer, n1 =def *p = expr;, may affect the data segment, heap or stack, depending on the potential target addresses p points to. The details are specified by procedure up=p() (Fig. 9):

    mem′ := mem; up=p(p, expr, n, g, mem′);
up=p() loops over all possible pointer valuations represented by memory items m′. For each valid item, a list pl of all possible pointer targets is generated, using the auxiliary function ξ() (Fig. 10): depending on the possible valuations of the address expression specified in m′.val, p may point to one or more locations in the stack, data segment or heap. For each of these possible situations, the items m′′ in pl contain value expressions consisting of base addresses and offsets. For example, if p = q + i ∧ q = &z + k, then ξ returns an item with value expression &z + k + i in its list. The effect of each new item on the invalidation of existing items and the creation of new ones is again performed as specified by up() and explained above.
procedure up=p(p : Symbols; expr : Expr; n : N0; g : Expr; inout mem : M)
begin
    h := σ(p, mem);
    for m′ = last(h) downto head(h) do
        if m′.a = &p ∧ m′.v1 = ∞ then
            pl := ξ(m′.val, mem);
            foreach m′′ ∈ pl do
                m.a := base address part of address expression m′′.val;
                m.v0 := n; m.v1 := ∞;
                m.o := offset part of address expression m′′.val;
                m.l := m.o + bitsizeof(τ(*p, mem));
                m.val := (expr, n);
                m.c := (g, n) ∧ m′.c ∧ m′′.c;
                up(m, n, mem);
            enddo
        endif
    enddo
end
Fig. 9. Effect of assignments to de-referenced pointers on history of memory items
function ξ((expr, n) : Expr × N0; mem : M) : M-Item∗
    m := (., ., ., ., ., ., (expr, n), true);
    el := ⟨m⟩;
    while el contains an m-item m′ with unresolved m′.val do
        m′ := next m-item in el with unresolved m′.val;
        x := next unresolved identifier from m′.val;
        h := σ(x, mem);
        for m′′ := last(h) downto head(h) do
            if m′′.a = β(x) ∧ m′′.v0 ≤ n ≤ m′′.v1 then
                val1 := m′.val;
                in val1: exchange each occurrence of x by m′′.val;
                el := el ⌢ ⟨(., ., ., ., ., ., val1, m′.c ∧ m′′.c)⟩;
            endif
        enddo
        erase m′ from el;
    enddo
    ξ := el;
end
Fig. 10. Function ξ finds list of base addresses and offsets potentially associated with a pointer
3.3   Optimisation through Abstract Interpretation
The symbolic interpretation process as described above does not investigate the feasibility of the memory items associated with a configuration. This can be done either after constraint generation (see Section 4) by the solver, or by more efficient, though incomplete, abstract interpretations. To this end, we use interval analysis in order to calculate ranges of possible variable values and pointer target addresses. These interval interpretations are used to check whether constraints m.c of memory items m evaluate to false in the interval-lattice valuation of Boolean expressions. Since the interval interpretation is an over-approximation, a false valuation of m.c in the interval lattice implies that no concrete valuation could make m.c true either. As a consequence, m can be dropped immediately, without having to consult the solver.
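As an illustration of this pruning step, the following sketch (Python, with intervals simplified to closed integer ranges; it is not the analyser described here) evaluates a typical item constraint in the interval lattice and shows how a definitely-false result lets the item be discarded without calling the solver.

def may_be_less(x, y):
    """Interval over-approximation of x < y: returns False only if x < y is impossible."""
    return x[0] < y[1]

def may_be_equal(x, y):
    """Interval over-approximation of x = y: the two ranges intersect."""
    return x[0] <= y[1] and y[0] <= x[1]

def constraint_feasible(p, i):
    """Abstract check of the item constraint 0 <= p, p < 10, p = i over intervals."""
    return may_be_less((-1, -1), p) and may_be_less(p, (10, 10)) and may_be_equal(p, i)

# p lies in [0, 9] while i evaluates to [12, 12]: the constraint is definitely false
# in the interval lattice, so the memory item is dropped without consulting the solver.
print(constraint_feasible((0, 9), (12, 12)))   # False
print(constraint_feasible((0, 9), (3, 5)))     # True: the item must be kept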
4   Constraint Generation
As we have seen in the previous section, the guard conditions to be fulfilled in order to cover a specific path or a sub-graph of a module's CFG are already encoded in the memory items associated with the symbolic memory configurations involved. The most important task for the constraint generator is now to resolve the value components of the memory items involved, so that the resulting expressions are free of pointer and array expressions and are represented in an appropriate format for the solver.
Example 4. Let us extend Example 3 by two additional statements

7   D_4160 = a[i_0];
8   if ( D_4160 < 0 ) { ...(*)... }

and suppose we wish to reach the branch marked by (*). The constraint generator now proceeds as follows: (1) Initialise the constraint Φ as Φ := D_4160 < 0. (2) Resolve D_4160 to a[i_0], as induced by the memory item resulting from the assignment in line 7. Since a[i_0] is an array expression, we have to resolve it further before adding the resolution results to Φ. (3) a₇[i_0₇] matches the items m^7_p, m^8, m^9 for a and m^2 for i_0 in Example 3, since the other items with base address &a[0] are already outdated at computation step 7; this leads to the resolutions

Φ := Φ ∧ ((D_4160 = Undef ∧ i_0₇ = p ∧ 0 ≤ p ∧ p < 10 ∧ p ≠ i_0₂ ∧ p ≠ j_1₅)
        ∨ (D_4160 = D_4151₃ ∧ i_0₇ = i_0₂ ∧ 0 ≤ i_0₂ ∧ i_0₂ < 10 ∧ i_0₂ ≠ j_1₅)
        ∨ (D_4160 = 0 ∧ i_0₇ = j_1₅ ∧ 0 ≤ j_1₅ ∧ j_1₅ < 10))
     ∧ i_0₇ = i_0₂ ∧ i_0₂ = i₁
Observe that at this stage Φ has been completely resolved to atomic data types: the references to array variable a have been transformed into offset restrictions (expressions over i_0₇, i_0₂, j_1₅, . . .), and the array elements involved (in this example a[i_0]) have been replaced by atomic variables representing their values (D_4160). References to C structures would be eliminated in an analogous way,
by introducing address offsets for each structure component and using atomic variables denoting the component values. Further observe that we have already eliminated the factors 32 in Φ, initially occurring in expressions like 32·i_0₇ = 32·j_1₅. These factors are only relevant for bit-related operations, for example, if an integer variable is embedded into a C union containing a bit-field as another variant and a memory item corresponding to the integer value is invalidated by a bit operation. (4) Prepare the constraint for the solver: following the restrictions for admissible constraints described in [10], our solver requires some pre-processing of Φ: (a) Disequalities like i_0₂ ≠ j_1₅ are replaced by disjunctions involving <, >, e.g. i_0₂ < j_1₅ ∨ i_0₂ > j_1₅. (b) Inequalities a < b are only admissible if a or b is a constant. Therefore, atoms like i_0₂ < j_1₅ are transformed with the aid of slack variables s, so that non-constant symbols are always related by equality. For example, the above atom is transformed into i_0₂ + s = j_1₅ ∧ 0 < s. (c) Three-address code is enforced, so that – with the exception of function calls y = f(x0, . . . , xn) and array expressions y = a[x1] . . . [xn] – each atom refers to at most 3 variables. Since the introduction of slack variables may lead to four variables in an expression originally expressed with three symbols only, auxiliary variables are needed to reinstate the desired three-address representation. For example, x + y < z leads to x + y = z + s ∧ s < 0, which is subsequently transformed into aux = z + s ∧ x + y = aux ∧ s < 0. (d) The constraint is transformed into conjunctive normal form (CNF). Constraint Φ in this example already indicates a typical problem to be expected frequently when applying the standard CNF algorithm: some portions of Φ resemble a disjunctive normal form. This is caused by the necessity to consider alternatives – that is, ∨-combinations – of memory items, where the validity of each item is typically specified by a conjunction. As a consequence, the standard CNF algorithm may result in a considerably larger formula. Therefore, we have implemented both the standard CNF algorithm and the Tseitin algorithm [24] as an alternative, together with a simple decision procedure indicating which algorithm will lead to better results.
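A minimal sketch of pre-processing steps (a) and (b) above (illustrative Python; the actual solver front end described in [10] is more involved): a disequality is split into a disjunction, and an inequality between two non-constant symbols is rewritten with a slack variable.

import itertools

_fresh = itertools.count()

def split_disequality(a, b):
    """Step (a): a != b is replaced by the disjunction a < b or a > b."""
    return [('<', a, b), ('>', a, b)]

def eliminate_nonconstant_lt(a, b):
    """Step (b): for non-constant a, b the atom a < b becomes a + s = b and 0 < s."""
    s = 's%d' % next(_fresh)
    return [('=', '%s + %s' % (a, s), b), ('<', '0', s)]

print(split_disequality('i_0_2', 'j_1_5'))
# [('<', 'i_0_2', 'j_1_5'), ('>', 'i_0_2', 'j_1_5')]
print(eliminate_nonconstant_lt('i_0_2', 'j_1_5'))
# [('=', 'i_0_2 + s0', 'j_1_5'), ('<', '0', 's0')]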
5   Conclusion
We have described an integrated approach for automated testing, static analysis by abstract interpretation and formal verification by model checking (reachability analysis). The techniques described have been explicitly designed for the verification of C/C++ modules. To cope with the aliasing problems of C/C++, a memory model for symbolic interpretation of address values, offsets, lengths and values of memory valuations has been described. The combinatorial complexity of symbolic memory interpretation is considerably reduced by means of lock-step abstract and symbolic interpretation, using the abstract interpretation for a priori elimination of infeasible symbolic states. The tasks of functional and structural testing have been reduced to problems of reachability analysis. To cope with constraints involving all C/C++ data types, including bit vector operations, type casts, large integer ranges and floating point variables, an
SMT (Satisfiability Modulo Theories) solver is used which handles floating point variables and transcendental functions by means of interval analysis.
Acknowledgements. I would like to express my gratitude to Willem-Paul de Roever for his considerable support during various stages of an interleaved academic and industrial career. Willem-Paul's advice – both scientific and personal – has always been highly constructive and stimulating.
References

1. IEC 61508: Functional safety of electric/electronic/programmable electronic safety-related systems. International Electrotechnical Commission (2006)
2. Apt, K.R., Olderog, E.-R.: Verification of Sequential and Concurrent Programs. Springer, Heidelberg (1991)
3. Badban, B., Fränzle, M., Peleska, J., Teige, T.: Test automation for hybrid systems. In: Proceedings of the Third International Workshop on Software Quality Assurance (SOQUA 2006), Portland, Oregon, USA (November 2006)
4. Beck, K.: Test-Driven Development. Addison-Wesley, Reading (2003)
5. Cousot, P., Cousot, R., Feret, J., Mauborgne, L., Miné, A., Monniaux, D., Rival, X.: Combination of abstractions in the Astrée static analyzer. In: Okada, M., Satoh, I. (eds.) ASIAN 2006. LNCS, vol. 4435, pp. 272–300. Springer, Heidelberg (2008) (to appear)
6. de Roever, W.-P., Engelhardt, K.: Data Refinement: Model-Oriented Proof Methods and their Comparison. Cambridge Tracts in Theoretical Computer Science, vol. 47. Cambridge University Press, Cambridge (1998)
7. Blanchet, B., et al.: Design and implementation of a special-purpose static program analyzer for safety-critical real-time embedded software. In: Mogensen, T.Æ., Schmidt, D.A., Sudborough, I.H. (eds.) The Essence of Computation. LNCS, vol. 2566, pp. 85–108. Springer, Heidelberg (2002)
8. European Committee for Electrotechnical Standardization: EN 50128 – Railway applications – Communications, signalling and processing systems – Software for railway control and protection systems. CENELEC, Brussels (2001)
9. Fehnker, A., Huuck, R., Jayet, P., Lussenburg, M., Rauch, F.: Goanna – a static model checker. In: Brim, L., Haverkort, B.R., Leucker, M., van de Pol, J. (eds.) FMICS 2006. LNCS, vol. 4346, pp. 297–300. Springer, Heidelberg (2007)
10. Fränzle, M., Herde, C., Teige, T., Ratschan, S., Schubert, T.: Efficient solving of large non-linear arithmetic constraint systems with complex boolean structure. Journal on Satisfiability, Boolean Modeling and Computation (2007)
11. GCC, the GNU Compiler Collection: The GIMPLE family of intermediate representations, http://gcc.gnu.org/wiki/GIMPLE
12. Goubault-Larrecq, J., Parrennes, F.: Cryptographic protocol analysis on real C code. In: Cousot, R. (ed.) VMCAI 2005. LNCS, vol. 3385, pp. 363–379. Springer, Heidelberg (2005)
13. Jaulin, L., Kieffer, M., Didrit, O., Walter, É.: Applied Interval Analysis. Springer, London (2001)
14. Leveson, N.G.: Safeware. Addison-Wesley, Reading (1995)
15. Löding, H.: Behandlung komplexer Datentypen in der automatischen Testdatengenerierung. Master's thesis, University of Bremen (May 2007)
16. Peleska, J., Löding, H.: Static Analysis by Abstract Interpretation. University of Bremen, Centre of Information Technology (2008), http://www.informatik.uni-bremen.de/agbs/lehre/ws0708/ai/saai_script.pdf
17. Peleska, J., Löding, H.: Symbolic and abstract interpretation for C/C++ programs. In: Proceedings of the 3rd International Workshop on Systems Software Verification (SSV 2008). Electronic Notes in Theoretical Computer Science. Elsevier, Amsterdam (2008)
18. Peleska, J., Löding, H., Kotas, T.: Test automation meets static analysis. In: Koschke, R., Herzog, O., Rödiger, K.-H., Ronthaler, M. (eds.) Proceedings of INFORMATIK 2007, Band 2, Bremen, Germany, September 24–27, pp. 280–286 (2007)
19. Peleska, J., Zahlten, C.: Integrated automated test case generation and static analysis. In: Proceedings of the QA+Test 2007 International Conference on QA+Testing Embedded Systems, Bilbao, Spain, October 17–19 (2007)
20. Ranise, S., Tinelli, C.: Satisfiability modulo theories. Trends and Controversies – IEEE Magazine on Intelligent Systems 21(6), 71–81 (2006)
21. SC-167: Software Considerations in Airborne Systems and Equipment Certification. RTCA (1992)
22. Schlich, B., Salewski, F., Kowalewski, S.: Applying model checking to an automotive microcontroller application. In: Proc. IEEE 2nd Int'l Symp. Industrial Embedded Systems (SIES 2007). IEEE, Los Alamitos (2007)
23. Strichman, O.: On solving Presburger and linear arithmetic with SAT. In: Aagaard, M.D., O'Leary, J.W. (eds.) FMCAD 2002. LNCS, vol. 2517, pp. 160–170. Springer, Heidelberg (2002)
24. Tseitin, G.S.: On the complexity of derivation in propositional calculus. In: Slisenko, A.O. (ed.) Studies in Constructive Mathematics and Mathematical Logic, Part 2, p. 115. Consultants Bureau, New York (1962)
25. Venet, A., Brat, G.: Precise and efficient static array bound checking for large embedded C programs. In: Proceedings of PLDI 2004, Washington, DC, USA, June 9–11. ACM, New York (2004)
26. Verified Systems International GmbH, Bremen: RT-Tester 6.2 – User Manual (2007)
Automated Proofs for Asymmetric Encryption

Joudicaël Courant, Marion Daubignard, Cristian Ene, Pascal Lafourcade, and Yassine Lakhnech

Université Grenoble 1, CNRS, Verimag
Abstract. Chosen-ciphertext security is by now a standard security property for asymmetric encryption. Many generic constructions for building secure cryptosystems from primitives with a lower level of security have been proposed. Providing security proofs has also become standard practice. There is, however, a lack of automated verification procedures that analyze such cryptosystems and provide security proofs. This paper presents an automated procedure for analyzing generic asymmetric encryption schemes in the random oracle model. It has been applied to several examples of encryption schemes, among which the construction of Bellare–Rogaway 1993, that of Pointcheval at PKC'2000, and REACT.
1   Introduction
Our day-to-day lives increasingly depend upon information and our ability to manipulate it securely. This requires solutions based on cryptographic systems (primitives and protocols). In 1976, Diffie and Hellman invented public-key cryptography, coined the notion of one-way functions and discussed the relationship between cryptography and complexity theory. Shortly after, the first cryptosystem with a reductionist security proof appeared (Rabin 1979). The next breakthrough towards formal proofs of security was the adoption of computational security for the purpose of rigorously defining the security of cryptographic schemes. In this framework, a system is provably secure if there is a polynomialtime reduction proof from a hard problem to an attack against the security of the system. The provable security framework has been later refined into the exact (also called concrete) security framework where better estimates of the computational complexity of attacks are achieved. While research in the field of provable cryptography has achieved tremendous progress towards rigorously defining the functionalities and requirements of many cryptosystems, little has been done for developing computer-aided proof methods or more generally for investigating a proof theory for cryptosystems as it exists for imperative programs, concurrent systems, reactive systems, etc... In this paper, we present an automated proof method for analyzing generic asymmetric encryption schemes in the random oracle model (ROM). Generic encryption schemes aim at transforming schemes with weak security properties, such as one-wayness, into schemes with stronger security properties, especially security against chosen ciphertext attacks. Examples of generic encryption
Grenoble, email:[email protected] This work has been partially supported by the ANR projects SCALP, AVOTE and SFINCS.
schemes are [11,23,21,7,5,19,18,17]. The paper contains two main contributions. The first one is a compositional Hoare logic for proving IND-CPA-security. That is, we introduce a simple programming language (to specify encryption algorithms that use one-way functions and hash functions) and an assertion language that allows to state invariants and axioms and rules to establish such invariants. Compositionality of the Hoare logic means that the reasoning follows the structure of the program that specifies the encryption oracle. The assertion language consists of three atomic predicates. The first predicate allows us to express that the value of a variable is indistinguishable from a random value even when given the values of a set of variables. The second predicate allows us to state that it is computationally infeasible to compute the value of a variable given the values of a set of variables. Finally, the third predicate allows us to state that the value of a variable has not been submitted to a hash function. Transforming the Hoare logic into an (incomplete) automated verification procedure is quite standard. Indeed, we can interpret the logic as a set of rules that tell us how to propagate the invariants backwards. We have done this for our logic resulting in a verification procedure implemented in less than 250 lines of CAML. We have been able to automatically verify IND-CPA security of several schemes among which [7,18,17]. Our Hoare logic is incomplete for two main reasons. First, IND-CPA security is an observational equivalence-based property, while with our Hoare logic we establish invariants. Nevertheless, as shown in Proposition 1, we can use our Hoare logic to prove IND-CPA security at the price of completeness. That is, we prove a stronger property than IND-CPA. The second reason, which we think is less important, is that for efficiency reasons some axioms are stronger than needed. The second contribution of the paper presents a simple criterion for plaintext awareness (PA). Plaintext awareness has been introduced by Bellare and Rogaway in [5]. It has then been refined in [4] such that if an encryption scheme is PA and IND-CPA then it is IND-CCA. Intuitively, PA ensures that an adversary cannot generate a valid cipher without knowing the plaintext, and hence, the decryption oracle is useless for him. The definition of PA is complex and proofs of PA are also often complex. In this paper, we present a simple syntactic criterion that implies plaintext awareness. Roughly speaking the criterion states that the cipher should contain as a sub-string the hash of a bitstring that contains as substrings the plaintext and the random seed. This criterion applies for many schemes such as [7,17,18] and easy to check. Although (or maybe because) the criterion is simple, the proof of its correctness is complex. Putting together these two contributions, we get a proof method for IND-CCA security. An important feature of our method is that it is not based on a global reasoning and global program transformation as it is the case for the game-based approach [6,20]. Indeed, both approaches can be considered complementary as the Hoare logic-based one can be considered as aiming at characterizing, by means of predicates, the set of contexts in which the game transformations can be applied safely.
Related work. We restrict our discussion to work providing computational proofs for cryptosystems. In particular, this excludes symbolic verification (including ours). We mentioned above the game-based approach [6,20,15]. In [8,9] B. Blanchet and D. Pointcheval developed a dedicated tool, CryptoVerif, that supports security proofs within the game-based approach. CryptoVerif is based on observational equivalence. The equivalence relation induces rewriting rules applicable in contexts that satisfy some properties. Invariants provable in our Hoare logic can be considered as logical representations of these contexts. Moreover, as we work with invariants, that is we follow a state-based approach, we need to prove results that link our invariants to game-based properties such as indistinguishability (cf. Proposition 1 and 3). Our verification method is fully automated. It focusses on asymmetric encryption in the random oracle model, while CryptoVerif is potentially applicable to any cryptosystem. G. Barthe and S. Tarento were among the first to provide machine-checked proofs of cryptographic schemes without relying on the perfect cryptography hypothesis. They formalized the Generic Model and the Random Oracle Model in the Coq proof assistant, and used this formalization to prove hardness of the discrete logarithm [1], security of signed ElGamal encryption against interactive attacks [3], and of Schnorr signatures against forgery attacks [22]. They are currently working on formalizing the game-based approach in Coq [2]. D. Nowak provides in [16] an implementation in Coq of the game-based approach. He illustrates his framework by a proof of the semantic security of the encryption scheme ElGamal and its hashed version. Another interesting work is the Hoarestyle proof system proposed by R. Corin and J. Den Hartog for game-based cryptographic proofs [10]. The main difference between our logic and theirs is that our assertion language does not manipulate probabilities explicitly and is at a higher level of abstraction. On the other hand, their logic is more general. In [12], Datta et al. present a computationally sound compositional logic for key exchange protocols. There is, however, no proof assistance provided for this logic neither. Outline: In Section 2, we introduce notations used for defining our programming language and generic asymmetric encryption schemes. In Section 3, we present our method for proving IND-CPA security. In Section 4 we introduce a criterion to prove plaintext awareness. In Section 5 we explain the automated verification procedure derived from our Hoare logic. Finally, in Section 6 we conclude.
2   Definitions
We are interested in analyzing generic schemes for asymmetric encryption assuming ideal hash functions. That is, we are working in the random oracle model [13,7]. Using standard notations, we write H ←r Ω to denote that H is randomly chosen from the set of functions with appropriate domain. By abuse of notation, for a list H = H1, · · · , Hn of hash functions, we write H ←r Ω instead of the sequence H1 ←r Ω, . . . , Hn ←r Ω. We fix a finite set H = {H1, . . . , Hn} of
hash functions and also a finite set Π of trapdoor permutations, and let O = Π ∪ H. We assume an arbitrary but fixed ordering on Π and H, just to be able to switch between set-based and vector-based notation. A distribution ensemble is a countable sequence of distributions {Xη}η∈ℕ. We only consider distribution ensembles that can be constructed in polynomial time by probabilistic algorithms that have oracle access to O. Given two distribution ensembles X = {Xη}η∈ℕ and X′ = {X′η}η∈ℕ, an algorithm A and η ∈ ℕ, we define the advantage of A in distinguishing Xη and X′η as the following quantity:

Adv(A, η, X, X′) = Pr[x ←r Xη : A^O(x) = 1] − Pr[x ←r X′η : A^O(x) = 1].

We insist, above, that for each hash function H, the probabilities are also taken over the set of maps with the appropriate type. Let Adv(η, X, X′) = sup_A Adv(A, η, X, X′), the maximal advantage taken over all probabilistic polynomial-time algorithms A. Then, two distribution ensembles X and X′ are called indistinguishable, denoted by X ∼ X′, if Adv(η, X, X′) is negligible as a function of η. In other words, for any polynomial-time (in η) probabilistic algorithm A, Adv(A, η, X, X′) is negligible as a function of η. We insist that all security notions we are going to use are in the ROM, where all algorithms, including adversaries, are equipped with oracle access to the hash functions.

2.1   A Simple Programming Language for Encryption and Decryption Oracles
We introduce a simple programming language without loops in which the encryption and decryption oracles are specified. The motivation for fixing a notation is obvious: it is mandatory for developing an automatic verification procedure. Let Var be an arbitrary finite non-empty set of variables. Then, our programming language is built according to the BNF described in Table 1, where for a bit-string bs = b1 . . . bk (the bi are bits), bs[n, m] = bn . . . bm,¹ and N is the name of the oracle, c its body, and x and y are the input and output variables, respectively. Note that the command x := y[n, m] is only used in decryption; this is why we do not have to consider it in our Hoare logic. With this language we can sample a uniform value into x, apply a one-way function f and its inverse f⁻¹, a hash function, the exclusive-or, the concatenation and substring functions, and perform an "if-then-else" (used only in the decryption function).

Table 1. Language grammar

Command c ::= x ←r U | x := f(y) | x := f⁻¹(y) | x := H(y) | x := y[n, m]
            | x := y ⊕ z | x := y||z | if x = y then c1 else c2 fi | c; c
Oracle declaration O ::= N(x, y) : c

¹ Notice that bs[n, m] = ε when m < n, and bs[n, m] = bs[n, k] when m > k.
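For readers who prefer to see the grammar of Table 1 as a data type, one possible abstract syntax is sketched below (Python; the class names are hypothetical and unrelated to the authors' CAML implementation; f⁻¹, y[n, m] and if-then-else are omitted for brevity).

from dataclasses import dataclass
from typing import Union

@dataclass
class Sample:              # x <-r U
    x: str

@dataclass
class ApplyF:              # x := f(y), f the trapdoor permutation
    x: str
    y: str

@dataclass
class Hash:                # x := H(y)
    x: str
    oracle: str
    y: str

@dataclass
class Xor:                 # x := y XOR z
    x: str
    y: str
    z: str

@dataclass
class Concat:              # x := y || z
    x: str
    y: str
    z: str

@dataclass
class Seq:                 # c; c
    first: 'Cmd'
    second: 'Cmd'

Cmd = Union[Sample, ApplyF, Hash, Xor, Concat, Seq]

# The first commands of Example 1 below, written in this abstract syntax:
prog = Seq(Sample('r'),
           Seq(ApplyF('a', 'r'),
               Seq(Hash('g', 'G', 'r'), Xor('b', 'ine', 'g'))))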
Example 1. The following command encodes the encryption scheme proposed by Bellare and Rogaway in [7] (in short, E(ine; oute) = f(r)||ine ⊕ G(r)||H(ine||r)):

E(ine, oute) :
    r ←r {0, 1}^{η0};
    a := f(r); g := G(r); b := ine ⊕ g;
    s := ine||r; c := H(s);
    u := a||b||c; oute := u;

where f ∈ Π and G, H ∈ H.
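Purely to illustrate how such an oracle declaration executes, here is a sketch of this encryption command in Python, with the trapdoor permutation replaced by a placeholder (the identity, which is of course not one-way) and G, H modelled by lazily sampled random oracles; all sizes and names are chosen for this example only.

import secrets

ETA0 = 16          # byte length of the random seed r (stand-in for the security parameter)

def random_oracle(table, query, out_len):
    # Lazy sampling: a fresh uniform answer is drawn the first time a value is queried.
    if query not in table:
        table[query] = secrets.token_bytes(out_len)
    return table[query]

G_table, H_table = {}, {}
def G(x): return random_oracle(G_table, x, 32)
def H(x): return random_oracle(H_table, x, 16)

def f(x): return x   # placeholder for the trapdoor permutation (identity is NOT one-way)

def xor(a, b): return bytes(u ^ v for u, v in zip(a, b))

def E(in_e: bytes) -> bytes:
    # E(in_e; out_e) = f(r) || in_e XOR G(r) || H(in_e || r)
    r = secrets.token_bytes(ETA0)
    a = f(r)
    g = G(r)
    b = xor(in_e, g[:len(in_e)])   # G's output is truncated to |in_e| in this toy example
    s = in_e + r
    c = H(s)
    return a + b + c

print(E(b'attack at dawn').hex())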
Semantics: In addition to the variables in Var, we consider list variables ΛH1, . . . , ΛHn. Variable ΛHi records the queries to the hash function Hi and cannot be accessed by the adversary. Thus, we consider states that assign bit-strings to the variables in Var and lists of pairs of bit-strings to the ΛHi. For simplicity of the presentation, we assume that all variables range over large domains, whose cardinalities are exponential in the security parameter η. u ←r U is the uniform sampling of a value u from the appropriate domain. Given a state S, S(ΛH).dom, respectively S(ΛH).res, denotes the list obtained by projecting each pair in S(ΛH) to its first, respectively second, element. A program takes as input a configuration (S, H, (f, f⁻¹)) and yields a distribution on configurations. A configuration is composed of a state S, a vector of hash functions (H1, . . . , Hn) and a pair (f, f⁻¹) of a trapdoor permutation and its inverse. Let Γ denote the set of configurations and Dist(Γ) the set of distributions on configurations. The semantics is given in Table 2, where δ(x) denotes the Dirac measure, i.e. Pr(x) = 1. Notice that the semantic function of commands can be lifted in the usual way to a function from Dist(Γ) to Dist(Γ). By abuse of notation we also denote the lifted semantics by [[c]].
Table 2. The semantics of the programming language

[[x ←r U]](S, H, (f, f⁻¹)) = [u ←r U : (S{x → u}, H, (f, f⁻¹))]
[[x := f(y)]](S, H, (f, f⁻¹)) = δ(S{x → f(S(y))}, H, (f, f⁻¹))
[[x := f⁻¹(y)]](S, H, (f, f⁻¹)) = δ(S{x → f⁻¹(S(y))}, H, (f, f⁻¹))
[[x := y[n, m]]](S, H, (f, f⁻¹)) = δ(S{x → S(y)[n, m]}, H, (f, f⁻¹))
[[x := H(y)]](S, H, (f, f⁻¹)) =
    δ(S{x → v}, H, (f, f⁻¹))   if (S(y), v) ∈ S(ΛH)
    δ(S{x → v, ΛH → S(ΛH) · (S(y), v)}, H, (f, f⁻¹))   if (S(y), v) ∉ S(ΛH) and v = H(S(y))
[[x := y ⊕ z]](S, H, (f, f⁻¹)) = δ(S{x → S(y) ⊕ S(z)}, H, (f, f⁻¹))
[[x := y||z]](S, H, (f, f⁻¹)) = δ(S{x → S(y)||S(z)}, H, (f, f⁻¹))
[[c1; c2]] = [[c2]] ◦ [[c1]]
[[if x then c1 else c2 fi]](S, H, (f, f⁻¹)) = [[c1]](S, H, (f, f⁻¹)) if S(x) = 1, and [[c2]](S, H, (f, f⁻¹)) otherwise
[[N(v, y)]](S, H, (f, f⁻¹)) = [[c]](S{x → v}, H, (f, f⁻¹)), where c is the body of N

A notational convention: It is easy to prove that commands preserve the values of H and (f, f⁻¹). Therefore, we can, without ambiguity, write S′ ←r [[c]](S, H, (f, f⁻¹)) instead of (S′, H, (f, f⁻¹)) ←r [[c]](S, H, (f, f⁻¹)). According to our semantics, commands denote functions that transform distributions on configurations to distributions on configurations. However, only distributions that are constructible are of interest. Their set is denoted by Dist(Γ, H, KG) and is defined as the set of distributions of the form:

[(f, f⁻¹) ←r KG(1^η); H ←r Ω; S ←r A^{H,f,f⁻¹}() : (S, H, f, f⁻¹)]

where A is an algorithm accessing f, f⁻¹ and H, and which records its queries to the hashing oracles into the ΛH's in S.

2.2   Asymmetric Encryption
We are interested in generic constructions that convert any trapdoor permutation into a public-key encryption scheme. More specifically, our aim is to provide an automatic verification method for generic encryption schemes. We also adapt IND-CPA and IND-CCA security notions to our setting.
Definition 1. A generic encryption scheme is defined by a triple (KG, E(ine, oute) : c, D(ind, outd) : c′) such that:
– KG is a trapdoor permutation generator that on input η generates an η-bitstring trapdoor permutation (f, f⁻¹),
– E(ine, oute) : c and D(ind, outd) : c′ are oracle declarations for encryption and decryption.
Definition 2. Let GE be a generic encryption scheme defined by (KG, E(ine, oute) : c, D(ind, outd) : c′). Let A = (A1, A2) be an adversary and X ∈ Dist(Γ, H, KG). For α ∈ {cpa, cca} and η ∈ ℕ, let

Adv^{ind−α}_{A,GE}(η, X) = 2 · Pr[(S, H, (f, f⁻¹)) ←r X; (x0, x1, s) ←r A1^{O1}(f); b ←r {0, 1};
        S′ ←r [[E(xb, oute)]](S, H, (f, f⁻¹)) : A2^{O2}(f, x0, x1, s, S′(oute)) = b] − 1

where O1 = O2 = H if α = cpa, and O1 = O2 = H ∪ {D} if α = cca. We insist, above, that A1 outputs x0, x1 such that |x0| = |x1| and that, in the case of CCA, A2 does not ask its oracle D to decrypt S′(oute). We say that GE is IND-α secure if Adv^{ind−α}_{A,GE}(η, X) is negligible for any constructible distribution ensemble X and polynomial-time adversary A.
3   IND-CPA Security
In this section, we present an effective procedure to verify IND-CPA security. The procedure may fail to prove a secure encryption scheme but never declares correct an insecure one. Thus, we sacrifice completeness for soundness, a situation
very frequent in verification². We insist that our procedure does not fail for any of the numerous constructions we tried. We are aiming at developing a procedure that allows us to prove properties, i.e. invariants, of the encryption oracle. More precisely, the procedure annotates each control point of the encryption command with a set of predicates that hold at that point for any execution, except with negligible probability. Given an encryption oracle E(ine, oute) : c, we want to prove that at the final control point we have an invariant telling us that the value of oute is indistinguishable from a random value. Classically, this implies IND-CPA security. A few words now concerning how we present the verification procedure. First, we present in the assertion language the invariant properties we are interested in. Then, we present a set of rules of the form {ϕ}c{ϕ′}, meaning that execution of command c in any distribution that satisfies ϕ leads to a distribution that satisfies ϕ′. Using Hoare logic terminology, this means that the triple {ϕ}c{ϕ′} is valid. From now on, we suppose that the adversary has access to the hash functions H, and that he is given the trapdoor permutation f, but not its inverse f⁻¹.

3.1   The Assertion Language
Our assertion language is defined by the following grammar, where ψ defines the set of atomic assertions:

ψ ::= Indis(νx; V1; V2) | WS(x; V) | H(H, e)
ϕ ::= true | ψ | ϕ ∧ ϕ

where V1, V2 ⊆ Var and e is an expression constructible (by the adversary) out of the variables used in the program, that is to say, possibly using concatenation, xor, hash oracles or f. Moreover, we define the set of variables used as substrings of an expression e and denote it subvar(e): x ∈ subvar(e) iff e = e1||x||e2 for some expressions e1 and e2. For example, we use the predicate H(H, R||ine||f(R||r)||ine ⊕ G(R)); if we denote this latter expression by e, we can write subvar(e) = {R, ine}, since those variables are substrings of e, but r ∉ subvar(e), since it cannot be obtained directly out of e. Intuitively, Indis(νx; V1; V2) is satisfied by a distribution on configurations if any adversary has negligible probability of distinguishing whether he is given results of computations performed using the value of x or a random value, when he is given the values of the variables in V1 and the images under the one-way permutation of those in V2. The assertion WS(x; V) is satisfied by a distribution if any adversary has negligible probability of computing the value of x when he is given the values of the variables in V. Finally, H(H, e) is satisfied when the value of e has not been submitted to the hash oracle H.
Notations: We use Indis(νx; V) instead of Indis(νx; V; ∅) and Indis(νx) instead of Indis(νx; Var). We also write V, x instead of V ∪ {x} and even x, y instead of {x, y}.

² We conjecture that the IND-CPA verification problem of schemes described in our language is undecidable.
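As a concrete representation, the three kinds of atomic assertions can be modelled by simple records, as in the sketch below (Python; the names are hypothetical and unrelated to the authors' CAML implementation).

from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Indis:                 # Indis(nu x; V1; V2)
    x: str
    v1: FrozenSet[str]
    v2: FrozenSet[str] = frozenset()

@dataclass(frozen=True)
class WS:                    # WS(x; V)
    x: str
    v: FrozenSet[str]

@dataclass(frozen=True)
class HFresh:                # H(H, e): e has not been submitted to hash oracle H
    oracle: str
    expr: Tuple[str, ...]    # the expression, kept abstract as a tuple of tokens

# An invariant is a conjunction of atoms, for instance one of the form
# Indis(nu r; Var) and H(G, r) and H(H, ine || r):
Var = frozenset({'r', 'a', 'g', 'b', 's', 'c', 'u', 'ine', 'oute'})
invariant = [Indis('r', Var), HFresh('G', ('r',)), HFresh('H', ('ine', '||', 'r'))]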
Formally, the meaning of the assertion language is defined by a satisfaction relation X |= ϕ, which tells us when a distribution on configurations X satisfies the assertion ϕ. In order to define the satisfaction relation X |= ϕ, we need to generalize indistinguishability as follows. Let X be a family of distributions in Dist(Γ, H, ) and V1 and V2 be sets of variables in Var. By D(X, V1 , V2 ) we denote the following distribution family (on tuples of bit-strings):
D(X, V1, V2)η = [(S, H, (f, f⁻¹)) ←r X : (S(V1), f(S(V2)), H, f)]

Here S(V1) is the point-wise application of S to the elements of V1 and f(S(V2)) is the point-wise application of f to the elements of S(V2). We say that X and X′ are V1; V2-indistinguishable, denoted by X ∼V1;V2 X′, if D(X, V1, V2) ∼ D(X′, V1, V2).
Example 2. Let S0 be any state and let H1 be a hash function. Recall that we are working in the ROM. Consider the following distributions: Xη = [β; S := S0{x → u, y → H1(u)} : (S, H, (f, f⁻¹))] and X′η = [β; u′ ←r {0, 1}^{p(η)}; S := S0{x → u, y → H1(u′)} : (S, H, (f, f⁻¹))], where β = H ←r Ω; (f, f⁻¹) ←r KG(1^η); u ←r {0, 1}^{p(η)}, and p is a polynomial. Then, we have X ∼{y};{x} X′ but we do not have X ∼{y,x};∅ X′, because then the adversary can query the value of H1(x) and match it to that of y.
The satisfaction relation X |= ψ is defined as follows:
– X |= true; X |= ϕ ∧ ϕ′ iff X |= ϕ and X |= ϕ′.
– X |= Indis(νx; V1; V2) iff X ∼V1;V2 [u ←r U; (S, H, (f, f⁻¹)) ←r X : (S{x → u}, H, (f, f⁻¹))]
– X |= WS(x; V) iff Pr[(S, H, (f, f⁻¹)) ←r X : A(S(V)) = S(x)] is negligible for any adversary A.
– X |= H(H, e) iff Pr[(S, H, (f, f⁻¹)) ←r X : S(e) ∈ S(ΛH).dom] is negligible.
The relation between our Hoare triples and semantic security is established by the following proposition that states that if the value of oute is indistinguishable from a random value then the scheme considered is IND-CPA.
Proposition 1. Let (KG, E(ine, oute) : c, D(ind, outd) : c′) be a generic encryption scheme. It is IND-CPA secure if {true}c{Indis(νoute; oute, ine)} is valid.
Indeed, if {true}c{Indis(νoute; oute, ine)} holds then the encryption scheme is secure with respect to randomness of ciphertext. It is standard that randomness of ciphertext implies IND-CPA security.

3.2   A Hoare Logic for IND-CPA Security
In this section we present our Hoare logic for IND-CPA security. We begin with a set of preservation rules that tell us when an invariant established at the control point before a command can be transferred to the next control point.
Then, for each command, except x := f⁻¹(y), x := y[n, m] and the conditional, we present a set of specific rules that allow us to establish new invariants. The commands that are not considered are usually not used in encryption but only in decryption procedures, and hence are irrelevant with respect to our way of proving IND-CPA security.
Generic preservation rules: We assume z ≠ x and that c is either x ←r U or x := y||t or x := y ⊕ t or x := f(y) or x := H(y) or x := t ⊕ H(y).

Lemma 1. The following rules are sound, when x ∉ V1 ∪ V2:
– (G1) {Indis(νz; V1; V2)} c {Indis(νz; V1; V2)}
– (G2) {WS(z; V1)} c {WS(z; V1)}
– (G3) {H(H′, e[e′/x])} x := e′ {H(H′, e)}, provided H′ ≠ H in case e′ ≡ H(y).
Here, e[e′/x] is the expression obtained from e by replacing x by e′.

Random Assignment

Lemma 2. The following rules are sound:
– (R1) {true} x ←r U {Indis(νx)}
– (R2) {true} x ←r U {H(H, e)} if x ∈ subvar(e).
Moreover, the following preservation rules, where we assume x ≠ y³, are sound:
– (R3) {Indis(νy; V1; V2)} x ←r U {Indis(νy; V1, x; V2)}
– (R4) {WS(y; V)} x ←r U {WS(y; V, x)}
Rule (R1) is obvious. Rule (R2) takes advantage of the fact that U is a large set, or more precisely that its cardinality is exponential in the security parameter, and that since e contains the freshly generated x, the probability that it has already been submitted to H is small. Rules (R3) and (R4) state that the value of x cannot help an adversary in distinguishing the value of y from a random value in (R3), or in computing its value in (R4). This is the case because the value of x is randomly sampled.

Hash Function

Lemma 3. The following basic rules are sound, when x ≠ y and α is either a constant or a variable:
– (H1) {WS(y; V) ∧ H(H, y)} x := α ⊕ H(y) {Indis(νx; V, x)}
– (H2) {H(H, y)} x := H(y) {H(H′, e)}, if x ∈ subvar(e).

³ By x = y we mean syntactic equality.
– (H3) {Indis(νy; V; V′, y) ∧ H(H, y)} x := H(y) {Indis(νx; V, x; V′, y)}, if y ∉ V
Rule (H1) captures the main feature of the random oracle model, namely that the hash function is a random function. Hence, if an adversary cannot compute the value of y and the latter has not been hashed yet, then he cannot distinguish H(y) from a random value. Rule (H2) is similar to rule (R2). Rule (H3) uses the fact that the value of y cannot be queried to the hash oracle.

Lemma 4. The following preservation rules are sound provided that x ≠ y and z ≠ x:
– (H4) {WS(y; V) ∧ WS(z; V) ∧ H(H, y)} x := H(y) {WS(z; V, x)}
– (H5) {H(H, e) ∧ WS(z; y)} x := H(y) {H(H, e)}, if z ∈ subvar(e) ∧ x ∉ subvar(e)
– (H6) {Indis(νy; V1; V2, y) ∧ H(H, y)} x := H(y) {Indis(νy; V1, x; V2, y)}, if y ∉ V1
– (H7) {Indis(νz; V1, z; V2) ∧ WS(y; V1 ∪ V2, z) ∧ H(H, y)} x := H(y) {Indis(νz; V1, z, x; V2)}
The idea behind (H4) is that to the adversary the value of x is seemingly random, so that it cannot help to compute z. Rule (H5) states that the fact that the value of e has not been hashed yet remains true as long as e contains a variable z whose value is not computable out of y. (H6) and (H7) give conditions for the preservation of indistinguishability that is based on the seeming randomness of a hash value.

One-way Function

Lemma 5. The following rule is sound, when y ∉ V ∪ {x}:
– (O1) {Indis(νy; V; y)} x := f(y) {WS(y; V, x)}.
Rule (O1) captures the one-wayness of f.

Lemma 6. The following rules are sound when z ≠ x:
– (O2) {Indis(νz; V1, z; V2, y)} x := f(y) {Indis(νz; V1, z, x; V2, y)}, if z ≠ y
– (O3) {WS(z; V) ∧ Indis(νy; V; y, z)} x := f(y) {WS(z; V, x)}
For one-way permutations, we also have the following rule:
– (P1) {Indis(νy; V1; V2, y)} x := f(y) {Indis(νx; V1, x; V2)}, if y ∉ V1 ∪ V2
Rule (O2) is obvious since f(y) is given to the adversary in the precondition, and rule (O3) follows from the fact that y and z are independent. Rule (P1) simply ensues from the fact that f is a permutation.
The Xor operator. In the following rules, we assume y ≠ z.

Lemma 7. The following rule is sound when y ∉ V1 ∪ V2:
– (X1) {Indis(νy; V1, y, z; V2)} x := y ⊕ z {Indis(νx; V1, x, z; V2)}
Moreover, we have the following rules, which are sound provided that t ≠ x, y, z:
– (X2) {Indis(νt; V1, y, z; V2)} x := y ⊕ z {Indis(νt; V1, x, y, z; V2)}
– (X3) {WS(t; V, y, z)} x := y ⊕ z {WS(t; V, y, z, x)}
To understand rule (X1), one should consider y as a key and think of x as the one-time-pad encryption of z with the key y. Rules (X2) and (X3) take advantage of the fact that it is easy to compute x given y and z.

Concatenation

Lemma 8. The following rules are sound:
– (C1) {WS(y; V)} x := y||z {WS(x; V)}, if x ∉ V. A dual rule applies for z.
– (C2) {Indis(νy; V1, y, z; V2) ∧ Indis(νz; V1, y, z; V2)} x := y||z {Indis(νx; V1, x; V2)}, if y, z ∉ V1 ∪ V2
– (C3) {Indis(νt; V1, y, z; V2)} x := y||z {Indis(νt; V1, x, y, z; V2)}, if t ≠ x, y, z
– (C4) {WS(t; V, y, z)} x := y||z {WS(t; V, y, z, x)}, if t ≠ x, y, z
(C1) states that if computing a substring of x out of the elements of V is hard, then so is computing x itself. The idea behind (C2) is that the randomness of y and z implies the randomness of x, with respect to V1 and V2. Finally, x being easily computable from y and z accounts for rules (C3) and (C4).
In addition to the rules above, we have the usual sequential composition and consequence rules of Hoare logic. In order to apply the consequence rule, we use entailment (logical implication) between assertions as in Lemma 9.
Lemma 9. Let X ∈ Dist(Γ, H, KG) be a distribution ensemble:
1. If X |= Indis(νx; V1; V2), V1′ ⊆ V1 and V2′ ⊆ V1 ∪ V2, then X |= Indis(νx; V1′; V2′).
2. If X |= WS(x; V) and V′ ⊆ V, then X |= WS(x; V′).
3. If X |= Indis(νx; V1; V2 ∪ {x}) and V ⊆ V1 \ {x}, then X |= WS(x; V).
The soundness of the Hoare logic follows by induction from the soundness of each rule and the soundness of the Consequence and Sequential composition rules.
Proposition 2. The Hoare triples given in Section 3.2 are valid.
Example 3. We illustrate our proposition with Bellare & Rogaway's generic construction [7].
1) r ←r {0, 1}^{n0}
   Indis(νr; Var) ∧ H(G, r) ∧ H(H, ine||r)
2) a := f(r)
   Indis(νa; Var − r) ∧ WS(r; Var − r) ∧ H(G, r) ∧ H(H, ine||r)
3) g := G(r)
   Indis(νa; Var − r) ∧ Indis(νg; Var − r) ∧ WS(r; Var − r) ∧ H(H, ine||r)
4) b := ine ⊕ g
   Indis(νa; Var − r) ∧ Indis(νb; Var − g − r) ∧ WS(r; Var − r) ∧ H(H, ine||r)
5) s := ine||r
   Indis(νa; Var − r − s) ∧ Indis(νb; Var − g − r − s) ∧ WS(s; Var − r − s) ∧ H(H, s)
6) c := H(s)
   Indis(νa; Var − r − s) ∧ Indis(νb; Var − r − g − s) ∧ Indis(νc; Var − r − s)
7) oute := a||b||c
   Indis(νoute; Var − a − b − c − r − g − s)

The rules applied at each step are:
1) (R1), (R2), and (R2).
2) (P1), (O1), (G3), and (G3).
3) (H7), (H1), (H4), and (G3).
4) (X2), (X1), (X3), and (G3).
5) (G1), (G1), (C1), and (G3).
6) (H7), (H7), and (H1).
7) (C2) twice.
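To illustrate how such a derivation can be mechanised, the following toy sketch (Python; it covers only rules (R1) and (H1) and is not the verification procedure of this paper) computes the atoms contributed by a command from the atoms holding before it.

def apply_R1(all_vars, x):
    """(R1): {true} x <-r U {Indis(nu x; Var)} -- sampling x makes it look random
    even when all variables are revealed."""
    return {('Indis', x, frozenset(all_vars), frozenset())}

def apply_H1(pre, x, y, oracle):
    """(H1): {WS(y; V) and H(oracle, y)} x := alpha XOR oracle(y) {Indis(nu x; V, x)}."""
    post = set()
    if ('Hfresh', oracle, y) in pre:
        for atom in pre:
            if atom[0] == 'WS' and atom[1] == y:
                v = atom[2]
                post.add(('Indis', x, v | frozenset({x}), frozenset()))
    return post

all_vars = {'r', 'a', 'g', 'b', 'ine', 'oute'}
pre = {('WS', 'r', frozenset({'a', 'b', 'ine'})), ('Hfresh', 'G', 'r')}
print(apply_R1(all_vars, 'r'))
print(apply_H1(pre, 'g', 'r', 'G'))
# the second call yields Indis(nu g; {a, b, ine, g}), as rule (H1) gives for g := ine XOR G(r)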
3.3   Extensions
In this section, we show how our Hoare logic, and hence our verification procedure, can be adapted to deal with, on the one hand, injective partially trapdoor one-way functions and, on the other hand, OW-PCA (probabilistic) functions. The first extension is motivated by Pointcheval's construction in [18] and the second one by the Rapid Enhanced-security Asymmetric Cryptosystem Transform (REACT) [17]. The first observation we have to make is that Proposition 1 is too demanding in case f is not a permutation. Therefore, we introduce a new predicate Indisf(νx; V1; V2) whose meaning is as follows: X |= Indisf(νx; V1; V2) if and only if X ∼V1;V2 [u ←r U; (S, H, (f, f⁻¹)) ←r X : (S{x → f(u)}, H, (f, f⁻¹))]. Notice that, when f is a bijection, Indisf(νx; V1; V2) is equivalent to Indis(νx; V1; V2) (f can be the identity function, as in the last step of Examples 4 and 5). Now, let oute, the output of the encryption oracle, have the form a1|| · · · ||an with ai = fi(xi). Then, we can prove the following:
Proposition 3. Let GE = (KG, E(ine, oute) : c, D(ind, outd) : c′) be a generic encryption scheme. If {true}c{ ⋀_{i=1}^{n} Indisfi(νai; a1, . . . , an, ine) } is valid then GE is IND-CPA.
Now, we introduce a new rule for Indisf(νx; V1; V2) that replaces rule (P1) in case the one-way function f is not a permutation:
(P1′) {Indis(νy; V1; V2, y)} x := f(y) {Indisf(νx; V1, x; V2)}, if y ∉ V1 ∪ V2
Clearly all preservation rules can be generalized for Indisf.
Injective partially trapdoor one-way functions: In contrast to the previous section, we do not assume f to be a permutation. On the other hand, we demand a stronger property than one-wayness. Let f : X × Y → Z be a function and let f⁻¹ : Z → X be such that ∀z ∈ dom(f⁻¹) ∃y ∈ Y, z = f(f⁻¹(z), y). Here f⁻¹ is a partial function. The function f is said to be partially one-way if, for any given z = f(x, y), it is computationally impossible to compute a corresponding x. In order to deal with the fact that f is now partially one-way, we add the following rule, where we assume x, y ∉ V ∪ {z} and where we identify f and (x, y) ↦ f(x||y):
(PO1) {Indis(νx; V, x, y) ∧ Indis(νy; V, x, y)} z := f(x||y) { WS(x; V, z) ∧ Indisf(νz; V, z) }
The intuition behind the first part of (PO1) is that f guarantees one-way secrecy of the x-part of x||y. The second part follows the same idea as (P1′).
Example 4. We verify Pointcheval's transformer [18].
1) r ←r {0, 1}^{n0}
   Indis(νr; Var) ∧ H(G, r)
2) s ←r {0, 1}^{n0}
   Indis(νr; Var) ∧ Indis(νs; Var) ∧ H(G, r) ∧ H(H, ine||s)
3) w := ine||s
   Indis(νr; Var) ∧ WS(w; Var − s − w) ∧ H(G, r) ∧ H(H, w)
4) h := H(w)
   Indis(νr; Var − w − s) ∧ Indis(νh; Var − w − s) ∧ H(G, r)
5) a := f(r||h)
   Indisf(νa; Var − r − s − w − h) ∧ WS(r; Var − r − s − w − h) ∧ H(G, r)
6) b := w ⊕ G(r)
   Indisf(νa; a, ine) ∧ Indis(νb; a, b, ine)
7) oute := a||b
   Indisf(νa; a, ine) ∧ Indis(νb; a, b, ine)
The rules applied at each step are:
1) (R1) and (R2);
2) (R3), (R1), (G3) and (R2);
3) (C3), (C1), (G3), and (G3);
4) (H7), (H1), and (G3);
5) new rule (PO1) and (G3);
6) extension of (G1) to Indisf, and (H1);
7) extension of (G1) to Indisf, and (G1).
To conclude, we use that Indisf(νa; a, ine) and Indis(νb; a, b, ine) imply Indisf(νa; a, b, ine).
OW-PCA: Some constructions such as REACT are based on probabilistic one-way functions that are difficult to invert even when the adversary has access to a plaintext-checking oracle (PC), which on input a pair (m, c) answers whether c encrypts m. In order to deal with OW-PCA functions, we need to strengthen the meaning of our predicates, allowing the adversary access to the additional plaintext-checking oracle. For instance, the definition of WS(x; V) becomes: X |= WS(x; V) iff Pr[(S, H, (f, f⁻¹)) ←r X : A^{PCA}(S(V)) = S(x)] is negligible for any adversary A. Now, we have to revisit Lemma 9 and the rules that introduce WS(x; V) in the postcondition. It is, however, easy to check that they remain valid.
Example 5. REACT [17]:
1) r ←r {0, 1}^{n0}
   Indis(νr; Var)
2) R ←r {0, 1}^{n0}
   Indis(νr; Var) ∧ Indis(νR; Var) ∧ H(G, R) ∧ H(H, R||ine||f(R||r)||ine ⊕ G(R))
3) a := f(R||r)
   Indisf(νa; Var − r − R) ∧ WS(R; Var − r − R) ∧ H(G, R) ∧ H(H, R||ine||a||ine ⊕ G(R))
4) g := G(R)
   Indisf(νa; Var − r − R) ∧ Indis(νg; Var − r − R) ∧ WS(R; Var − r − R) ∧ H(H, R||ine||a||ine ⊕ g)
5) b := ine ⊕ g
   Indisf(νa; Var − r − R) ∧ Indis(νb; Var − g − r − R) ∧ WS(R; Var − r − R) ∧ H(H, R||ine||a||b)
6) w := R||ine||a||b
   Indisf(νa; Var − r − w − R) ∧ Indis(νb; Var − g − r − w − R) ∧ WS(w; Var − r − w − R) ∧ H(H, w)
7) c := H(w)
   Indisf(νa; a, b, c, ine) ∧ Indis(νb; a, b, c, ine) ∧ Indis(νc; a, b, c, ine)
8) oute := a||b||c
   Indisf(νa; a, b, c, ine) ∧ Indis(νb; a, b, c, ine) ∧ Indis(νc; a, b, c, ine)
The rules applied at each step are:
1) (R1).
2) (R3), (R1), (R2) and (R2).
3) (PO1), (G3) and (G3).
4) Extension of (H7) to Indisf, (H1), (H4), and (G3).
5) Extension of (X2) to Indisf, (X1), (X3), and (G3).
6) Extension of (G1) to Indisf, (G1), (C1), and (G3).
7) Extension of (H7) to Indisf, (H7), and (H1).
8) Extension of (G1) to Indisf, (G1) and (G1).

4   Achieving a Stronger Criterion: IND-CCA Security of a Scheme
Up to now, we have been interested in demonstrating the most basic notion of security, namely IND-CPA. Nevertheless, most of the schemes achieve a stronger level of security, since they are IND-CCA secure. The great difference between IND-CPA and IND-CCA is that adversaries attacking IND-CCA security are granted access to the decryption oracle all along their game against the scheme, on condition that they do not ask for the decryption of their challenge. More powerful adversaries mean stronger security criteria, and it is easy to see that an IND-CCA scheme is IND-CPA. However, the whole logic that has been developed was meant to deal with IND-CPA. The notion of Weak Secrecy, for example, is biased in the new IND-CCA context, since the ability to decipher messages often allows the computation of the inverse of the one-way function used in the scheme to be bypassed. Indistinguishability itself is a far more difficult property to achieve, since giving f(V2) may permit the adversary to create ciphertexts he can then submit to the decryption oracle. This is why a change is required in the way we carry out our proofs. In their article [4], Bellare et al. list and compare the most classical security criteria. They show that IND-CCA security is implied by IND-CPA security of a scheme plus its plaintext awareness. This is the way we choose to deal with IND-CCA.

4.1   Introduction of Plaintext Awareness
Plaintext awareness (PA) was first introduced in [5], but its original definition was slightly weaker. It was then refined in [4] to be the following notion. The idea is that the adversary should not be able to obtain ciphertexts without knowing the corresponding plaintext. If it is the case, we can consider that he asks the encryption oracle to cipher them, so that we do not need to care much about this capacity since it is yet taken into account by the IND-CPA criterion. Queries to the decryption oracle are adding a new element to the knowledge of the adversary if he can ask the decryption of interesting ciphertexts, that is, some that he hasn’t obtained from the encryption oracle. Otherwise functional correctness of the scheme imposes the result and the query is useless. In practice, an extra algorithm is introduced to define PA. This is the plaintext extractor K. As its name allows to suppose, it is meant to simulate the decryption algorithm. The idea is to say that if to any ciphertext the adversary manages to output, the plaintext extractor can associate the corresponding plaintext without
asking anything to D, but only looking at the adversary's queries to hash oracles and the encryption algorithm, then the scheme is plaintext aware. That is to say, no poly-time adversary can output a ciphertext he couldn't decipher on his own. Formally, an adversary B against PA security of a scheme outputs a list hH of his hash queries and their results, a list C of his queries to E, and a ciphertext y that he challenges the plaintext extractor to decipher. B wins if the plaintext extractor does not output the same thing as the decryption oracle. Otherwise, if y ∈ C (the adversary has cheated and output a ciphertext he obtained from E) or if K(y) is the same as D(y), K is the winner of the experiment. We thus define:
Definition 3 (Success Probability of the Plaintext Extractor). Let X be a distribution on configurations and GE be a generic scheme. Then the probability that the plaintext extractor K succeeds against adversary B is:

Succ^{pa}_{K,B,GE}(η, X) = Pr[(S, H, (f, f⁻¹)) ←r X; (hH, C, y, S′) ←r B^{E(·),H}(f);
        S′′ ←r [[D(y)]](S′, H, (f, f⁻¹)) : y ∈ C ∨ (y ∉ C ∧ K(hH, C, y, f) = S′′(outd))]

Now the formal definition of plaintext awareness is easy to state:
Definition 4 (Plaintext Awareness). A generic encryption scheme GE = (KG, E(ine, oute) : c, D(ind, outd) : c′) is PA-secure if there is a polynomial-time probabilistic algorithm K such that for every distribution X ∈ Dist(Γ, H, KG) and adversary B, 1 − Succ^{pa}_{K,B,GE}(η, X) is a negligible function in η.
4.2   Intuition on Means to Ensure Plaintext Awareness
Getting used to working with plaintext awareness, and trying to acquire an intuition about it on usual schemes, one can notice that a certain form of decryption algorithms are particularly well-suited for the verification of this criterion. The thing is, some decryption algorithms can be split into two parts: a first part that actually computes the plaintext out of the ciphertext, and a second one that checks whether the ciphertext was ’legally’ obtained. This last verification, that we call the ’sanity check ’, is the one ensuring PA (and hence IND-CCA) security. It allows to discriminate random bitstrings or ciphertexts that have been tampered with from valid ciphertexts output by the encryption algorithm. More precisely, if we consider a scheme GE = ( , E, D(ind , outd ) : c ) using H = (H1 , . . . , Hn ), let us suppose that c has the following pattern:
– some command c1 computes the plaintext,
– h* := H1(t*) is computed,
– then comes the branching if V(x, h*) = v then out_d := m* else out_d := "error" fi, where x is a vector of variables (possibly empty) and V is a function such that for given x and v, Pr[r ← U : V(x, r) = v] is negligible.

The condition V(x, h*) = v is called the sanity check. The idea behind such a test is simple: as a hash value cannot be forged (in the ROM), the hash oracle has to be queried on some t*, whose right value is
meant to be computable only by a normal execution of the encryption algorithm. The challenge for a PA-adversary thus becomes to compute the right hash value, and hence we can easily prove that he has a negligible probability of success. Nevertheless, the soundness of such an argument requires the weak injectivity property we impose on V. Indeed, if a great number of values satisfied the sanity check, we would not be able to deduce from its validity that the adversary can compute the right hash value. The uniform distribution of the argument r in the hypothesis on V is meant to simulate the distribution of hash values. In practice, we need two more assumptions on the form of the program and the use of the variables. First, the encryption algorithm is supposed to make a unique call to H1, on a variable t. Secondly, the value t* that D computes matches the value of t computed by E during a sound execution of the pair of algorithms. Fig. 1 illustrates this assumption that t and t* play the same role in the encryption and the decryption algorithm, respectively.
Fig. 1. The hypothesis about t and t*: in the encryption algorithm t gets a value S(t), and in the decryption algorithm t* gets the same value as t, i.e., S(t) = S'(t*).
Hereafter, the reader can find the example of the scheme designed in 1993 by Bellare and Rogaway [7], which illustrates the discussion above very well. To lighten the notation, we directly use a match command, which is not in the language, instead of cutting the bitstring in three by hand. The sanity check is highlighted in the code of the decryption oracle. Notice that this code indeed falls into two parts: first the computation of the plaintext m*, which does not involve c*. The latter serves only the purpose of the final test, which is to ensure that the right value of t*, and thus the right values of m* and r*, have been computed. This conditional branching essentially forces whoever attacks plaintext awareness of the scheme to invert f and to query G on the result of the inversion itself; it creates an extremely strong link between the random seed and the plaintext by placing them together under a hash function.

Encryption E(in_e, out_e) =
  r ← U; a := f(r); g := G(r); b := in_e ⊕ g; t := in_e||r; c := H(t); out_e := a||b||c
Decryption D(in_d, out_d) =
  match in_d with a*||b*||c*; r* := f^{-1}(a*); g* := G(r*); m* := b* ⊕ g*; t* := m*||r*; h* := H(t*);
  if h* = c* then out_d := m* else out_d := "error"
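To make the two-phase structure of the decryption algorithm concrete, the following OCaml fragment sketches the Bellare–Rogaway construction with its sanity check. It is only an illustration under simplifying assumptions: bitstrings are shrunk to machine integers, the trapdoor permutation f, its inverse and the oracles G and H are passed in as ordinary functions, and the toy instantiation at the end is certainly not one-way; all identifiers are ours and not part of the formal language used in the paper.

(* A runnable toy model of the scheme above.  The concatenation a||b||c is a
   record, m||r is a pair, and xor over ints stands in for bitstring xor. *)
type ciphertext = { a : int; b : int; c : int }

(* E: draw r, set a := f(r), b := m xor G(r), c := H(m||r) *)
let encrypt ~f ~g ~h ~rand m =
  let r = rand () in
  { a = f r; b = m lxor g r; c = h (m, r) }

(* D: recover r* and m*, then perform the sanity check H(m*||r*) = c* *)
let decrypt ~f_inv ~g ~h ct =
  let r' = f_inv ct.a in
  let m' = ct.b lxor g r' in
  if h (m', r') = ct.c then Some m' else None      (* None plays "error" *)

let () =
  (* toy instantiation: f is the identity, so nothing here is one-way;
     g and h are fixed functions standing in for the random oracles *)
  let f x = x and f_inv x = x in
  let g r = (r * 2654435761) land 0xFFFF in
  let h (m, r) = (m * 31 + r) land 0xFFFF in
  Random.self_init ();
  let ct = encrypt ~f ~g ~h ~rand:(fun () -> Random.int 0x10000) 42 in
  assert (decrypt ~f_inv ~g ~h ct = Some 42);
  (* tampering with any component makes the sanity check reject *)
  assert (decrypt ~f_inv ~g ~h { ct with c = ct.c lxor 1 } = None)

The point to notice is that the plaintext recovery in decrypt never touches c; the final comparison is exactly the sanity check discussed above.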
4.3 Formal Semantic Criterion for Plaintext Awareness
We recall that we suppose the decryption oracle to be of the following form: c1; h* := H1(t*); if V(x, h*) = v then out_d := m* else out_d := "error" fi, and that H1 is called once in E, on a variable t. On top of that, we require that if S ← [[E]](in_e) and S' := D(S(out_e)), then S(t) = S'(t*). This last condition simply states that t and t* play the same role in both algorithms.

The intuition behind the semantic criterion is easy to grasp. We impose three conditions that ensure the ability to construct a plaintext extractor with an overwhelming probability of success; that is to say, we design conditions that enable an efficient simulation of the decryption algorithm. We know that K, the plaintext extractor, is granted access to the list hH1 of oracle queries of the adversary B to H1 and their results. The idea is that, if K is able to select from hH1.dom the right value of t* that the decryption algorithm would compute, then (looking at the example above, where t* = m*||r*) the extraction of the plaintext is very likely to succeed (in the example, selecting the prefix suffices!). Such a selection can be done by testing candidates against the sanity check one by one. Since there is only a polynomial number of queries (B queried the oracle and is poly-time), this takes polynomial time. Therefore, showing the existence of K amounts to constructing an efficient tester. The first condition thus consists in assuming the existence of a poly-time algorithm, called the tester, that is able to discriminate valid candidates for the sanity check from unsatisfactory ones. Then, we impose that the extraction of the plaintext be easily achievable from a good candidate. Finally, to get rid of possible ambiguity, we add that to a value cd (for candidate) of t* corresponds at most one possible ciphertext, so that the extracted plaintext is indeed the one the decryption oracle outputs when verifying the sanity check on cd. Here is the formal statement of the semantic criterion:

Definition 5 (PA Semantic Criterion). We say that GE satisfies the PA semantic criterion if there exist efficient algorithms T and Ext that satisfy the following conditions:

1. The tester T takes as input (hH, C, y, cd, f) and returns a value in {0, 1}. We require that for any adversary B and any distribution X ∈ Dist(Γ, H, ·),

1 − Pr[(S, H, (f, f^{-1})) ← X; (hH, C, y, S') ← B^{E(·),H}(f); S'' ← [[D(y)]](S', H, (f, f^{-1})); cd ← hH1.dom; b ← T(hH, C, y, cd, f) : (b = 1 ⇒ (H1(cd) = H1(S''(t*)) ∧ V(S''(x), H1(cd)) = S''(v))) ∧ (b = 0 ⇒ V(S''(x), H1(cd)) ≠ S''(v))]

is negligible.

2. For Ext, we require that for any adversary B and any distribution X ∈ Dist(Γ, H, ·),
1 − Pr[(S, H, (f, f^{-1})) ← X; (hH, C, y, S') ← B^{E(·),H}(f); S'' ← [[D(y)]](S', H, (f, f^{-1})) : Ext(hH, C, y, S''(t*), f) = S''(out_d)]

is negligible.
3. Finally, we require that for any adversary B and any distribution X ∈ Dist(Γ, H, ·),

Pr[(S, H, (f, f^{-1})) ← X; (hH, C, y, y', S') ← B^{E(·),H}(f); S1 ← [[D(y)]](S', H, (f, f^{-1})); S2 ← [[D(y')]](S', H, (f, f^{-1})) : y ≠ y' ∧ S1(t*) = S2(t*) ∧ S1(out_d) ≠ "error" ∧ S2(out_d) ≠ "error"]

is negligible.

If a scheme satisfies Definition 5, then, given such a tester T and such an extraction algorithm Ext, the plaintext extractor can be constructed as follows:

K^{T,Ext}(hH, C, y, f): let L = (cd | cd ∈ dom(hH1) such that T(hH, C, y, cd, f) = 1); if L = ∅ then return "error" else cd ← L; return Ext(hH, C, y, cd, f).

We can then demonstrate that our semantic criterion indeed implies plaintext awareness.

Theorem 1. Let GE be a generic encryption scheme that satisfies the PA semantic criterion. Then GE is PA-secure.

Of course, there are generic encryption schemes for which the conditions above are satisfied under the assumption that T has access to an extra oracle, such as a plaintext checking oracle (PC) or a ciphertext validity-checking oracle (CV), which on input c answers whether c is a valid ciphertext. In this case, the semantic security of the scheme has to be established under the assumption that f is OW-PCA, respectively OW-CVA. Furthermore, our definition of the PA semantic criterion makes perfect sense for constructions that apply to IND-CPA schemes, such as Fujisaki and Okamoto's converter [14]. In this case, f has to be considered as the IND-CPA encryption oracle.
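For intuition, the construction of K^{T,Ext} can be phrased as a few lines of OCaml. The sketch below is a simplified reading of the definition above, not the paper's formalization: hash queries are an association list standing in for hH1, the trapdoor permutation argument f is omitted, the candidate is taken as the first accepted element of L rather than sampled at random, and "error" becomes None.

(* K^{T,Ext}: filter the domain of hH1 with the tester, then extract. *)
let plaintext_extractor tester ext hh1 cs y =
  (* L = { cd in dom(hH1) | the tester accepts cd for challenge y } *)
  let l = List.filter (fun (cd, _answer) -> tester hh1 cs y cd) hh1 in
  match l with
  | []           -> None                    (* return "error"           *)
  | (cd, _) :: _ -> Some (ext hh1 cs y cd)  (* extract from a candidate *)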
4.4 A Syntactic Criterion for Plaintext Awareness
An easy syntactic check that implies the PA-semantic criterion is as follows. Definition 6. A generic encryption scheme GE satisfies the PA-syntactic criterion, if the sanity check has the form V(t, h) = v, where D is such that h is assigned H1 (t), t is assigned ine ||r, ine is the plaintext and E(ine ; r) is the ciphertext (i.e., r is the random seed of E). It is not difficult to see that if GE satisfies the PA-syntactic criterion then it also satisfies the PA-semantic one with a tester T as follows (Ext is obvious): Look in hH1 for a bit-string s such that E(x∗ ; r∗ ) = y, where y is the challenge and x∗ ||r∗ = s. Here are some examples that satisfy the syntactic criterion (we use ·∗ to denote the values computed by the decryption oracle): Example 6. – Bellare and Rogaway [7]: E(ine ; r) = a||b||c = f (r)||ine ⊕ G(r)||H(ine ||r). The ”sanity check” of the decryption algorithm is H(m∗ ||r∗ ) = c∗ .
– OAEP+ [19]: E(ine ; r) = f (a||b||c), where a = ine ⊕ G(r), b = H (ine ||r), c = H(s) ⊕ r and s = ine ⊕ G(r)||H (ine ||r). The ”sanity check” of the decryption algorithm has the form H (m∗ ||r∗ ) = b∗ . – Fujisaki and Okamoto [14]: if (K , E , D ) is a public encryption scheme (that is CPA) then E(ine ; r) = E ((ine ||r); H(ine ||r)). The ”sanity check” of the decryption algorithm is: E (m∗ ||r∗ ; H(m∗ ||r∗ )) = ind . The PA-semantic criterion applies to the following constructions but not the syntactic one: Example 7. – Pointcheval [18]: E(ine ; r; s) = f (r||H(ine ||s))||((ine ||s) ⊕ G(r)), where f is a partially trapdoor one-way injective function. The ”sanity check” of the decryption oracle D(a||b) is f (r∗ ||H(m∗ ||s∗ )) = a∗ . The tester looks in hG and hH for r∗ and m∗ ||s∗ such that E(m∗ ; r∗ ; s∗ ) = y. – REACT [17]: This construction applies to any trapdoor one-way function (possibly probabilistic). It is quite similar to the construction in [7]: E(ine ; R; r) = a||b||c = f (R; r)||ine ⊕ G(r)||H(R||ine ||a||b), where a = f (R; r) and b = ine ⊕ G(R). The ”sanity check” of the decryption algorithm is H(R∗ ||m∗ ||a∗ ||b∗ ) = c. For this construction, one can provide a tester T that uses a PCA oracle to check whether a is the encryption of R by f . Hence, the PA security of the construction under the assumption of the OW-PCA security of f . The tester looks in hH for R∗ ||m∗ ||a∗ ||b∗ such that c∗ = H(R∗ ||m∗ ||a∗ ||b∗ ) and a∗ = f (R∗ ), which can be checked using the CPA-oracle. And now some examples of constructions that do not satisfy the PA-semantic criterion (and hence, not the syntactic one): Example 8. – Zheng-Seberry Scheme [23]: E(x; r) = a||b = f (r)||(G(r) ⊕ (x||H(x)). The third condition of the PAsemantic criterion is not satisfied by this construction. Actually, there is an attack [21] on the IND-CCA security of this scheme that exploits this fact. – OAEP [5]: E(ine ; r) = a = f (ine ||0k ⊕ G(r)||r ⊕ H(s)), where s = ine ||0k ⊕ G(r). Here the third condition is not satisfied.
5 Automation
We can now fully automate our verification procedure of IND-CCA for the encryption schemes we consider as follows:
1. Automatically establish invariants.
2. Check the syntactic criterion for PA.
Point 2 can be done by a simple syntactic analyzer taking as input the decryption program, but has not been implemented yet. Point 1 is more challenging. The idea is, for a given program, to compute invariants backwards, starting with the invariant Indis(νoute ; oute , ine ) at the end of the program. As several rules can lead to a same postcondition, we in fact compute a set of sufficient conditions at all points of the program: for each set {φ1 , . . . , φn } and each instruction c, we can compute a set of assertions {φ1 , . . . , φm } such that 1. for i = 1, . . . , m, there exists j such that {φi }c{φj } can be derived using the rules given section 3.2, 2. and for all j and all φ such that {φ }c{φj }, there exists i such that φ entails φi and that this entailment relation can be derived using lemma 9. Of course, this verification is potentially exponential in the number of instructions of the encryption program as each postcondition may potentially have several preconditions. However this is mitigated as – the considered encryption scheme are generally implemented in a few instructions (around 10) – we implement a simplification procedure on the computed set of invariants: if φi entails φj (for i = j), then we can safely delete φi from the set of assertions {φ1 , . . . , φn }. In other words, we keep only the minimal preconditions with respect to strength in our computed set of invariants (the usual Hoare logic corresponds to the degenerated case where this set has a minimum element, called the weakest precondition). In practice, checking Bellare & Rogaway generic construction is instantaneous. We implemented that procedure as an Objective Caml program, taking as input a representation of the encryption program. This program is only 230 lines long and is available on the web page of the authors.
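The backward computation described in point 1 can be pictured with a small OCaml functor. It is a schematic rendering of the description above and not the actual 230-line Objective Caml tool: assertions and commands are left abstract, pre c phi is assumed to return the preconditions derivable for command c and postcondition phi by the rules of Section 3.2, and entails is the entailment check of Lemma 9.

module Backward (A : sig
  type command
  type assertion
  val pre     : command -> assertion -> assertion list  (* candidate preconditions *)
  val entails : assertion -> assertion -> bool
end) = struct
  open A

  (* keep only the minimal preconditions with respect to strength: drop any
     assertion that strictly entails another assertion of the set *)
  let simplify phis =
    List.filter
      (fun phi ->
         not (List.exists
                (fun psi -> psi != phi && entails phi psi && not (entails psi phi))
                phis))
      phis

  (* one backward step: all preconditions of all current postconditions *)
  let step cmd posts = simplify (List.concat_map (pre cmd) posts)

  (* propagate the final invariant backwards through the whole program *)
  let sufficient_conditions program post = List.fold_right step program [ post ]
end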
6 Conclusion
In this paper we proposed an automatic method to prove IND-CCA security of generic encryption schemes in the random oracle model. IND-CPA is proved using a Hoare logic and plaintext awareness using a syntactic criterion. It does not seem difficult to adapt our Hoare logic to allow a security proof in the concrete framework of provable security. Another extension of our Hoare logic could concern OAEP. Here, we need to express that the value of a given variable is indistinguishable from a random value as long as a value r has not been submitted to a hash oracle G. This can be done by extending the predicate Indis(νx; V1 ; V2 ). The details are future work.
References

1. Barthe, G., Cederquist, J., Tarento, S.: A Machine-Checked Formalization of the Generic Model and the Random Oracle Model. In: Basin, D., Rusinowitch, M. (eds.) IJCAR 2004. LNCS (LNAI), vol. 3097, pp. 385–399. Springer, Heidelberg (2004)
2. Barthe, G., Gr´egoire, B., Janvier, R., Zanella B´eguelin, S.: A framework for language-based cryptographic proofs. In: ACM SIGPLAN Workshop on Mechanizing Metatheory (2007) 3. Barthe, G., Tarento, S.: A machine-checked formalization of the random oracle model. In: Filliˆ atre, J.-C., Paulin-Mohring, C., Werner, B. (eds.) TYPES 2004. LNCS, vol. 3839, pp. 33–49. Springer, Heidelberg (2006) 4. Bellare, M., Desai, A., Pointcheval, D., Rogaway, P.: Relations among notions of security for public-key encryption schemes. In: Krawczyk, H. (ed.) CRYPTO 1998. LNCS, vol. 1462, pp. 26–45. Springer, Heidelberg (1998) 5. Bellare, M., Rogaway, P.: Optimal asymmetric encryption. In: De Santis, A. (ed.) EUROCRYPT 1994. LNCS, vol. 950, pp. 92–111. Springer, Heidelberg (1995) 6. Bellare, M., Rogaway, P.: Code-based game-playing proofs and the security of triple encryption. Cryptology ePrint Archive, Report 2004/331 (2004) 7. Bellare, M., Rogaway, P.: Random oracles are practical: a paradigm for designing efficient protocols. In: CCS 1993, pp. 62–73 (1993) 8. Blanchet, B.: A computationally sound mechanized prover for security protocols. In: S&P 2006, pp. 140–154 (2006) 9. Blanchet, B., Pointcheval, D.: Automated security proofs with sequences of games. In: Dwork, C. (ed.) CRYPTO 2006. LNCS, vol. 4117, pp. 537–554. Springer, Heidelberg (2006) 10. Corin, R., den Hartog, J.: A probabilistic hoare-style logic for game-based cryptographic proofs. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4052, pp. 252–263. Springer, Heidelberg (2006) 11. Damgard, I.: Towards practical public key systems secure against chosen ciphertext attacks. In: Feigenbaum, J. (ed.) CRYPTO 1991. LNCS, vol. 576, pp. 445–456. Springer, Heidelberg (1992) 12. Datta, A., Derek, A., Mitchell, J.C., Warinschi, B.: Computationally sound compositional logic for key exchange protocols. In: CSFW 2006, pp. 321–334 (2006) 13. Feige, U., Fiat, A., Shamir, A.: Zero-knowledge proofs of identity. J. Cryptol. 1(2), 77–94 (1988) 14. Fujisaki, E., Okamoto, T.: How to enhance the security of public-key encryption at minimum cost. In: Imai, H., Zheng, Y. (eds.) PKC 1999. LNCS, vol. 1560, pp. 53–68. Springer, Heidelberg (1999) 15. Halevi, S.: A plausible approach to computer-aided cryptographic proofs. ePrint archive report 2005 (2005) 16. Nowak, D.: A framework for game-based security proofs. In: Qing, S., Imai, H., Wang, G. (eds.) ICICS 2007. LNCS, vol. 4861, pp. 319–333. Springer, Heidelberg (2007) 17. Okamoto, T., Pointcheval, D.: REACT: Rapid enhanced-security asymmetric cryptosystem transform. In: Naccache, D. (ed.) CT-RSA 2001. LNCS, vol. 2020, pp. 159–175. Springer, Heidelberg (2001) 18. Pointcheval,D.:Chosen-ciphertextsecurityforanyone-waycryptosystem.In:Imai,H., Zheng, Y. (eds.) PKC 2000. LNCS, vol. 1751, pp. 129–146. Springer, Heidelberg (2000) 19. Shoup, V.: Oaep reconsidered. J. Cryptology 15(4), 223–249 (2002) 20. Shoup, V.: Sequences of games: a tool for taming complexity in security proofs (2004), http://eprint.iacr.org/2004/332 21. Soldera, D., Seberry, J., Qu, C.: The analysis of zheng-seberry scheme. In: Batten, L.M., Seberry, J. (eds.) ACISP 2002. LNCS, vol. 2384, pp. 159–168. Springer, Heidelberg (2002) 22. Tarento, S.: Machine-checked security proofs of cryptographic signature schemes. In: di Vimercati, S.d.C., Syverson, P.F., Gollmann, D. (eds.) ESORICS 2005. LNCS, vol. 3679, pp. 140–158. Springer, Heidelberg (2005) 23. 
Zheng, Y., Seberry, J.: Immunizing public key cryptosystems against chosen ciphertext attacks. J. on Selected Areas in Communications 11(5), 715–724 (1993)
Counterexample Guided Path Reduction for Static Program Analysis

Ansgar Fehnker, Ralf Huuck, and Sean Seefried

National ICT Australia Ltd. (NICTA), Locked Bag 6016, University of New South Wales, Sydney NSW 1466, Australia
Abstract. In this work we introduce counterexample guided path reduction based on interval constraint solving for static program analysis. The aim of this technique is to reduce the number of false positives by reducing the number of feasible paths in the abstraction iteratively. Given a counterexample, a set of observers is computed which exclude infeasible paths in the next iteration. This approach combines ideas from counterexample guided abstraction refinement for software verification with static analysis techniques that employ interval constraint solving. The advantage is that the analysis becomes less conservative than static analysis, while it benefits from the fact that interval constraint solving deals naturally with loops. We demonstrate that the proposed approach is effective in reducing the number of false positives, and compare it to other static checkers for C/C++ program analysis.
1 Introduction
Static program analysis and software model checking are two automatic analysis techniques to ensure (limited) correctness of software or at least to find as many bugs in the software as possible. In contrast to software model checking, static program analysis typically works on a more abstract level, such as the control flow graph (CFG) without any data abstraction. As such a syntactic model of a program is a very coarse abstraction, and reported error traces can be spurious, i.e. they may correspond to no actual run in the concrete program. The result of such infeasible paths are false positives in the program analysis. This work presents counterexample guided path reduction to remove infeasible paths. To do so semantic information in the form of interval equations is added to a previously purely syntactic model. This is similar to abstract interpretation based interval analysis, e.g., used for buffer overflow detection. That approach transforms the entire programm into a set of interval equations, and characterizes its behavior by the precise least solution [1].
National ICT Australia is funded by the Australian Government’s Department of Communications, Information Technology and the Arts and the Australian Research Council through Backing Australia’s Ability and the ICT Research Centre of Excellence programs.
Our approach, in contrast, constructs this kind of equation system only for a given path through the program. This is not only computationally cheaper, but it has the advantage that the analysis becomes less conservative. The main reason for this improvement is that a path has fewer transitions and fewer joinoperators, the main source for over-approximation errors, than the entire program. The paths themselves are determined by potential counterexamples. A counterexample path is spurious if the least solution of a corresponding equation system is empty. The subset of equations responsible for an empty solution is called a conflict. In a second step we refine our syntactic model by a finite observer which excludes all paths that generate the same conflict. These steps are applied iteratively, until either no new counterexample can be found, or until a counterexample is found that cannot be proven to be spurious. This approach is obviously inspired by counterexample guided abstraction refinement (CEGAR), as used in [2,3]. A main difference is the use of the precise least solution for interval equations [1] in the place of SAT-solving. This technique deals directly with loops, without the need to discover additional loop-predicates [3,4], or successive unrolling of the transition relation [5]. An alternative to SAT-based CEGAR in the context of static analysis, using polyhedral approximations, was proposed in [6,7]. Pre-image computations along the counterexamples are used to improve the accuracy of the polyhedral approximation. Our approach in contrast uses an precise least solution of an interval equation system, which is computationally faster, at the expense of precision. Our proposed approach combines nicely with the static analysis approach put forward in [8] and implemented in the tool Goanna. It defines, in contrast to semantic software model checking, syntactic properties on a syntactic abstraction of the program. The experimental results confirm that the proposed path reduction technique situates our tool in-between software model checking and static analysis tools. False positives are effectively reduced while interval solving converges quickly, even in the presence of loops. The next section introduces preliminaries of labeled transition systems, interval solving and model checking for static properties. Section 3 introduces Interval Automata, the model we use to capture the program semantics. Section 4 presents the details of our approach. Implementation details and a comparison with other tools are given in Section 5.
2 Preliminaries

2.1 Labeled Transition Systems
In this work we use labeled transition systems (LTS) to describe the semantics of our abstract programs. An LTS is defined by (S, S0 , A, R, F ) where S is a set of states, S0 ⊆ S is a sub-set of initial states, A is a set of actions and R ⊆ S × A × S is a transition relation where each transition is labeled with an action a ∈ A, and F ⊆ S is a set of final states. We call an LTS deterministic if
for every state s ∈ S and action a ∈ A there is at most one successor state such that (s, a, s ) ∈ R. The finite sequence ρ = s0 a0 s1 a1 . . . an−1 sn is an execution of an LTS P = (S, S0 , A, R, F ), if s0 ∈ So and (si , ai , si+1 ) ∈ R for all i ≥ 0. An execution is accepting if sn ∈ F . We say w = a0 . . . an−1 ∈ A∗ is a word in P , if there exist si , i ≥ 0, such that s0 a0 s1 a1 . . . an−1 sn form an execution in P . The language of P is defined by the set of all words for which there exists an accepting execution. We denote this language as LP . The product of two labeled transition systems P1 = (S1 , S10 , A, R1 , F1 ) and P2 = (S2 , S20 , A, R2 , F2 ), denoted as P× = P1 × P2 , is defined as P× = (S1 × S2 , S10 × S10 , A, R× , F1 × F2 ) where ((s1 , s2 ), a, (s1 , s2 )) ∈ R× if and only if (s1 , a, s1 ) ∈ R1 and (s2 , a, s2 ) ∈ R2 . The language of P× is the intersection of the language defined by P1 and P2 . 2.2
Interval Equation Systems
We define an interval lattice I = (I, ⊆) by the set I = {∅} ∪ {[z1 , z2 ]|z1 ∈ Z ∪ {−∞}, z2 ∈ Z ∪ {∞}, z1 ≤ z2 } with the partial order implied by the contained in relation “⊆”, where a non-empty interval [a, b] is contained in [c, d], if a ≥ c and b ≤ d. The empty element is the bottom element of this lattice, and [−∞, +∞] the top element. For dealing with interval boundaries, we assume that ≤, ≥, +, ∗ as well as min and max are extended in the usual way to the infinite range. Moreover, we consider the following operators on intervals: intersection , union , addition (+) and multiplication (·) with the usual semantics [[.]]for intersection and [[[l1 , u1 ] [l2 , u2 ]]] = [min{l1 , l2 }, max{u1 , u2 }] [[[l1 , u1 ] + [l2 , u2 ]]] = [l1 + l2 , u1 + u2 ] [[[l1 , u1 ] · [l2 , u2 ]]] = [min(product), max(product)] where product = {l1 ∗ l2 , l1 ∗ u2 , l2 ∗ u1 , u1 ∗ u2 }. For a given finite set of variables X = {x0 , . . . , xn } over I we define an interval expression φ as follows: . φ =a | x| φφ| φ φ | φ+φ| φ·φ where x ∈ X, and a ∈ I. The set of all expression over X is denoted as C(X). For all operation we have that [[φ ◦ ϕ]] is [[φ]] ◦ [[ϕ]], where ◦ can be any of , , +, ·. A valuation is a mapping v : X → I from an interval variable to an interval. Given an interval expression φ ∈ C(X), and a valuation v, the [[φ]]v denoted the expression φ evaluated in v, i.e. it is defined to be the interval [[φ[v(x0 )/x0 , . . . , v(xn )/xn ]]], which is obtained by substituting each variable xi with the corresponding interval v(xi ).
An interval equation system is a mapping IE : X → C(X) from interval variables to interval expressions. We also denote this by xi = φi, where i ∈ 1, . . . , n. A solution of such an interval equation system is a valuation v satisfying all equations, i.e., [[xi]]v = [[φi]]v for all i ∈ 1, . . . , n. As shown in [1], there always is a precise least solution, which can be computed efficiently. By precise we mean precise with respect to the semantics of the interval operators and without the use of additional widening techniques. Of course, from a program analysis point of view over-approximations are introduced, e.g., joining the two intervals [1, 2] and [4, 5] results in [1, 5]. This, however, is due to the domain we have chosen.
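As a concrete illustration of the operators and of the least solution, the following OCaml sketch implements the interval domain over floats (so that ±∞ are available) and computes a least fixpoint by plain Kleene iteration from the empty interval. This is our own toy code: the paper relies on the dedicated least-solution algorithm of [1], and naive iteration is neither that algorithm nor guaranteed to terminate in general; it merely shows what the least solution of a small system looks like.

type itv = Empty | I of float * float                  (* [l, u] with l <= u *)

let inter a b = match a, b with
  | Empty, _ | _, Empty -> Empty
  | I (l1, u1), I (l2, u2) ->
      let l = max l1 l2 and u = min u1 u2 in
      if l <= u then I (l, u) else Empty

let union a b = match a, b with
  | Empty, x | x, Empty -> x
  | I (l1, u1), I (l2, u2) -> I (min l1 l2, max u1 u2)

let add a b = match a, b with
  | Empty, _ | _, Empty -> Empty
  | I (l1, u1), I (l2, u2) -> I (l1 +. l2, u1 +. u2)

(* least solution of a system v = eqs v, by iteration from the bottom element *)
let lfp (eqs : itv array -> itv array) (n : int) : itv array =
  let rec go v = let v' = eqs v in if v' = v then v else go v' in
  go (Array.make n Empty)

let () =
  (* x0 = [10,10] \/ (x2 + [-1,-1]);  x1 = x0 /\ [1,+oo];  x2 = x1 /\ [1,1] *)
  let eqs v =
    [| union (I (10., 10.)) (add v.(2) (I (-1., -1.)));
       inter v.(0) (I (1., infinity));
       inter v.(1) (I (1., 1.)) |] in
  Array.iteri
    (fun i x -> match x with
       | Empty -> Printf.printf "x%d = empty\n" i
       | I (l, u) -> Printf.printf "x%d = [%g, %g]\n" i l u)
    (lfp eqs 3)

The example system converges to x0 = [10, 10], x1 = [10, 10] and x2 = empty, illustrating both the join-induced over-approximation and how an empty component can arise.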
2.3 Static Analysis by Model Checking
This work is based on an automata based static analysis framework as described in [8], which is related to [9,10,11]. The basic idea of this approach is to map a C/C++ program to its CFG, and to label this CFG with occurrences of syntactic constructs of interest. The CFG together with the labels can easily be mapped to the input language of a model checker, in our case NuSMV, or directly translated into a Kripke structure for model checking. A simple example of this approach is shown in Fig. 1. Consider the contrived program foo which is allocating some memory, copying it a number of times to a, and freeing the memory in the last loop iteration. One example of a property to check is whether after freeing some resource, it still might be used. In our automata based approach we syntactically identify program locations that allocate, use, and free resource p. We automatically label the program’s CFG with this information as shown on the right hand side of Fig. 1. This property can then be checked by the CTL property AG (mallocp ⇒ AG (f reep ⇒ ¬EF usedp )), which means that whenever there is free after malloc for a resource p, there is no path such that p is used later on. Obviously, neglecting any further semantic information will lead to a false alarm in this example.
3 Interval Automata
This section introduces interval automata (IA) which abstract programs and capture their operational semantics on the domain of intervals. We define an IA as an extended state machine where the control structure is a finite state machine, extended by a mapping from interval variables to interval expressions. We will show later how to translate a C/C++ program to an IA. Definition 1 (Syntax). An interval automaton is a tuple (L, l0 , X, E, update), with – a finite set of locations L, – an initial location l0 ,
1  void foo() {
2    int x, *a;
3    int *p = malloc(sizeof(int));
4    for (x = 10; x > 0; x--) {
5      a = p;
6      if (x == 1)
7        free(p);
8    }
9  }

Fig. 1. Example program and labeled CFG for use-after-free check. The CFG has locations l0, . . . , l7 and is labeled mallocp, usedp and freep at the program points that allocate, use and free p.
– a set of interval variables X, – a finite set of edges E ⊆ L × L, and – an effect function update : E → (X × C(X)). The effect update assigns to each edge a pair of an interval variable and an interval expression. We will refer to the (left-hand side) variable part update|X as lhs, and to the (right-hand side) expression part update|C(X) as rhsexpr. The set of all variables that appear in rhsexpr will be denoted by rhsvars. Note, that only one variable is updated on each edge. This restriction is made for the sake of simplicity, but does does not restrict the expressivity of an IA. Fig. 2 shows an example of an IA. Definition 2 (Semantics). The semantics of an IA P = (L, l0 , X, E, update) is defined by a labeled transition system LT S(P ) = (S, S0 , A, R, F ) where S is the set of states (l, v) with location l and an interval valuation v. S0 is the set of initial states s0 = (l0 , v0 ), with v0 ≡ [−∞, ∞]. A is the alphabet consisting of the set of edges E. R ⊆ S × A × S is the transition relation of triples ((l, v), (l, l ), (l , v )), i.e, transitions from state (l, v) to (l , v ) labeled by (l, l ), if there exists a (l, l ) in E, such that v = v[lhs(e) ← [[rhsexpr(e)]]v ] and [[rhsexpr(e)]]v = ∅. – F = S is the set of final states, i.e., all states are final states.
It might seem a bit awkward that the transitions in the LTS are labeled with the edges of the IA, but this will be used later to define the synchronous composition with an observer. Since each transition is labeled with its corresponding edge we obtain a deterministic system, i.e., for a given word there exists only one possible run. We identify a word ((l0 , l1 ), (l1 , l2 ), . . . , (lm−1 , lm )) in the remainder by the sequence of locations (l0 , . . . , lm ).
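The two definitions can be transcribed almost literally into OCaml types, which may help to keep the components apart. The fragment below is such a transcription under our own naming, with interval expressions, their evaluation and the emptiness test left abstract (they could be filled in, for instance, with the toy interval code shown at the end of Section 2.2).

(* Definition 1: an interval automaton over an abstract expression type *)
type ('var, 'expr) ia = {
  locations : int list;
  initial   : int;
  variables : 'var list;
  edges     : (int * int) list;
  update    : int * int -> 'var * 'expr;      (* lhs(e) and rhsexpr(e) *)
}

(* Definition 2: a state of LTS(P) is a location plus an interval valuation *)
type ('var, 'itv) state = { at : int; valn : 'var -> 'itv }

(* One transition, labelled by the edge e = (l, l'): it is enabled only if the
   right-hand side evaluates to a non-empty interval in the current valuation. *)
let step ~eval ~is_empty p s ((l, l') as e) =
  if s.at <> l || not (List.mem e p.edges) then None
  else
    let x, rhs = p.update e in
    let v = eval s.valn rhs in
    if is_empty v then None
    else Some { at = l'; valn = (fun y -> if y = x then v else s.valn y) }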
Given an IA P, its language LP contains all sequences (l0, . . . , ln) which satisfy the following:

(1) l0 is the initial location of P;
(2) (li, li+1) ∈ E for all i = 0, . . . , n − 1;
(3) for v0 ≡ [−∞, +∞] there exist valuations v1, . . . , vn such that, for all i = 0, . . . , n − 1, [[rhsexpr(li, li+1)]]_{vi} ≠ ∅ and vi+1 = vi[lhs(li, li+1) ← [[rhsexpr(li, li+1)]]_{vi}].

This means that a word (1) starts in the initial location, (2) respects the edge relation E, and (3) admits a sequence of non-empty valuations that satisfies the updates associated with each edge. We use this characterization of words as a satisfiability problem to generate systems of interval equations that have a non-empty solution whenever the sequence (l0, . . . , ln) is a word. We will define, for a given IA P and a sequence w, a conflict as an interval equation system with an empty least solution, which proves that w cannot be a word of the IA P.
4 Path Reduction
The labeled CFG as defined in Section 2.3 is a coarse abstraction of the actual program, and like most static analysis techniques this approach suffers from false positives. In the context of this paper we define a property as a regular language, and satisfaction of a property as language inclusion. The program itself is given by an interval automaton P, and its behavior is defined by the language of the corresponding LTS(P). Since interval automata are infinite-state systems, we do not check the IA itself but an abstraction P̂. This abstraction is initially an annotated CFG as depicted in Fig. 1. A positive is a word in the abstraction P̂ that does not satisfy the property. A false positive is a positive that is not in the actual behavior of the program, i.e. it is not in the language of LTS(P). Path reduction is then defined as the iterative process that restricts the language of the abstraction, until either a true positive has been found or the reduced language satisfies the property.
4.1 Path Reduction Loop
Given an IA P = (L, l0, E, X, update) we define its finite abstraction P̂ as follows: P̂ = (L, l0, E, E', L) is a labeled transition system with states L, initial state l0, alphabet E, transition relation E' = {(l, (l, l'), l') | (l, l') ∈ E}, and the entire set L as final states. The LTS P̂ is an abstraction of LTS(P), and it represents the finite control structure of P. The language of P̂ will be denoted by L_P̂. Each word of LTS(P) is by construction also a word of P̂. Let Lφ be the language defined by the specification. We assume we have a procedure that checks if the language L_P̂ is a subset of Lφ, and produces a counterexample if this is not the case (cf. Section 4.5). If this procedure finds a word in L_P̂ that is not in Lφ, we have to check whether this word is in LP, i.e. we have to check whether it satisfies equations
(1) to (3). Every word w = (l0, . . . , lm) in L_P̂ satisfies (1) and (2) by construction. A word w = (l0, . . . , lm) for which there exists no solution of (3) cannot be a word of LP; in this case we call the word spurious. In Section 4.2 we introduce a procedure to check whether a word is spurious. We will use it in an iterative loop to check if the infinite LTS of the IA P satisfies the property, by checking a finite product of the abstraction P̂ with the observers instead. This loop is structured as follows:

1. Let P̂0 := P̂, and i = 0.
2. Check if some w ∈ L_P̂i \ Lφ exists. If such a w exists go to step 3, otherwise exit with "property satisfied".
3. Check if w ∈ LP. If w ∉ LP build an observer Obs^w, otherwise exit with "property not satisfied". The observer satisfies the following: (a) it accepts w, and (b) every word w' it accepts satisfies w' ∉ LP.
4. Let P̂i+1 := P̂i × (Obs^w)^C, where P̂i × (Obs^w)^C is the synchronous composition of P̂i and the complement of Obs^w. Increment i and go to step 2.

The role of the observers is to rule out spurious counterexamples from the accepted language. They serve a similar purpose as the predicates in counterexample guided abstraction refinement that abstract a C program as a Boolean program [3]. Since we abstract the program as an interval automaton, we use observer automata instead of additional predicates to achieve the refinement. This is necessary since there is no useful equivalent of Boolean predicates in the interval domain. The remainder of this section explains how to check if a word is in LP, how to build a suitable observer, and how to combine both in a framework that uses NuSMV to model check the finite abstraction P̂i.

Example. The initial coarse abstraction as a CFG, shown in Fig. 1, loses the information that p cannot be used after it was freed. The shortest counterexample based on the CFG is to initialize x to 10 in line 4, enter the for-loop, take the if-branch, free p in line 7, decrement x, return to the beginning of the for-loop, and then use p in line 5. This counterexample is obviously spurious. Firstly, because the if-branch at line 7, with condition x == 1, is not reachable while x = 10. Secondly, because if the program enters the if-branch, this implies x == 1, and it will then be impossible to reenter the loop, given the decrement x-- and the loop condition x > 0.
4.2 Checking for Spurious Words
Fig. 2. Abstraction of the program as an IA. The analysis of the syntactic properties of the annotated CFG in Fig. 1 is combined with an analysis of the IA. The IA has locations l0, . . . , l7 and carries the following edge updates: p = [1, ∞] on (l0, l1) for the malloc; x = [10, 10] on (l1, l2); x = x ⊓ [1, ∞] on (l2, l3) for entering the loop; x = x ⊓ [−∞, 0] on the edge leaving the loop towards l7; a = p on (l3, l4); x = x ⊓ [1, 1] on (l4, l5) for the if-branch; x = (x ⊓ [−∞, 0]) ⊔ (x ⊓ [2, ∞]) on the else-branch; p = [−∞, ∞] on (l5, l6) for the free; and x = x + [−1, −1] on (l6, l2) for the decrement.

Every word w = (l0, . . . , lm) in L_P̂ satisfies (1) and (2) by construction. It remains to be checked whether condition (3) can be satisfied. A straightforward approach is to execute the trace on LTS(P); however, this can only determine whether that particular word is spurious. Our proposed approach builds an equation system instead, which allows us to find a set of conflicting interval equations that can in turn be used to show that an entire class of words is spurious. Another
straightforward approach to build such an equation system is to introduce for each variable and edge in w an interval equation, and to use an interval solver to check if a solution to this system of interval equations exists. A drawback of this approach is that it introduces (m + 1) × n variables and m × n equations. In the following we present an approach to construct an equation system with at most one equation and one variable for each edge in w. Interval Equation System for a Sequence of Transitions. We describe how to obtain an equation system for a word w ∈ LPˆ , such that it has a nonempty least solution only if w ∈ LP . This system is generated in three steps: I. Tracking variables. For each variable X of the program P we will track its use. Let XL be a set of fresh variables xl , one for each variable and occurrence where it can be used. We add to XL a special element , which will be used as default. Given an IA P over variables X, its abstraction Pˆ , and a word w = (l0 , . . . , ln ) of Pˆ we denote the location of the last update of x before the i-th transition of word w as xw (i) . It is recursively defined as follows: xw (i+1) =
x_{l_{i+1}} if x = lhs(l_i, l_{i+1}), and x^w_{(i)} otherwise,
for i > 0, and with xw (0) = as base case. The function is parameterized in w, but the superscript will be omitted if it is clear from the context. II. Generating equations. For each edge in w we generate an interval expression over XL . We define exprw : {0, . . . , m} → C(XL ) as follows: exprw (i) → rhsexpr(li−1 , li )[x(i−1) /x]x∈rhsvars(li−1 ,li )
(4)
Fig. 3. Equations for counterexample w of the IA depicted in Fig. 2. For each edge of w, the variable x_l that is written and the generated expression expr_w(i) are:

(l0, l1): p_l1 = [1, ∞]
(l1, l2): x_l2 = [10, 10]
(l2, l3): x_l3 = x_l2 ⊓ [1, ∞]
(l3, l4): a_l4 = p_l1
(l4, l5): x_l5 = x_l3 ⊓ [1, 1]
(l5, l6): p_l6 = [−∞, ∞]
(l6, l2): x_l2 = x_l5 + [−1, −1]
(l2, l3): x_l3 = x_l2 ⊓ [1, ∞]
(l3, l4): a_l4 = p_l6

The resulting equation system IE_w is: p_l1 = [1, ∞]; x_l2 = [10, 10] ⊔ (x_l5 + [−1, −1]); x_l3 = x_l2 ⊓ [1, ∞]; a_l4 = p_l1 ⊔ p_l6; x_l5 = x_l3 ⊓ [1, 1]; p_l6 = [−∞, ∞].

Reduced conflicts:
conflict 1: x_l2 = [10, 10], x_l3 = x_l2 ⊓ [1, ∞], x_l5 = x_l3 ⊓ [1, 1]
conflict 2: x_l5 = [−∞, ∞] ⊓ [1, 1], x_l2 = x_l5 + [−1, −1], x_l3 = x_l2 ⊓ [1, ∞]
An expression expr_w(i) is the right-hand side expression of the update on (l_{i−1}, l_i), where all occurring variables are substituted by variables in X_L. We will use var_w(i) to denote the function from rhsvars(l_{i−1}, l_i) to X_L that assigns x → x_(i). One might think of this as a partial function over X, restricted to the variables that are actually used on the right-hand side. We use ∅ to denote the functions that have the empty set as domain.

III. Generating the equation system. Locations may occur more than once in a word, and variables may be updated by multiple edges. Let writes_w ⊆ X_L be the set {x_l | ∃i s.t. x = lhs(l_{i−1}, l_i)}, and let indices_w be a mapping that assigns to each x_l ∈ writes_w the set {i | x = lhs(l_{i−1}, l_i) ∧ l_i = l}. The system IE_w : X_L → C(X_L) is defined as follows:

x_l → ⊔_{i ∈ indices_w(x_l)} expr_w(i) if x_l ∈ writes_w, and x_l → [−∞, ∞] otherwise.   (5)

The system IE_w assigns each variable x_l ∈ writes_w to a union of expressions, one expression for each element in indices_w(x_l). The default element is mapped to [−∞, ∞], since by construction it is not in writes_w.

Example. Fig. 3 depicts for the word w = (l0, l1, l2, l3, l4, l5, l6, l2, l3, l4) how to generate IE_w. The first column gives the transitions in w. The second column gives the variable in writes_w. The variable p_l1, for example, refers to the update of p on the first transition (l0, l1) of the IA in Fig. 2. We have that x_(5) = x_l3, since the last update of x before (l4, l5) was on edge (l2, l3). The third column gives the equations expr_w(i). For example, the update on (l4, l5) is x = x ⊓ [1, 1]. Since x_(4) = x_l3, we get that expr_w(5) is x_l3 ⊓ [1, 1]. The fourth column shows the equation system IE_w derived from the equations. We have, for example, that x is updated on (l1, l2), the second edge in w, and on (l6, l2), the 8th edge. Hence, indices_w(x_l2) = {2, 8}. Equation IE_w(x_l2) is then defined as the union of expr_w(2), which is [10, 10], and expr_w(8), which is x_l5 + [−1, −1]. The least solution of the equation system IE_w is p_l1 = [1, ∞], x_l2 = [10, 10], x_l3 = [10, 10], a_l4 = [−∞, ∞], x_l5 = ∅, and p_l6 = [−∞, ∞]. Since x_l5 = ∅, there exists no non-empty solution, and w is spurious.
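The three steps can be turned into a short OCaml function. The sketch below is our own rendering under simplifying assumptions: expressions are a small polymorphic datatype, an indexed variable x_l is a pair of the variable name and a location (with None playing the role of the default element), and each element of the input word is given as the target location of the edge together with lhs and rhsexpr of its update.

type 'v expr =
  | Const of float * float
  | Var   of 'v
  | Meet  of 'v expr * 'v expr
  | Join  of 'v expr * 'v expr
  | Add   of 'v expr * 'v expr

(* substitute every variable by its last-write-indexed version *)
let rec rename f = function
  | Const (l, u) -> Const (l, u)
  | Var v        -> Var (f v)
  | Meet (a, b)  -> Meet (rename f a, rename f b)
  | Join (a, b)  -> Join (rename f a, rename f b)
  | Add (a, b)   -> Add (rename f a, rename f b)

type xl = string * int option        (* x_l; None is the default element *)

(* Build IE_w from the updates along a word: for every edge we are given its
   target location, the variable written, and the right-hand side expression. *)
let build_ie (word : (int * string * string expr) list) : (xl * xl expr) list =
  let last = Hashtbl.create 16 in                         (* step I: x_(i)   *)
  let lookup x = try Hashtbl.find last x with Not_found -> (x, None) in
  let eqns = Hashtbl.create 16 in
  List.iter
    (fun (target, x, rhs) ->
       let e = rename lookup rhs in                       (* step II         *)
       let key = (x, Some target) in
       (try Hashtbl.replace eqns key (Join (Hashtbl.find eqns key, e))
        with Not_found -> Hashtbl.replace eqns key e);    (* step III: joins *)
       Hashtbl.replace last x key)
    word;
  Hashtbl.fold (fun k v acc -> (k, v) :: acc) eqns []

Feeding it the nine updates of the example word essentially reproduces, up to the order of the joins, the system IE_w of Fig. 3.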
Lemma 1. Given a word w ∈ LPˆ . Then there exist a sequence of non-empty valuations v1 , . . . , vm such that (3) holds for w only if IE w has a non-empty least solution. Proof. Given a solution to (3) we can construct a non-empty solution of IE w , which must be included in the least solution of IE w . One advantage of the interval equations framework is that it reasons naturally over loops. A third or fourth repetition does not introduce new variable in writesw , and neither new expressions. This means that the equation system for a word that is a concatenation αβββγ has an empty solution, if the concatenation αββγ has. This is captured by the following lemma: Lemma 2. Given a word w ∈ LPˆ , with w being a concatenation αββγ of α β γ ), β = (l1β , . . . , lm ), and γ = (l1γ , . . . , lm ). Let sequences α = (l0α , . . . , lm α γ β w = αβββγ. Then w ∈ LPˆ , writesw = writesw and the solution of IE w is also the solution of IE w . Proof: (i) Given w ∈ LPˆ , w = (l0 , . . . , lmα +2 mβ +mγ we have that (li , li+1 ) ∈ E. This implies that also all edges of w are in E, and hence, w ∈ LPˆ . (ii) Observe, that the second and third repetition of β in w add no fresh variables xl to writesw , hence writesw = writesw . (iii) Let x ∈ rhsvars(li−1 , li ), with (li−1 , li ) in the second iteration of β, i.e., i ∈ mα + mβ + 1, . . . , mα + 2 mβ . Then x(i−1) refers either to xlj , with j ∈ 0, . . . , mα or j ∈ mα + 1, . . . , 2 mα . In either case we can show that x(i−1+mβ ) = x(i−1) . Henceforth, the third iteration of the loop will lead to the same expressions rhsexpr(li−1 , li )[xx(i−1) /x]. Taking the union of more of the same expressions will not change the solution. 4.3
Conflict Discovery
The previous subsection described how to check if a given word w ∈ LPˆ is spurious. Interval solving, however, leads to an over-approximation, mostly due to the -operation. This subsection describes how to reduce the numbers of (nontrivial) equations in a conflict and at the same time the over-approximation error, by restricting conflicts to fragments and the cone-of-influence. Conceptually, a conflict is an equation system IE w that has no non-empty solution. For matter of convenience we introduce an alternative representation of the equation system; each variable xl in Xl is mapped to a set of pairs. Each of these pairs consists of an edge (li−i , li ) and mapping var (i) from rhsvars(li−i , li ) to XL . Each pair represents an expression exprw (i) as defined in (4); it records the edge, and the relevant variable substitutions. The conflict for a word w = (l0 , . . . , lm ) is thus alternatively represented by confw (xl ) = {((li−1 , li ), var (i−1) ) | x = lhs(li−1 , li ) ∧ l = li }. We refer to this mapping as the representation confw of the conflict. For the equation system of the example we have confw (pl1 ) = {((l0 , l1 ), ∅)}, confw (xl2 ) = {((l1 , l2 ), ∅), {((l6 , l2 ), (x → xl5 )}, confw (xl3 ) = {((l2 , l3 ), (x →
xl2 ))}}, confw (al4 ) = {((l3 , l4 ), (a → pl1 )), ((l3 , l4 ), (a → pl6 ))}, confw (xl5 ) = {((l4 , l5 ), (x → xl3 )}, and confw (pl6 ) = {((l5 , l6 ), ∅)}. The empty set occurs in a pair when the right-hand side of the update has no variables. Fragments. For CEGAR approaches for infinite-state systems it has been observed that it is sufficient and often more efficient to find a spurious fragments of a counterexample, rather than a spurious counterexample [12,13]. The effect is similar to small or minimal predicates in SAT-based approaches. The difference between a word and a fragment in our context is that a fragment may start in any location. Given a word w = (l0 , . . . , lm ) a fragment w = (l0 , . . . , lm ) is a subsequence of w. A fragment of LT S(P ) is defined as a sequence of edges that satisfies (2) and (3). A fragment of Pˆ is a sequence of edges satisfying the edge relation E, i.e., satisfying (2). Given a fragment w we can construct a system of interval equations IE w as described for words earlier. For subsequence w of a word w we can show the analog of Lemma 1. If the solution of IE w is empty, i.e., if the fragment w is spurious, then the word w is spurious as well. If there exists a sequence of non-empty valuations v1 , . . . , vm for w, then they also define a non-empty subsequence of valuations for w . Rather than checking all m2 /2 fragments, the analysis focusses on promising candidates, based on the following two observation: (1) For an update in (3) to result in an empty solution, there must at least exist an element in I that can be mapped by update to the empty set. An example of such updates are intersections with constants such as x = x [1, ∞]. For any x = [a, b], with b < 1 the next state can only satisfy x = ∅. Updates that map only the empty set to the empty set can be omitted from the tail of a fragment. (2) Initially all variables are unconstrained, i.e. we start in valuation v ≡ [−∞, ∞]. Consequently updates that map valuation v ≡ [−∞, ∞] to [−∞, ∞] can be omitted from the beginning of a fragment. The full fragment can only have an empty solution if the reduced has, without updates at the end that map only empty sets to empty sets, and without updates at the beginning that map [−∞, ∞] to [−∞, ∞]. Cone-of-influence. Let IE w be a conflict for fragment w = (l0 , . . . , lm ). We further reduce the conflict by restricting it to the cone-of-influence of xlm . The cone-of-influence of xlm is defined as the least fixpoint μC.{yl |∃yl ∈ C. yl ∈ rhsvars(IE w (yl ))} ∪ {xlm }). We denote this set as writesw . We then define the reduced conflict IE w as xl →
⊔_{i ∈ indices_w(x_l)} expr_w(i) if x_l is in the cone-of-influence set defined above, and [−∞, ∞] otherwise.   (6)
This reduction ensures that IE w has an empty least solution if IE w has. In the remainder we will refer to IE w as the reduced conflict, which is uniquely determined by the fragment w. The cone-of-influence reduction starts with the last edge (lm−1 , lm ) and the variable x that is written to, and then backtracks to all variables that it depends
on. All other variables are ignored, i.e. assumed to be [−∞, +∞], and all edges that do not contain at least one variable in writesw are omitted. The correspondw ing reduced representation of the reduced conflict is conf confw |writesw . Example. There are two conflicts among the candidate fragments in Fig. 3. Conflict 1, for fragment (l1 , l2 , l3 , l4 , l5 ), has as least solution xl2 = [10, 10], xl3 = [10, 10], xl5 = ∅. Conflict 2, for fragment (l4 , l5 , l6 , l2 , l3 ), has as least solution xl5 = [1, 1], xl2 = [0, 0], xl3 = ∅. The equation was pl6 was not included in the second conflict, as it is not in the cone-of-influence of xl3 . Table 3 shows the conflicts as equation system. The alternative representation w of the reduced conflict for fragment w = (l4 , l5 , l6 , l2 , l3 ) is conf (xl5 ) = {((l4 , l5 ), w w (x → ))}, conf (xl2 ) = {((l6 , l2 ), (x → xl5 ))}, and finally conf (xl5 ) = {((l2 , l3 ), (x → xl2 ))}. Note, that (x → ) is in the first set, since initially xi = for all x ∈ X. When the corresponding equation was generated, was replaced by [−∞, ∞] in IE w (xl5 ). 4.4
Conflict Observer
Given a reduced conflict IE w for a fragment w = (l0 , . . . , lm ), we construct an observer such that if a word w ∈ LPˆ is accepted, then w ∈ / LP . The observer is an LTS over the same alphabet E as LT S(P ) and Pˆ . Definition 3. Given an IA P = (L, l0 , E, X, update) and reduced conflict IE w , w with representation conf , for a fragment w = (l0 , . . . , lm ), define X w as the set of all variables x ∈ X such that xli ∈ writesw , for some edge li in w. The observer Obsw is a LTS with the – set SObs of states (current, eqn, conflict) with valuation current : X w → (writesw ∪), valuation eqn : writesw → {unsat, sat} , and location conflict ∈ {all, some, none}, – initial state (current0 , eqn0 , conflict0 ) with current0 ≡ , eqn0 ≡ unsat, and conflict0 = none, – alphabet E, – transition relation T ⊆ SObs × E × SObs (see Def. 4 below), and – a set final states F . A state is final if conflict = all. Before we define the transition relation formally, we give a brief overview of the role the different variables have. – Variable current is used to records for each variable the location of the last update. It mimics x(i) in the previous section. w – Variable eqn represents IE w (xl ), or alternatively conf (xl ), for xl ∈ writesw . This variable records if IE w (xl ) is satisfied. – Variable conflict has value all, some, none, if eqn (xl ) = sat for all, some or no xl ∈ writesw , respectively. It records if all, some or none of IE w (xl ) is currently satisfied.
The transitions can be informally characterized as follows: – To update current, the observer needs to check if the observed edge (λ, λ ) has an update that modifies any variable in x ∈ X w . In this case current takes the value xλ . – To update eqn for xl , the observer needs to check if the update on the observed edge (λ, λ ) creates an expression that appears in IE w (xl ), i.e. it needs to check if the transition label and the state of current matches a pair w in conf (xl ). If it does, then eqn(xl ) becomes sat. – To update conflict, we check if eqn is sat in the next state for all xl ∈ writesw . For each of these three variables there are a few exceptions: – The next state of current will be for all variables, if none of eqn is sat, i.e. if conflict = none. – Variables eqn(xl ) will be reset to their initial state , if the edge (λ, λ ) writes to a variable in xl ∈ X w , while neither (λ, λ ) nor the current match w any pair in conf (xl ). In this case eqn(xl ) will be set to its initial state unsat. – Once conflict is in all, it remains there forever. All the different variables depend on each other, but there is no circular dependency. The next state of current depends on the next state of conflict. The next state of conflict the depends on the next state of eqn. But the next state of eqn depends on the current state of current. Before we define the transition relation, we give two Boolean predicates. Given an edge (λ, λ ) and a variable xλ ∈ writesw predicate match(λ, λ , xλ ) is true if the update on (λ, λ ) in the current state matches some expression in IE w (xλ ). Recall that the observer is defined with respect to some fragment w = (l0 , . . . , lm ). w
match(λ, λ , xλ ) ∃((li−1 , li ), var (i) ) ∈ conf (xλ ) s.t. (li−1 , li ) = (λ, λ ) and ∀y ∈ rhsvars(λ, λ ). var (i) (y) = ∨ var (i) (y) = current(y) Related to the match is the following predicate reset(λ, λ , xλ ), which is true when there is no suitable match. w
reset(λ, λ , xλ ) ∀((li−1 , li ), var (i) ) ∈ conf (xλ ) s.t. (li−1 , li ) = (λ, λ ) and ∃y ∈ rhsvars(λ, λ ). var (i) (y) = ∧ var (i) (y) = current(y) The state of the observer will also be reset when (λ, λ ) is not equal to any edge w appearing in conf , while xλ ∈ writesw and x = lhs(λ, λ ). The update (λ, λ ) writes to x and we conservatively assume that it matches none of the expressions in IE w (xλ ). The transition relation for the observer is then defined as follows: Definition 4 (Transition relation). Transitions from (current, eqn, conflict) to (current , eqn , conflict ) labeled (λ, λ ) for the observer Obsw are defined as follows:
conf lict if conf lict = all conf lict = all else if ∀xl ∈ writesw . eqn (xl ) = sat conf lict = all else if ∃xl ∈ writesw . eqn (xl ) = sat conf lict = some otherwise conf lict = none eqn(xλ ) w if x = lhs(λ, λ ) ∧ ∀((li−1 , li ), var (i) ) ∈ conf (xλ ). (li−1 = λ ∧ λ = l)) eqn (xλ ) = unsat else if x = lhs(λ, λ ) ∧ ∃yl ∈ writesw . reset(λ, λ , yl ) eqn (xλ ) = unsat else if x = lhs(λ, λ ) ∧ match(λ, λ , xl ) eqn (xλ ) = sat otherwise eqn (xλ ) = eqn(xλ ) current(x) if conflict’= none write (x) = else if x = lhs(λ, λ ) write (x) = λ otherwise write (x) = write(x) The interaction between current, eqn, and conflict is somewhat subtle. The idea is that the observer is initially in conflict = none. If an edge is observed, which generates an expression expr(i) that appears in IE w (xl ) (see Eq. (5)), then conflict = some, and the corresponding eqn(xl ) = sat. It can be deduced IE w (xl ) is satisfied, unless another expression is encountered that might enlarge the fixed point. This is the case when an expression for xl will generated, that does not appear in IE w (xl ). It is conservatively assumed that this expression increases the fixed point solution. If conflict = all it can be deduced that the observed edges produce an equation system IE w that has a non-empty solution only if IE w has a non-empty solution. And from the assumption we know that IE w has an empty-solution, and thus also IE w . Which implies that the currently observed run is infeasible and cannot satisfy Eq. 3. Example. The observer for the first conflict in Fig. 3 accepts a word if a fragment generates a conflict xl2 → [10, 10], xl3 → xl2 [1, ∞], xl5 → xl3 [1, 1]. This is the case if it observes edge (l1 , l2 ), edge (l2 , l3 ) with a last write to x at l2 , and edge (l4 , l5 ) with a last write to x at l3 . All other edges are irrelevant, as long as they do not write to x2 , x3 or x5 , and change the solution. For example, this
would be the case for (l2 , l3 ) if current(x) = l2 . It creates an expression different from xl2 [1, ∞], and thus potentially enlarges the solution set. The NuSMV model of this observer can be found in the Appendix A. The observer for the other conflicts is constructed similarly. The complement of these observers are obtained by labeling all states in S \ F as final. The product of the complemented observers together with the annotated CFG in Fig.1 removes all potential counterexamples. The observer for the first conflict prunes all runs that enter the for-loop once, and then immediately enter the if-branch. The observer for the second conflict prunes all words that enter the if-branch and return into the loop. w
Lemma 3. Given a reduced conflict IE^w and its representation conf^w for a fragment w = (l_0, ..., l_m), the observer Obs^w satisfies the following:

– If a word ŵ ∈ L_P̂ contains a fragment w' such that IE^{w'} has no non-empty solution, and such that conf^{w'} = conf^w, then ŵ is accepted by Obs^w.
– If a word ŵ ∈ L_P̂ is accepted by Obs^w, then ŵ ∉ L_P.

Proof. (i) Let w' be the first occurrence of a fragment such that IE^{w'} has no non-empty solution, and such that conf^{w'} = conf^w. If conflict is in all at the beginning of the fragment, the word is trivially accepted. Assume that conflict is in some or none at the beginning of the fragment. By assumption, conf^{w'} = conf^w, so none of the edges in w' will satisfy reset. It is also guaranteed that if an edge writes to a variable x in location l, it satisfies match. By assumption we also know that at the end of the fragment conflict will be in all.
(ii) If a word is accepted, there exists a fragment w' = (l_0, ..., l_m) such that conflict is in state none at l_0, in state some at l_1, ..., l_{m−1}, and in state all at l_m. This fragment generates for each variable x_l ∈ writes_w an equation IE^{w'}(x_l) that is at least as restrictive as IE^w(x_l). Consequently, if IE^w(x_l) has no non-empty solution, then neither has IE^{w'}(x_l). Hence ŵ, which contains w' as a fragment, cannot be in L_P.

The first property ensures that each observer is constructed only once, which is needed to guarantee termination: each observer is uniquely determined by the finite set of expressions that appear in it, and since X_L and E are finite, there exists only a finite set of possible expressions that may appear in conf. Consequently, there can only exist a finite set of conflicts. The second property states that the language of P̂' = P̂ × Obs^w_C contains L_P.

4.5 Path Reduction with NuSMV
The previous subsections assumed a checker for language inclusion for LTSs. In practice, however, we use the CTL model checker NuSMV. The product of P̂ with the complement of the observers is by construction a proper abstraction of LTS(P). The results for language inclusion therefore extend to invariant checking, path inclusion, and LTL and ACTL model checking. For pragmatic reasons we
do not restrict ourselves to any of these, but use full CTL and LTL (NuSMV 2.x supports LTL as well). Whenever NuSMV produces a counterexample path, we use interval solving as described before to determine whether this path is spurious. Note that path reduction can also be used to check witnesses, for example for reachability properties. In this case path reduction checks whether a property that is true on the level of abstraction is indeed satisfied by the program. The abstraction P̂ and the observers are composed synchronously in NuSMV. The observer synchronizes on the current and the next location of P̂. The property is defined as a CTL property of P̂. The acceptance condition of the complements of the observers is modeled as the LTL fairness condition G ¬(conflict = all). The NuSMV code can be found in Appendix A.
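To illustrate the spuriousness check on a concrete path, the following C sketch replays the intervals of the first conflict along a candidate counterexample. It is a minimal, hand-written illustration: the interval type, the meet operation, and the concrete bounds are assumptions taken from the example above, and it does not reflect Goanna's actual solver, which computes least solutions of the full interval equation system [1].

#include <stdbool.h>
#include <stdio.h>
#include <limits.h>

/* Closed interval [lo, hi]; LONG_MIN/LONG_MAX stand in for -oo/+oo. */
typedef struct { long lo, hi; } Interval;

static bool is_empty(Interval a) { return a.lo > a.hi; }

/* Interval intersection: the meet used for conditions. */
static Interval meet(Interval a, Interval b) {
    Interval r = { a.lo > b.lo ? a.lo : b.lo,
                   a.hi < b.hi ? a.hi : b.hi };
    return r;
}

/* Replay the intervals of the first conflict along the candidate path:
 * x_l2 -> [10,10], x_l3 -> x_l2 meet [1,+oo], x_l5 -> x_l3 meet [1,1]. */
static bool path_is_feasible(void) {
    Interval x_l2 = { 10, 10 };
    Interval x_l3 = meet(x_l2, (Interval){ 1, LONG_MAX });
    Interval x_l5 = meet(x_l3, (Interval){ 1, 1 });   /* empty interval */
    return !is_empty(x_l5);
}

int main(void) {
    printf("path feasible: %s\n",
           path_is_feasible() ? "yes" : "no (spurious)");
    return 0;
}

The replay ends with an empty interval, so this particular path is reported as spurious, which is exactly the situation in which an observer for the conflict is generated.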
5 Implementation and Experiments
5.1 C to Interval Equations
This section describes how to abstract a C/C++ program into a set of interval equations, covering briefly expression statements, condition statements, and the control structures. Expression statements involving simple operations such as addition and multiplication are directly mapped to interval equations. E.g., an expression statement x=(x+y)*5 is represented as x_{i+1} = (x_i + y_i) ∗ [5, 5]. A subtraction such as x = x − y can easily be expressed as x_{i+1} = x_i + ([−1, −1] ∗ y_i). Condition statements occur in constructs such as if-then-else, for-loops, while-loops, etc. For every condition such as x<5 we introduce two equations, one for the true case and one for the false case. Condition x<5 has the two possible outcomes x_tt = x ⊓ [−∞, 4] and x_ff = x ⊓ [5, ∞]. More complex conditions involving more than one variable can also be approximated, and we refer the interested reader to [14]. Joins are introduced where control from two different branches can merge. For instance, let x_i be the last lhs-variable in the if-branch of an if-then-else, and let x_j be the last lhs-variable in the else-branch. The value after the if-then-else is then the union of both possible values, i.e., x_k = x_i ⊔ x_j. For all other operations that we cannot accurately capture we simply over-approximate their possible effect. Function calls, for example, are handled as conservatively as possible, i.e., x=foo() is abstracted as x_i = [−∞, +∞]. The same holds for most pointer arithmetic, floating-point operations, and division. It should be noted that infeasible paths mostly depend on combinations of conditions that cannot be satisfied. Typically, condition expressions and the operations having an effect on them are rather simple. Therefore, it is a sufficient first approach to over-approximate all but the aforementioned constructs.
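For illustration, the following hypothetical C fragment is annotated with the interval equations that these abstraction rules would produce; the fragment itself and the variable indices x1, ..., x7 and y0 are chosen for exposition and do not come from the paper's benchmarks.

/* Hypothetical fragment annotated with the interval equations that the
 * abstraction rules above would produce (indices chosen arbitrarily).  */
static int foo(void) { return 0; }   /* stands for an arbitrary function call */

int example(int y) {                 /* y0 = [-oo,+oo]                        */
    int x = 10;                      /* x1 = [10,10]                          */
    x = (x + y) * 5;                 /* x2 = (x1 + y0) * [5,5]                */
    if (x < 5) {                     /* true branch:  x3 = x2 meet [-oo,4]    */
        x = x - y;                   /*               x4 = x3 + [-1,-1] * y0  */
    } else {                         /* false branch: x5 = x2 meet [5,+oo]    */
        x = foo();                   /*               x6 = [-oo,+oo]          */
    }
    return x;                        /* join:         x7 = x4 join x6         */
}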
Table 1. For each tool the table shows whether the tool found a true positive/negative ("+") or a false positive/negative ("-"). Entries "(-)" and "(+)" refer to warnings that there may be an error.

                no violation                  violation
            P1    P2    P3    P4    P5    P1    P2    P3    P4    P5
Splint      +     +     -     -     -     -     -     +     +     +
UNO         (-)   (-)   (-)   (-)   -     (+)   (+)   +     +     +
com. tool   +     +     +     +     -     -     -     +     +     +
Goanna      +     +     +     +     -     +     +     +     +     +

5.2 Comparison
To evaluate our path reduction approach, we added it to our static checker Goanna and compared its results with those of three other static analysis tools, one of which is a commercial tool. We applied these tools to five programs, which can be found at http://www.cse.unsw.edu.au/~ansgar/fpe/.
– The first program is the one discussed throughout the paper. It is similar to the example discussed in [4].
– The second program is identical, except that the loop is initialized to x=50.
– The third program tests how different tools deal with unrelated clutter. The correctness depends on two if-conditions: the first sets a flag, and the second assigns a value to a variable only if the first is set. In between the two if-blocks is some unrelated code, in this case a simple if-then-else. (A sketch of this pattern is shown at the end of this subsection.)
– The fourth program is similar to the third, except that a for-loop counting up to 10 was inserted in between the two relevant if-blocks.
– The last program is similar to the first. It uses, however, two counter variables x and y, and its correctness depends on the loop invariant x = y. It is similar to the examples presented in [15].
For each of the programs we constructed one instance with a bug and one without. For the first program, e.g., the loop condition was changed to x>=0. Since not all tools check for the same properties, we introduced different bugs for the different tools, which, however, all depended on the same paths.
The results in Table 1 show that static analysis tools often fail to produce correct warnings. Splint, for example, produces the same warnings for the instances with and without a bug, which is correct in only half of the cases. Our proposed path reduction produces one false positive for the last program: the least solution of the interval equations shows that both variables take values in the same interval, but it cannot infer the stronger invariant x = y.
The experiments were performed on a DELL PowerEdge SC1425 server with an Intel Xeon processor running at 3.4 GHz, 2 MiB L2 cache, and 1.5 GiB DDR-2 400 MHz ECC memory. The following table gives the maximal, minimal, mean, and median run-times in seconds:
            max     min     mean    median
Splint      0.012   0.011   0.012   0.012
UNO         0.032   0.025   0.028   0.025
com. tool   0.003   0.002   0.003   0.003
Goanna      0.272   0.143   0.217   0.226
All static analysis tools are fast overall, and their run-times are almost independent of the example program. For test programs as small as these, the run-times mostly reflect overhead and setup time. While being more precise, Goanna still has a negligibly short run-time for these examples, and path reduction is independent of the loop counter value, as expected. Goanna benefits from the fact that interval solving deals efficiently with loops without unrolling [1].
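For a flavor of the benchmark patterns, the sketch below reconstructs the shape of the third test program described above. It is a hypothetical reconstruction, not the actual benchmark (the actual sources are available at the URL given above): correctness hinges on the correlation between the two if-conditions, which a path-insensitive analyzer cannot see.

/* Hypothetical reconstruction of the pattern of the third test program:
 * the first condition sets a flag, the second assigns a value only if
 * the flag is set, and unrelated clutter sits in between.              */
int third_program(int c) {
    int flag = 0;
    int value;                 /* deliberately left uninitialized        */
    int other = 0;

    if (c > 0)
        flag = 1;              /* first condition sets the flag          */

    if (c % 2 == 0)            /* unrelated clutter between the two ifs  */
        other = c + 1;
    else
        other = c - 1;

    if (flag)
        value = other;         /* value assigned only if the flag is set */

    if (c > 0)                 /* c > 0 here implies flag == 1, so value */
        return value;          /* is initialized on every path taken     */

    return 0;
}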
6 Conclusions
In this work we presented an approach to enhance static program analysis with counterexample guided path reduction to eliminate false positives. While by default we investigate programs on a purely syntactic level, once we find a potential counterexample, it is mapped to an interval equation system. If the least solution of this system is empty, we know that the counterexample is spurious, and we identify a subset of equations that caused the conflict. We create an observer for the conflict, augment our syntactic model with this observer, re-run the analysis with the new model, and repeat this iterative process until no more counterexamples occur or none can be ruled out anymore. One of the advantages of our approach is that we do not need to unroll loops or to detect loop predicates as is done in some CEGAR approaches. In fact, path-based interval solving is insensitive to loop bounds and handles loops just like any other construct. At the same time, path-based interval solving adds precision to standard abstract-interpretation-based interval analysis. Moreover, we only use one data abstraction, namely a simple interval semantics for C/C++ programs. We implemented our approach in our static analysis tool Goanna and compared it on a set of examples to existing static program analyzers. This demonstrated that Goanna is typically more precise than standard program analyzers. Future work is to evaluate our approach further on real-life software to identify a typical false alarm reduction ratio. Given that static analysis typically turns up a few bugs per 1000 lines of code, this will require some extensive testing. Moreover, we would like to explore whether slightly richer domains can be used to gain additional precision without a significant increase in computation time and, most importantly, whether this makes any noticeable difference for real-life software. Finally, we plan to compare our static checker to existing software model checkers, which by design should be more precise than Goanna but typically also require much longer run-times.
References

1. Gawlitza, T., Seidl, H.: Precise fixpoint computation through strategy iteration. In: De Nicola, R. (ed.) ESOP 2007. LNCS, vol. 4421, pp. 300–315. Springer, Heidelberg (2007)
2. Henzinger, T.A., Jhala, R., Majumdar, R., Sutre, G.: Software verification with BLAST. In: Ball, T., Rajamani, S.K. (eds.) SPIN 2003. LNCS, vol. 2648, pp. 235–239. Springer, Heidelberg (2003)
3. Clarke, E., Kroening, D., Sharygina, N., Yorav, K.: SATABS: SAT-based predicate abstraction for ANSI-C. In: Halbwachs, N., Zuck, L.D. (eds.) TACAS 2005. LNCS, vol. 3440, pp. 570–574. Springer, Heidelberg (2005)
4. Kroening, D., Weissenbacher, G.: Counterexamples with loops for predicate abstraction. In: Ball, T., Jones, R.B. (eds.) CAV 2006. LNCS, vol. 4144, pp. 152–165. Springer, Heidelberg (2006)
5. Clarke, E., Kroening, D., Lerda, F.: A tool for checking ANSI-C programs. In: Jensen, K., Podelski, A. (eds.) TACAS 2004. LNCS, vol. 2988, pp. 168–176. Springer, Heidelberg (2004)
6. Gulavani, B., Rajamani, S.: Counterexample driven refinement for abstract interpretation. In: Hermanns, H., Palsberg, J. (eds.) TACAS 2006. LNCS, vol. 3920, pp. 474–488. Springer, Heidelberg (2006)
7. Wang, C., Yang, Z.-J., Gupta, A., Ivančić, F.: Using counterexamples for improving the precision of reachability computation with polyhedra. In: Damm, W., Hermanns, H. (eds.) CAV 2007. LNCS, vol. 4590, pp. 352–365. Springer, Heidelberg (2007)
8. Fehnker, A., Huuck, R., Jayet, P., Lussenburg, M., Rauch, F.: Model checking software at compile time. In: Proc. TASE 2007. IEEE Computer Society, Los Alamitos (2007)
9. Holzmann, G.: Static source code checking for user-defined properties. In: Proc. IDPT 2002, Pasadena, CA, USA (June 2002)
10. Dams, D.R., Namjoshi, K.S.: Orion: High-precision methods for static error analysis of C and C++ programs. In: de Boer, F.S., Bonsangue, M.M., Graf, S., de Roever, W.-P. (eds.) FMCO 2005. LNCS, vol. 4111, pp. 138–160. Springer, Heidelberg (2006)
11. Schmidt, D.A., Steffen, B.: Program analysis as model checking of abstract interpretations. In: Levi, G. (ed.) SAS 1998. LNCS, vol. 1503, pp. 351–380. Springer, Heidelberg (1998)
12. Fehnker, A., Clarke, E., Jha, S., Krogh, B.: Refining abstractions of hybrid systems using counterexample fragments. In: Morari, M., Thiele, L. (eds.) HSCC 2005. LNCS, vol. 3414, pp. 242–257. Springer, Heidelberg (2005)
13. Jha, S.K., Krogh, B., Clarke, E., Weimer, J., Palkar, A.: Iterative relaxation abstraction for linear hybrid automata. In: HSCC 2007. LNCS. Springer, Heidelberg (2007)
14. Ermedahl, A., Sjödin, M.: Interval analysis of C-variables using abstract interpretation. Technical report, Uppsala University (December 1996)
15. Jhala, R., McMillan, K.L.: A practical and complete approach to predicate refinement. In: Hermanns, H., Palsberg, J. (eds.) TACAS 2006. LNCS, vol. 3920, pp. 459–473. Springer, Heidelberg (2006)
A Appendix
Table 2. NuSMV model of the observer for conflict 1. Input pc is the location of the abstraction P̂.

MODULE conflict1(pc)
VAR
  current_x: {l0,l1,l2,l3,l4,l5,l6};
  eqn_x_l2, eqn_x_l3, eqn_x_l5: boolean;
  conflict: {some, none, all};
ASSIGN
  init(current_x) := l0;
  init(eqn_x_l2)  := 0;
  init(eqn_x_l3)  := 0;
  init(eqn_x_l5)  := 0;
  init(conflict)  := none;

  -- conflict is absorbing in all; otherwise it reflects how many of the
  -- conflict equations are currently satisfied.
  next(conflict) := case
    conflict = all: all;
    next(eqn_x_l2) & next(eqn_x_l3) & next(eqn_x_l5): all;
    next(eqn_x_l2) | next(eqn_x_l3) | next(eqn_x_l5): some;
    1: none;
  esac;

  -- Each eqn flag is reset by an edge that writes x without matching the
  -- conflict expressions, and set by its matching edge.
  next(eqn_x_l2) := case
    ((pc = l6 & next(pc) = l2) |
     (pc = l2 & next(pc) = l3 & current_x != l2) |
     (pc = l4 & next(pc) = l5 & current_x != l3)): 0;
    pc = l1 & next(pc) = l2: 1;
    1: eqn_x_l2;
  esac;

  next(eqn_x_l3) := case
    ((pc = l6 & next(pc) = l2) |
     (pc = l2 & next(pc) = l3 & current_x != l2) |
     (pc = l4 & next(pc) = l5 & current_x != l3)): 0;
    pc = l2 & next(pc) = l3 & current_x = l2: 1;
    1: eqn_x_l3;
  esac;

  next(eqn_x_l5) := case
    ((pc = l6 & next(pc) = l2) |
     (pc = l2 & next(pc) = l3 & current_x != l2) |
     (pc = l4 & next(pc) = l5 & current_x != l3)): 0;
    pc = l4 & next(pc) = l5 & current_x = l3: 1;
    1: eqn_x_l5;
  esac;

  -- current_x tracks the location of the last write to x.
  next(current_x) := case
    next(conflict) = none: l0;
    pc = l1 & next(pc) = l2: l2;
    pc = l2 & next(pc) = l3: l3;
    pc = l4 & next(pc) = l5: l5;
    pc = l4 & next(pc) = l6: l4;
    pc = l6 & next(pc) = l2: l2;
    1: current_x;
  esac;

FAIRNESS !(conflict = all)
Gallery

(Photograph captions)

Hans Langmaack
Antonio Cau
Ben Lukoschus
David Harel
Dines Bjørner
Ernst-Rüdiger Olderog
Frank Stomp
Herman Ader
Jan Peleska
Job Zwiers
Jozef Hooman
Kai Engelhardt
Karsten Stahl
Kees Huizing
Leslie Lamport
Michael Siegel
Olaf Owe
Peter van Emde Boas
R.K. Shyamasundar
Ralf Huuck
Rob Gerth
Ruurd Kuiper and Ron Koymans as young doctors
The de Roever family: Jojanneke, Willem-Paul, and Corinne
Yassine Lakhnech
A suspenseful moment during the ceremony
Action shot during the retirement ceremony
Allen Emerson
Amir Pnueli
Coby Roscoe and Bill Roscoe
Erich Mikk
Erik Krabbe
Erika Abraham
Joseph Sifakis
Kirsten Kriegel
Manfred Broy (at the 1989 REX Workshop at Hotel de Plasmolen, Mook)
S. Ramesh
Willem-Paul and Jojanneke
Frank de Boer
Author Index
Agarwal, Shivali 162
Balaban, Ittai 221
Beerel, Peter A. 260
Bjørner, Dines 22
Boer, Frank S. de 127
Broy, Manfred 118
Clarke, Dave 185
Courant, Judicaël 300
Daubignard, Marion 300
Eir, Asger 22
Emde Boas, Peter van 10
Emerson, E. Allen 237
Ene, Cristian 300
Engelhardt, Kai 250
Fehnker, Ansgar 322
Harel, David 207
Hooman, Jozef 142
Huizing, Cornelis 66
Huuck, Ralf 250, 322
Johnsen, Einar Broch 185
Joshi, Prasad 260
Kantor, Amir 207
Koymans, Ron 66
Kuiper, Ruurd 66
Lafourcade, Pascal 300
Lakhnech, Yassine 300
Lamport, Leslie 60
Langmaack, Hans 74
Maoz, Shahar 207
Olderog, Ernst-Rüdiger 96
Owe, Olaf 185
Peleska, Jan 277
Pnueli, Amir 221
Podelski, Andreas 96
Roncken, Marly 260
Seefried, Sean 322
Shyamasundar, R.K. 162
Sutherland, Ivan 260
Verhoef, Marcel 142
Zuck, Lenore D. 221