Complex Systems Design and Management
Marc Aiguier, Francis Bretaudeau, and Daniel Krob (Eds.)
Complex Systems Design and Management: Proceedings of the First International Conference on Complex Systems Design and Management, CSDM 2010
Prof. Marc Aiguier, Ecole Centrale de Paris, Informatique / MAS, Grande Voie des Vignes, 92295 Châtenay-Malabry, France. E-mail: [email protected]

Prof. Daniel Krob, Ecole Polytechnique, DIX/LIX, 91128 Palaiseau Cedex, France. E-mail: [email protected]

Dr. Francis Bretaudeau, EADS Defence & Security, LoB Integrated Systems, Logistics & Planning Systems, Parc d'Affaires des Portes, B.P. 613 - 27106 Val de Reuil Cedex, France. E-mail: [email protected]
ISBN 978-3-642-15653-3
e-ISBN 978-3-642-15654-0
DOI 10.1007/978-3-642-15654-0
Library of Congress Control Number: 2010934340
© 2010 Springer-Verlag Berlin Heidelberg

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: Data supplied by the authors
Production & Cover Design: Scientific Publishing Services Pvt. Ltd., Chennai, India
Printed on acid-free paper
springer.com
Preface
This volume contains the proceedings of the First International Conference on "Complex Systems Design & Management" (CSDM 2010; website: http://www.csdm2010.csdm.fr). Jointly organized, under the guidance of the CESAMES nonprofit organization, by Ecole Polytechnique and Ecole Centrale de Paris, it was held from October 27 to October 29 at the Cité Internationale Universitaire of Paris (France).

Mastering complex systems requires an understanding of industrial practices as well as sophisticated theoretical techniques and tools. Thus, the creation of a meeting forum at the European level (which did not yet exist) dedicated to all academic researchers and industrial actors working on complex industrial systems engineering was deemed crucial. For us it was a sine qua non condition for nurturing and developing in Europe the now-emerging science of complex industrial systems. The purpose of the "Complex Systems Design & Management" (CSDM) conference was exactly to be such a forum, so as to become, in time, the European academic-industrial conference of reference in the field of complex industrial systems engineering, which is a quite ambitious objective.

To make the CSDM conference this point of convergence of the academic and industrial communities in complex industrial systems, we based our organization on a principle of complete parity between academics and industrialists (see the conference organization sections in the following pages). This principle was first implemented as follows:

• the Programme Committee consists of 50% academics and 50% industrialists,
• the invited speakers come equally from academic and industrial environments.

The set of activities of the conference followed the same principle: they consisted of a mixture of research seminars and experience sharing, academic articles and industrial presentations, etc. The conference topics likewise cover the most recent trends in the emerging field of complex
systems sciences and practices from an industrial and academic perspective, including the main industrial domains (transport, defense & security, electronics & robotics, energy & environment, health & welfare services, media & communications, e-services), scientific and technical topics (systems fundamentals, systems architecture & engineering, systems metrics & quality, systemic tools) and system types (transportation systems, embedded systems, software & information systems, systems of systems, artificial ecosystems).

We received 63 papers, out of which the program committee selected 20 regular papers to be published in these proceedings and 6 complementary papers for presentation at the conference. Each submission was assigned to at least two program committee members, who carefully reviewed the papers, in many cases with the help of external referees. These reviews were discussed by the program committee at a physical meeting held in Paris on March 17, 2010, and via the EasyChair conference management system. We also chose 16 outstanding speakers with various industrial and scientific expertise who gave a series of invited talks covering the whole spectrum of the conference during the first two days of CSDM 2010, the last day being dedicated to the presentations of all accepted papers. Furthermore, we held a poster session to encourage the presentation and discussion of interesting but "not-yet-polished" ideas.

Finally, we thank all members of the program committee for their time, effort, and contributions to make CSDM 2010 a top-quality conference. Special thanks also go to the CESAMES nonprofit organization, which managed all the administration, logistics and communication of the CSDM 2010 conference (see http://www.cesames.net). The organizers of the conference are also grateful to the following sponsors and partners: Ecole Polytechnique, the Thales company, Institut Carnot C3S, Digiteo labs, Région Île-de-France, Ensta ParisTech, the Mega International company, the BelleAventure company, Centre National de la Recherche Scientifique (CNRS), Réseau National des Systèmes Complexes (RNSC), Institut des Systèmes Complexes "Paris Île-de-France" (ISC – Paris Île-de-France), Ministère de l'Enseignement Supérieur et de la Recherche, Commissariat à l'Energie Atomique (CEA-LIST), the Ecole Polytechnique-Microsoft chair "Optimization for Sustainable Development", the International Council on Systems Engineering (INCOSE) and the SEE association.

July 2010
Marc Aiguier Francis Bretaudeau Daniel Krob
Conference Organization
Conference Chairs

• General & Organizing Committee chair:
  – Daniel Krob, Institute Professor, Ecole Polytechnique, France
• Programme Committee chairs:
  – Marc Aiguier, Professor, Ecole Centrale de Paris, France (academic co-chair of the Programme Committee)
  – Francis Bretaudeau, Director, Logistics & Planning Systems, EADS Defence & Security, France (industrial co-chair of the Programme Committee)
Program Committee

The Program Committee consists of 30 members (15 academic and 15 industrial), all of whom are personalities of high international visibility. Their combined expertise covers all of the conference topics.
Academic Members

• Co-chair:
  – Marc Aiguier, Professor, Ecole Centrale de Paris, France
• Other members:
  – Manfred Broy, Professor, Technische Universität München, Germany
  – David Chemouil, Research Director, ONERA, France
  – Darren Dalcher, Professor, Middlesex University, United Kingdom
  – Olivier de Weck, Professor, Massachusetts Institute of Technology, United States
  – Dov Dori, Professor, Technion University, Israel
  – Wolter Fabrycky, Professor, Virginia Tech, United States
  – José Fiadeiro, Professor, Leicester University, United Kingdom
  – Eric Goubault, Research Director, Commissariat à l'Energie Atomique, France
  – Ignacio Grossmann, Professor, Carnegie Mellon University, United States
  – Kim Larsen, Professor, Aalborg University, Denmark
  – Jerry Luftman, Professor, Stevens University, United States
  – Fei-Yue Wang, Professor & Director, Kelon Center for Intelligent Control Systems, China
  – Tapio Westerlund, Professor, Åbo Akademi University, Finland
  – Jim Woodcock, Professor, York University, United Kingdom
Industrial Members

• Co-chair:
  – Francis Bretaudeau, Director, Logistics & Planning Systems, EADS Defence & Security, France
• Other members:
  – Yves Caseau, Strategic Director, Bouygues Telecom, France
  – Claude Feliot, Manager, Systems Engineering Core Competency Network, Alstom Transport, France
  – Hans-Georg Frischkorn, Director, Control & Software, Global Electrical Systems, Germany
  – Rudolf Haggenmüller, Scientific Director, Artemisia, Germany
  – Anthony Hall, Senior Consultant, Embedded Systems, Praxis, United Kingdom
  – Matthew Hause, Senior Consultant, Architecture Frameworks, Artisan Software, United Kingdom
  – Butz Henning, Director, Information Management & Electronic Networks, Airbus, Germany
  – Jon Lee, Manager, Department of Mathematical Sciences, IBM, United States
  – Dominique Luzeaux, Engineer-General of the Army, Direction Générale de l'Armement, France
  – Michel Morvan, Scientific Director, Veolia Environnement, France
  – Hillary Sillitto, Architect, Systems of Systems, Thales, United Kingdom
Organizing Committee

• Chair:
  – Daniel Krob, Institute Professor, Ecole Polytechnique, France
• Other members:
  – Paul Bourgine, Director, French National Network of Complex Systems, France
  – Olivier Bournez, Professor, Ecole Polytechnique, France
  – Pascal Foix, Director, Systems Engineering, Thales, France
  – Omar Hammami, Associate Professor, ENSTA, France
  – Leo Liberti, Assistant Professor, Ecole Polytechnique, France
  – Sylvain Peyronnet, Assistant Professor, Université Paris-Sud, France
  – Yann Pollet, Chaired Professor, CNAM, France
  – Jacques Ariel Sirat, Vice President, EADS, Europe
Invited Speakers

Societal Challenges
• Jean de Kervasdoué, Professor, Conservatoire National des Arts et Métiers, France
• Thierry Nkaoua, Deputy Senior Vice President for Research and Innovation, AREVA, France
• Danièle Nouy, Chairman of the Board, National Banking Commission, France
• Florian Guillermet, Chief Programme Officer, SESAR Joint Undertaking, EEC
Industrial Challenges
• Yannick Cras, Technical Director, SAP, Germany
• Marko Erman, Technical Director, Thales, France
• Jacques Pellas, General Secretary, Dassault Aviation, France
• Jean Sass, Vice-President, Information Systems, Dassault Aviation, France
Scientific State-of-the-Art
• Farhad Arbab, Professor, National Research Institute for Mathematics and Computer Science (CWI), Netherlands
• Olivier de Weck, Professor, Massachusetts Institute of Technology, USA
• Kim G. Larsen, Professor, Aalborg University, Denmark
• Bran Selic, Director of Advanced Technology, Zeligsoft, Canada
Methodological State-of-the-Art
• Catherine Devic, Vice-Chairman of AFIS in association with DGA, France
• Leon A. Kappelman, Professor of Information Systems, University of North Texas, USA
• Tom Gilb, INCOSE Fellow, Norway
• Jacques Ariel Sirat, Vice-President, "Systems & Products Architecture & Engineering", EADS, Europe
Contents
1. Elements of Interaction (Farhad Arbab)
2. Enterprise Architecture as Language (Gary F. Simons, Leon A. Kappelman, John A. Zachman)
3. Real-Time Animation for Formal Specification (Dominique Méry, Neeraj Kumar Singh)
4. Using Simulink Design Verifier for Proving Behavioral Properties on a Complex Safety Critical System in the Ground Transportation Domain (J.-F. Etienne, S. Fechter, E. Juppeaux)
5. SmART: An Application Reconfiguration Framework (Hervé Paulino, João André Martins, João Lourenço, Nuno Duro)
6. Searching the Best (Formulation, Solver, Configuration) for Structured Problems (Antonio Frangioni, Luis Perez Sanchez)
7. Information Model for Model Driven Safety Requirements Management of Complex Systems (R. Guillerm, H. Demmou, N. Sadou)
8. Discrete Search in Design Optimization (Martin Fuchs, Arnold Neumaier)
9. Software Architectures for Flexible Task-Oriented Program Execution on Multicore Systems (Thomas Rauber, Gudula Rünger)
10. Optimal Technological Architecture Evolutions of Information Systems (Vassilis Giakoumakis, Daniel Krob, Leo Liberti, Fabio Roda)
11. Practical Solution of Periodic Filtered Approximation as a Convex Quadratic Integer Program (Federico Bizzarri, Christoph Buchheim, Sergio Callegari, Alberto Caprara, Andrea Lodi, Riccardo Rovatti, Gianluca Setti)
12. Performance Analysis of the Matched-Pulse-Based Fault Detection (Layane Abboud, Andrea Cozza, Lionel Pichon)
13. A Natural Measure for Denoting Software System Complexity (Jacques Printz)
14. Flexibility and Its Relation to Complexity and Architecture (Joel Moses)
15. Formalization of an Integrated System/Project Design Framework: First Models and Processes (J. Abeille, T. Coudert, E. Vareilles, L. Geneste, M. Aldanondo, T. Roux)
16. System Engineering Approach Applied to Galileo System (Steven Bouchired, Stéphanie Lizy-Destrez)
17. A Hierarchical Approach to Design a V2V Intersection Assistance System (Hycham Aboutaleb, Samuel Boutin, Bruno Monsuez)
18. Contribution to Rational Determination of Warranty Parameters for a New Product (Zdenek Vintr, Michal Vintr)
19. Open Interoperable Autonomous Computer-Based Systems, Systems-of-Systems and Proof-Based System Engineering (Gérard Le Lann, Paul Simon)
20. Managing the Complexity of Environmental Assessments of Complex Industrial Systems with a Lean 6 Sigma Approach (François Cluzel, Bernard Yannou, Daniel Afonso, Yann Leroy, Dominique Millet, Dominique Pareau)
21. Multidisciplinary Simulation of Mechatronic Components in Severe Environments (Jérémy Lefèvre, Sébastien Charles, Magali Bosch, Benoît Eynard, Manuel Henner)
22. Involving AUTOSAR Rules for Mechatronic System Design (Pascal Gouriet)
Glossary
23. Enterprise Methodology: An Approach to Multisystems (Dominique Vauquier)
Elements of Interaction
Farhad Arbab
Abstract. The most challenging aspect of concurrency involves the study of interaction and its properties. Interaction refers to what transpires among two or more active entities whose (communication) actions mutually affect each other. In spite of the long-standing recognition of the significance of interaction, classical models of concurrency resort to peculiarly indirect means to express interaction and study its properties. Formalisms such as process algebras/calculi, concurrent objects, actors, agents, shared memory, message passing, etc., are all primarily action-based models that provide constructs for the direct specification of things that interact, rather than a direct specification of interaction (protocols). Consequently, these formalisms turn interaction into a derived or secondary concept whose properties can be studied only indirectly, as the side-effects of the (intended or coincidental) couplings or clashes of the actions whose compositions comprise a model. Alternatively, we can view interaction as an explicit first-class concept, complete with its own composition operators that allow the specification of more complex interaction protocols by combining simpler, and eventually primitive, protocols. Reo [10, 11, 5] serves as a premier example of such an interaction-based model of concurrency. In this paper, we describe Reo and its support tools. We show how exogenous coordination in Reo reflects an interaction-centric model of concurrency where an interaction (protocol) consists of nothing but a relational constraint on communication actions. In this setting, interaction protocols become explicit, concrete, tangible (software) constructs that can be specified, verified, composed, and reused, independently of the actors that they may engage in disparate applications.

Farhad Arbab, Foundations of Software Engineering, CWI, Science Park 123, 1098 XG Amsterdam, The Netherlands. E-mail: [email protected]
1 Introduction Composition of systems out of autonomous subsystems pivots on coordination of concurrency. In spite of the fact that interaction constitutes the most challenging aspect of concurrency, contemporary models of concurrency predominantly treat interaction as a secondary or derived concept. Shared memory, message passing, calculi such as CSP [43], CCS [67], the π-calculus [68, 78], process algebras [29, 23, 40], and the actor model [7] represent popular approaches to tackle the complexities of constructing concurrent systems. Beneath their significant differences, all these models share one common characteristic: they are all action-based models of concurrency. For example, consider developing a simple concurrent application with two producers, which we designate as Green and Red, and one consumer. The consumer must repeatedly obtain and display the contents made available by the Green and the Red producers, alternating between the two. Figure 1 shows the pseudo code for a typical implementation of this simple application in a Java-like language. Lines 1-4 in this code declare four globally shared entities: three semaphores and a buffer. The semaphores greenSemaphore and redSemaphore are used by their respective Green and Red producers for their turn keeping. The semaphore bufferSemaphore is used as a mutual exclusion lock for the producers and the consumer to access the shared buffer, which is initialized to contain the empty string. The rest of the code defines three processes: two producers and a consumer. Global Objects: 1 private final Semaphore 2 private final Semaphore 3 private final Semaphore 4 private String buffer =
greenSemaphore = new Semaphore(1); redSemaphore = new Semaphore(0); bufferSemaphore = new Semaphore(1); EMPTY;
Consumer: 5 while (true) { 6 sleep(4000); 7 bufferSemaphore.acquire(); 8 if (buffer != EMPTY) { 9 println(buffer); 10 buffer = EMPTY; 11 } 12 bufferSemaphore.release(); 13 }
Green Producer: 14 while (true) { 15 sleep(5000); 16 greenText = ...; 17 greenSemaphore.acquire(); 18 bufferSemaphore.acquire(); 19 buffer = greenText; 20 bufferSemaphore.release(); 21 redSemaphore.release(); 22 } Red Producer: 23 while (true) 24 sleep(3000); 25 redText = ...; 26 redSemaphore.acquire(); 27 bufferSemaphore.acquire(); 28 buffer = redText; 29 bufferSemaphore.release(); 30 greenSemaphore.release(); 31 }
Fig. 1 Alternating producers and consumer
The consumer code (lines 5-13) consists of an infinite loop where in each iteration, it performs some computation (which we abstract as the sleep on line 6), then it waits to acquire exclusive access to the buffer (line 7). While it has this exclusive access (lines 8-11), it checks to see if the buffer is empty. An empty buffer means there is no (new) content for the consumer process to display, in which case the consumer does nothing and releases the buffer lock (line 12). If the buffer is non-empty, the consumer prints its content and resets the buffer to empty (lines 9-10).

The Green producer code (lines 14-22) consists of an infinite loop where in each iteration, it performs some computation and assigns the value it wishes to produce to the local variable greenText (lines 15-16), and waits for its turn by attempting to acquire greenSemaphore (line 17). Next, it waits to gain exclusive access to the shared buffer, and while it has this exclusive access, it assigns greenText into buffer (lines 18-20). Having completed its turn, the Green producer now releases redSemaphore to allow the Red producer to have its turn (line 21). The Red producer code (lines 23-31) is analogous to that of the Green producer, with "red" and "green" swapped.

This is a simple concurrent application whose code has been made even simpler by abstracting away its computation and declarations. Apart from their trivial outer infinite loops, each process consists of a short piece of sequential code, with a straight-line control flow that involves no inner loops or non-trivial branching. The protocol embodied in this application, as described in our problem statement above, is also quite simple. One expects it to be easy, then, to answer a number of questions about which specific parts of this code manifest the various properties of our application. For instance, consider the following questions:

1. Where is the green text computed?
2. Where is the red text computed?
3. Where is the text printed?

The answers to these questions are indeed simple and concrete: lines 16, 25, and 9, respectively. Indeed, the "computation" aspect of an application typically corresponds to coherently identifiable passages of code. However, the perfectly legitimate question "Where is the protocol of this application?" does not have such an easy answer: the protocol of this application is intertwined with its computation code. More refined questions about specific aspects of the protocol have more concrete answers:

1. What determines which producer goes first?
2. What ensures that the producers alternate?
3. What provides protection for the global shared buffer?

The answer to the first question, above, is the collective semantics behind lines 1, 2, 17, and 26. The answer to the second question is the collective semantics behind lines 1, 2, 17, 26, 21, and 30. The answer to the third question is the
collective semantics of lines 3, 18, 20, 27, and 29. These questions can be answered by pointing to fragments of code scattered among and intertwined with the computation of several processes in the application. However, it is far more difficult to identify other aspects of the protocol, such as possibilities for deadlock or live-lock, with concrete code fragments. While both concurrency-coordinating actions and computation actions are concrete and explicit in this code, the interaction protocol that they induce is implicit, nebulous, and intangible. In applications involving processes with even slightly less trivial control flow, the entanglement of data and control flow with concurrency-coordination actions makes it difficult to determine which parts of the code give rise to even the simplest aspects of their interaction protocol.

Global Names: synchronization-points g, r, b, d

Green Producer:
G := genG(t) . ?g(k) . !b(t) . ?d(j) . !r(k) . G

Red Producer:
R := genR(t) . ?r(k) . !b(t) . ?d(j) . !g(k) . R

Consumer:
B := ?b(t) . print(t) . !d("done") . B

Application:
G | R | B | !g("token")

Fig. 2 Alternating producers and consumer in a process algebra
Process algebraic models fare only slightly better: they too embody an action-based model of concurrency. Figure 2 shows a process algebraic model of our alternating producers and consumer application. This model consists of a number of globally shared names, i.e., g, r, b, and d. Generally, these shared names are considered as abstractions of channels and thus are called "channels" in the process algebra/calculi community. However, since these names in fact serve no purpose other than synchronizing the I/O operations performed on them, and because we will later use the term "channel" to refer to entities with more elaborate behavior, we use the term "synchronization points" here to refer to "process algebra channels" to avoid confusion.

A process algebra consists of a set of atomic actions, and a set of composition operators on these actions. In our case, the atomic actions include the primitive actions read ?( ) and write !( ) defined by the algebra, plus the user-defined actions genG( ), genR( ), and print( ), which abstract away computation. Typical composition operators include sequential composition ".", parallel composition "|", nondeterministic choice "+", definition ":=", and implicit recursion.

In our model, the consumer B waits to read a data item into t by synchronizing on the global name b, and then proceeds to print t (to display it). It then writes a token "done" on the synchronization point d, and recurses. The Green producer G first generates a new value in t, then waits for its turn by reading a token value into k from g. It then writes t to b, and waits to obtain an acknowledgement j through d, after which it writes the token k to r, and recurses. The Red producer R behaves similarly, with the roles of
r and g swapped. The application consists of a parallel composition of the two producers and the consumer, plus a trivial process that simply writes a "token" on g to kick off process G to go first.

Observe that a model is constructed by composing (atomic) actions into (more complex) actions, called processes. True to their moniker, such formalisms are indeed algebras of processes or actions. Just as in the version in Figure 1, while communication actions are concrete and explicit in the incarnation of our application in Figure 2, interaction is a manifestation of the model with no explicit structural correspondence. Indeed, in all action-based models of concurrency, interaction becomes a by-product of processes executing their respective actions: when a process A happens to execute its ith communication action ai on a synchronization point, at the same time that another process B happens to execute its jth communication action bj on the same synchronization point, the actions ai and bj "collide" with one another and their collision yields an interaction. Generally, the reason behind the specific collision of ai and bj remains debatable. Perhaps it was just dumb luck. Perhaps it was divine intervention. Some may prefer to attribute it to intelligent design! What is not debatable is the fact that, often, a split second earlier or later, perhaps in another run of the same application on a platform with a slightly different timing, ai and bj would collide not with each other, but with two other actions (of perhaps other processes), yielding completely different interactions.

An interaction protocol consists of a desired temporal sequence of such (coincidental or planned) collisions. It is non-trivial to distinguish between the essential and the coincidental parts of a protocol, and as an ephemeral manifestation, a protocol itself becomes more difficult than necessary to specify, manipulate, verify, and debug, and next to impossible to reuse. Instead of explicitly composing (communication) actions to indirectly specify and manipulate implicit interactions, is it possible to devise a model of concurrency where interaction (not action) is an explicit, first-class construct? We turn to this question in the next section, and in the remainder of this paper we describe a specific language based on an interaction-centric model of concurrency. We show that making interaction explicit leads to a clean separation of computation and communication, and to reusable, tangible protocols that can be constructed and verified independently of the processes that they engage.
2 Interaction Centric Concurrency

The fact that we currently use languages and tools based on various concurrent object oriented models, actor models, various process algebras, etc., simply means that these models comprise the best in our available arsenal to tackle the complexity of concurrent systems. However, this fact does not mean that these languages and tools necessarily embody the most appropriate models for doing so. We observe that action-centric models of concurrency
turn interaction into an implicit by-product of the execution of actions. We also observe that the most challenging aspect of concurrent systems is their interaction protocols, whose specification and study can become simpler in a model where interaction is treated as a first-class concept1 . These observations serve as motivation to consider an interaction-centric model of concurrency, instead. The most salient characteristic of interaction is that it transpires among two or more actors. This is in contrast to action, which is what a single actor manifests. In other words, interaction is not about the specific actions of individual actors, but about the relations that (must) hold among those actions. A model of interaction, thus, must allow us to directly specify, represent, construct, compose, decompose, analyze, and reason about those relations that define what transpires among two or more engaged actors, without the necessity to be specific about their individual actions. Making interaction a first-class concept means that a model must offer (1) an explicit, direct representation of the interaction among actors, independent of their (communication) actions; (2) a set of primitive interactions; and (3) composition operators to combine (primitive) interactions into more complex interactions. Wegner has proposed to consider coordination as constrained interaction [79]. We propose to go a step further and consider interaction itself as a constraint on (communication) actions. Features of a system that involve several entities, for instance the clearance between two objects, cannot conveniently be associated with any one of those entities. It is quite natural to specify and represent such features as constraints. The interaction among several active entities has a similar essence: although it involves them, it does not belong to any one of those active entities. Constraints have a natural formal model as mathematical relations, which are non-directional. In contrast, actions correspond to functions or mappings which are directional, i.e., transformational. A constraint declaratively specifies what must hold in terms of a relation. Typically, there are many ways in which a constraint can be enforced or violated, leading to many different sequences of actions that describe precisely how to enforce or maintain a constraint. Action-based models of concurrency lead to the precise specification of how as sequences of actions interspersed among the active entities involved in a protocol. In an interaction-based model of concurrency, only what a protocol represents is specified as a constraint over the (communication) actions of some active entities, and as in constraint programming, the responsibility of how the protocol constraints are enforced or maintained is relegated to an entity other than those active entities. 1
1 A notion constitutes a first-class concept in a model only if the model provides structural primitives to directly define instances of it, together with operators to compose and manipulate such instances by composing and manipulating their respective structures. Thus, "process" constitutes a first-class concept in process algebras, but "interaction/protocol" does not.
Generally, composing the sequences of actions that manifest two different protocols does not yield a sequence of actions that manifests a composition of those protocols. Thus, in action-based models of concurrency, protocols are not compositional. Represented as constraints, in an interaction-based model of concurrency, protocols can be composed as mathematical relations. Banishing the actions that comprise protocol fragments out of the bodies of processes produces simpler, cleaner, and more reusable processes. Expressed as constraints, pure protocols become first-class, tangible, reusable constructs in their own right. As concrete software constructs, such protocols can be embodied into architecturally meaningful connectors.

In this setting, a process (or component, service, actor, etc.) offers no methods, functions, or procedures for other entities to call, and it makes no such calls itself. Moreover, processes cannot exchange messages through targeted send and receive actions. In fact, a process cannot refer to any foreign entity, such as another process, the mailbox or message queue of another process, shared variables, semaphores, locks, etc. The only means of communication of a process with its outside world is through the blocking I/O operations that it can perform exclusively on its own ports, producing and consuming passive data.

A port is a construct analogous to a file descriptor in a Unix process, except that a port is uni-directional, has no buffer, and supports blocking I/O exclusively. If i is an input port of a process, there are only two operations that the process can perform on i: (1) blocking input get(i, v) blocks until it obtains a value through i, which it assigns to variable v; and (2) input with time-out get(i, v, t) behaves similarly, except that it unblocks and returns false if the specified time-out t expires before it obtains a value to assign to v. Analogously, if o is an output port of a process, there are only two operations that the process can perform on o: (1) blocking output put(o, v) blocks until it succeeds in dispensing the value in variable v through o; and (2) output with time-out put(o, v, t) behaves similarly, except that it unblocks and returns false if the specified time-out t expires before it dispenses the value in v.
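For concreteness, the following sketch renders these port operations in Java. It is a minimal illustration only; the names Port, get, and put mirror the operations described above but are not the API of any Reo implementation. A port is modeled here as an unbuffered rendezvous point, so both operations block until a matching operation occurs on the other side, and the timed variants give up once the time-out expires.

import java.util.Optional;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical port abstraction: unbuffered, blocking, used uni-directionally.
final class Port<T> {
    private final SynchronousQueue<T> rendezvous = new SynchronousQueue<>();

    // blocking input, get(i, v): waits until a value arrives through the port
    T get() throws InterruptedException { return rendezvous.take(); }

    // input with time-out, get(i, v, t): empty result if t expires first
    Optional<T> get(long t, TimeUnit unit) throws InterruptedException {
        return Optional.ofNullable(rendezvous.poll(t, unit));
    }

    // blocking output, put(o, v): waits until the value is dispensed
    void put(T v) throws InterruptedException { rendezvous.put(v); }

    // output with time-out, put(o, v, t): false if t expires first
    boolean put(T v, long t, TimeUnit unit) throws InterruptedException {
        return rendezvous.offer(v, t, unit);
    }
}

A process written in this style, such as the producer P and consumer C of Figure 3 below, then reduces to a loop over its own ports and nothing else:

class ProducerConsumer {
    // Producer: compute a value, then block on put; no semaphores, no references
    // to the consumer or to the protocol that will eventually connect them.
    static void producer(Port<Integer> out) throws InterruptedException {
        int n = 0;
        while (true) { out.put(n++); }
    }

    // Consumer: block on get, then use the obtained value.
    static void consumer(Port<Integer> in) throws InterruptedException {
        while (true) { System.out.println(in.get()); }
    }
}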
Fig. 3 Protocol in a connector
Inter-process communication is possible only by mediation of connectors. For instance, Figure 3 shows a producer P and a consumer C whose communication is coordinated by a simple connector. The producer P consists of an infinite loop, in each iteration of which it computes a new value and writes it to its local output port (shown as a small circle on the boundary of its box in the figure) by performing a blocking put operation. Analogously, the consumer C consists of an infinite loop in each iteration of which it performs
a blocking get operation on its own local input port, and then uses the obtained value. Observe that, written in an imperative programming language, the code for P and C is substantially simpler than the code for the Green/Red producers and the consumer in Figure 1: it contains no semaphore operations or any other inter-process communication primitives.

The direction of the connector arrow in Figure 3 suggests the direction of the dataflow from P to C. However, even in the case of this very simple example, the precise behavior of the system crucially depends on the specific protocol that this simple connector implements. For instance, if the connector implements a synchronous protocol, then it forces P and C to iterate in lock-step, by synchronizing their respective put and get operations in each iteration. On the other hand, the connector may have a bounded or an unbounded buffer and implement an asynchronous protocol, allowing P to produce faster than C can consume. The protocol of the connector may, for instance, enable it to repeat, e.g., the last value that it contained, if C consumes faster and drains the buffer. The protocol may mandate an ordering other than FIFO on the contents of the connector buffer, perhaps depending on the contents of the exchanged data. It may retain only some of the contents of the buffer (e.g., only the first or the last item) if P produces data faster than C can consume. It may be unreliable and lose data nondeterministically or according to some probability distribution. It may retain data in its buffer only for a specified length of time, losing all data items that are not consumed before their expiration dates. The alternatives for the connector protocol are endless, and composed with the very same P and C, each yields a totally different system.

A number of key observations about this simple example are worth noting. First, Figure 3 is an architecturally informative representation of this system. Second, banishing all inter-process communication out of the communicating parties, into the connector, yields a "good" system design with the beneficial consequences that:
– changing P, C, or the connector does not affect the other parts of the system;
– although they are engaged in a communication with each other, P and C are oblivious to each other, as well as to the actual protocol that enables their communication;
– the protocol embodied in the connector is oblivious to P and C.
In this architecture, the composition of the components and the coordination of their interactions are accomplished exogenously, i.e., from outside of the components themselves, and without their "knowledge"2. In contrast, the interaction protocol and coordination in the examples in Figures 1 and 2
2 By this anthropomorphic expression we simply mean that a component does not contain any piece of code that directly contributes to determine the entities that it composes with, or the specific protocol that coordinates its own interactions with them.
are endogenous, i.e., accomplished through (inter-process communication) primitives from inside the parties engaged in the protocol. It is clear that exogenous composition and coordination lead to simpler, cleaner, and more reusable component code, simply because all composition and coordination concerns are left out. What is perhaps less obvious is that exogenous coordination also leads to reusable, pure coordination code: there is nothing in any incarnation of the connector in Figure 3 that is specific to P or C; it can just as readily engage any producer and consumer processes in any other application. Obviously, we are not interested in only this example, nor exclusively in connectors that implement exogenous coordination between only two communicating parties. Moreover, the code for any version of the connector in Figure 3, or any other connector, can be written in any programming language: the concepts of exogenous composition, exogenous coordination, and the system design and architecture that they induce constitute what matters, not the implementation language.

Focusing on multi-party interaction/coordination protocols reveals that they are composed out of a small set of common recurring concepts. They include synchrony, atomicity, asynchrony, ordering, exclusion, grouping, selection, etc. Compliant with the constraint view of interaction advocated above, these concepts can be expressed as constraints, more directly and elegantly than as compositions of actions in a process algebra or an imperative programming language. This observation behooves us to consider the interaction-as-constraint view of concurrency as a foundation for a special language to specify multi-party exogenous interaction/coordination protocols and the connectors that embody them, of which the connector in Figure 3 is but a trivial example. Reo, described in the next section, is a premier example of such a language.
3 An Overview of Reo
Reo [10, 11, 5] is a channel-based exogenous coordination model wherein complex coordinators, called connectors, are compositionally built out of simpler ones. Exogenous coordination imposes a purely local interpretation on each inter-component communication, engaged in as a pure I/O operation on each side, which allows components to communicate anonymously, through the exchange of un-targeted passive data. We summarize only the main concepts in Reo here. Further details about Reo and its semantics can be found in the cited references.

Complex connectors in Reo are constructed as a network of primitive binary connectors, called channels. Connectors serve to provide the protocol that controls and organizes the communication, synchronization and cooperation among the components/services that they interconnect. Formally, the protocol embodied in a connector is a relation, which the connector imposes
as a constraint on the actions of the communicating parties that it interconnects. A channel is a medium of communication that consists of two ends and a constraint on the dataflows observed at those ends. There are two types of channel ends: source and sink. A source channel end accepts data into its channel, and a sink channel end dispenses data out of its channel. Every channel (type) specifies its own particular behavior as constraints on the flow of data through its ends. These constraints relate, for example, the content, the conditions for loss and/or creation of data that pass through the ends of a channel, as well as the atomicity, exclusion, order, and/or timing of their passage. Reo places no restriction on the behavior of a channel and thus allows an open-ended set of different channel types to be used simultaneously.

Although all channels used in Reo are user-defined and users can indeed define channels with any complex behavior (expressible in the semantic model) that they wish, a very small set of channels, each with very simple behavior, suffices to construct useful Reo connectors with significantly complex behavior. Figure 4 shows a common set of primitive channels often used to build Reo connectors.
Fig. 4 A typical set of Reo channels
A synchronous channel, Sync for short, has a source and a sink end and no buffer. It accepts a data item through its source end iff it can simultaneously dispense it through its sink. A lossy synchronous channel, LossySync for short, is similar to a synchronous channel except that it always accepts all data items through its source end. The data item is transferred if it is possible for it to be dispensed through the sink end; otherwise the data item is lost. A FIFO1 channel represents an asynchronous channel with one buffer cell, which is empty if no data item is shown in the box (this is the case in Figure 4). If a data element d is contained in the buffer of a FIFO1 channel, then d is shown inside the box in its graphical representation.

More exotic channels are also permitted in Reo, for instance, synchronous and asynchronous drains. Each of these channels has two source ends and no sink end. No data value can be obtained from a drain since it has no sink end. Consequently, all data accepted by a drain channel are lost. SyncDrain is a synchronous drain that can accept a data item through one of its ends iff a data item is also available for it to simultaneously accept through its other end as well. AsyncDrain is an asynchronous drain that accepts data items through its source ends and loses them, but never simultaneously.
For a filter channel, or Filter(P), its pattern P ⊆ Data specifies the type of data items that can be transmitted through the channel. This channel accepts a value d ∈ P through its source end iff it can simultaneously dispense d through its sink end, exactly as if it were a Sync channel; it always accepts all data items d ∉ P through the source end and loses them immediately. Synchronous and asynchronous Spouts are the duals of their respective drain channels: each has two sink ends through which it produces nondeterministic data items. Further discussion of these and other primitive channels is beyond the scope of this paper.
Fig. 5 Reo nodes
Complex connectors are constructed by composing simpler ones via the join and hide operations. Channels are joined together in nodes, each of which consists of a set of channel ends. A Reo node is a logical place where channel ends coincide and coordinate their dataflows as prescribed by its node type. Figure 5 shows the three possible node types in Reo. A node is either source, sink or mixed, depending on whether all channel ends that coincide on that node are source ends, sink ends or a combination of the two. Reo fixes the semantics of (i.e., the constraints on the dataflow through) Reo nodes, as described below. The hide operation is used to hide the internal topology of a component connector. The hidden nodes can no longer be accessed or observed from outside.

The term boundary nodes is also sometimes used to collectively refer to source and sink nodes. Boundary nodes define the interface of a connector. Components connect to the boundary nodes of a connector and interact anonymously with each other through this interface by performing I/O operations on the boundary nodes of the connector: take operations on sink nodes, and write operations on source nodes.3 At most one component can be connected to a (source or sink) node at a time. The I/O operations are performed through interface nodes of components, which are called ports. We identify each node with a name, taken from a set Names, with typical members A, B, C, .... For an arbitrary node A, we use dA as the symbol for the observed data item at A.
3 The get and put operations mentioned in the description of the components in Figure 3 are higher-level wrappers around the primitive take and write operations of Reo.
A component can write data items to a source node that it is connected to. The write operation succeeds only if all (source) channel ends coincident on the node accept the data item, in which case the data item is transparently written to every source end coincident on the node. A source node, thus, acts as a synchronous replicator. A component can obtain data items, by an input operation, from a sink node that it is connected to. A take operation succeeds only if at least one of the (sink) channel ends coincident on the node offers a suitable data item; if more than one coincident channel end offers suitable data items, one is selected nondeterministically. A sink node, thus, acts as a nondeterministic merger. A mixed node nondeterministically selects and takes a suitable data item offered by one of its coincident sink channel ends and replicates it into all of its coincident source channel ends. Note that a component cannot connect to, take from, or write to mixed nodes.

Because a node has no buffer, data cannot be stored in a node. Hence, nodes instigate the propagation of synchrony and exclusion constraints on dataflow throughout a connector. Deriving the semantics of a Reo connector amounts to resolving the composition of the constraints of its constituent channels and nodes. This is not a trivial task. In the sequel, we present examples of Reo connectors that illustrate how non-trivial dataflow behavior emerges from composing simple channels using Reo nodes. The local constraints of individual channels propagate through (the synchronous regions of) a connector to its boundary nodes. This propagation also induces a certain context-awareness in connectors. See [36] for a detailed discussion of this.

Reo has been used for composition of Web services [53, 62, 19], modeling and analysis of long-running transactions and compliance in service-oriented systems [59, 18, 58], coordination of multi-agent systems [12], performance analysis of coordinated compositions [21], and modeling of coordination in biological systems [35]. Reo offers a number of operations to reconfigure and change the topology of a connector at run-time: operations that enable the dynamic creation of channels, splitting and joining of nodes, hiding internal nodes, and more. The hiding of internal nodes makes it possible to permanently fix the topology of a connector, such that only its boundary nodes are visible and available. The resulting connector can then be viewed as a new primitive connector, or primitive for short, since its internal structure is hidden and its behavior is fixed.
4 Examples
Recall our alternating producers and consumer example of Section 1. We revise the code for the Green and Red producers to make it suitable for exogenous coordination (which, in fact, makes it simpler). Similar to the producer P in Figure 3, this code now consists of an infinite loop, in each
iteration of which a producer computes a new value and writes it to its output port. Analogously, we revise the consumer code, fashioning it after the consumer C in Figure 3. In the remainder of this section, we present a number of protocols to implement different versions of the alternating producers and consumer example of Section 1, using these revised versions of the producers and consumer processes. These examples serve three purposes. First, they show a flavor of programming pure interaction/coordination protocols as Reo circuits. Second, they present a number of generically useful circuits that can be used as connectors in many other applications, or as sub-circuits in the construction of many other protocols. Third, they illustrate the utility of exogenous coordination by showing how trivial it is to change the protocol of an application, without altering any of the processes involved.
Fig. 6 Reo circuit for Alternator
4.1 Alternator
The connector shown in Figure 6 is an alternator that imposes an ordering on the flow of the data from its input nodes A and B to its output node C. The SyncDrain enforces that data flow through A and B only synchronously. The empty buffer together with the SyncDrain guarantees that the data item obtained from A is delivered to C while the data item obtained from B is stored in the FIFO1 buffer. After this, the buffer of the FIFO1 is full and data cannot flow in through either A or B, but C can dispense the data stored in the FIFO1 buffer, which makes it empty again.

A version of our alternating producers and consumer example of Section 1 can now be composed by attaching the output port of the revised Green producer to node A, the output port of the revised Red producer to node B, and the input port of the revised consumer to node C of the Reo circuit in Figure 6. A closer look shows, however, that the behavior of this version of our example is not exactly the same as that of the one in Figures 1 and 2. As explained above, the Reo circuit in Figure 6 requires the availability of a pair of values (from the Green producer) on A and (from the Red producer) on B before it allows the consumer to obtain them, first from A and then from B. Thus, if the Green producer and the consumer are both ready to communicate, they still have to wait for the Red producer to also attempt to communicate, before they can exchange data. The versions in Figures 1
and 2 allow the Green producer and the consumer to go ahead, regardless of the state of the Red producer. Our original specification of this example in Section 1 was abstract enough to allow both alternatives. A further refinement of this specification may indeed prefer one and disallow the other. If the behavior of the connector in Figure 6 is not what we want, we need to construct a different Reo circuit to impose the same behavior as in Figures 1 and 2. This is precisely what we describe below.
4.2 Sequencer
Figure 7(a) shows an implementation of a sequencer by composing five Sync channels and four FIFO1 channels together. The first (leftmost) FIFO1 channel is initialized to have a data item in its buffer, as indicated by the presence of the symbol e in the box representing its buffer cell. The actual value of the data item is irrelevant. The connector provides only the four nodes A, B, C and D for other entities (connectors or component instances) to take from. The take operations on nodes A, B, C and D can succeed only in the strict left-to-right order. This connector implements a generic sequencing protocol: we can parameterize this connector to have as many nodes as we want simply by inserting more (or fewer) Sync and FIFO1 channel pairs, as required.
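Read purely as observable behavior, the n-node sequencer is a round-robin grant: a take on node i can succeed only when the single token (the initially full FIFO1 buffer) has reached position i, after which the token moves on. The following monitor is an illustrative rendering of that behavior in Java; it is not code generated from the circuit, and the names are ours.

// Observable behavior of an n-node sequencer: takes succeed only in strict
// left-to-right, round-robin order.
final class Sequencer {
    private final int size;   // number of sequencer nodes (A, B, C, D, ...)
    private int turn = 0;     // position of the token

    Sequencer(int size) { this.size = size; }

    // A take on node i blocks until the token reaches i, then passes it on.
    synchronized void take(int i) throws InterruptedException {
        while (turn != i) { wait(); }
        turn = (turn + 1) % size;  // the token moves to the next FIFO1 in the chain
        notifyAll();
    }
}

For instance, new Sequencer(4) admits takes only in the order 0, 1, 2, 3, 0, and so on, matching the strict left-to-right order on A, B, C and D.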
Fig. 7 Sequencer
Figure 7(b) shows a simple example of the utility of the sequencer. The connector in this figure consists of a two-node sequencer, plus a pair of Sync channels and a SyncDrain connecting each of the nodes of the sequencer to the nodes A and C, and B and C, respectively. Similar to the circuit in Figure 6, this connector imposes an order on the flow of the data items written to A and B, through C: the sequence of data items obtained by successive take operations on C consists of the first data item written to A, followed by the first data item written to B, followed by the second data item written to A, followed by the second data item written to B, and so on. However, there is a subtle difference between the behavior of the two circuits in Figures 6 and 7(b). The alternator in Figure 6 delays the transfer of a data item from A to C until a data item is also available at B. The circuit in Figure 7(b) transfers from A to C as soon as these nodes can satisfy their respective operations, regardless of the availability of data on B.
We can obtain a new version of our alternating producers and consumer example by attaching the output port of the Green producer to node A, the output port of the Red producer to node B, and the input port of the consumer to node C. The behavior of this version is now the same as those in Figures 1 and 2. The circuit in Figure 7(b) embodies the same protocol that is implicit in Figures 1 and 2.

A characteristic of this protocol is that it "slows down" each producer, as necessary, by delaying the success of its data production until the consumer is ready to accept its data. Our original problem statement in Section 1 does not explicitly specify whether or not this is a required or permissible behavior. While this may be desirable in some applications, slowing down the producers to match the processing speed of the consumer may have serious drawbacks in other applications, e.g., if these processes involve time-sensitive data or operations. Perhaps what we want is to bind our producers and consumer by a protocol that decouples them so as to allow each process to proceed at its own pace. We proceed, below, to present a number of protocols that we then compose to construct a Reo circuit for such a protocol.
Fig. 8 An exclusive router and a shift-lossy FIFO1
4.3 Exclusive Router
The connector shown in Figure 8(a) is an exclusive router: it routes data from A to either B or C (but not both). This connector can accept data only if there is a write operation at the source node A, and there is at least one taker at the sink nodes B and C. If both B and C can dispense data, the choice of routing to B or C follows from the non-deterministic decision by the lower-middle mixed node: it can accept data only from one of its sink ends, excluding the flow of data through the other, which forces the latter's
respective LossySync to lose the data it obtains from A, while the other LossySync passes its data as if it were a Sync.
4.4 Shift-Lossy FIFO1
Figure 8(b) shows a Reo circuit for a connector that behaves as a lossy FIFO1 channel with a shift loss-policy. This channel is called shift-lossy FIFO1 (ShiftLossyFIFO1). It behaves as a normal FIFO1 channel, except that if its buffer is full, then the arrival of a new data item deletes the existing data item in its buffer, making room for the new arrival. As such, this channel implements a "shift loss-policy," losing the older contents in its buffer in favor of the newer arrivals. This is in contrast to the behavior of an overflow-lossy FIFO1 channel, whose "overflow loss-policy" loses the new arrivals when its buffer is full. The connector in Figure 8(b) is composed of an exclusive router (shown in Figure 8(a)), an initially full FIFO1 channel, an initially empty FIFO2 channel, and four Sync channels. The FIFO2 channel itself is composed of two sequential FIFO1 channels. See [27] for a more formal treatment of the semantics of this connector.

The shift-lossy FIFO1 circuit in Figure 8(b) is indeed so frequently useful as a connector in the construction of more complex circuits that it makes sense to have a special graphical symbol to designate it as a short-hand. Figure 9 shows a circuit that uses two instances of our shift-lossy FIFO1. The graphical symbol we use to represent a shift-lossy FIFO1 "channel" is intentionally similar to that of a regular FIFO1 channel. The dashed sink-side half of the box representing the buffer of this channel suggests that it loses the older values to make room for new arrivals, i.e., it shifts to lose.
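The contrast between the two loss policies can be summarized by the following behavioral sketch. It is only an illustration of the intended buffer behavior, not a Reo artifact or generated code.

// A one-cell lossy buffer under the two loss policies discussed above:
// shift-lossy keeps the newest item, overflow-lossy keeps the oldest.
final class LossyFifo1<T> {
    enum Policy { SHIFT, OVERFLOW }
    private final Policy policy;
    private T buffer;   // null means the buffer is empty

    LossyFifo1(Policy policy) { this.policy = policy; }

    synchronized void put(T item) {
        if (buffer == null || policy == Policy.SHIFT) {
            buffer = item;   // shift loss-policy: a new arrival replaces the old content
        }                    // overflow loss-policy: a full buffer drops the new arrival
    }

    synchronized T take() {  // returns null if the buffer is empty
        T item = buffer;
        buffer = null;
        return item;
    }
}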
4.5 Dataflow Variable
The Reo circuit in Figure 9 implements the behavior of a dataflow variable. It uses two instances of the shift-lossy FIFO1 connector shown in Figure 8(b) to build a connector with a single input node and a single output node. Initially, the buffers of its shift-lossy FIFO1 channels are empty, so an initial take on its output node suspends for data. Regardless of the status of its buffers, or whether or not data can be dispensed through its output node, every write to its input node always succeeds and resets both of its buffers to contain the new data item. Every time a value is dispensed through its output node, a copy of this value is "cycled back" into its left shift-lossy FIFO1 channel. This circuit "remembers" the last value it obtains through its input node, and dispenses copies of this value through its output node as frequently as necessary: i.e., it can be used as a dataflow variable.

The variable circuit in Figure 9 is also very frequently useful as a connector in the construction of more complex circuits. Therefore, it makes sense to have a short-hand graphical symbol to designate it with as well.
Fig. 9 Dataflow variable
Figure 10 shows three instances of our variable used in two connectors. Our symbol for a variable is similar to that for a regular FIFO1 channel, except that we use a rounded box to represent its buffer: the rounded box hints at the recycling behavior of the variable circuit, which implements its remembering of the last data item that it obtained or dispensed.
4.6 Decoupled Alternating Producers and Consumer
Figure 10(a) shows how the variable circuit of Figure 9 can be used to construct a version of the example in Figure 3, where the producer and the consumer are fully decoupled from one another. Initially, the variable contains no value, and therefore, the consumer has no choice but to wait for the producer to place its first value into the variable. After that, neither the producer nor the consumer ever has to wait for the other one. Each can work at its own pace and write to or take from the connector. Every write by the producer replaces the current contents of the variable, and every take by the consumer obtains a copy of the current value of the variable, which always contains the most recent value produced.
Fig. 10 Decoupled producers and consumer
The connector in Figure 10(b) is a small variation of the Reo circuit in Figure 7(b), with two instances of the variable circuit of Figure 9 spliced in. In this version of our alternating producers and consumer, these three processes are fully decoupled: each can produce and consume at its own pace, never having to wait for any of the other two. Every take by the consumer always obtains the latest value produced by its respective producer. If the consumer runs slower than a producer, the excess data is lost in the producer's respective variable, and the consumer will effectively "sample" the data generated by this producer. If the consumer runs faster than a producer, it will read (some of) the values of this producer multiple times.
4.7 Flexibility
Figures 6, 7(b), and 10(b) show three different connectors, each imposing a different protocol for the coordination of two alternating producers and a consumer. The exact same producers and consumer processes can be combined with any of these circuits to yield different applications. It is instructive to compare the ease with which this is accomplished in our interaction-centric world with the effort involved in modifying the action-centric incarnations of this same example in Figures 1 and 2, which correspond to the protocol of the circuit in Figure 7(b), in order to achieve the behavior induced by the circuit in Figure 6 or 10(b). The Reo connector binding a number of distributed processes, such as Web services, can even be "hot-swapped" while the application runs, without the knowledge or the involvement of the engaged processes. A prototype platform to demonstrate this capability is available at [2].
5 Semantics
Reo allows arbitrary user-defined channels as primitives; arbitrary mix of synchrony and asynchrony; and relational constraints between input and output. This makes Reo more expressive than, e.g., dataflow models, workflow models, and Petri nets. On the other hand, it makes the semantics of Reo quite non-trivial. Various models for the formal semantics of Reo have been developed, each to serve some specific purposes. In the rest of this section, we briefly describe the main ones.
5.1 Timed Data Streams
The first formal semantics of Reo was formulated based on the coalgebraic model of stream calculus [76, 75, 77]. In this semantics, the behavior of every connector (channel or more complex circuit) and every component is given
as a (maximal) relation on a set of timed-data-streams [22]. This yields an expressive compositional semantics for Reo where coinduction is the main definition and proof principle to reason about properties involving both data and time streams. The timed data stream model serves as the reference semantics for Reo.

Table 1 TDS semantics of Reo primitives

Sync: ⟨α, a⟩ Sync ⟨β, b⟩ ≡ α = β ∧ a = b
LossySync: ⟨α, a⟩ LossySync ⟨β, b⟩ ≡ β(0) = α(0) ∧ ⟨α′, a′⟩ LossySync ⟨β′, b′⟩ if a(0) = b(0); ⟨α′, a′⟩ LossySync ⟨β, b⟩ if a(0) < b(0)
empty FIFO1: ⟨α, a⟩ FIFO1 ⟨β, b⟩ ≡ α = β ∧ a < b < a′
FIFO1 initialized with x: ⟨α, a⟩ FIFO1(x) ⟨β, b⟩ ≡ β = x.α ∧ b < a < b′
SyncDrain: ⟨α, a⟩ SyncDrain ⟨β, b⟩ ≡ a = b
AsyncDrain: ⟨α, a⟩ AsyncDrain ⟨β, b⟩ ≡ a(0) ≠ b(0) ∧ ⟨α′, a′⟩ AsyncDrain ⟨β, b⟩ if a(0) < b(0); ⟨α, a⟩ AsyncDrain ⟨β′, b′⟩ if b(0) < a(0)
Filter(P): ⟨α, a⟩ Filter(P) ⟨β, b⟩ ≡ β(0) = α(0) ∧ b(0) = a(0) ∧ ⟨α′, a′⟩ Filter(P) ⟨β′, b′⟩ if α(0) ∈ P; ⟨α′, a′⟩ Filter(P) ⟨β, b⟩ otherwise
Merge: Mrg(α, a, β, b; γ, c) ≡ α(0) = γ(0) ∧ a(0) = c(0) ∧ Mrg(α′, a′, β, b; γ′, c′) if a(0) < b(0); β(0) = γ(0) ∧ b(0) = c(0) ∧ Mrg(α, a, β′, b′; γ′, c′) if a(0) > b(0)
Replicate: Rpl(α, a; β, b, γ, c) ≡ α = β ∧ α = γ ∧ a = b ∧ a = c
A stream over a set X is an infinite sequence of elements x ∈ X. The set of data streams DS consists of all streams over an uninterpreted set Data of data items. A time stream is a monotonically increasing sequence of non-negative real numbers; the set TS represents all time streams. A Timed Data Stream (TDS) is a twin pair of streams ⟨α, a⟩ in TDS = DS × TS consisting of a data stream α ∈ DS and a time stream a ∈ TS, with the interpretation that for all i ≥ 0, the observation of the data item α(i) occurs at the time moment a(i). We use a′ to represent the tail of a stream a, i.e., the stream obtained after removing the first element of a; and x.a to represent the stream whose first element is x and whose tail is a.

Table 1 shows the TDS semantics of the primitive channels in Figure 4, as well as that of the merge and replication behavior inherent in Reo nodes. The semantics of a Reo circuit is the relational composition of the relations that represent the semantics of its constituents (including the merge and replication in its nodes). This compositional construction, for instance, yields

XRout(α, a; β, b, γ, c) ≡ α(0) = β(0) ∧ a(0) = b(0) ∧ XRout(α′, a′; β′, b′, γ, c) if a(0) < c(0); α(0) = γ(0) ∧ a(0) = c(0) ∧ XRout(α′, a′; β, b, γ′, c′) if a(0) < b(0)

as the semantics of the circuit in Figure 8(a).
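The TDS relations can be spot-checked on finite prefixes of observations. The following sketch is ours, for illustration only, and is not part of the Reo tool set; it checks the constraints of the initially empty FIFO1 on finite prefixes of the observations at its source ⟨α, a⟩ and sink ⟨β, b⟩: the data streams agree, and a(i) < b(i) < a(i+1) for every observed i.

// Finite-prefix check of the FIFO1 TDS constraints: alpha = beta and a < b < a'.
final class Fifo1Check {
    static boolean holds(Object[] alpha, double[] a, Object[] beta, double[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            if (!alpha[i].equals(beta[i])) return false;              // same data stream
            if (!(a[i] < b[i])) return false;                         // input precedes its output
            if (i + 1 < a.length && !(b[i] < a[i + 1])) return false; // output precedes next input
        }
        return true;
    }

    public static void main(String[] args) {
        // A legal prefix: "x" enters at time 1 and leaves at 2; "y" enters at 3 and leaves at 5.
        System.out.println(holds(new Object[] {"x", "y"}, new double[] {1, 3},
                                 new Object[] {"x", "y"}, new double[] {2, 5}));  // prints true
    }
}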
5.2 Constraint Automata
Constraint automata provide an operational model for the semantics of Reo circuits [27]. The states of an automaton represent the configurations of its corresponding circuit (e.g., the contents of the FIFO channels), while the transitions encode its maximally-parallel stepwise behavior. The transitions are labeled with the maximal sets of nodes on which dataflow occurs simultaneously, and a data constraint (i.e., a boolean condition on the observed data values). The semantics of a Reo circuit is derived by composing the constraint automata of its constituents, through a special form of synchronized product of automata.

Constraint automata have been used for the verification of the properties of Reo circuits through model checking [6, 50, 30, 26]. Results on equivalence and containment of the languages of constraint automata [27] provide opportunities for analysis and optimization of Reo circuits. The constraint automata semantics of Reo is used to generate executable code for Reo [17]. A constraint automaton essentially captures all behavior alternatives of a Reo connector. Therefore, it can be used to generate a state machine implementing the behavior of Reo connectors, in a chosen target language, such as Java or C. Variants of the constraint automata model have been devised to capture time-sensitive behavior [13, 47, 48], probabilistic behavior [24], stochastic behavior [28], context-sensitive behavior [31, 38, 45], resource sensitivity [64], and QoS aspects [65, 15, 16, 70] of Reo connectors and composite systems.
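Ignoring data constraints, the structure of such an automaton is easy to picture. The sketch below is illustrative only and unrelated to the internal representation used by the Reo tools; it shows the data-abstract automata of a Sync channel and of a FIFO1 channel, each with source A and sink B.

import java.util.List;
import java.util.Set;

// A state records a configuration (e.g., the FIFO1 buffer being empty or full);
// each transition is labeled by the set of ports through which data flows together.
record Transition(String from, Set<String> firingPorts, String to) {}
record Automaton(String initial, List<Transition> transitions) {}

class ConstraintAutomatonExamples {
    // Sync from A to B: a single state; A and B always fire together.
    static final Automaton SYNC =
        new Automaton("q", List.of(new Transition("q", Set.of("A", "B"), "q")));

    // FIFO1 from A to B: "empty" accepts on A alone, "full" dispenses on B alone.
    static final Automaton FIFO1 =
        new Automaton("empty", List.of(
            new Transition("empty", Set.of("A"), "full"),
            new Transition("full", Set.of("B"), "empty")));

    // Composition (omitted here) is a synchronized product: joint steps must
    // agree on the ports that the two automata share.
}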
5.3 Connector Coloring
The Connector Coloring (CC) model describes the behavior of a Reo circuit in terms of the detailed dataflow behavior of its constituent channels and nodes [36]. The semantics of a Reo circuit is the set of all of its dataflow alternatives. Each such alternative is a consistent composition of the dataflow alternatives of each of its constituent channels and nodes, expressed in terms of (solid and dashed) colors that represent the basic flow and no-flow alternatives. A more sophisticated model using three colors is necessary to capture the context-sensitive behavior of primitives such as the LossySync channel. A proof-of-concept for this semantic model was built to show how it can coordinate Java threads as components. The CC model is also used in the implementation of a visualization tool that produces Flash animations depicting the behavior of a connector [38, 73].

Finding a consistent coloring for a circuit amounts to constraint satisfaction. Constraint solving techniques [9, 80] have been applied using the CC model to search for a valid global behavior of a given Reo connector [37]. In this approach, each connector is considered as a set of constraints, representing the colors of its individual constituents, where valid solutions correspond to a valid behavior for the current step. Distributed constraint solving
techniques can be used to adapt this constraint-based approach for distributed environments. The CC model is at the center of the distributed implementation of Reo [1, 73], where several engines, each executing a part of the same connector, run on different remote hosts. A distributed protocol based on the CC model guarantees that all engines running the various parts of the connector agree to collectively manifest one of its legitimate behavior alternatives.
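In the basic two-color variant, finding the dataflow alternatives of a circuit is indeed a small constraint-satisfaction problem: each node is colored flow or no-flow, and every channel lists the colorings of its two ends that it allows. The brute-force sketch below is a toy rendering of this idea (ours; the full CC model also handles merge and replication at nodes and a third color for context dependency), applied to a Sync channel from A to B feeding an empty FIFO1 from B to C.

import java.util.*;

final class Coloring {
    enum C { FLOW, NOFLOW }
    record Channel(String end1, String end2, Set<List<C>> allowed) {}

    // Enumerate all node colorings and keep those consistent with every channel.
    static List<Map<String, C>> consistent(List<String> nodes, List<Channel> channels) {
        List<Map<String, C>> solutions = new ArrayList<>();
        for (int mask = 0; mask < (1 << nodes.size()); mask++) {
            Map<String, C> coloring = new HashMap<>();
            for (int i = 0; i < nodes.size(); i++)
                coloring.put(nodes.get(i), ((mask >> i) & 1) == 1 ? C.FLOW : C.NOFLOW);
            boolean ok = channels.stream().allMatch(ch ->
                ch.allowed().contains(List.of(coloring.get(ch.end1()), coloring.get(ch.end2()))));
            if (ok) solutions.add(coloring);
        }
        return solutions;
    }

    public static void main(String[] args) {
        Channel sync = new Channel("A", "B",
            Set.of(List.of(C.FLOW, C.FLOW), List.of(C.NOFLOW, C.NOFLOW)));
        Channel emptyFifo1 = new Channel("B", "C",
            Set.of(List.of(C.FLOW, C.NOFLOW), List.of(C.NOFLOW, C.NOFLOW)));
        // Two alternatives survive: everything idle, or flow through A and B while C idles.
        System.out.println(consistent(List.of("A", "B", "C"), List.of(sync, emptyFifo1)));
    }
}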
5.4 Other Models
Other formalisms have also been used to investigate the various aspects of the semantics of Reo. Plotkin's style of Structural Operational Semantics (SOS) is followed in [71] for the formal semantics of Reo. This semantics was used in a proof-of-concept tool developed in the rewriting logic language of Maude, using the simulation toolkit. The Tile Model [41] semantics of Reo offers a uniform setting for representing not only the ordinary dataflow execution of Reo connectors, but also their dynamic reconfigurations [14]. An abstraction of the constraint automata is used in [61] to serve as a common semantics for Reo and Petri nets. The application of intuitionistic temporal linear logic (ITLL) as a basis for the semantics of Reo is studied in [33], which also shows the close semantic link between Reo and the zero-safe variant of Petri nets. A comparison of Orc [69, 49] and Reo appears in [74].

The semantics of Reo has also been formalized in the Unifying Theories of Programming (UTP) [44]. The UTP approach provides a family of algebraic operators that interpret the composition of Reo connectors more explicitly than in other approaches [66]. This semantic model can be used for proving properties of connectors, such as equivalence and refinement relations between connectors, and as a reference document for developing tool support for Reo. The UTP semantics for Reo opens the possibility to integrate reasoning about Reo with reasoning about component specifications/implementations in other languages for which UTP semantics is available. The UTP semantics of Reo has been used for fault-based test case generation [8].

Reo offers operations to dynamically reconfigure the topology of its coordinator circuits, thereby changing the coordination protocol of a running application. A semantic model for Reo cognizant of its reconfiguration capability, a logic for reasoning about reconfigurations, together with its model checking algorithm, are presented in [34]. Graph transformation techniques have been used in combination with the connector coloring model to formalize the dynamic reconfiguration semantics of Reo circuits triggered by dataflow [52, 51].
6 Tools
Tool support for Reo consists of a set of Eclipse plug-ins that together comprise the Eclipse Coordination Tools (ECT) visual programming environment [2]. The Reo graphical editor supports drag-and-drop graphical composition and editing of Reo circuits. This editor also serves as a bridge to other tools, including animation and code generation plugins. The animation plugin automatically generates a graphical animation of the flow of data in a Reo circuit, which provides an intuitive insight into its behavior through visualization of how it works. This tool maps the colors of the CC semantics to visual representations in the animations, and represents the movement of data through the connector [38, 73]. Another graphical editor in ECT supports drag-and-drop construction and editing of constraint automata and their variants. It includes tools to perform product and hiding on constraint automata for their composition. A converter plugin automatically generates the CA model of a Reo circuit.

Several model checking tools are available for analyzing Reo. The Vereofy model checker, integrated in ECT, is based on constraint automata [6, 30, 50, 25, 26]. Vereofy supports two input languages: (1) the Reo Scripting Language (RSL), a textual language for defining Reo circuits, and (2) the Constraint Automata Reactive Module Language (CARML), a guarded command language for textual specification of constraint automata. Properties of Reo circuits can be specified for verification by Vereofy in a language based on Linear Temporal Logic (LTL), or on a variant of Computation Tree Logic (CTL) called Alternating-time Stream Logic (ASL). Vereofy extends these logics with regular expression constructs to express data constraints. Translation of Reo circuits and constraint automata into RSL and CARML is automatic, and the counter-examples found by Vereofy can automatically be mapped back into ECT and displayed as Reo circuit animations.

Timed Constraint Automata (TCA) were devised as the operational semantics of timed Reo circuits [13]. A SAT-based bounded model checker exists for verification of a variant of TCA [47, 48], although it is not yet fully integrated in ECT. It represents the behavior of a TCA by formulas in propositional logic with linear arithmetic, and uses a SAT solver for their analysis.

Another means for verification of Reo is made possible by a transformation bridge into the mCRL2 toolset [3, 42]. The mCRL2 verifier relies on the parameterized boolean equation system (PBES) solver to encode model checking problems, such as verifying first-order modal μ-calculus formulas on linear process specifications. An automated tool integrated in ECT translates Reo models into mCRL2 and provides a bridge to its tool set. This translation and its application to the analysis of workflows modeled in Reo are discussed in [55, 56, 57]. Through mCRL2, it is possible to verify the behavior of timed Reo circuits, or Reo circuits with more elaborate data-dependent behavior than Vereofy supports.
A CA code generator plugin produces executable Java code from a constraint automaton as a single sequential thread. A C/C++ code generator is under development. In this setting, components communicate via put and get operations on so-called SyncPoints that implement the semantics of a constraint automaton port, using common concurrency primitives. The tool also supports loading constraint automata descriptions at runtime, which is useful for deploying Reo coordinators in Java application servers, e.g., Tomcat, for applications such as mashup execution [54, 63].

A distributed implementation of Reo exists [1] as a middleware in the actor-based language Scala [72], which generates Java source code. A preliminary integration of this distributed platform into ECT provides the basic functionality for distributed deployment through extensions of the Reo graphical editor [73]. A set of ECT plugin tools is under development to support coordination and composition of Web services using Reo. ECT plugins are available for automatic conversion of coordination and concurrency models expressed as UML sequence diagrams [19, 20], BPMN diagrams [18], and BPEL source code into Reo circuits [32].

Tools are integrated in ECT for automatic generation of Quantified Intentional Constraint Automata (QIA) from Reo circuits annotated with QoS properties, and subsequent automatic translation of the resulting QIA to Markov Chain models [15, 16, 70]. A bridge to Prism [4] allows further analysis of the resulting Markov chains [21]. Of course, using Markov chains for the analysis of the QoS properties of a Reo circuit (and its environment) is possible only when the stochastic variables representing those QoS properties can be modeled by exponential distributions. The QIA, however, remain oblivious to the (distribution) types of stochastic variables. A discrete event simulation engine integrated in ECT supports a wide variety of more general distributions for the analysis of the QoS properties of Reo circuits [46].

Based on algebraic graph transformations, a reconfiguration engine is available as an ECT plugin that supports dynamic reconfiguration of distributed Reo circuits triggered by dataflow [17, 51]. It currently works with the Reo animation engine in ECT, and will be integrated in the distributed implementation of Reo.
7 Concluding Remarks
Action and interaction offer dual perspectives on concurrency. Execution of actions involving shared resources by independent processes that run concurrently induces pairings of those actions, along with an ordering of those pairs, that we commonly refer to as interaction. Dually, interaction can be seen as an external relation that constrains the pairings of the actions of its engaged processes and their ordering. The traditional action-centric models of concurrency generally make interaction protocols intangible by-products,
implied by nebulous specifications scattered throughout the bodies of their engaged processes. Specification, manipulation, and analysis of such protocols are possible only indirectly, through specification, manipulation, and analysis of those scattered actions, which is often made even more difficult by the entanglement of the data-dependent control flow that surrounds those actions. The most challenging aspect of a concurrent system is what its interaction protocol does. In contrast to the how that an imperative programming language specifies, declarative programming, e.g., in functional and constraint languages, makes it easier to directly specify, manipulate, and analyze the properties of what a program does, because what is precisely what they express.

Analogously, in an interaction-centric model of concurrency, interaction protocols become tangible first-class constructs that exist explicitly as (declarative) constraints outside and independent of the processes that they engage. Specification of interaction protocols as declarative constraints makes them easier to manipulate and analyze directly, and makes it possible to compose interaction protocols and reuse them.

The coordination language Reo is a premier example of a formalism that embodies an interaction-centric model of concurrency. We used examples of Reo circuits to illustrate the flavor of programming pure interaction protocols. Expressed as explicit declarative constraints, protocols espouse exogenous coordination. Our examples showed the utility of exogenous coordination in yielding loosely-coupled flexible systems whose components and protocols can be easily modified, even at run time. We described a set of prototype support tools developed as plugins to provide a visual programming environment within the framework of Eclipse, and presented an overview of the formal foundations of the work behind these tools.
References
1. Distributed Reo, http://reo.project.cwi.nl/cgi-bin/trac.cgi/reo/wiki/Redrum/BigPicture
2. Eclipse coordination tools home page, http://reo.project.cwi.nl/cgi-bin/trac.cgi/reo/wiki/Tools
3. mcrl2 home page, http://www.mcrl2.org
4. Prism, http://www.prismmodelchecker.org
5. Reo home page, http://reo.project.cwi.nl
6. Vereofy home page, http://www.vereofy.de/
7. Agha, G.: Actors: A Model of Concurrent Computation in Distributed Systems. MIT Press, Cambridge (1986)
8. Aichernig, B.K., Arbab, F., Astefanoaei, L., de Boer, F.S., Meng, S., Rutten, J.J.M.M.: Fault-based test case generation for component connectors. In: Chin, W.-N., Qin, S. (eds.) TASE, pp. 147–154. IEEE Computer Society, Los Alamitos (2009)
9. Apt, K.: Principles of Constraint Programming. Cambridge University Press, Cambridge (2003)
10. Arbab, F.: Reo: a channel-based coordination model for component composition. Mathematical Structures in Comp. Sci. 14(3), 329–366 (2004)
11. Arbab, F.: Abstract Behavior Types: a foundation model for components and their composition. Sci. Comput. Program. 55(1-3), 3–52 (2005)
12. Arbab, F., Astefanoaei, L., de Boer, F.S., Dastani, M., Meyer, J.-J.C., Tinnemeier, N.A.M.: Reo connectors as coordination artifacts in 2APL systems. In: Bui, T.D., Ho, T.V., Ha, Q.T. (eds.) PRIMA 2008. LNCS (LNAI), vol. 5357, pp. 42–53. Springer, Heidelberg (2008)
13. Arbab, F., Baier, C., de Boer, F.S., Rutten, J.J.M.M.: Models and temporal logical specifications for timed component connectors. Software and System Modeling 6(1), 59–82 (2007)
14. Arbab, F., Bruni, R., Clarke, D., Lanese, I., Montanari, U.: Tiles for Reo. In: Corradini, A., Montanari, U. (eds.) WADT 2008. LNCS, vol. 5486, pp. 37–55. Springer, Heidelberg (2009)
15. Arbab, F., Chothia, T., Meng, S., Moon, Y.-J.: Component connectors with QoS guarantees. In: Murphy, A.L., Vitek, J. (eds.) COORDINATION 2007. LNCS, vol. 4467, pp. 286–304. Springer, Heidelberg (2007)
16. Arbab, F., Chothia, T., van der Mei, R., Meng, S., Moon, Y.-J., Verhoef, C.: From coordination to stochastic models of QoS. In: Field, Vasconcelos [39], pp. 268–287
17. Arbab, F., Koehler, C., Maraikar, Z., Moon, Y.-J., Proença, J.: Modeling, testing and executing Reo connectors with the Eclipse Coordination Tools. In: Tool demo session at FACS 2008 (2008)
18. Arbab, F., Kokash, N., Meng, S.: Towards using Reo for compliance-aware business process modeling. In: Margaria, T., Steffen, B. (eds.) ISoLA. Communications in Computer and Information Science, vol. 17, pp. 108–123. Springer, Heidelberg (2008)
19. Arbab, F., Meng, S.: Synthesis of connectors from scenario-based interaction specifications. In: Chaudron, M.R.V., Szyperski, C., Reussner, R. (eds.) CBSE 2008. LNCS, vol. 5282, pp. 114–129. Springer, Heidelberg (2008)
20. Arbab, F., Meng, S., Baier, C.: Synthesis of Reo circuits from scenario-based specifications. Electr. Notes Theor. Comput. Sci. 229(2), 21–41 (2009)
21. Arbab, F., Meng, S., Moon, Y.-J., Kwiatkowska, M.Z., Qu, H.: Reo2mc: a tool chain for performance analysis of coordination models. In: van Vliet, H., Issarny, V. (eds.) ESEC/SIGSOFT FSE, pp. 287–288. ACM, New York (2009)
22. Arbab, F., Rutten, J.J.M.M.: A coinductive calculus of component connectors. In: Wirsing, M., Pattinson, D., Hennicker, R. (eds.) WADT 2003. LNCS, vol. 2755, pp. 34–55. Springer, Heidelberg (2003)
23. Baeten, J.C.M., Weijland, W.P.: Process Algebra. Cambridge University Press, Cambridge (1990)
24. Baier, C.: Probabilistic models for Reo connector circuits. Journal of Universal Computer Science 11(10), 1718–1748 (2005)
25. Baier, C., Blechmann, T., Klein, J., Klüppelholz, S.: Formal verification for components and connectors. In: de Boer, F.S., Bonsangue, M.M., Madelaine, E. (eds.) FMCO 2008. LNCS, vol. 5751, pp. 82–101. Springer, Heidelberg (2009)
26. Baier, C., Blechmann, T., Klein, J., Klüppelholz, S.: A uniform framework for modeling and verifying components and connectors. In: Field, Vasconcelos [39], pp. 247–267
27. Baier, C., Sirjani, M., Arbab, F., Rutten, J.J.M.M.: Modeling component connectors in Reo by constraint automata. Sci. Comput. Program. 61(2), 75–113 (2006)
28. Baier, C., Wolf, V.: Stochastic reasoning about channel-based component connectors. In: Ciancarini, P., Wiklicky, H. (eds.) COORDINATION 2006. LNCS, vol. 4038, pp. 1–15. Springer, Heidelberg (2006)
29. Bergstra, J.A., Klop, J.W.: Process algebra for synchronous communication. Information and Control 60, 109–137 (1984)
30. Blechmann, T., Baier, C.: Checking equivalence for Reo networks. Electr. Notes Theor. Comput. Sci. 215, 209–226 (2008)
31. Bonsangue, M.M., Clarke, D., Silva, A.: Automata for context-dependent connectors. In: Field, Vasconcelos [39], pp. 184–203
32. Changizi, B., Kokash, N., Arbab, F.: A unified toolset for business process model formalization. In: Proc. of the 7th International Workshop on Formal Engineering approaches to Software Components and Architectures, FESCA 2010 (2010); satellite event of ETAPS
33. Clarke, D.: Coordination: Reo, nets, and logic. In: de Boer, F.S., Bonsangue, M.M., Graf, S., de Roever, W.-P. (eds.) FMCO 2007. LNCS, vol. 5382, pp. 226–256. Springer, Heidelberg (2008)
34. Clarke, D.: A basic logic for reasoning about connector reconfiguration. Fundam. Inform. 82(4), 361–390 (2008)
35. Clarke, D., Costa, D., Arbab, F.: Modelling coordination in biological systems. In: Margaria, T., Steffen, B. (eds.) ISoLA 2004. LNCS, vol. 4313, pp. 9–25. Springer, Heidelberg (2006)
36. Clarke, D., Costa, D., Arbab, F.: Connector colouring I: Synchronisation and context dependency. Sci. Comput. Program. 66(3), 205–225 (2007)
37. Clarke, D., Proença, J., Lazovik, A., Arbab, F.: Deconstructing Reo. Electr. Notes Theor. Comput. Sci. 229(2), 43–58 (2009)
38. Costa, D.: Formal Models for Context Dependent Connectors for Distributed Software Components and Services. Leiden University (2010)
39. Field, J., Vasconcelos, V.T. (eds.): COORDINATION 2009. LNCS, vol. 5521. Springer, Heidelberg (2009)
40. Fokkink, W.: Introduction to Process Algebra. Texts in Theoretical Computer Science, An EATCS Series. Springer, Heidelberg (1999)
41. Gadducci, F., Montanari, U.: The tile model. In: Plotkin, G.D., Stirling, C., Tofte, M. (eds.) Proof, Language and Interaction: Essays in Honour of Robin Milner, pp. 133–166. MIT Press, Boston (2000)
42. Groote, J.F., Mathijssen, A., Reniers, M.A., Usenko, Y.S., van Weerdenburg, M.: The formal specification language mcrl2. In: Brinksma, E., Harel, D., Mader, A., Stevens, P., Wieringa, R. (eds.) MMOSS. Dagstuhl Seminar Proceedings, vol. 06351, Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI), Schloss Dagstuhl, Germany (2006)
43. Hoare, C.A.R.: Communicating Sequential Processes. Prentice-Hall, Englewood Cliffs (1985)
44. Hoare, C.A.R., Jifeng, H.: Unifying Theories of Programming. Prentice Hall, London (1998)
45. Izadi, M., Bonsangue, M.M., Clarke, D.: Modeling component connectors: Synchronisation and context-dependency. In: Cerone, A., Gruner, S. (eds.) SEFM, pp. 303–312. IEEE Computer Society, Los Alamitos (2008)
46. Kanters, O.: QoS analysis by simulation in Reo. Vrije Universiteit Amsterdam (2010)
47. Kemper, S.: SAT-based Verification for Timed Component Connectors. Electr. Notes Theor. Comput. Sci. 255, 103–118 (2009)
48. Kemper, S.: Compositional construction of real-time dataflow networks. In: Clarke, D., Agha, G. (eds.) COORDINATION 2010. LNCS, vol. 6116, pp. 92–106. Springer, Heidelberg (2010)
49. Kitchin, D., Quark, A., Cook, W.R., Misra, J.: The Orc programming language. In: Lee, D., Lopes, A., Poetzsch-Heffter, A. (eds.) FMOODS 2009. LNCS, vol. 5522, pp. 1–25. Springer, Heidelberg (2009)
50. Klüppelholz, S., Baier, C.: Symbolic model checking for channel-based component connectors. Electr. Notes Theor. Comput. Sci. 175(2), 19–37 (2007)
51. Koehler, C., Arbab, F., de Vink, E.P.: Reconfiguring distributed Reo connectors. In: Corradini, A., Montanari, U. (eds.) WADT 2008. LNCS, vol. 5486, pp. 221–235. Springer, Heidelberg (2009)
52. Koehler, C., Costa, D., Proença, J., Arbab, F.: Reconfiguration of Reo connectors triggered by dataflow. In: Ermel, C., Heckel, R., de Lara, J. (eds.) Proceedings of the 7th International Workshop on Graph Transformation and Visual Modeling Techniques (GT-VMT 2008), vol. 10, pp. 1–13 (2008); ECEASST, ISSN 1863-2122, http://www.easst.org/eceasst/
53. Koehler, C., Lazovik, A., Arbab, F.: Reoservice: Coordination modeling tool. In: Krämer et al. [60], pp. 625–626
54. Koehler, C., Lazovik, A., Arbab, F.: Reoservice: Coordination modeling tool. In: Krämer, B.J., Lin, K.-J., Narasimhan, P. (eds.) ICSOC 2007. LNCS, vol. 4749, pp. 625–626. Springer, Heidelberg (2007)
55. Kokash, N., Krause, C., de Vink, E.P.: Data-aware design and verification of service compositions with Reo and mCRL2. In: SAC 2010: Proc. of the 2010 ACM Symposium on Applied Computing, pp. 2406–2413. ACM, New York (2010)
56. Kokash, N., Krause, C., de Vink, E.P.: Time and data-aware analysis of graphical service models in Reo. In: SEFM 2010: Proc. 8th IEEE International Conference on Software Engineering and Formal Methods. IEEE, Los Alamitos (to appear, 2010)
57. Kokash, N., Krause, C., de Vink, E.P.: Verification of context-dependent channel-based service models. In: FMCO 2009: Formal Methods for Components and Objects: 8th International Symposium. LNCS. Springer, Heidelberg (to appear, 2010)
58. Kokash, N., Arbab, F.: Formal behavioral modeling and compliance analysis for service-oriented systems. In: de Boer, F.S., Bonsangue, M.M., Madelaine, E. (eds.) FMCO 2008. LNCS, vol. 5751, pp. 21–41. Springer, Heidelberg (2009)
59. Kokash, N., Arbab, F.: Applying Reo to service coordination in long-running business transactions. In: Shin, S.Y., Ossowski, S. (eds.) SAC, pp. 1381–1382. ACM, New York (2009)
60. Krämer, B.J., Lin, K.-J., Narasimhan, P. (eds.): ICSOC 2007. LNCS, vol. 4749. Springer, Heidelberg (2007)
61. Krause, C.: Integrated structure and semantics for Reo connectors and Petri nets. In: ICE 2009: Proc. 2nd Interaction and Concurrency Experience Workshop. Electronic Proceedings in Theoretical Computer Science, vol. 12, p. 57 (2009)
62. Lazovik, A., Arbab, F.: Using Reo for service coordination. In: Krämer et al. [60], pp. 398–403
63. Maraikar, Z., Lazovik, A.: Reforming mashups. In: Proceedings of the 3rd European Young Researchers Workshop on Service Oriented Computing (YR-SOC 2008), June 2008. Imperial College, London (2008)
64. Meng, S., Arbab, F.: On resource-sensitive timed component connectors. In: Bonsangue, M.M., Johnsen, E.B. (eds.) FMOODS 2007. LNCS, vol. 4468, pp. 301–316. Springer, Heidelberg (2007)
65. Meng, S., Arbab, F.: QoS-driven service selection and composition. In: Billington, J., Duan, Z., Koutny, M. (eds.) ACSD, pp. 160–169. IEEE, Los Alamitos (2008)
66. Meng, S., Arbab, F.: Connectors as designs. Electr. Notes Theor. Comput. Sci. 255, 119–135 (2009)
67. Milner, R.: A Calculus of Communication Systems. LNCS, vol. 92. Springer, Heidelberg (1980)
68. Milner, R.: Elements of interaction - Turing award lecture. Commun. ACM 36(1), 78–89 (1993)
69. Misra, J., Cook, W.R.: Computation orchestration. Software and System Modeling 6(1), 83–110 (2007)
70. Moon, Y.-J., Silva, A., Krause, C., Arbab, F.: A compositional semantics for stochastic Reo connectors. In: Proceedings of the 9th International Workshop on the Foundations of Coordination Languages and Software Architectures, FOCLASA (2010)
71. Mousavi, M.R., Sirjani, M., Arbab, F.: Formal semantics and analysis of component connectors in Reo. Electr. Notes Theor. Comput. Sci. 154(1), 83–99 (2006)
72. Odersky, M.: Report on the programming language Scala (2002), http://lamp.epfl.ch/~odersky/scala/reference.ps
73. Proença, J.: Dreams: A Distributed Framework for Synchronous Coordination. Leiden University (2011)
74. Proença, J., Clarke, D.: Coordination models orc and reo compared. Electr. Notes Theor. Comput. Sci. 194(4), 57–76 (2008)
75. Rutten, J.J.M.M.: Behavioural differential equations: A coinductive calculus of streams, automata, and power series. TCS: Theoretical Computer Science 308 (2003)
76. Rutten, J.J.M.M.: Elements of stream calculus (an extensive exercise in coinduction). Electr. Notes Theor. Comput. Sci. 45 (2001)
77. Rutten, J.J.M.M.: A coinductive calculus of streams. Mathematical Structures in Computer Science 15(1), 93–147 (2005)
78. Sangiorgi, D., Walker, D.: PI-Calculus: A Theory of Mobile Processes. Cambridge University Press, New York (2001)
79. Wegner, P.: Coordination as constrained interaction (extended abstract). In: Ciancarini, P., Hankin, C. (eds.) COORDINATION 1996. LNCS, vol. 1061, pp. 28–33. Springer, Heidelberg (1996)
80. Yokoo, M.: Distributed Constraint Satisfaction: Foundations of Cooperation in Multi-Agent Systems. Springer Series on Agent Technology. Springer, New York (2000)
Enterprise Architecture as Language Gary F. Simons, Leon A. Kappelman, and John A. Zachman
Gary F. Simons, Chief Research Officer, SIL International, 7500 W. Camp Wisdom Rd., Dallas, TX 75236. Tel.: 972.708.7487; Fax: 972.708.7546. E-mail: [email protected]; http://www.sil.org/~simonsg/

Leon A. Kappelman (contact author), Professor of Information Systems; Director Emeritus, IS Research Center; Founding Chair, SIM Enterprise Architecture Working Group; College of Business, University of North Texas, 1155 Union Circle #305249, Denton, Texas 76203-5017. Voice: 940-565-4698; Fax: 940-565-4935. E-mail: [email protected]; http://courses.unt.edu/kappelman/

John A. Zachman, Zachman International, 2222 Foothill Blvd., Suite 337, La Canada, CA 91011, U.S.A. Tel. & Fax: 818-244-3763. E-mail: [email protected]; http://www.ZachmanInternational.com

1 On the Verge of Major Business Re-Engineering

"Insanity is doing the same thing over and over again and expecting different results." — Albert Einstein

Seven years ago the senior leadership at SIL International (see Chart 1), a not-for-profit whose purpose is to facilitate language-based development among the peoples of the world, determined that it was time to build an integrated Enterprise Information System. There were three precipitating factors: mission-critical IT systems were almost twenty years old and on the verge of obsolescence, the IT landscape was dotted with dozens of silo systems, and commitments to new strategic directions demanded significant business re-engineering.
John Zachman made a site visit to help launch an enterprise architecture initiative. SIL learned from him that architecture (see Chart 2) is the age-old discipline that makes it possible for humankind to construct complex systems. If an organization wants to build something that is highly complex in such a way that what the builder builds is aligned with what the owner actually has in mind (whether it be a skyscraper, an airplane, or an information system), then it needs a designer to create a complete set of blueprints to which all the stakeholders agree and against which all will work. Perhaps even more important in this age of increasingly rapid change is that architecture is the discipline that makes it possible for an organization to maintain a highly complex system once it is operational. Before the functioning building or airplane or information system can safely, efficiently, and effectively be changed, it is necessary for the owner, designer, and builder to first make the changes on the blueprints and come to agreement that the proposed changes will achieve what the owner wants and can be implemented by the builder.

Chart 1: What is SIL International?
SIL is a not-for-profit, academic, faith-based organization committed to the empowerment of indigenous communities worldwide through language development efforts. SIL is focused on the role of language and culture in effective development. By facilitating language-based development, SIL International serves the peoples of the world through research, translation, and literacy. Since its founding in 1934 SIL has worked in 1,800 languages, in 70 countries, and grown to a team of 5,000 from 60 countries.
2 Nothing so Practical as Good Theory “In the case of information and communication technologies … investments in associated intangible capital … are quite important indeed.” — Federal Reserve Chairman Ben Bernanke (MIT commencement, June 2006) The Zachman Framework for Enterprise Architecture (see Chart 3 and Figure 1) seemed to offer a good theory for what the blueprints of an enterprise should look like: primitive models (see Chart 4) in each of the cells formed by the intersection of rows for stakeholder perspectives (e.g., owner, designer, builder) with columns for interrogative abstractions (i.e., what, how, where, who, when, why).
As the SIL leadership set out to re-engineer the organization they were inspired by Zachman's vision of an enterprise under control through a complete set of aligned blueprints. SIL's leadership aspired to Jeanne Ross' conclusion that "the payback for enterprise IT architecture efforts is strategic alignment between IT and the business" (p. 43). In an application of social psychologist Kurt Lewin's famous maxim, "There is nothing so practical as a good theory," they saw the practical value of the Zachman Framework and adopted it as their working theory. Conversely, there is nothing so good for the development of theory as good application in practice, and Zachman with his associate Stan Locke entered into a relationship with SIL to help SIL put theory into practice while SIL helped them refine theory through practice. Following Kotter's (1996) eight-stage process for managing major change, SIL formed a VP-level guidance team chaired by the Associate Executive Director for Administration. Trained and advised by Locke, this team has met regularly since 2000 to guide the process of architecting a re-engineered enterprise.

Chart 2: What is Architecture?
Architecture is the set of descriptive representations that are required in order to create an object. Architecture is also the baseline for changing the object once it is created, IF you retain the descriptive representations used in its creation and IF you ensure that the descriptive representations are always maintained consistent with the created object (i.e., the instantiation). The Roman Coliseum is not architecture, it is the result of architecture, an implementation. If the object you are trying to create is so simple that you can see it at a glance in its entirety and remember all at one time how all of its components fit together at excruciating levels of detail, you don't need architecture. You can "wing it" and see if it works. It is only when the object you are trying to create is complex to the extent that you can't see and remember all the details of the implementation at once, and only when you want to accommodate on-going change to the instantiated object, that architecture is imperative. (Zachman 1987, 2001, 2007)
Chart 3: What is the Framework for Enterprise Architecture? The Framework for Enterprise Architecture (the “Zachman Framework”, see Figure 1) is simply a schema, a classification scheme for descriptive representations of objects with enterprise names on the descriptions. It is represented in two dimensions as a table or matrix consisting of six columns and five rows. The schema is “normalized” so that no one fact can show up in more than one cell. The columns (nicknamed “one” through “six” from left to right) answer the six interrogatives — what, how, where, who, when, and why, respectively — and correspond to the universal set of descriptive representations for describing any and all complex industrial products (industry-specific variations in terminology notwithstanding): Bills of Materials, Functional Specifications, Drawings, Operating Instructions, Timing Diagrams, and Design Objectives. These are termed “abstractions” in the sense that out of the total set of relevant descriptive characteristics of the object, we “abstract” one of them at a time for producing a formal, explicit, description. The rows (nicknamed from top to bottom “one” through “five”) represent the set of descriptions labeled “perspectives” in the sense that each abstraction is created for different audiences: visionaries or planners, executives or owners, architects or designers, engineers or builders, and implementers or sub-contractors respectively. Each of the six abstractions has five different manifestations depending upon the perspective of the intended audience for whom it is created. These are the industrial product equivalents of Scoping Boundaries (“Concepts Package”), Requirements, Schematics (Engineering descriptions), Blueprints (Manufacturing Engineering descriptions), and Tooling configurations; and these correspond to the enterprise equivalents of boundary or scope, business model, logical model, physical or technology model, and tooling configurations. Enterprise Architecture is the total set of intersections between the abstractions and the perspectives that constitutes the total set of descriptive representations relevant for describing an enterprise: And the ENTERPRISE itself is the implementation, the instantiation, the end result of doing Enterprise Architecture, and is depicted in the framework as row six. (Zachman 1987, 2001, 2007; Zachman & Sowa 1992)
3 Architecture Out of Control

"The problem with communication ... is the illusion that it has been accomplished." — George Bernard Shaw

SIL enjoyed excellent buy-in and participation by senior leadership and IT staff, and found that Zachman's framework was a powerful tool for helping conceptualize what they were doing. But SIL also found that they lacked the tools to deliver all the blueprints.

Fig. 1 The Zachman Framework for Enterprise Architecture. (Rows, by perspective: Scope (Contextual) / Planner; Enterprise Model (Conceptual) / Owner; System Model (Logical) / Designer; Technology Model (Physical) / Builder; Detailed Representations (Out-of-Context) / Sub-Contractor; and the Functioning Enterprise. Columns, by abstraction: Data / What; Function / How; Network / Where; People / Who; Time / When; Motivation / Why.)

Only in Zachman's leftmost column one of the
framework (i.e., data) did they succeed in creating formal blueprints. The entity-relationship diagrams (Chen, 1976) commonly used by database designers are compatible with Zachman's notion of a primitive thing-relationship-thing model. Thus SIL was able to achieve alignment and control in column one by using a popular entity-relationship modeling tool. But SIL found nothing comparable for the other five columns (process, location, organization, timing, and motivation). It turns out that existing modeling techniques, although useful for other purposes, were not well suited since they did not produce primitive models for the single normalized cells of Zachman's framework. Rather, they produced composite models combining elements from multiple rows or columns of the framework. An obvious alternative would be to use a general drawing program to simply draw the models. SIL tried this, but it did not work. Unlike the entity-relationship tool which was inherently compatible with the Zachman metamodel for column one and thus could not generate anything but a compatible model no matter who used it, a general drawing program is unconstrained and cannot guarantee conformity with the framework or consistency between practitioners. Another advantage of the entity-relationship tool was that it is based on a single underlying knowledge structure that kept the owner, designer, and builder views of the blueprints in alignment. With the general drawing tool, however, once drawings were created, it was virtually impossible to keep them maintained and aligned. In order to give guidance to system builders, some models were
described in documents and spreadsheets rather than diagrams, but these were similarly unconstrained and subject to all the same shortcomings. For lack of tools to handle the models in columns two through six, five-sixths of SIL's architecture was out of control.

Chart 4: Primitive and Composite Models: Why things go bump in the night.
A "primitive" model is a model in one variable — the combination of one abstraction with one perspective — that is an artifact specific to one cell of the Zachman Framework. It is the raw material for doing engineering and architecture. In contrast, a "composite" model is comprised of more than one abstraction and/or more than one perspective. Implementations are the instantiation of composite, multi-variable models. Implementations are manufacturing, the creation of the end result. An instantiation, by definition, is a composite. An enterprise, an information system, and a computer program are instantiations and therefore composites. The question turns out to be, how did you create the implementation instance? Was it engineered (architected) from primitive models or did you simply create the implementation ad hoc (i.e., it was implemented but NOT architected with primitives)? If you are not creating "enterprise-wide" primitives, you risk creating implementations that will not integrate into the enterprise as a whole. You can manufacture parts of the whole iteratively and incrementally; however, they must be engineered to fit together or they are not likely to fit together (be aligned or easily integrated). Enterprise-wide integration and alignment do not happen by accident. They must be engineered (architected). (Zachman 1987, 2001, 2007)
4 Enterprise Architecture as a Language Problem “In the beginning was the Word.” — John 1:1 (King James Bible) Why didn’t the drawing approach work? Modeling is about expressing ideas, not about drawing pictures. Thus the solution to the modeling problem is even older than architecture — the age-old discipline that makes it possible for humans to express ideas with precision is language. Language is the source of our ability to create, our power to wield ideas, and our freedom to build a better future. Ironically, language achieves this freedom by conventionalizing a strong set of constraints on how words and sentences can be formed. Paradoxically, language uses constraints to unleash freedom of expression. Consider that in any one language all the possible speech sounds are constrained to a relatively small subset that are actually used, syllable patterns constrain the combinations of sounds that
could possibly be words, conventional associations of meaning constrain which of those sequences actually are words, and rules of grammar constrain the order in which words combine to express larger thoughts. By analogy, in order to unleash the creativity, power, and freedom that are the promise of enterprise architecture, an enterprise needs to employ a constrained language for enterprise modeling. The metamodels of the Zachman Framework are too generic to support detailed engineering. This is by design since the framework is a classification system, not a methodology. In order to develop a methodology appropriate for its own use, an enterprise needs to adapt the framework to its specific context by adding both detail and constraint to Zachman's generic standard for enterprise architecture. The Enterprise Architecture Standards (Zachman, 2006) define the notion of an elaboration of the framework. The allowed elaborations are:
● Alias a standard thing or relationship.
● Add named subtypes of standard things and relationships.
● Name the supported integrations between columns.
● Add named attributes to a type of thing or relationship or integration.
Such elaborations of the metamodels do not violate the standard framework as long as they follow a dumb-down rule that states, “When the elaborations are backed out of an elaborated model, the result must be a model that conforms to the standard metamodel.”
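To make the dumb-down rule concrete, here is a minimal sketch of the idea in XML form (an XML-based realization is described in the next section); all element, attribute, and identifier names in this sketch are hypothetical, not taken from the standards or from any actual tool:

<!-- Elaborated model: <publication> is a named subtype of a generic standard thing,
     and <fedBy> a named relationship (hypothetical names for illustration) -->
<publication id="p2">
  <fedBy target="p1"/>
</publication>

<!-- Backing the elaboration out must leave a model that still conforms to the
     standard metamodel: a generic thing with a generic relationship -->
<thing id="p2">
  <relatedTo target="p1"/>
</thing>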
5 GEM: A Language for Enterprise Modeling “Obedience to a law which we prescribe to ourselves is liberty.” — Jean-Jacques Rousseau (The Social Contract, 1762) To gain control of their enterprise architecture SIL created GEM — a system for Generic Enterprise Modeling. The complete system consists of a methodology, a repository, and a workbench, but at the center of all these is a language that is formally an elaboration of the Zachman Framework metamodel as defined in the Enterprise Architecture Standards (Zachman, 2006). The GEM language is implemented as an application of XML. By analogy to a programming language, the architect writes XML source code to express the semantics (owner view) and logic (designer view) of a system — including things, relationships, integrations, transformations, added detail, and prose definitions. The system compiles the XML source into the graphic primitive models for each cell of the framework. The system also compiles the XML source into “textual models” for each cell — HTML documents that provide human-readable descriptions. For example, Figure 2 shows a fragment from the owner-level process model (that is, the intersection of Row Two and Column Two) for the subsystem that maintains and produces Ethnologue: Languages of the World (Gordon, 2005). The Ethnologue is a 1,272-page reference book published by SIL that catalogs all known languages of the present-day world. Now in its fifteenth edition, the Ethnologue identifies 6,912 living languages, both spoken and signed.
<inventory id="...">
  <name>World Language Inventory</name>
  <description>The process that maintains the most up-to-date information about the existence and status of every known language.</description>
  ...
</inventory>
<publication id="...">
  <name>Ethnologue Edition</name>
  <description>The process that produces a particular, published edition of the catalog of all known living languages of the world.</description>
  <fedBy ... />
  <producedAt location="c3.HQ"/>
  <hasTiming ... />
  ...
</publication>
Fig. 2 An example of GEM language source code
In the GEM language, each type of thing and relationship used in the primitive thing-relationship-thing models has its own XML element. For example, the fragment in Figure 2 illustrates two kinds of things in the owner-level process model,
<inventory>, representing a process for maintaining an inventory of data entities, and <publication>, representing a process for producing a publication. These are two kinds of processes that recur in SIL's enterprise, so Zachman's generic notion of a Row Two process has been elaborated by defining these two subtypes. Each thing element has an ID attribute, which provides a unique identifier that can be used as the target of relationships. Each thing element also contains a <name> and a <description> element for human-readable documentation. The XML element for a relationship is embedded in the thing it originates from and contains an IDREF attribute that expresses the unique identifier of the thing that is the target of the relationship. In Figure 2, <fedBy> is an example of a relationship. The <fedBy> instance is embedded in the Ethnologue Edition
process and points to the World Language Inventory process. It is therefore a formal statement of the fact, “The Ethnologue Edition publication process is fed by the output of the World Language Inventory process.” Figure 3 shows the graphic representation of this model that is generated by GEM from the source fragment in Figure 2.
Fig. 3 The graphic model generated from Figure 2
Relationships between things in different columns are called integrations and are expressed in the same way. In Figure 2, <producedAt> integrates the process to the thing in the Column Three model that represents the location where it is produced, and <hasTiming> integrates the process to the thing in the Column Five model that represents the timing cycle for the process. Figure 4 shows the textual
Ethnologue Edition. A publication process. The process that produces a particular, published edition of the catalog of all known living languages of the world.

Relationships
  Fed by: Language Map Inventory
  Fed by: World Language Inventory

Integrations
  Produced at: International Headquarters
  Consumed by: Public
  Produced by: VP Academic Affairs Office
  Timing: Ethnologue Edition Cycle
  Motivation: Publish Ethnologue
Fig. 4 The textual model generated from Figure 2
model generated by GEM for the <publication> element in Figure 2. It is an HTML document in which the targets of the relationships and integrations are active links to the definition of the referenced thing. This example illustrates an important feature of the GEM language, namely, that the reverse relationships and integrations are never expressed explicitly in the XML source code, but always inferred by the compiler that generates the textual model, thus avoiding redundancy and the potential for update anomalies. For instance, in Figure 4, the "Produced by" and "Consumed by" integrations were actually expressed in the source code of the Column Four model and the "Motivation" integration was actually expressed in the source code of the Column Six model. Figure 5 summarizes the coverage of GEM for modeling the owner (Row Two) perspective. This represents about one-third of the GEM language; the remainder
C1. Things: object, association. Relationships: hasAssociations, associatedWith, hasMembers, hasStructure. Integrations: Tracked in C2; Model for C4; Motivation is C6.
C2. Things: inventory, publication. Relationships: fedBy. Integrations: tracks C1; producedAt C3; hasTiming C5; Produced by C4; Consumed by C4; Motivation is C6.
C3. Things: site. Relationships: linkedTo. Integrations: Produced here C2; Located here C4; Motivation is C6.
C4. Things: orgUnit. Relationships: administeredBy. Integrations: modeledAs C1; produces C2; consumes C2; locatedAt C3; Monitors C5; Motivation is C6.
C5. Things: businessCycle. Relationships: spawns, intersects. Integrations: monitoredBy C4; Timing for C2; Motivation is C5.
C6. Things: goal, objective. Relationships: meansFor. Integrations: reasonFor C1, C2, C3, C4, C5.
Fig. 5 GEM vocabulary for Row Two models
is for modeling the things, relationships, and integrations of the designer perspective (Row Three), plus further details like attributes of data entities and states of timing cycles that are needed to fully specify the logical design of a subsystem. The rows of the table in Figure 5 correspond to the six columns of the Zachman Framework (labeled C1 through C6). The contents of the table cells list the XML elements for expressing things, relationships, and integrations in the given framework column. The latter entries also identify the column that is the target for the integration. The entries in the integration column that are in italics are for the implicit reverse integrations that are generated by the compiler. The XML elements listed in Figure 5 can be likened to the vocabulary of the GEM language. From these “words” it is possible to construct sentences like, “Object X associatedWith Object Y” and “Inventory Z tracks Object X.” An XML DTD (Document Type Definition) along with a Schematron schema defines the grammar of the language (that is, the constraints on how the possible words can be combined to create valid sentences). For instance, the schema prevents a sentence like “Inventory Z tracks Site W” since the object of tracks must be a Column One thing.
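As a rough illustration of how such a grammar constraint could be written, the following Schematron sketch checks that the target of a tracks integration is a Column One thing. It is only a sketch: the attribute name object, the absence of an XML namespace on GEM elements, and the overall shape of SIL's actual schema are assumptions, not taken from GEM.

<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <!-- every <tracks> integration must point at a Column One thing -->
    <rule context="tracks">
      <!-- hypothetical attribute carrying the IDREF of the tracked thing -->
      <let name="target" value="@object"/>
      <assert test="//object[@id = $target] or //association[@id = $target]">
        The object of a tracks integration must be a Column One thing
        (an object or an association), not a site, orgUnit, or other element.
      </assert>
    </rule>
  </pattern>
</schema>

Under a rule of this shape, a sentence like "Inventory Z tracks Site W" would be rejected because the referenced identifier belongs to a site rather than to a Column One element.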
6 The Repository of Enterprise Models “Any fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction.” — Albert Einstein Modeling an entire enterprise and then managing how its models change over time is a huge task. GEM supports enterprise-wide modeling in two critical ways. First, the complete enterprise (which is too big to handle in one model) is divided into numerous subsystems (each of which is of a manageable size). A subsystem represents a focused set of business functions that falls under the stewardship of a single vice president who “owns” the subsystem on behalf of the enterprise. A GEM source file describes the architecture of just one of those subsystems. Individual subsystem models may reference elements defined in other subsystem models. In this way, the collection of subsystem models is knit into a single contiguous enterprise model and an internal web application allows all stakeholders to browse the set of subsystem models as an integrated whole. Second, a single subsystem model may simultaneously describe the subsystem at various points in the history of its development. Each subsystem declares a set of stages in a build sequence and each thing and relationship is assigned to the stage in which it is added to (and in some cases dropped from) the subsystem. A request to change the functioning enterprise is made by specifying a new stage in the build sequence of the affected subsystem. Each stage passes through a development life cycle with the following states: proposed, planned for implementation, in development, in quality assurance testing, and in production. The XML source files for all of the subsystems are stored in a single repository managed by Subversion — an open-source revision control system. Figure 6 shows the home page of the dynamic web application SIL has developed for
Fig. 6 The GEM Repository of Enterprise Models
providing a user interface to the repository of enterprise models, and shows all of the subsystems (which are limited to a selection of eight to reduce the size of the graphic) as well as the entire enterprise. The left-hand column names the subsystems that have been modeled; they are grouped under headings for the corporate officer who is steward for the model. The numbers on the right-hand side are rough metrics giving the number of things defined in the models for each column. The five columns in the middle of the page give links for navigating to the models themselves; if the subsystem has at least one build-sequence stage in the named life-cycle state, then the link is dark and active. The repository application (by adding and dropping model elements based on the life-cycle state of the build sequence stages) is able to display the models for each subsystem in each of the possible life-cycle states. This helps the enterprise to visualize, discuss, and manage change. Figure 7 is a screenshot showing the result of clicking on the “Development” state link for the Ethnologue subsystem that appears in Figure 6. The body of the page contains 35 links, each of which produces a different view of information in the single GEM language source file. The application is built with Apache Cocoon — an open-source web application framework that uses pipelines of XSLT scripts to transform the XML source file on-the-fly into the requested textual and graphic displays. The top half of the screen gives links to displays that summarize the models over all the columns of the Zachman Framework. The bottom half of the
Fig. 7 Framework for the Ethnologue System in Development State
screen gives links to the individual cell models for the top three rows of the Zachman Framework. These are the rows that deal with the ideas that lie behind the subsystem before it is transformed into a technology solution. These are the models that are used by executive leaders and the staff sections they manage. This repository application is aimed at these users; another application, the GEM Workbench, is aimed at IT staff and encompasses all the rows of the Zachman Framework. Figure 8 shows the first page of the HTML document generated as a result of clicking the “model” link in Row Two and Column Two. It illustrates the content from Figure 4 in its full context. Each of the eighteen “list” and “model” links in the bottom half of Figure 7 generates a comparable document. The G icons in the second and third rows are also links; they generate the graphic form of the primitive cell model. Figure 9 shows all six of the graphic models generated for Row Two of the development state of the Ethnologue subsystem. These graphs are created by transforming the XML source model into a graph specification in the DOT graphic description language which is then rendered on-the-fly by Graphviz — an open-source graph visualization package. Since the XML source file for one subsystem is able to make reference to an element defined in another subsystem, the repository application is able to assemble the entire enterprise model by aggregating the individual subsystem models. This is the effect of clicking on the links for Complete Enterprise at the bottom of Figure 6. The result is a screen comparable to Figure 7, but that generates the models for the entire enterprise by aggregating the individual subsystem models. For example, Figure 10 shows the Row Three data model for
Fig. 8 A primitive cell model (as text)
all the subsystems that are in production — in other words, it is the logical data model for the Enterprise Information System as it is currently in production. The entities are defined in eight different subsystems and the graphic color codes the entities by subsystem. This graph brings to light a current deficiency in the state of development — the six subsystems on the left side of the graph form a contiguous model, but the two subsystems on the right have yet to be integrated with the rest of the enterprise.
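The XML-to-DOT step in the pipeline described above can be pictured with a small XSLT sketch. It is not SIL's actual Cocoon stylesheet; the fedBy element name follows Figures 2 and 5, while the process attribute carrying its IDREF is a hypothetical name introduced only for illustration.

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <xsl:text>digraph RowTwoProcessModel {&#10;</xsl:text>
    <!-- one edge per fedBy relationship: the referenced thing feeds the thing it is embedded in -->
    <xsl:for-each select="//fedBy">
      <xsl:text>  "</xsl:text>
      <!-- look up the <name> of the thing whose id matches the fedBy reference -->
      <xsl:value-of select="//*[@id = current()/@process]/name"/>
      <xsl:text>" -&gt; "</xsl:text>
      <xsl:value-of select="../name"/>
      <xsl:text>" [label="feeds"];&#10;</xsl:text>
    </xsl:for-each>
    <xsl:text>}&#10;</xsl:text>
  </xsl:template>
</xsl:stylesheet>

Applied to the fragment of Figure 2, a stylesheet of this shape would emit a DOT edge from "World Language Inventory" to "Ethnologue Edition", which Graphviz then renders as in Figure 3.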
Fig. 9 All Six Primitive Cell Models for Row Two (as graphs)
Fig. 10 The enterprise-wide Row Three data model
7 Progress to Date “In a time of drastic change it is the learners who inherit the future. The learned usually find themselves equipped to live in a world that no longer exists.” — Eric Hoffer SIL’s efforts at re-engineering and creating an integrated Enterprise Information System are a work in progress. Their enterprise architecture blueprints facilitate communication among the staff of SIL so that the operational aspects of SIL that are managed by those people, including IT, can be aligned. To date SIL’s repository holds eighteen subsystem models and each falls under the stewardship of one of their vice presidents. Originally, they had blueprints for only Column One (data models) of the Zachman Framework. The impetus for developing GEM was to get the complete architecture under control by developing blueprints for the other five columns as well.
Figure 11 reports the progress to date in achieving this. This, as well as the entire GEM development effort, represents the work product of a small team consisting of an enterprise architect and a software engineer (both devoting less than half-time to the endeavor), plus a few domain specialists who have learned to do the GEM modeling for subsystems in their domain. The two rows of the table separate counts for the eight subsystems that are now part of the in-production integrated Enterprise Information System versus the ten that are in an earlier stage of planning or development. The second column in the table gives a sense of the size of the effort by reporting the number of data entities in the Column One models. (Note that a large number of the data entities for the subsystems in production are within build-sequence stages that are not yet in production; this is why the aggregated model in Figure 10 contains many fewer than 178 entities.) The remaining columns show the progress toward modeling the enterprise in all columns of the Zachman Framework: three-quarters of the subsystems are now modeled in two columns, just over half in three columns, one-third in five columns, and only one-sixth in all six columns.
                    GEM subsystems   Data entities   Modeled in at least n columns of the Zachman Framework
                                                        2      3      4      5      6
In production              8              178           5      4      3      3      1
Not in production         10              248           9      6      3      3      2
Totals                    18              426          14     10      6      6      3
As per cent                                            78%    56%    33%    33%    17%
Fig. 11 Enterprise Architecture Progress and Control at SIL
Considering that most enterprises today are fortunate to have even the single data column fully architected, let alone enterprise-wide, SIL stands at the vanguard of what may be a paradigm shift in how enterprises are managed, a change in thought and practice perhaps as significant as those brought about in the Industrial Age by Frederick Taylor's "scientific management" and Joseph Juran's "statistical quality control" (Kappelman, 2007). With their enterprise architecture language, tools, methods, and process in place, and with significant organizational learning and success already experienced, SIL's pace and momentum are on the rise.
8 Lessons Learned “Someday, you’re going to wish you had all those models, enterprise-wide, horizontally and vertically integrated, at an excruciating level of detail.” — John Zachman
Even more than the benefits of creating new tools, processes, methods, innovations, technologies, and intellectual capital while transforming their IT systems, SIL has learned some critical and universal lessons, perhaps even basic truths, which shed light not only on the practice and value of enterprise architecture but also on some of the fundamental causes of seemingly intractable issues in IT management, such as the perennial quest for alignment. Among these was the discovery that when the owner speaks directly with the builder (skipping over the Row Three designer), the result is typically a localized stove-piped solution that is not architecturally optimal and thus difficult and costly to integrate and change. That is, the problem of immediate concern is solved but at the cost of adding more complexity to the overall enterprise than was actually necessary. Regrettably, the lack of staff that can function as Row Three architects has been a bottleneck in most of SIL's projects, and it appears this shortage of the architecturally skilled is widespread. Row Three is a scarce but critical perspective. The fact that someone has been successful as a software designer (Row Four) does not mean they will be successful as an enterprise designer in Row Three. It takes someone who can straddle the owner's perspective in Row Two and the builder's perspective in Row Four — who can translate the owner's view into a formal logical design that transcends any particular technology for implementing it. Technology designers tend to push a Row Four perspective into Row Three by solving the problem in terms of their preferred technology. The GEM language is giving SIL a way to train people to function in the Row Three role without getting drawn into the details of a Row Four technology solution. Through GEM, SIL has also learned that having and maintaining "all those models" is possible if they are automatically generated from a single source. When all the primitive models are generated on demand from a single source they always stay synchronized and in alignment, and enable the enterprise as implemented to be in alignment. In sum, SIL has found that elaborating Zachman's Enterprise Architecture Standards to create a custom modeling language allows an enterprise to gain control of its architecture and, more importantly, to gain control of the actual data, processes, technologies, people, and other resources of which the architecture is a representation. Moreover, having a constrained formal language allows novice modelers to be productive and ensures that all modelers produce comparable results. But more than all this, SIL has found that the most important result of their enterprise architecture initiative was not the new Enterprise Information System (as they originally thought it would be), but an enterprise change management process that will make it possible for them to use their newly developed enterprise blueprints to manage the never-ending cycle of changes to the enterprise. In other words, enterprise architecture is the key to SIL achieving the design objectives that keep nearly all IT managers up at night — alignment, simplicity, flexibility, speed, and agility. In order to ensure that this is the result, SIL's EA leadership team recently assigned their Chief Architect two new highest priorities: (1) developing a plan for finishing the blueprints of all subsystems that are part of the in-production
Enterprise Information System (including reverse engineering the models for the legacy subsystems and third-party systems that were integrated without blueprints), and (2) assisting the EA Program Manager to specify an enterprise change management process that is based on managing the complete blueprints. Zachman’s theory remains confirmed by the practical experience of SIL International, and SIL has realized tangible and intangible benefits as their enterprise architecture efforts are helping them to bridge the chasm between strategy and implementation (Kappelman, 2007). SIL has found that their architecture isn’t their organization any more than a map is the highway or the blueprints the building. But like maps and blueprints, enterprise architecture is a tool to help us efficiently and effectively get where we want to go, and to keep us from getting lost.
References

Chen, P.P.: The Entity-Relationship Model: Toward a Unified View of Data. ACM Transactions on Database Systems 1(1), 9–36 (1976)

Gordon Jr., R.G.: Ethnologue: Languages of the World, 15th edn. SIL International, Dallas (2005), Web edition at: http://www.ethnologue.com

Kappelman, L.A.: Bridging the Chasm. Architecture and Governance 3(2) (2007), http://www.architectureandgovernance.com/articles/09-lastword.asp

Kotter, J.P.: Leading Change. Harvard Business School Press (1996)

Ross, J.: Creating a Strategic IT Architecture Competency: Learning in Stages. MISQ Executive 2(1) (2003)

Zachman, J.A.: A Framework for Information Systems Architecture. IBM Systems Journal 26(3) (1987), IBM Publication G321-5298, http://www.research.ibm.com/journal/sj/382/zachman.pdf

Zachman, J.A.: The Zachman Framework for Enterprise Architecture: A Primer for Enterprise Engineering and Manufacturing. Zachman International (2001), http://www.zachmaninternational.com/2/Book.asp

Zachman, J.A.: Enterprise Architecture Standards. Zachman International (2006), http://www.zachmaninternational.com/2/Standards.asp

Zachman, J.A.: Architecture Is Architecture Is Architecture. EIMInsight 1(1) (March 2007), Enterprise Information Management Institute, http://www.eiminstitute.org/library/eimi-archives/volume-1-issue-1-march-2007-edition

Zachman, J.A., Sowa, J.F.: Extending and Formalizing the Framework for Information Systems Architecture. IBM Systems Journal 31(3) (1992), IBM Publication G321-5488

Kappelman, L.A. (ed.): An earlier version of this manuscript appears in The SIM Guide to Enterprise Architecture. CRC Press, NY (2010)
Real-Time Animation for Formal Specification
Dominique Méry and Neeraj Kumar Singh
Abstract. A formal specification is a mathematical description of a given system. Writing a formal specification for real-life, industrial problems is a difficult and error-prone task, even for experts in formal methods. Since domain experts often lack knowledge of any specification language, it is crucial to obtain their approval and feedback early, to avoid the cost of changing a specification at a later stage of development. This paper introduces a new functional architecture, together with a direct and efficient method for using real-time data sets in a formal model without generating source code in any target language. The implemented architecture consists of six main units: a data acquisition and preprocessing unit; a feature extraction unit; a database; a dedicated graphical animation tool, Macromedia Flash; the formal model animation tool Brama, a plug-in that interfaces the Flash animation with the Event-B model; and the formal specification system Event-B. These units are invoked independently and allow simple algorithms to be executed concurrently. Together, the units of the proposed architecture animate the formal model with a real-time data set and offer an easy way for specifiers to build a domain-specific visualization that domain experts can use to check whether a formal specification corresponds to their expectations.
Dominique Méry · Neeraj Kumar Singh, LORIA, Université Henri Poincaré Nancy 1, BP 239, 54506 Vandœuvre-lès-Nancy, e-mail: [email protected], [email protected]

1 Introduction

Formal methods aim to improve software quality and produce zero-defect software, by controlling the whole software development process, from specifications to implementations. Formal methods are used by industries in a range of critical domains,
involving higher safety integrity level certification and the IEC 61508 [7] safety standard. IEC 61508 is intended to be a basic functional safety standard applicable to all kinds of industry. Formal model development uses top-down approaches, starting from high-level, abstract specifications that describe the fundamental properties of the final system. Detailed information about a given system is introduced in an incremental way [2]. The correctness between two levels is ensured by refinement proofs. The final refinement leads to the expected behaviour of the system implementation model. The role of verification and validation is very important in the development of safety-critical systems. Verification starts from the requirements analysis stage, where design reviews and checklists are used, while validation relies on functional testing and environmental modelling. The results of the verification and validation process are an important component of the safety case, which is used to support the certification process. Event-B is a formal method for system-level modelling and analysis. There are two main proof activities in Event-B: consistency checking, which is used to show that the events of a machine preserve the invariant, and refinement checking, which is used to show that one machine is a valid simulation of another. There are several ways to validate a specification: prototyping, structured walkthrough, transformation into a graphical language, animation, and others. Each technique has a common goal: to validate a system against its operational requirements. Animation focuses on the observable behaviour of the system [16]. The principle is to simulate an executable version of the requirements model and to visualize the exact behaviours of the actual system. Animators use finite state machines to generate a simulation process which can then be observed with the help of UML diagrams, textual interfaces, or graphical animations [11]. Animation can be used in the early stages of development, during the elaboration of the specification: there is no need to wait until the specification is finished and code has been generated. As a relatively low-cost activity, animation can be used frequently during the process to validate important refinement steps. It thus provides a validation tool consistent with the refinement structure of the specification process. The final code generation process consists of two stages: the final-level formal specifications are translated into programs in a given programming language, and then these programs are compiled. Nevertheless, all approaches which support formal development from specification to code must manage several constraining requirements, particularly in the domain of embedded software, where specific properties of the code are expected. Finally, it is not possible to use real-time data in the early stages of formal development without compiling source code in some target language. Based on our research experience using formal tools on industrial requirements (verification and validation) and our desire to disseminate formal methods, we have devised a new approach for presenting an animated model of a specification using a real-time data set in the early stages of formal development. It can help a specifier gain confidence that the model being specified, refined, and implemented does meet the domain requirements. This is achieved by
the animation component of Brama [15], together with the Macromedia Flash tool, which allows us to check the presence of desired functionality and to inspect the behaviour of a specification. In this paper, we describe an approach to extend an animator tool so that it can use real-time data sets. At present, all animation tools use toy data sets to test the model, whereas we propose to use real-time data sets with the model without generating source code in any target language (C, C++, VHDL, etc.). In this work, we present an architecture which makes it easy to develop visualizations for a given specification. Our architecture supports state-based animations, using simple pictures to represent a specific state of an Event-B specification, and transition-based animations consisting of picture sequences driven by a real-time data set. The animated model consists of Macromedia Flash components in picture sequences that are controlled by the real-time data set, and it presents an actual view of the system. Before moving on, we should also mention that there are scientific and legal applications as well, where formal-model-based animation can be used to simulate (or emulate) certain scenarios to glean more information or a better understanding of the system requirements. This paper is organized as follows. Section 2 briefly introduces an animator tool, Brama. Section 3 presents the functional architecture which enables the animation of a proved specification with a real-time data set. The functional architecture is then illustrated in Section 4 on a real case study, the cardiac pacemaker system. Section 5 concludes the paper with some lessons learned from this experience and some perspectives, along with future work.
2 Overview of Brama

Brama [15] is an animator for Event-B specifications designed by Clearsy. Brama is an Eclipse plug-in suite and Macromedia Flash extension that can be used with Windows, Linux and MacOS for the RODIN platform [12]. Brama can be used to create animations at different stages of development of a simulated system. To do so, a modeler may need to create an animation using the Macromedia Flash plug-in for Brama. The use of this plug-in is established through communication between the animation and the simulation. A modeler can represent the system manually within RODIN [12] or represent the system with the Macromedia Flash tool, which allows for communication with the Brama animation engine through a communication server. Brama communicates with Macromedia Flash through a network connection. Brama acts as a server to which the animation will connect. In order to connect to a Brama server, a Flash animation has to use the Brama component. This component handles the connection and the communication with the Brama server. This server has been created to communicate with animations which stimulate the simulation and display an image of the system. The communication server exchanges information packets between a model and a tool. Macromedia Flash controls the functional behaviour of all graphical components (button, check box, movie clip, etc.). When the
modeler and domain experts are satisfied with the output, Brama can export the finished animation. Brama contains the following main modules: B2Rodin, an animation engine (predicate solver); event and B variable visualization tools; an automatic event linkage management module; a variable management module; observed predicates and expressions; and a Macromedia Flash communication module. The Brama model animation tool provides feedback that can be used by the modeler throughout the modeling process. The animation functions allow it to "create" various model events, filters and properties during the testing process [15].
3 Description of the Architecture

Figure 1 depicts the overall functional architecture that can use a real-time data set to animate the Event-B model without generating source code of the model in any target language (C, C++, VHDL, etc.). This architecture has six components: a data acquisition and preprocessing unit; a feature extraction unit; a database; a dedicated graphical animation tool, Macromedia Flash; the formal model animation tool Brama, a plug-in that interfaces the Flash animation with the Event-B model; and the formal specification system Event-B. Data acquisition and preprocessing begin with the physical phenomenon or physical property to be measured. Examples include temperature, light intensity, heart activity, blood pressure [13], and so on. Data acquisition is the process of sampling real-world physical conditions and converting the resulting samples into digital numeric values. The data acquisition hardware can vary from environment to environment (e.g., camera, sensor, etc.). The components of data acquisition systems include sensors that convert physical properties. A sensor, which is a type of transducer, measures a physical quantity and converts it into a signal which can be read by an observer or by an instrument. Data preprocessing is the next step performed on raw data to prepare it for another processing procedure. Data preprocessing transforms the data into a format that will be more easily and effectively processed for the purpose of the user. There are a number of different tools and methods used for preprocessing different types of raw data, including: sampling, which selects a representative subset from a large population of data; transformation, which manipulates raw data to produce a single input; denoising, which removes noise from data; and normalization, which organizes data for more efficient access. The feature extraction unit is a set of algorithms that is used to extract the parameters or features from the collected data set. These parameters or features are numerical values that are used by the animated model at the time of animation. Feature extraction relies on a thorough understanding of the entire system mechanics, the failure mechanisms, and their manifestation in the signatures. The accuracy of the system is fully dependent on the feature or parameter values being used. Feature extraction involves simplifying the amount of resources required to describe a large set of data accurately. When performing analysis of complex data, one of the
Fig. 1 A functional architecture to animate a formal specification using a real-time data set without generating source code
major problems stems from the number of variables involved. Analysis with a large number of variables generally requires a large amount of memory and computation power, or a classification algorithm which overfits the training sample and generalizes poorly to new samples. Feature extraction is a general term for methods of constructing combinations of the variables to get around these problems while still describing the data with sufficient accuracy. Collecting measured data and processing these data to accurately determine model parameter values is an essential task for the complete characterization of a formal model. The database unit is optional. It stores the feature or parameter values in a database file in any specific format. This database file of parameters or features can be used later to execute the model. Sometimes feature extraction algorithms take a long time to calculate the parameters or features; in such a situation, a modeler can store the parameters or features in a database file in order to test the model later. A modeler can also use the extracted parameters or features directly in the model, without using the database. The animated graphics are designed in the Macromedia Flash tool [14]. Macromedia Flash, a popular authoring tool developed by Macromedia, is used to create vector-graphics-based animation programs with rich graphic illustrations and simple interactivity. Here we use this tool to create the animated model of the physical environment and use the Brama plug-in to connect the Flash animation and the Event-B model. This tool also helps to connect the real-time data set to a formal model specification through some intermediate steps and finally brings the animated model closer to the domain experts' expectations. Brama is a tool for animating Event-B models on the RODIN platform. It allows animating and inspecting a model using Flash animations. Brama has two objectives: to allow the formal model designer to ensure that his model is executed in accordance with the system it is supposed to represent, and to provide this model with a graphic representation and animate this representation in accordance with the state of the formal model. The graphic representation must be in Macromedia Flash format and requires the use of a separate tool for its elaboration (Flash MX, for example). Once the Event-B model is satisfactory (it has been fully proven and
its animation has demonstrated that the model behaves like its related system), you can create a graphic representation of this system and animate it synchronously with the underlying Event-B Rodin model. Brama does not create this animation; it is up to the modeler to create the representation of the model depending on the part of the model he wants to display. However, Brama provides the elements required to connect the Flash animation and the Event-B model (see Section 2 for more detail) [15]. Event-B is a proof-based formal method [5, 2] for system-level modeling and analysis of large reactive and distributed systems. In order to model a system, Event-B represents it in terms of contexts and machines. Set theory and first-order logic are used to define the contexts and machines of a given system. Contexts [5, 2] contain the static parts of a model. Each context may consist of carrier sets and constants as well as axioms which are used to describe the properties of those sets and constants. Machines [5, 2] contain the dynamic parts of an Event-B model; this part is used to provide the behavioral properties of the model. A machine is made of a state, which is defined by means of variables, invariants, events and theorems. The use of refinement represents systems at different levels of abstraction, and the use of mathematical proof verifies consistency between refinement levels. Event-B is provided with tool support in the form of an open and extensible Eclipse-based IDE called RODIN [12], which is a platform for Event-B specification and verification.
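As a minimal, generic illustration of this context/machine structure (a sketch only, not part of the pacemaker development described below), consider a context introducing a constant bound and a machine with one variable and one event; the consistency-checking proof obligation is to show that the event preserves the invariant:

CONTEXT C0
  CONSTANTS max
  AXIOMS
    axm1 : max ∈ ℕ1
END

MACHINE M0
  SEES C0
  VARIABLES count
  INVARIANTS
    inv1 : count ∈ 0 .. max
  EVENTS
    INITIALISATION
      BEGIN
        act1 : count := 0
      END
    increment
      WHEN
        grd1 : count < max
      THEN
        act1 : count := count + 1
      END
END

A refinement of M0 could, for instance, add new variables and events, and refinement checking would then establish that the concrete machine is a valid simulation of M0.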
4 Applications and Case Studies

We have tested our proposed architecture on a case study of the bradycardia operating modes of an artificial single-electrode cardiac pacemaker [9]. A pacemaker is a high-confidence medical device [1, 6, 17] that is implanted to provide a proper heart rhythm when the body's natural pacemaker does not function properly. In the single-electrode pacemaker, the electrode is attached to the right atrium or the right ventricle. It has several operational modes that regulate the heart's functioning. The specification document [4] describes all possible operating modes, which are controlled by the different programmable parameters of the pacemaker. All the programmable parameters are related to real-time and action-reaction constraints that are used to regulate the heart rate. In order to understand the "language" of pacing, it is necessary to comprehend the coding system produced by a combined working party of the North American Society of Pacing and Electrophysiology (NASPE) and the British Pacing and Electrophysiology Group (BPEG), known as the NASPE/BPEG generic (NBG) pacemaker code [8]. This is a code of five letters, of which the first three are most often used. The code provides a description of the pacemaker pacing and sensing functions. The sequence is referred to as the "bradycardia operating modes" (see Table 1). In practice, only the first three or four letter positions are commonly used to describe bradycardia pacing functions. The first letter of the code indicates which chambers are being paced, the second letter indicates which chambers are being sensed, the third letter indicates the response to sensing, and the final letter, which is optional, indicates the presence of rate modulation in response to the physical
activity measured by the accelerometer. The accelerometer is an additional sensor in the pacemaker system that detects the physiological effects of exercise or emotion and increases the pacemaker rate on the basis of a programmable algorithm. "X" is a wildcard used to denote any letter (i.e., "O", "A", "V" or "D"). Triggered (T) refers to the delivery of a pacing stimulus, and Inhibited (I) refers to the inhibition of further pacing after sensing intrinsic activity from the heart chamber.
Table 1 Bradycardia operating modes of a pacemaker system

Category | Chambers Paced | Chambers Sensed | Response to Sensing | Rate Modulation
Letters  | O-None         | O-None          | O-None              | R-Rate Modulation
         | A-Atrium       | A-Atrium        | T-Triggered         |
         | V-Ventricle    | V-Ventricle     | I-Inhibited         |
         | D-Dual(A+V)    | D-Dual(A+V)     | D-Dual(T+I)         |
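As an informal illustration of how the first four code positions map to behaviour, the following MATLAB-style sketch decodes a mode string such as 'VVIR'. It is our own illustrative helper (the function name and the mapping tables are ours), not part of the formal development described in this paper.

% Hypothetical helper, for illustration only: decode the first four
% positions of an NBG mode string (e.g. 'AAI', 'VVT', 'VVIR').
function info = decode_nbg(mode)
  chambers = containers.Map({'O','A','V','D'}, ...
                            {'None','Atrium','Ventricle','Dual'});
  response = containers.Map({'O','T','I','D'}, ...
                            {'None','Triggered','Inhibited','Dual'});
  info.paced          = chambers(mode(1));   % first letter: chambers paced
  info.sensed         = chambers(mode(2));   % second letter: chambers sensed
  info.response       = response(mode(3));   % third letter: response to sensing
  info.rate_modulated = numel(mode) >= 4 && mode(4) == 'R';
end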
An Event-B specification of the model has been written [4, 9] in an effort to make it amenable to the formal techniques required by high-confidence medical device certification [1, 17, 6]. The formal specification of the cardiac pacemaker system consists of five machines: one abstract machine and four refinements. This study presents a formal specification together with a systematic block diagram (see Fig. 2) of the hierarchical tree structure of the bradycardia operating modes of the single electrode cardiac pacemaker. The hierarchical tree structure shows the stepwise refinement from the abstract to the concrete model, where each level of refinement introduces new features of the pacemaker as functional and parametric requirements. The root of the tree denotes the single electrode cardiac pacemaker. The next two branches show the two chambers, the (right) atrium and the (right) ventricle. The atrium chamber uses three operating modes: AOO, AAI and AAT (see Table 1). Similarly, the ventricle chamber uses three operating modes: VOO, VVI and VVT (see Table 1). This constitutes the abstract level of the model, which presents all the operating modes abstractly together with the required properties of the pacemaker. From the first refinement to the last, there is only one branch for each operating mode of the atrium and ventricle chambers. The subsequent refinement models introduce all the detailed information of the resulting system; every refinement level extends the previous operating modes with a new feature or functional requirement. The triple dots (...) indicate that there is no refinement at that level for the particular operating modes (AOO and VOO). From the abstract level to the third refinement level the operating modes are similar, but the fourth refinement level is represented by additional rate-adaptive operating modes (i.e., AOOR, AAIR, VVTR, etc.), which differ from those of the previous levels. This refinement structure is very helpful for modeling the functional requirements of the single electrode cardiac pacemaker.
Fig. 2 Refinement structure of bradycardia operating modes of the single electrode cardiac pacemaker
The following outline is given for every refinement level to convey the basic structure of the formal model of the cardiac pacemaker system:
• Abstract Model: The context of the abstract model of the single electrode pacemaker contains the definitions and properties of the different time-interval parameters (upper rate limit (URL), lower rate limit (LRL), refractory period (RF), etc.) and of the pacemaker's actuator and sensor status (ON and OFF). The first abstract model specifies the pacing, sensing and timing components with the help of the action-reaction and real-time patterns, using some initial events (Pace ON, Pace OFF, and tic). In the abstract AAI and VVI modes, two extra events (Pace OFF with Sensor and Sense ON) have been introduced. Similarly, in the AAT and VVT modes, two extra events (Pace ON with Sensor and Sense ON) have been introduced. The remaining rate-adaptive bradycardia operating modes (AOOR, VOOR, AAIR, AATR, VVIR and VVTR) of the single electrode pacemaker are refinements of the basic bradycardia operating modes and are described in stepwise refinements. The basic abstract model of the pacemaker thus captures the action-reaction and real-time patterns describing the pacing and sensing modes of the single electrode cardiac pacemaker.
• Refinement 1: In this refinement of the single electrode cardiac pacemaker model, we have introduced more invariants to satisfy the pacing and sensing requirements of the system under real-time constraints.
• Refinement 2: This refinement is relatively more complex than the previous one. In this refinement, we have introduced the threshold parameter, which is used to filter the exact sensing value within a sensing period in order to control the sensing and pacing events. A pacemaker has a stimulation threshold measuring unit that
measures a stimulation threshold voltage value of the heart, and a pulse generator for delivering stimulation pulses to the heart. The pulse generator is controlled by a control unit to deliver the stimulation pulses with amplitudes related to the measured threshold value and a safety margin.
• Refinement 3: In this refinement, we have introduced the application of the hysteresis interval to provide consistent pacing, or to prevent constant pacing, in the heart chambers (atrium or ventricle).
• Refinement 4: In the last and final refinement, we have introduced the accelerometer sensor component and the rate modulation function to obtain the new rate-adaptive operating modes of the pacemaker. The term rate adaptive describes the capacity of a pacing system to respond to physiologic need by increasing and decreasing the pacing rate. The rate-adaptive modes of the pacemaker can progressively pace faster than the lower rate, but no faster than the upper sensor rate limit, when it determines that the heart rate needs to increase. This typically occurs with exercise in patients who cannot increase their own heart rate. The amount of rate increase is determined by the pacemaker on the basis of the maximum exertion performed by the patient. This increased pacing rate is sometimes referred to as the "sensor indicated rate". When exertion has stopped, the pacemaker progressively decreases the paced rate down to the lower rate. The final refinement represents the rate modulation function with the new operating modes (AOOR, VOOR, AAIR, VVIR, AATR and VVTR) of the pacemaker system.
For the complete formal development of the single electrode cardiac pacemaker, see the research report [9]. We have mainly used this case study to experiment with our proposed architecture, which enables the animation of a proved specification with a real-time data set without generating legacy source code in any target language. In this experiment, following the proposed architecture (see Figure 1), we have not used any data acquisition device to collect the ECG (electrocardiogram) signal. We have carried out the experiment in off-line mode, meaning that we used our architecture to test a real-time ECG data set that had already been collected. ECG signal collection and feature extraction in on-line mode is too expensive, due to the complexity of the data acquisition process and the limitations of feature extraction algorithms. We have therefore taken the ECG signals and feature extraction algorithms for our experiment from the MIT-BIH Database Distribution [10]. We have downloaded the ECG signal from the ECG data bank [10]; these ECG signals are freely available for academic experiments. We have applied algorithms to extract the features (P, QRS, PR, etc.) from the ECG signal and stored them in a database. We have written some Macromedia Flash scripts to interface between the Flash tool and the Brama component, in order to pass the real data set as parameters from the database to the Event-B model: no tool is available to interface directly between a database and an Event-B model. The extra Macromedia Flash scripting and the Brama animation tool make it possible to test the Event-B formal model of the cardiac pacemaker on a real-time data set. We have designed an animated graphic of the heart and pacemaker in Macromedia Flash, where this animated model represents the pacing activity in the right ventricular
chamber. This animated model simulates the behaviour of the heart according to the bradycardia operating modes (VVT, VVI) and animates the graphic model. The animation is fully based on the Event-B model, which executes all events according to the parametric values. These parametric values are the features extracted from the ECG signal, which are passed into the Event-B model. Figure 3 represents an implementation of the proposed architecture on the formal model of the cardiac pacemaker case study. According to the architecture, the data acquisition unit collects the ECG signal, and feature extraction is done by the feature extraction or parameter estimation unit. The extracted features are stored in the database in XML file format. The Macromedia Flash tool is used to design the animated graphics of the heart and pacemaker. In the next unit, the Brama plug-in communicates between the animated graphics and the Event-B formal model of the single electrode cardiac pacemaker. Finally, we have tested a real-time data set on the formal models, without generating source code, with the help of the existing Brama animation tool.

Fig. 3 Implementation of proposed functional architecture on the single electrode cardiac pacemaker case study
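As a rough illustration of the kind of processing involved in the feature extraction and database units, the following MATLAB sketch detects R peaks with a crude amplitude threshold and exports the derived RR intervals to a small XML file. It is our own sketch under assumed names and a simplistic detection rule; it is neither the MIT-BIH algorithms nor the actual scripts used in the experiment.

% Hypothetical sketch: detect R peaks in an ECG trace and store the
% derived RR intervals in an XML file that the Flash/Brama layer could
% later read as model parameters.
function export_rr_features(ecg, fs, out_file)
  threshold = 0.6 * max(ecg);            % crude detection threshold
  above     = double(ecg > threshold);
  r_idx     = find(diff(above) == 1);    % rising edges = candidate R peaks
  rr_ms     = diff(r_idx) / fs * 1000;   % RR intervals in milliseconds

  fid = fopen(out_file, 'w');
  fprintf(fid, '<features>\n');
  for k = 1:numel(rr_ms)
    fprintf(fid, '  <rr index="%d" ms="%.1f"/>\n', k, rr_ms(k));
  end
  fprintf(fid, '</features>\n');
  fclose(fid);
end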
5 Conclusion and Future Work

The objective of the proposed architecture is to validate a formal model with a real-time data set in the early stages of development, without generating legacy source code in any target language. In this paper, we focused on the techniques introduced in the architecture for using a real-time data set to achieve adaptability of, and confidence in, the formal model. Moreover, this architecture should guarantee that the formal model is correct with respect to the high-level specifications and is free of runtime errors. Finally, the proposed architecture should be adaptable to various target platforms and formal modeling techniques (Event-B, Z, Alloy, TLA+, etc.). With respect to the adaptability of the new architecture, two techniques were considered useful, implemented and tested on the case study. The results are satisfactory and demonstrate the ability to validate the formal model of a single
electrode cardiac pacemaker system with a real-time data set. The certification of a software item involves both verification and validation activities. Ideally, the former should be fully formal, relying on proofs and formal analysis. The latter deals with an inherently informal element: the requirements. The gains then rely on the guarantees provided by the use of a formal method and on the certification level that can be obtained in this way. The technique discussed here aims at improving confidence in the software at earlier stages of the development cycle. As far as we know, no other animation tool supports validating a formal model against a real-time data set, which brings the validation close to that of the source code. The adaptation of this architecture needs more complete experiments, especially regarding its impact on data acquisition and feature extraction time in some specific domains. An alternative approach is thus developed: rather than generating source code from the formal model in advance, the proposed architecture directly produces the desired result, similar to that of a correctly implemented source code. A key feature of this validation is its full automation and the animation of the specification at an early stage of formal development. The case study has shown that, using our approach, requirement specifications can be used directly in a real-time environment, without modification, for automatic test result evaluation. Moreover, there are scientific and legal applications as well, where formal-model-based animation can be used to simulate (or emulate) certain scenarios to glean more information or a better understanding of the system and help improve the final system. While examining the relationship between refinement-based modeling and its stepwise validation, we discovered that not every refinement step is animatable. This is consistent with using animation as a kind of quality-assurance activity during development. We believe that one animation per abstraction level is sufficient. In fact, the first refinement of a level may often be too non-deterministic to allow for meaningful animation (concept introduction), but subsequent refinements make the definitions of the new concept precise enough to allow animation. The proposed architecture is not yet complete, due to certain limitations: acquisition devices, feature extraction algorithms and so on. In the future, we expect to use the same architecture in multiple domains, applying real-time data sets to validate any formal model specification in the early stages of development. Manual application of this architecture to feed a real-time data set into a formal model is tedious, cumbersome and may be error prone if not applied carefully. Therefore, we are planning to write an application programming interface (API) that can interface automatically between any acquisition device or database and a formal model, using the Flash animation and the Brama component.

Acknowledgements. The work of Dominique Méry and Neeraj Kumar Singh is supported by grant No. ANR-06-SETI-015-03 awarded by the Agence Nationale de la Recherche. Neeraj Kumar Singh is also supported by a grant awarded by the Ministry of University and Research.
References
1. A Research and Development Needs Report by NITRD: High-Confidence Medical Devices: Cyber-Physical Systems for 21st Century Health Care, http://www.nitrd.gov/About/MedDevice-FINAL1-web.pdf
2. Abrial, J.-R.: Modeling in Event-B: System and Software Engineering (2010) (forthcoming book)
3. Bjørner, D., Henson, M.C. (eds.): EATCS Textbook in Computer Science. Springer, Heidelberg (2007)
4. Boston Scientific: Pacemaker system specification. Technical report (2007)
5. Cansell, D., Méry, D.: Logics of Specification Languages, pp. 33–140. Springer, Heidelberg (2007); see [3]
6. Hoare, C.A.R., Misra, J., Leavens, G.T., Shankar, N.: The verified software initiative: A manifesto. ACM Comput. Surv. 41(4), 1–8 (2009)
7. IEC: IEC functional safety and IEC 61508: Working draft on functional safety of electrical/electronic/programmable electronic safety-related systems (2005)
8. Writing Committee Members, Epstein, A.E., DiMarco, J.P., Ellenbogen, K.A., Estes III, N.A.M., Freedman, R.A., Gettes, L.S., Gillinov, A.M., Gregoratos, G., Hammill, S.C., Hayes, D.L., Hlatky, M.A., Newby, L.K., Page, R.L., Schoenfeld, M.H., Silka, M.J., Stevenson, L.W., Sweeney, M.O.: ACC/AHA/HRS 2008 Guidelines for Device-Based Therapy of Cardiac Rhythm Abnormalities: Executive Summary: A Report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Writing Committee to Revise the ACC/AHA/NASPE 2002 Guideline Update for Implantation of Cardiac Pacemakers and Antiarrhythmia Devices): Developed in Collaboration With the American Association for Thoracic Surgery and Society of Thoracic Surgeons. Circulation 117(21), 2820–2840 (2008)
9. Méry, D., Singh, N.K.: Pacemaker's Functional Behaviors in Event-B. Research Report (2009), http://hal.inria.fr/inria-00419973/en/
10. MIT-BIH Database Distribution and Software, http://ecg.mit.edu/index.html
11. Ponsard, C., Massonet, P., Rifaut, A., Molderez, J.F., van Lamsweerde, A., Tran Van, H.: Early verification and validation of mission critical systems. Electronic Notes in Theoretical Computer Science 133, 237–254 (2005); Proceedings of the Ninth International Workshop on Formal Methods for Industrial Critical Systems (FMICS 2004)
12. Project RODIN: Rigorous open development environment for complex systems, 2004–2007 (2004), http://rodin-b-sharp.sourceforge.net/
13. Quiones, M.A., Tornes, F., Fayad, Y., Zayas, R., Castro, J., Barbetta, A., Di Gregorio, F.: Rate-Responsive Pacing Controlled by the TVI Sensor in the Treatment of Sick Sinus Syndrome. Springer, Heidelberg (2006)
14. Reinhardt, R., Dowd, S.: Adobe Flash CS3 Professional Bible, p. 1232. Wiley, Chichester (2007)
15. Servat, T.: BRAMA: A New Graphic Animation Tool for B Models. In: Julliand, J., Kouchnarenko, O. (eds.) B 2007. LNCS, vol. 4355, pp. 274–276. Springer, Heidelberg (2006)
16. Tran Van, H., van Lamsweerde, A., Massonet, P., Ponsard, C.: Goal-oriented requirements animation. In: IEEE International Conference on Requirements Engineering, pp. 218–228 (2004)
17. Woodcock, J., Banach, R.: The verification grand challenge. J. UCS 13(5), 661–668 (2007)
Using Simulink Design Verifier for Proving Behavioral Properties on a Complex Safety Critical System in the Ground Transportation Domain J.-F. Etienne, S. Fechter, and E. Juppeaux
Abstract. We report on our experience in using Simulink Design Verifier for the verification and validation of a safety-critical function. The case study concerns the train tracking function of an automatic train protection system (ATP). We show how this function is formalized in Simulink and present the various proof strategies devised to prove the correctness of the model w.r.t. high-level safety properties. The purpose of these strategies is to provide a degree of control over time/memory consumption during proof construction, thus avoiding the state space explosion problem.
1 Introduction

In this paper, we present our experience in using Simulink Design Verifier for the verification and validation of a safety-critical function in the railway transportation domain. The case study concerns the train tracking function for an automatic train protection system (ATP). The requirements specification document was provided by Thales (the system designer), where the safety and functional properties were written in natural language. At Safe-River, a mathematical referential (of about 105 predicates/functions) was first defined in order to eliminate any ambiguity or imprecision in the informal definitions and to properly formalize the set of properties to be proved. Based on this referential, a Simulink model was afterwards derived in such a way as to ensure traceability with the mathematical definitions. The choice of Simulink [8] as modeling language is mainly justified by the fact that it allowed us to obtain an executable specification that is close to the code being embedded. The purpose of the executable specification is to: establish the feasibility of the system via simulation; and determine the correctness of the model with Simulink Design Verifier (DV) [6]. Moreover, DV provides the possibility
to easily analyze significant and low-level implementation details that would generally require a cumbersome effort in proof assistant systems such as Coq [11] and B [2]. For instance, the counterexample feature was very useful for debugging purposes and allowed us to detect omissions/contradictions in the mathematical specification. For this case study, we had to address various modeling challenges. One important issue was to determine whether the high-level safety properties could be expressed seamlessly in the Simulink model without introducing any disparity with the mathematical referential. Moreover, it was also essential to guarantee that all proofs are carried out in a systematic and mechanical manner. In essence, all the proof outlines have to be validated with DV such that there are no manually proved intermediate lemmas left behind. Another issue was to ensure the compatibility of the Simulink model with DV. In particular, we had to determine whether: the data representations chosen do not violate the synchronous hypothesis; and the primitive operators used are supported by DV and do not introduce any overhead in proof mode. Finally, due to the complexity of the model designed in Simulink, we also had to define a set of proof strategies to keep a certain control over the time/memory consumption during proof construction. The paper is organized as follows: an overview of the Simulink modeling language is given in Section 2; our case study is introduced in Section 3; Section 4 describes how the train tracking function was formalized in Simulink; Section 5 outlines the various proof strategies employed to counteract the state space explosion problem; finally, Sections 6 and 7 conclude on the results obtained.
2 MATLAB Environment

The MATLAB environment [7], developed by The MathWorks, provides a wide variety of tools that can be applied at the different phases of the development life cycle. For our case study, we essentially made use of Simulink [8] and Embedded MATLAB for modeling purposes. The verification and validation process was performed using Simulink Design Verifier [6]. These are presented in the subsequent subsections.
2.1 Simulink/Embedded MATLAB

Simulink is a synchronous data flow language, whose particularity is that it is graphical. Data flows are synchronized based upon a unique global clock. Each data flow is also typed (e.g. integers, booleans, unsigned integers, ...) and new data flows may be derived from existing ones using primitive constructs called blocks. Simulink provides different types of primitive blocks, namely: arithmetic, boolean and relational blocks; control flow blocks such as loops and conditionals; and temporal blocks based on the unit delay block, which is reminiscent of the pre operator of the Lustre [3, 5] language. The unit delay may also be used to specify temporal loops. An example of such a loop is given in Figure 1, where the output
data flow (Out1) at instant t_i is the sum of the input data flow (In1) at t_i and the output data flow (Out1) at t_{i-1}. The unit delay block is initialized to 0 at t_0.

Fig. 1 Temporal loop in Simulink

An equivalent Lustre code is the following,

node sum_loop(in1: int) returns (out1: int);
let
  out1 = (0 -> pre out1) + in1;
tel
where -> is used to initialize the output data flow properly. Embedded MATLAB (EML) is a high-level programming language, which may be considered as equivalent to Simulink. With the exception of the unit delay, each primitive Simulink block has a corresponding construct in EML. The behavior of the unit delay block can be coded in EML using persistent variables. In an EML function, a variable declared as persistent retains its assigned value from the previous function call. The temporal loop example (see Figure 1) can be specified in EML as follows,

function out1 = sum_loop(in1)
  persistent sum;
  if isempty(sum)
    sum = int32(0);
  end
  out1 = sum + in1;
  sum = out1;
end
where predicate isempty is used to initialize the sum variable when function sum_loop is first called. In EML, for loops cannot be of variable size due to the synchronous hypothesis. Finally, it is also possible to reference an EML function within a Simulink model via a user-defined function block.
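Because loop bounds must be static, a "variable-length" computation over a zero-padded vector of fixed size is typically written as a full loop with a guard on every step. The following hedged sketch illustrates this pattern; the function name and the fixed maximum of five elements are our own assumptions, not taken from the case study.

% Illustrative pattern only: sum the first n entries of a zero-padded
% vector of fixed size NMAX, keeping the loop bound static.
function total = padded_sum(values, n)    % values: 1xNMAX, n <= NMAX
  NMAX  = 5;                              % assumed fixed maximum size
  total = int32(0);
  for i = 1:NMAX
    if i <= n
      total = total + int32(values(i));
    end
  end
end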
2.2 Simulink Design Verifier

The Simulink Design Verifier (DV) software is a plug-in to Prover [1], which is a formal verification tool that performs reachability analysis using a combination of Bounded Model Checking (BMC) [4] and induction based on the K-Induction rule (see [1, 9] for more details). BMC is used as a refutation technique, while the K-Induction rule is applied for invariant satisfaction. The satisfiability of each
reachable state is determined by a SAT solver [10]. With DV, one can generate tests for Simulink and EML models according to model coverage and user-defined objectives. The underlying Prover engine also allows the formal verification of properties for a given model. Any failed proof attempt ends in the generation of a counterexample, which represents an execution path to an invalid state. A harness model is also generated to exploit the counterexample for debugging purposes.
Fig. 2 General proof outline in Simulink
In Simulink, a proof objective is generally specified as illustrated in Figure 2. We have a function F for which we would like to prove a certain property P. As shown in Figure 2, the output of function F is specified as input to block P. Property P is a predicate, which should always return true when the hypotheses H set on the input data flows of the model are satisfied. P is therefore connected to an «Assertion» block, while H is connected to a «Proof Assumption» block. Whenever an «Assertion» block is used, DV attempts to verify whether its specified input data flow is always true. «Proof Assumption» blocks serve to constrain the input data flows of the model during proof construction. They can also be used to specify intermediate lemmas. As of the R2008b version of MATLAB, the directives sldv.assume and sldv.prove are available in EML to respectively specify assumptions and proof objectives.
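For example, a minimal EML proof objective in the spirit of Figure 2 might look as follows. The function and property are our own toy example (a saturation operator), not taken from the case study; only the sldv.assume and sldv.prove directives are part of DV.

% Toy example: F is a saturation operator, H constrains its inputs and
% P asserts that the output stays within the bounds.
function proof_saturate(x, lo, hi)
  sldv.assume(lo <= hi);                 % hypothesis H on the input flows
  y = min(max(x, lo), hi);               % function F under verification
  sldv.prove(y >= lo && y <= hi);        % property P checked by DV
end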
3 Case Study

Our case study concerns a train tracking function for an automatic train protection system (ATP) developed by Thales. The target system is a subway line located in Paris (France), which is considered to be even more complex than the "METEOR" system (a fully automated subway line in Paris). On the one hand, it has to manage fully automated trains communicating via a dedicated network (CBTC, Communication Based Train Control). On the other hand, it has to handle a traditional signaling system for non-automated and non-communicating trains. The main purpose of the train tracking function is to assign limits of movement authority (LMA) to automated trains. An LMA is basically a location area in which
a train is authorized to circulate. It is computed with respect to all potential obstacles present on the railway track. An obstacle may be a red signal, a non-commuted railway switch, the location area of another train or even an LMA assigned to another train. Therefore, an LMA allocated to an automated train must always be limited by an obstacle (if applicable), modulo a certain security distance. The complexity of the train tracking function is mainly due to several factors. First of all, the system is highly dependent on the topology of the track and the number of trains/trackside equipments. For instance, the tracking function must be able to manage at least 32 trains simultaneously. Another influencing factor is that the tracking function has to interact asynchronously with its environment. Hence, there is a need to handle network delays when processing the location points received from communicating trains. The distributed nature of the tracking is another aspect that accounts for more complexity. In fact, the track is decomposed into manageable tracking sections and the workload is dispatched among these sections. A handover is performed as and when trains move from one section to another. Finally, the complexity of the train tracking function also resides in the fact that it is classified at the highest safety integrity level (SIL 4) w.r.t. the EN50128 standard. Therefore, there is a need to provide evidence of the amount of risk reduction employed to ensure that no undesired event may occur. The compliance process is therefore established via formal verification, where the acceptance level is assessed according to the following categories of properties:
• Liveness properties, which ensure the availability of the system. E.g., a train always receives an optimal LMA under nominal operating conditions.
• Soundness properties, which guarantee that the system is well specified. E.g., the LMA assigned to a given train always satisfies the maximal movement distance authorized.
• Safety properties, which provide evidence that no undesired event may occur. E.g., the LMA assigned to a given train is always limited by an obstacle (if applicable) modulo the security distance.
• Temporal properties, which ensure that the system remains consistent from one application cycle to another. E.g., the order between trains is preserved from one application cycle to another.
4 Formalization

The model designed with Simulink is quite substantial. It is composed of about 242 proprietary basic blocks for custom-made operators, of 60 «Assertion» blocks for the proved properties, and of the «Proof Assumption» blocks necessary for the hypotheses and intermediate lemmas required during the formal verification process. The model complexity is exposed in the following subsections, where we mainly focus on the data representation choices made to translate the mathematical referential into an executable specification.
4.1 Data Representation

In the referential, various mathematical principles are introduced to formulate in an unambiguous way the different concepts relative to the train tracking function. We here show how these concepts are coded in the Simulink model.
Oriented graph. The topology of the railway track is formally specified as an oriented graph. In Simulink, the oriented graph is defined as a combination of scalars and vectors of unsigned integers. The basic definition may be denoted as G = (Vω, Sω, Hω, Tω), where Vω represents the number of vertices (or nodes) and Sω the number of arcs (or directed edges). To stick to the railway jargon, arcs are referred to as segments. Each vertex v of the graph is represented by a strictly positive integer s.t. v ≤ Vω. Similarly, each segment a is a strictly positive integer s.t. a ≤ Sω. Hω and Tω are vectors of size Sω and are indexed according to the segment identifier a. In fact, each segment a is associated to an ordered pair (x, y) of vertices and is considered to be directed from x to y (where x, y ≤ Vω). As such, x is called the tail and y is called the head of the segment. Vectors Hω and Tω are therefore used to store the set of ordered pairs (xa, ya) s.t. Hω = [y1, y2, ..., yn], Tω = [x1, x2, ..., xn] and n = Sω. We write Hω(a) and Tω(a) to respectively denote the head and tail of segment a ∈ [1 .. Sω]. Oriented graph G is also subject to the following well-formedness constraints: G must not contain loop transitions (i.e., a segment which starts and ends on the same vertex); cycles of less than two vertices are not allowed; a vertex can be connected to at most three segments. This last restriction allows railway switches to be modeled correctly. In fact, a commutation point is modeled as a vertex incident to three segments. We therefore have a triple of segments (p, hl, hr) ∈ [1 .. Sω] × [1 .. Sω] × [1 .. Sω], where p is referred to as the point of the switch, while hl and hr are the left-hand heel and right-hand heel respectively. The definition of graph G is extended with: SWω to represent the number of railway switches on the track; and vectors SWP, SWHL and SWHR to store the set of triples (pi, hli, hri) s.t. SWP = [p1, ..., pn], SWHL = [hl1, ..., hln], SWHR = [hr1, ..., hrn] and n = SWω. We write SWP(s), SWHL(s) and SWHR(s) to respectively denote the point, left-hand heel and right-hand heel of a railway switch s ∈ [1 .. SWω].
Undirected path. Undirected paths are mainly used to designate the different possible routes that can be followed by a given train on the track. An undirected path is basically a path in which the segments are not all oriented in the same direction. In the Simulink model, an undirected path (or path for short) of n segments is represented by an unsigned integer vector of fixed size Nseg denoted as Ps, where n ≤ Nseg. In fact, vectors of variable size are not supported in DV due to the synchronous hypothesis. The fixed size Nseg is determined according to certain operational criteria. Hence, Ps(1), ..., Ps(n) corresponds to a sequence of adjacent segments, and Ps(n + 1), ..., Ps(Nseg) are padded with zero whenever n < Nseg. Moreover, each segment of path Ps is assigned an orientation to reflect the direction of movement. As such, an additional unsigned integer vector Po of fixed size Nseg is required. Each
orientation Po(i) is determined according to whether segment Ps(i) is oriented in the same direction as the path: Po(i) = 1 if Ps(i) is in the same direction; Po(i) = 2 if Ps(i) is in the opposite direction; Po(i) = 0 if i > n and i ≤ Nseg. The given set Ω = {0, 1, 2} is used to denote the domain values for a given orientation o. A path is therefore modeled as a pair (Ps, Po) and the corresponding given set is denoted as Ppath. An undirected path is also limited by the fact that adjacent segments cannot correspond to the left-hand and right-hand heels of the same switch simultaneously. In particular, a train cannot traverse a railway switch from the left-hand heel to the right-hand heel (or vice versa). To represent the commutation status of each railway switch, two boolean vectors KHL and KHR of size SWω are introduced s.t. KHL(s) and KHR(s) denote the status of the left-hand and right-hand heels of switch s ∈ [1 .. SWω] respectively. The set of all possible routes for graph G may therefore change from one application cycle to another.
Point on segment. As its name suggests, a point on segment describes a location on the graph according to a given segment. A point on segment is modeled as a triple (d, a, o) ∈ N × Sω × Ω, where d denotes an abscissa, a the segment of reference and o an orientation. The tail vertex of segment a is referenced as origin to determine the positioning of abscissa d s.t. d ≤ Lgω(a). In the Simulink model, vector Lgω of size Sω is introduced to store the length (i.e., between 15 000 and 100 000 cm) associated to each segment of graph G. We write Lgω(a) to denote the length of segment a. The given set of all points on segment is defined as Pseg ⊆ N × Sω × Ω.
Location area. The notion of location area is introduced to represent where signaling lights and trains are located on the track. Location areas are also used to represent the LMAs assigned to each automated train. As illustrated in Figure 3, a location area is a path limited by two abscissas that mark the beginning and end of the area. In the Simulink model, a location area is therefore designed as a triple (P, Abss, Abse) ∈ Ppath × N × N, where P denotes the associated path. The unsigned integer scalars Abss and Abse specify the corresponding abscissas such that the beginning and end of the area are characterized by (Abss, Ps(1), Po(1)) ∈ Pseg and (Abse, Ps(n), Po(n)) ∈ Pseg respectively (where Ps = Π1(P), Po = Π2(P) and n is the number of segments in path Ps)³. For the location area shown in Figure 3, Ps = [1, 2, 3, 0, 0], Po = [1, 1, 1, 0, 0], Abss = 1000 cm and Abse = 5500 cm. We here assume that the longest path can contain at most five segments. The given set for location areas is noted as Larea.
Signaling light. A signaling light is modeled as a pair (l, b) ∈ Larea × B, where l denotes a small location area and b a boolean value that determines whether it is in a permissive (i.e., green signal) or a restrictive (i.e., red signal) state. We write Ssig ⊆ Larea × B to denote the given set for signaling lights.
³ Notation Πi(T) is used to access element i of a given tuple T.
Train. Finally, a train is defined as a tuple (l, vs, bd, ed) ∈ Larea × N × B × N, where l denotes a location area, vs a given speed (in cm·s−1), bd a boolean value determining the direction of movement (i.e., forward or backward), and ed a message latency (in application cycles) representing the aging factor associated to the information received from the train. The notation Ttrain ⊆ Larea × N × B × N is used to denote the given set for trains.
Fig. 3 Location area
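To make these vector encodings concrete, the following fragment sketches a deliberately tiny track of our own invention (not the actual topology of the target line): four vertices, three segments meeting at one railway switch, and a short location area over a zero-padded path of assumed maximum size Nseg = 5. All names and values below are illustrative assumptions.

% Toy encoding for illustration only.
V_omega  = 4;  S_omega = 3;              % vertices and segments
H_omega  = [2 3 4];                      % head vertex of each segment
T_omega  = [1 2 2];                      % tail vertex of each segment
Lg_omega = [20000 30000 25000];          % segment lengths (cm)

SW_omega = 1;                            % one railway switch at vertex 2
SW_P  = 1;  SW_HL = 2;  SW_HR = 3;       % point, left-hand and right-hand heels
K_HL  = true;  K_HR = false;             % commutation status of the heels

Ps    = [1 2 0 0 0];                     % path over segments 1 and 2, zero-padded
Po    = [1 1 0 0 0];                     % both traversed in their own direction
Abs_s = 1000;  Abs_e = 5500;             % abscissas delimiting the location area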
4.2 Operators

Based on the concepts introduced in the previous section, the train tracking can essentially be seen as a set of basic operators for resolving graph problems. These operators mainly work on the different data structures defined in Section 4.1. In particular, they allow to compute new location areas/points on segment, to infer a certain number of characteristics associated to these data structures (e.g., the length of a location area) and to provide predicates that can be used when evaluating properties (e.g., well-formedness constraints). For instance, the following is a non-exhaustive list of the operators coded in Simulink/EML:
• LA_Inter(l1, l2): determines whether location areas l1 and l2 intersect.
• LA_Direction(l1, l2): determines whether location area l1 is in the same direction as location area l2.
• Pseg_shift(e, d, KHR, KHL): translates point on segment e according to distance d in the direction pointed by Π3(e). The translation is also performed w.r.t. the status of the railway switches on the track.
• LA_Coherence(l): determines whether location area l is well-formed.
• LA_Comm(l): determines whether location area l only traverses commuted railway switches (if any).
In the Simulink model, only discrete state operators were used to code all the functions necessary for the train tracking system. «Selector» and «Assignment» blocks were extensively used in conjunction with «For-Iterator» blocks to efficiently manipulate vectors/matrices of relatively large size. Conditional blocks such as «If-Then-Else», «Merge» and «Switch» were generally applied to exert a certain control flow in basic operators. The use of «Delay» blocks and temporal loops allowed us to properly model the temporal features inherent to train tracking.
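As a hedged sketch of the flavour of these operators — a simplification of ours, not the actual LA_Inter implementation — an intersection test restricted to shared segment identifiers could be written as follows; the real operator must also compare abscissas on boundary segments.

% Simplified intersection test: two location areas are taken to intersect
% when their zero-padded segment vectors share a segment identifier.
function b = la_inter_simplified(Ps1, Ps2)
  NSEG = 5;                              % assumed fixed maximum path size
  b = false;
  for i = 1:NSEG
    for j = 1:NSEG
      if Ps1(i) ~= 0 && Ps1(i) == Ps2(j)
        b = true;
      end
    end
  end
end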
4.3 Properties

In the Simulink model, each property is either modeled as a Simulink block or an EML function. The logical connectors are coded by their equivalent Simulink/EML constructs. Universal quantifiers are translated into the input data flows of the model when the property to be verified is entirely within the scope of the quantified variables. In fact, for a given input data flow, DV derives its corresponding domain values based on its type, modulo any constraint specified via «Proof Assumption» blocks. However, when the scope of the universal quantifiers only covers some premises of the goal to be proved, a specific coding pattern is required. For instance, a loop may be necessary to ensure that all the domain values of the quantified variable satisfy the corresponding hypothesis. In such a case, the domain must be finite. Similarly, existential quantifiers also require the use of loops to be handled properly; the idea is to extract a witness that satisfies the given proposition.

Example 1 (Safety property). An example of a high-level property proved on the Simulink model is the following:

( ∀ t0 ∈ Ttrain, ∀ l ∈ Larea,
    l = Π1(t0) =⇒ LA_Coherence(l) ∧ LA_Comm(l) )                               (Hyp 1)
=⇒
∀ t1 ∈ Ttrain, ∀ l1, lLMA ∈ Larea,
  l1 = Π1(t1) =⇒
  ( ∀ t2 ∈ Ttrain, ∀ l2 ∈ Larea,
      t1 ≠ t2 ∧ l2 = Π1(t2) =⇒ ¬LA_Inter(l1, l2) )                             (Hyp 2)
  =⇒
  ( ∀ s ∈ Ssig, ∀ ls ∈ Larea,
      ls = Π1(s) =⇒ Π2(s) ∨ ¬LA_Direction(ls, l1) ∨ ¬LA_Inter(ls, l1) )        (Hyp 3)
  =⇒
  lLMA = Train_LMA(t1) =⇒ ∀ lOBS ∈ Obsω(t1), ¬LA_Inter(lOBS, lLMA)

where Obsω(t1) is defined as follows:

Obsω(t1) ≡ { l ∈ Larea | ( ∃ t2 ∈ Ttrain, t2 > t1 ∧ Π1(t2) = l )
                        ∨ ( ∃ t2 ∈ Ttrain, t2 < t1 ∧ Train_LMA(t2) = l )
                        ∨ ( ∃ s ∈ Ssig, Π1(s) = l ∧ ¬Π2(s) ∧ LA_Direction(Train_LMA(t1), l) ) }
This safety property states that there is no obstacle present in an LMA assigned to an automated train. Hypothesis 1 specifies that the location area of each train is well-formed (LA_Coherence) and only traverses commuted railway switches (LA_Comm). Hypothesis 2 states that the location areas of all trains do not intersect (no collision). Hypothesis 3 makes the assumption that each train always stops at a red
signaling light. These hypotheses describe the nominal working conditions on the track. The safety property was modeled "as is" in Simulink.
5 Proof Methodology

For a relatively simple model, the proof construction process may be instantaneous with DV. However, as the model complexity increases, a brute-force approach to property proving generally ends up in excessive memory consumption and thus in an undecidable result. As exposed in Section 4, the train tracking model is very complex. Hence, in order to prove its correctness with respect to high-level properties, we devised a set of proof techniques to tackle the state space explosion problem.
Model Optimization. One way to reduce the computational complexity during proof construction is the use of precomputed constants in the model. These precomputed constants have to satisfy a set of well-formedness properties, which are proved once and for all. For instance, function Pseg_shift is supposed to translate a point on segment e by a certain distance d (see Section 4.2) w.r.t. the status of the railway switches on the track. To prove the correctness of this function, one would like to determine whether there exists a valid path between e and the newly computed point. In doing so, DV attempts to build all the possible paths according to the different execution states characterized by Pseg_shift. To avoid these calculations, all the possible valid paths of size ≤ Nseg for graph G can be precomputed first. With an appropriate indexing mechanism, a valid path between two segments can afterwards be retrieved easily without much effort.
Property Decomposition. Another approach to tackle the state space explosion problem is to decompose the high-level property at hand into a hierarchy of lower-level lemmas. The decomposition has to be performed according to how the model under scrutiny is structurally organized. Hence, less complex intermediate lemmas are proved first. These are afterwards specified as assumptions when proving the higher-level ones, until the root property is completely inferred. The following code excerpt shows how a property decomposition can be specified in EML.

function proof_objective(Ps, Po, Abs_s, Abs_e, K_hl, K_hr)
  goal = lemme1(Ps, Po, Abs_s, Abs_e, K_hl, K_hr);
  hyp1 = lemme1_1(Ps, Po, Abs_s, Abs_e, K_hl, K_hr);
  hyp2 = lemme1_2(Ps, Po, Abs_s, Abs_e, K_hl, K_hr);
  ...
  sldv.assume(hyp1);
  sldv.assume(hyp2);
  sldv.prove(goal);
end
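Returning to the model-optimization strategy above, the precomputed constants can be pictured as lookup tables baked into the model. The following is a hedged sketch of such an indexing mechanism; the table layout and function name are our own assumptions, not the scheme actually used in the case study.

% Illustrative only: PATHS(i, j, :) holds a precomputed, zero-padded valid
% path from segment i to segment j (all zeros if no such path exists), so
% that operators retrieve paths by indexing instead of searching the graph
% during proof construction.
function p = lookup_path(PATHS, from_seg, to_seg)
  p = squeeze(PATHS(from_seg, to_seg, :))';   % 1 x Nseg row vector
end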
Proof by induction. The induction principle can be used as an alternative to identify additional intermediate lemmas, whenever the above methods are revealed to be insufficient for a given proof. For a given property ∀ n ∈ N, P(n), a proof
by induction entails proving a base case P(0) and an inductive case of the form ∀ i ∈ N, P(i) ⇒ P(i + 1). In Simulink, we therefore have two distinct models: one for the base case and one for the inductive case. These are illustrated in Figure 4. Finally, the induction principle can be applied not only to data (e.g., trains, switches, segments, length of path, ...) but also to the model cycle.

Fig. 4 Base and inductive cases
6 Results Obtained

Simulink model. The model developed in Simulink does not correspond to an abstract specification, but to a concrete executable implementation of the mathematical referential. It is generic in the sense that it can accommodate any type of railway track that satisfies the well-formedness constraints prescribed in Section 4.1.
Proved properties. About 60 high-level properties were proved on the Simulink model. Some determine the correctness of the model, while others guarantee that the safety of the tracking system is preserved in nominal working conditions. They are considered to be generic w.r.t. the railway track under exploitation, as they were derived for any given train and according to any given working condition.
Seamless proof integration process. Based on the different proof strategies presented in Section 5, we were able to express all the high-level properties formalized in the mathematical referential "as is" in the Simulink model. This has allowed us to properly assess the conformity of the model with the specification. Moreover, by combining the different proof strategies adopted, we put in place a fully automated proof integration process. In particular, by ensuring that all intermediate lemmas had to be fully satisfied before moving up to the next level of integration, we established a seamless proving process leading to the verification of the whole model.
Proof construction. The proofs carried out on the Simulink model allowed us to identify certain inconsistencies and omissions in the mathematical referential. These were not detected during an intensive simulation performed on the model for 100 000 cycles. The counterexamples generated by DV were also used to detect bugs introduced during the design phase (e.g., overflow on arithmetic operations, out-of-bound exceptions, ...). These bugs would normally have been detected downstream in the development life cycle with static analysis.
7 Conclusion

Our experience using Simulink Design Verifier for the verification and validation of the train tracking function is very positive. In particular, we intensively made use of the Bounded Model Checking and K-Induction features of the Prover engine to establish the satisfiability of our proof objectives. These mechanisms were crucial when dealing with very large state spaces. The compatibility of the Simulink model with DV could be achieved quite easily. We identified a set of design and coding rules, which excludes certain modeling patterns that could impede the proving process through the introduction of unnecessary overheads. The counterexample feature has also proved to be very efficient and very interesting for detecting errors or even omissions in the specification. Finally, an incremental verification approach using DV is easily achievable. For each refinement step, additional proof objectives are introduced for the new execution contexts. The non-regression of the model is determined via the introduction of equivalence proofs for the common execution contexts between refinement steps. DV appears to be well-suited for this type of proof objective, since we are not constrained to explicitly prove the equivalence as one would normally do with proof assistant systems such as Coq or B.
References
1. Abdulla, P.A., Deneux, J., Stålmarck, G., Ågren, H., Åkerlund, O.: Designing safe, reliable systems using SCADE. In: Margaria, T., Steffen, B. (eds.) ISoLA 2004. LNCS, vol. 4313, pp. 115–129. Springer, Heidelberg (2006)
2. Abrial, J.R.: The B Book, Assigning Programs to Meanings. Cambridge University Press, Cambridge (1996)
3. Caspi, P., Pilaud, D., Halbwachs, N., Place, J.: Lustre: a declarative language for programming synchronous systems. In: ACM Symp. on Princ. of Prog. Langs., POPL 1987 (1987)
4. Clarke, E.M., Biere, A., Raimi, R., Zhu, Y.: Bounded model checking using satisfiability solving. Formal Methods in System Design 19 (2001)
5. Halbwachs, N., Caspi, P., Raymond, P., Pilaud, D.: The synchronous dataflow programming language Lustre. Proceedings of the IEEE 79(9), 1305–1320 (1991)
6. MathWorks, Simulink Design Verifier, http://www.mathworks.com/products/sldesignverifier/
7. MathWorks, Stateflow, http://www.mathworks.com/products/stateflow/
8. MathWorks, Simulink, http://www.mathworks.com/products/simulink/
9. Sheeran, M., Singh, S., Stålmarck, G.: Checking safety properties using induction and a SAT-solver. In: Johnson, S.D., Hunt Jr., W.A. (eds.) FMCAD 2000. LNCS, vol. 1954, pp. 108–125. Springer, Heidelberg (2000)
10. Sheeran, M., Stålmarck, G.: A tutorial on Stålmarck's proof procedure for propositional logic. In: Gopalakrishnan, G.C., Windley, P. (eds.) FMCAD 1998. LNCS, vol. 1522, pp. 82–99. Springer, Heidelberg (1998)
11. The Coq Development Team: Coq, version 8.2. INRIA (February 2009), http://coq.inria.fr/
SmART: An Application Reconfiguration Framework Hervé Paulino, João André Martins, João Lourenço, and Nuno Duro
Abstract. SmART (Smart Application Reconfiguration Tool) is a framework for the automatic configuration of systems and applications. The tool implements an application configuration workflow that resorts to the similarities between configuration files (i.e., patterns such as parameters, comments and blocks) to allow a syntax independent manipulation and transformation of system and application configuration files. Without compromising its generality, SmART targets virtualized IT infrastructures, configuring virtual appliances and its applications. SmART reduces the time required to (re)configure a set of applications by automating time-consuming steps of the process, independently of the nature of the application to be configured. Industrial experimentation and utilization of SmART show that the framework is able to correctly transform a large amount of configuration files into a generic syntax and back to their original syntax. They also show that the elapsed time in that process is adequate to what would be expected of an interactive tool. SmART is currently being integrated into the VIRTU bundle, whose trial version is available for download from the project’s web page. Keywords: Automatic configuration, Virtualization, Virtual appliance. Hervé Paulino · João Lourenço CITI / Departamento de Informática, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal e-mail: {herve,Joao.Lourenco}@di.fct.unl.pt João André Martins Departamento de Informática, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal e-mail: [email protected] Nuno Duro Evolve Space Solutions, Centro de Incubação e Desenvolvimento, Lispolis, Estrada Paço do Lumiar, Lote 1, 1600-546 Lisboa, Portugal e-mail: [email protected]
1 Introduction

Virtualization techniques and technologies have been around for quite some time in mainframe environments. However, only more recently, with the advent of low-cost multi-core processors with support for hardware virtualization and wider operating system support, has the use of virtualization been spreading at a fast pace. This widespread use raises several issues on the automation of the configuration and deployment of applications to be executed in Virtual Appliances (VA – customized virtual machine images). Anticipating some of these issues, a consortium including both industry and academic/research partners is currently tackling the automation of the configuration and deployment of applications in VAs in the scope of the VIRTU project [3]. The consortium involves Evolve Space Solutions, the European Space Agency, Universidade Nova de Lisboa, and Universidade de Coimbra. The goal of VIRTU is to enable on-demand configuration and deployment of Virtual Machines (VMs) and applications, independently of the vendor, enabling virtual infrastructure management for the IT industry and for the experimentation/testing of complex systems [2]. The configuration of VMs at the application level (fine control over the installed applications) provides the means for the fine-grained creation and provisioning of configurable virtualized application stacks. One of the challenges addressed by the VIRTU project is to enable the configuration of such application stacks in large-scale virtualized environments. This process must be scalable, automatic and executed with no administrator intervention, goals that conflict with the flexibility required by user-customized installations. Applications are usually parameterizable through configuration files, whose basic concepts crosscut most applications. The absence of an adopted standard representation makes the systemic configuration of computer systems a hard and time-consuming task. Automatic configuration can be achieved by processing the system and application configuration file(s) and applying a set of regular-expression-based search-and-replace operations. However, this approach has several limitations: i) it only applies to text-based configuration files; ii) it limits the scope of the changes to be applied to the scope of the search string; and iii) it assumes the configuration file keywords and syntax will not change in future versions of the application. In order to overcome these limitations, we propose the Smart Application Reconfiguration Framework (SmART), which automatically configures systems (including Virtual Machines) and application stacks (possibly inside Virtual Appliances), regardless of the application being configured. The SmART framework implements an application configuration workflow by recognizing the syntax and, to some extent, the semantics of a configuration file, and producing a structured, generic (application-independent) equivalent intermediate representation. SmART also embeds the syntax of the original configuration file into the generic intermediate representation. Configuration transformation scripts may operate over the intermediate (structured) representation, with or without administrator support, to safely generate a customized configuration. A generic component uses the embedded syntax information in the intermediate representation to
generate a new configuration file, equivalent to the original one but reflecting the applied customizations. SmART deals with the heterogeneity of configuration file formats by exploring the similarities between the configuration files of different applications. For example, text-based configuration files typically include parameter definitions, parameter blocks and comments. In this paper we limit our scope to text-based configuration files, not covering, for now, the processing of binary configuration files. The remainder of the document is structured as follows: Section 2 analyses the format and structure of the configuration files of widely used applications; Section 3 presents the SmART architecture and concrete implementation; Section 4 shows how we chose to evaluate the tool and reflects on the obtained results; Section 5 discusses the integration of SmART in the VIRTU project; Section 6 covers the related work; and, finally, Section 7 states our final conclusions on the work carried out.
2 An Analysis of Application Configuration Files

This section focuses on identifying patterns in the structure and contents of application text-based configuration files, in order to attain a uniform representation. We performed a comprehensive analysis of the configuration files of well-known and widely used applications, such as Apache, Eclipse, MySQL, PostgreSQL, GNUstep, and Mantis. The study was, for now, confined to open-source applications. As anticipated, the concrete syntax of configuration files tends to differ, but they resort to a limited number of concepts. In fact, only four distinct concepts were identified in all of the inspected files:
Parameter assignment – set the value of an application configuration parameter; Block – group configuration settings; Comment – explain the purpose of one or more lines of the file; Directive – denote commands or other directives, such as the inclusion of a file.
Our analysis also focused on the actual syntax used by the applications to express these concepts. Although no standard exists, we observed that some formats, such as INI [1] and XML [10], have emerged as community standards. Consequently, we were able to classify all studied applications under the following three categories: INI-based (Listing 1.): The syntax follows the INI format or similar. Assignments conform to the syntax “parameter separator value”, where value can be either a single value (line 4), a list of values, or even empty (lines 2 and 3). Blocks are explicitly initialized (line 1) but are implicitly terminated by the beginning of the next block, which discards block-nesting support. For instance, the block that begins at line 5 implicitly terminates the previous block that began at line 1. Comments are denoted by a special character (line 7). XML-based (Listing 2.): This category addresses applications that store their configurations as XML variants, encompassing syntaxes a little more permissive than
pure XML. This category has a broader scope than the INI-based one, since explicitly initialized and terminated blocks (lines 2 to 4) can nest other blocks.
Block-based (Listing 3): This category addresses a wider scope of formats, in which blocks are delimited by symmetric symbols, such as { } or ( ). The depicted example contains three such blocks: the first one is anonymous and envelops the entirety of the file; the second, NSGlobalDomain, comprises line 2; and the third, sogod, ranges from line 3 to 10.

Listing 1. MySQL configuration file snippet - INI-based example
1  [mysqldump]
2  quick
3  quote-names
4  max_allowed_packet = 16M
5  [isamchk]
6  key_buffer = 16M
7  # The MySQL database server configuration file.
8  !includedir /etc/mysql/conf.d/
Listing 2. Eclipse configuration file snippet - XML-based example
1  <workbenchAdvisor/>
2  <fastViewData fastViewLocation="1024">
3    <orientation view="org.eclipse.ui.views.ContentOutline" position="512"/>
4  </fastViewData>
Listing 3. GNUstep configuration file snippet - Block-based example
1   {
2   NSGlobalDomain = { };
3   sogod = {
4     NGUseUTF8AsURLEncoding = YES;
5     SOGoACLsSendEMailNotifications = NO;
6     SOGoAppointmentSendEMailNotifications = NO;
7     SOGoAuthenticationMethod = LDAP;
8     SOGoDefaultLanguage = English;
9     SOGoFoldersSendEMailNotifications = NO;
10  };
11  }
The conclusions drawn from our analysis support our premises: the basic concepts of application configuration are crosscutting, and can thus be reduced to a structured generic representation, detached from the application specifics. This generic representation can then be modified systemically and, once altered, be converted back to the original syntax, reflecting the applied modifications.
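As an illustration, the four crosscutting concepts can be captured by a very small class hierarchy. The sketch below is only one possible rendering of such a generic representation, not the actual SmART data model.

import java.util.ArrayList;
import java.util.List;

// A minimal, application-independent model of the four configuration concepts.
abstract class ConfigNode { }

class Comment extends ConfigNode {
    String text;
    Comment(String text) { this.text = text; }
}

class Directive extends ConfigNode {
    String command;
    Directive(String command) { this.command = command; }
}

class Parameter extends ConfigNode {
    String key;
    String value;                                   // may be empty, e.g. "quick" in Listing 1
    Parameter(String key, String value) { this.key = key; this.value = value; }
}

class Block extends ConfigNode {
    String name;
    List<ConfigNode> children = new ArrayList<>();  // nesting allowed (XML- and Block-based formats)
    Block(String name) { this.name = name; }
}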
3 An Application Reconfiguration Framework
This section presents SmART, a framework that builds on the idea of reducing configuration files to a generic (application-independent) intermediate representation, in order to provide scalable means to systematically apply configuration transformations, with or without administrator support, for IT infrastructure
administration and maintenance. We will address the framework's architecture, execution flow, extensibility concerns, concrete implementation, and produced output.
As depicted in Figure 1, the reconfiguration process is divided into five steps: i) identification (and extraction) of the configuration files to be customized; ii) translation of the configuration file from its original representation to a generic one (stage 1); iii) customization of the (generic) configuration file to reflect the desired configuration (stage 2); iv) translation of the generic configuration file back to its original representation (stage 3); and v) incorporation of the new configuration file into the application/system. Stages 1 to 3 are independent, requiring only the use of the same intermediate representation. Such a design enhances flexibility: for instance, file modification can be performed manually (e.g., with a graphic tool that displays all the editable settings) or automatically (e.g., by a script); several executions of stage 2 can be performed over a single output of stage 1; and stage 3 can convert a file back to its original representation with no extra information besides that included in the input file. A sketch of how these stages can be composed is given after Figure 1. The emphasis of this paper is on the systemization of the whole configuration process, namely on stages 1 and 3, covered in Sections 3.1 and 3.2, respectively.
Fig. 1 Configuration file transformation workflow in SmART (extraction of the configuration file in its original syntax, original to generic representation conversion (1), customization of the configuration file in structured format (2), generic to original representation conversion (3), and incorporation of the modified configuration file into the application files)
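The three stages can be seen as composable transformations. The following sketch uses hypothetical interfaces (they are not the SmART API) to show how the workflow of Figure 1 might be chained programmatically.

import java.nio.file.Path;

// Hypothetical stage interfaces mirroring the workflow of Figure 1.
interface O2GConverter {                 // stage 1
    GenericConfig toGeneric(Path originalFile) throws Exception;
}
interface Customizer {                   // stage 2 (manual tool or script)
    GenericConfig customize(GenericConfig generic);
}
interface G2OConverter {                 // stage 3
    void toOriginal(GenericConfig generic, Path newFile) throws Exception;
}
class GenericConfig { /* structured, application-independent representation */ }

class ReconfigurationPipeline {
    static void run(O2GConverter o2g, Customizer custom, G2OConverter g2o,
                    Path in, Path out) throws Exception {
        GenericConfig generic = o2g.toGeneric(in);      // original -> generic
        GenericConfig modified = custom.customize(generic);
        g2o.toOriginal(modified, out);                  // generic -> original
    }
}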
3.1 Original to Generic Representation (O2G)
The purpose of the O2G component is to convert configuration files from their original syntax into the structured generic intermediate representation. The converter must be equipped with a set of parsers capable of recognizing as many configuration files as possible. A given file is translated into an internal uniform representation that is dumped to a file, according to a concrete generic syntax. This syntax is implementation specific and no restrictions are imposed at the architectural level. Although our analysis revealed that three categories are sufficient to classify the totality of our case studies, it is clear that other formats may be used. Thus, for the sake of extensibility, new parsers can be added to the O2G, either directly, as long as
framework compliance is preserved (see Section 3.1.3), or by submitting a grammar that the O2G will use to produce the parser itself. Section 3.1.3 also addresses the concern of extending the concepts recognizable in a configuration file.
3.1.1 Architecture
Regarding its internals, the O2G comprises six components organized in a three-tier software architecture, as illustrated in Figure 2.

Fig. 2 Original to generic representation converter components (presentation layer: User Interface; logical layer: Configuration File Parser, Grammar Compiler, Code Generator; storage layer: Parser Repository, Tentative Grammar Repository)
The Parser Repository is the database that stores the parsers currently available to the framework. It provides the means to add, remove and update parsers. The Tentative Grammar Repository stores all the grammars defined by the user in the process of producing a new parser. The repository can be logically viewed as a tree, providing a simple way of iterating over previous attempts. The actual parser generation is performed by the Grammar Compiler, which resorts to an external parser generator (e.g., JavaCC [5]) to compile the grammar and, consequently, produce the new parser. The return value indicates whether the compilation was successful or not. The Configuration File Parser produces the abstract syntax tree (AST) of a given configuration file. The operation can be performed by all the available parsers or by a single given one. Each parsing attempt produces an output that includes the AST and statistics regarding the percentage of the file successfully parsed. Naturally, the all-parsers option produces a list of such results. The AST itself provides an internal uniform representation of a recognizable file. It is composed of nodes denoting the concepts identified in Section 2 (parameters, blocks, comments, and directives) or denoting new concepts introduced by the user. Its output to a file, according to a chosen concrete generic syntax, is performed by a specialized implementation of the Code Generator. Finally, the User Interface acts as an intermediary between the lower layers and the user, exposing all the framework's functionalities.
3.1.2 Execution Flow
Figure 3 illustrates the interactions between the O2G component modules. Each file received by the O2G is passed to the Configuration File Parser, which iterates over the Parser Repository to check whether at least one of the available parsers is able to perform a successful recognition. If so, the result of the
operation is made available to the user, in order to be validated. A positive evaluation will cause the AST to be translated into the generic representation via the Code Generator, whereas a negative one will force the Configuration File Parser to continue its iteration over the repository, until no more parsers are available. When none of the parsers is capable of completely recognizing a given file, the user will have to supply a new one to the framework. This can be achieved by directly importing an already existing parser, or by submitting a grammar to the O2G so that the parser can be internally generated by the framework. To ease the burden of this latter task, statistics are collected for each parsing attempt and the parser that performed best is made available to be used as a working base. The Grammar Compiler is the component responsible for producing a parser from a submitted grammar. The parser will be added to the Parser Repository if it fully recognizes the target file and its output is validated by the user. The whole parser creation process is assisted by the Tentative Grammar Repository, which keeps every submitted grammar, allowing for the rollback of modifications by iterating over the previous attempts. As soon as the parser is added to the Parser Repository, the Tentative Grammar Repository is cleared.

Fig. 3 Original to generic representation component interactions (User, User Interface, Configuration File Parser, Grammar Compiler with external Parser Generator, Code Generator, Parser Repository, Tentative Grammar Repository, application files/binaries, configuration files, and the configuration file in XML)
3.1.3 Extensibility
The import of existing parsers is disciplined by an interface that specifies the framework's compliance requirements, i.e., the Grammar Compiler/parser interaction protocol. This includes the format of the parsing output, which must be recognizable to SmART, i.e., comply with the existing AST node types (a direct mapping from the concepts the framework is able to recognize in a configuration file). The set of recognizable concepts can also be extended. Once again the compliance requirements are specified by an interface, which determines which data extracted from the source file is to be passed to the generic representation.
3.1.4 Implementation
A Java prototype of the SmART framework has been implemented and evaluated. The implementation required technology decisions concerning the concrete
syntax for the generic representation and the external parser generator. The choice fell, respectively, on XML [10], for its widespread use and support in mainstream programming languages, and JavaCC [5], for its functionality and ease of use.
A major design and implementation challenge was how to communicate the details of the original file's syntax to the Generic to Original converter (stage 3), providing it with the information required to later generate a valid customized configuration file from the generic representation. A clean and flexible solution is to encode this information in the XML file, along with the generic representation. For that purpose we defined two dedicated XML elements: Metadata and FStr. The first is present once, at the beginning of the file, and contains immutable information about each concept that the file exhibits. The second is a format string present at each concept instance, which specifies how that instance must be written in its original syntax. The string is composed of a list of placeholders that indicate where the actual data to be written is stored. The placeholder possibilities are:
%a.x – Print the value of attribute x from the current XML element;
%e – Print the value delimited by the next inner XML element;
%m.x – Print the value delimited by x within the metadata XML element storing information about the concept being processed;
%c – Recursively print the next inner XML element, using its format string;
%s – Print a blank space;
%n – Issue a new line.
We exemplify the produced XML output with the translation of part of the MySQL configuration file snippet of Listing 1. The generated metadata is presented in Listing 4, while Listings 5 and 6 contain, respectively, the translation of the block ranging from lines 1 to 4, and of the comment in line 7. Space restrictions do not permit the inclusion of complete real-life examples, but these can be found in [6].

Listing 4. Metadata
<Metadata>
  <Comment>
    <start>#</start>
  </Comment>
  <Parameter>
    <equal>=</equal>
  </Parameter>
  <Block>
    <start>[</start>
    <end>]</end>
  </Block>
</Metadata>
Listing 5. XML generic representation of a block
1  <Block name="mysqldump">
2    <FStr>%m.start%a.name%m.end%n%c%c%c%c</FStr>
3    <Parameter>
4      <FStr>%e%n</FStr>
5      <Key>quick</Key>
6    </Parameter>
7    <Parameter>
8      <FStr>%e%n</FStr>
9      <Key>quote-names</Key>
10   </Parameter>
11   <Parameter>
12     <FStr>%e%m.equal%e%n</FStr>
13     <Key>max_allowed_packet</Key> <Value>16M</Value>
14   </Parameter>
15 </Block>
Listing 6. XML generic representation of a comment
<Comment>
  <FStr>%m.start%e%n</FStr>
  <Text>The MySQL database server configuration file.</Text>
</Comment>
3.2 Generic to Original Representation (G2O)
G2O performs the operation complementary to O2G, by converting configuration files from their generic representation back into their original syntax. It is composed of a single component, which is able to reconstruct the configuration file by processing the metadata available in the generic representation. The module traverses the XML representation, outputting each concept according to the specified format string and metadata information. Observe the XML generic representation depicted in Listing 5. The output format string for the block (line 2) is "%m.start%a.name%m.end%n%c%c%c%c", with the following meanings:
• %m.start – inspect the metadata element holding information about the concept currently being processed (a Block) to retrieve the data delimited by start, i.e., [ ;
• %a.name – retrieve the value of the name attribute of the current concept, i.e., mysqldump;
• %m.end – retrieve ] from the metadata;
• %n – issue a new line;
• the %c placeholders – process the next inner blocks (all of type Parameter in this example) according to their own format strings (a minimal interpreter sketch follows).
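As an illustration of how the embedded metadata and format strings can drive the reconstruction, the following sketch interprets the placeholders over a DOM tree shaped like Listings 4-6. It is our own reading of the mechanism, not the SmART implementation, and it simply skips %e/%c placeholders for which no inner element remains.

import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of a G2O-style emitter: walks the generic XML representation and rebuilds
// the original syntax by interpreting each concept's format string (FStr).
class FormatStringEmitter {
    private static final Pattern PLACEHOLDER =
            Pattern.compile("%a\\.(\\w+)|%m\\.(\\w+)|%e|%c|%s|%n");

    static String emit(Element concept, Element metadata) {
        Element meta = firstChild(metadata, concept.getTagName()); // e.g. the Block metadata
        List<Element> inner = childElements(concept);
        String fstr = "";
        for (Element e : new ArrayList<>(inner)) {
            if (e.getTagName().equals("FStr")) { fstr = e.getTextContent(); inner.remove(e); }
        }
        StringBuilder out = new StringBuilder();
        int next = 0;                                   // cursor over the inner elements
        Matcher m = PLACEHOLDER.matcher(fstr);
        while (m.find()) {
            String token = m.group();
            if (m.group(1) != null) {                   // %a.x: attribute of this concept
                out.append(concept.getAttribute(m.group(1)));
            } else if (m.group(2) != null) {            // %m.x: value taken from the metadata
                out.append(firstChild(meta, m.group(2)).getTextContent());
            } else if (token.equals("%s")) {            // blank space
                out.append(' ');
            } else if (token.equals("%n")) {            // new line
                out.append('\n');
            } else if (next < inner.size()) {           // %e or %c: consume next inner element
                Element child = inner.get(next++);
                out.append(token.equals("%e") ? child.getTextContent()
                                              : emit(child, metadata));
            }
        }
        return out.toString();
    }

    private static List<Element> childElements(Element e) {
        List<Element> result = new ArrayList<>();
        NodeList nodes = e.getChildNodes();
        for (int i = 0; i < nodes.getLength(); i++) {
            if (nodes.item(i).getNodeType() == Node.ELEMENT_NODE) {
                result.add((Element) nodes.item(i));
            }
        }
        return result;
    }

    private static Element firstChild(Element e, String tag) {
        for (Element c : childElements(e)) {
            if (c.getTagName().equals(tag)) return c;
        }
        throw new IllegalArgumentException("missing <" + tag + "> element");
    }
}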
4 Evaluation
The framework was evaluated from three different angles: functionality, operation and performance. The functional evaluation verified that all the requirements against which the framework was designed were met. For instance, one requirement was that the framework should be extensible to accommodate configuration files with new syntaxes. The complete requirement list can be found in [6].
The objective of the operational evaluation was to determine whether the framework can correctly transform configuration files into the XML generic representation and back to their original syntax. Several tests were performed upon real configuration files [6] of the three categories presented in Section 2. The initial and final files, with no customization in the intermediate representation, were compared using the UNIX diff utility. Overlooking ignorable characters, such as extra blank spaces in the considered formats, all compared files were identical.
The performance evaluation focused on the time required by the O2G and G2O converters to apply their transformations. A first test aimed at assessing whether the time required by the transformations is proportional to the size of the source file. The graph displayed in Figure 4 refers to the processing of INI-based PostgreSQL configuration files (available at http://asc.di.fct.unl.pt/smart). We can observe that the processing time of both converters grows linearly with the size of the file to be processed, but in both cases it is suitable for an interactive tool. The largest evaluated file, of 16.6 KB, took only 0.46 seconds in stage 1 and 0.26 seconds in stage 3.
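Returning to the operational evaluation, the round-trip check described above can be automated along the following lines. This is only a sketch; the whitespace normalization mirrors the "ignorable characters" mentioned in the text.

import java.nio.file.Files;
import java.nio.file.Path;

// Round-trip check: an original configuration file and the file regenerated by the
// two converters (with no customization in between) should be equivalent once
// ignorable whitespace is discarded.
class RoundTripCheck {
    static boolean equivalent(Path original, Path regenerated) throws Exception {
        return normalize(Files.readString(original))
                .equals(normalize(Files.readString(regenerated)));
    }
    // Collapse runs of spaces/tabs and trim trailing whitespace on each line.
    private static String normalize(String s) {
        return s.replaceAll("[ \\t]+", " ").replaceAll("(?m)[ \\t]+$", "").trim();
    }
}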
Fig. 4 Performance evaluation (O2G and G2O processing time in seconds versus configuration file size in Kbytes)

File       O2G      G2O      Total
1.4 KB     0.143s   0.073s   0.216s
3.5 KB     0.191s   0.086s   0.277s
16.6 KB    0.463s   0.262s   0.725s
A second test focused on another aspect worth monitoring: the time required to generate a parser from a given grammar. Table 1 presents the time spent by JavaCC to generate the parsers for the three format categories of Section 2. The results indicate that parser compilation time is also acceptable. Nevertheless, to have a more accurate assessment we compiled a parser for Java, a language likely to be more complex than any configuration file format. The elapsed time was 0.55 seconds, which sustains the previous conclusions.

Table 1 Elapsed times for parser compilation in JavaCC
INI-based    Block-based    XML-based
0.27s        0.28s          0.30s
Both tests were performed on a system comprising an Intel Pentium Dual T3200 processor with 2GB DDR2 main memory, and running the Linux 2.6.31 kernel.
5 VIRTU Integration
This section briefly discusses the integration of SmART in VIRTU [3], a platform conceived to enable on-demand configuration and deployment of VMs and application stacks independently of the vendor. VMs in VIRTU are constructed by assembling building blocks (operating system and applications) whose configuration is specified in special-purpose files, named publication files. Configuration is decoupled from assembling, allowing many-to-many relationships. Thus, a VM is only configured when deployed. The publication files specifying the configurations of the assembled building blocks are retrieved from a pre-determined database and handled by a script, to perform the desired configurations before the system is on-line.
SmART integrates with VIRTU at two levels. The O2G converter is used by the administrator to transpose one or more configuration files into a building block publication file, in order to define the block's default configuration and user-editable parameters. The G2O is used in the VM's on-boot configuration process. It converts the building block's publication file back into its original format, so that the
configuration script may carry out its work. Reconfigurations can also be performed by having a daemon (on every running VM) listening for reconfiguration requests. The process is equivalent to the on-boot configuration process, with the exception that the target application may have to be restarted.
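The reconfiguration daemon mentioned above could follow a very simple accept-and-apply loop. The sketch below is purely illustrative: the port, the request format and the restart step are assumptions, not part of VIRTU or SmART.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

// Illustrative reconfiguration daemon: waits for a request naming a publication
// file, applies the corresponding configuration and, if needed, restarts the
// target application. All names below are hypothetical.
class ReconfigurationDaemon {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(7070)) {      // assumed port
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()))) {
                    String publicationFile = in.readLine();       // assumed protocol
                    if (publicationFile != null) applyConfiguration(publicationFile);
                }
            }
        }
    }
    static void applyConfiguration(String publicationFile) {
        // 1. run the G2O converter on the publication file (stage 3);
        // 2. incorporate the produced file into the application;
        // 3. restart the application if it does not reload its configuration.
    }
}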
6 Related Work
To the best of our knowledge, our approach is the first to exploit the similarities among configuration files to allow for automatic, vendor-independent and on-the-fly application reconfiguration. Similar existing projects, such as AutoBash [8] or Chronus [11], take on automatic application configuration as a way to assist the removal of configuration bugs. AutoBash employs the causality support within the Linux kernel to track and understand the actions performed by a user on an application, and then resorts to speculative execution to roll back a process if it moved from a correct to an incorrect state. Chronus, on the other hand, uses a virtual machine monitor to implement rollback, at the expense of an entire computer system. It also focuses on a more limited problem: finding the exact moment when an application ceased to work properly.
Two other projects, better related to this work, are Thin Crust [9] and SmartFrog [4]. Both projects aim at automatic application configuration, but take an approach different from ours. Thin Crust is an open-source set of tools and metadata for the creation of VAs. It features three key components: the Appliance Operating System (AOS), the Appliance Creation Tool (ACT) and the Appliance Configuration Engine (ACE). The AOS is a minimal OS built from a Fedora Kickstart file, which can be cut down to just the required packages to run an appliance. SmartFrog is a framework for the creation of configuration-based systems. Its objective is to make the design, deployment and management of distributed component-based systems simpler and more robust. It defines a language to describe component configurations and a runtime environment to activate and manage those components.
7 Conclusions and Future Work
The work described provides evidence that the systemic configuration of applications can be safely achieved by abstracting configuration files from their format specificities. The SmART framework enables this idea by featuring two complementary modules (O2G and G2O) that perform transformations between the application-dependent syntax and a generic representation and back, regardless of the original application.
A proof-of-concept prototype has been implemented. By default it supports the three format categories that, according to the analysis in Section 2, cover the majority of the existing applications with text-based configuration files. Nonetheless, a major effort was put into extensibility. The framework allows for the addition of new configuration file parsers, and of new concepts that can be recognized in a file. The evaluation carried out confirms that all of the tested use case files can be transformed into our generic representation and back. Moreover, both these
operations (as well as grammar compilation) are performed in a time that is adequate for an interactive application, i.e., less than one second. The prototype is being integrated into the VIRTU product line and is being further extended by Evolve.
We can therefore conclude that, once a parser able to recognize the source configuration file is available, SmART helps accelerate the configuration process, since it does not require knowledge of the source file format in order to apply the desired modifications. Moreover, the use of a generic representation allows for the systemization and automation of the whole process.
Regarding future work, there is naturally room for improvement at different levels, for instance broadening the scope to include binary files, which may even require the addition of new recognizable concepts. However, in our opinion, the main research challenge is the use of grammar inference [7] to create new parsers. By inferring grammars, instead of delegating their definition to the user, the creation of new parsers can be performed almost entirely without the user's intervention, enhancing usability and generality.
Acknowledgements. This work was partially funded by ADI in the framework of the project VIRTU (contract ADI/3500/VIRTU) and by FCT-MCTES.
References
1. Cloanto: Cloanto implementation of INI file format (2009), http://www.cloanto.com/specs/ini/
2. Duro, N., Santos, R., Lourenço, J., Paulino, H., Martins, J.A.: Open virtualization framework for testing ground systems. In: PADTAD 2010: Proceedings of the 8th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging, Trento, Italy (2010)
3. Evolve Space Solutions: VIRTU Tool (2009), http://virtu.evolve.pt/
4. Goldsack, P., et al.: The SmartFrog configuration management framework. SIGOPS Oper. Syst. Rev. 43(1), 16–25 (2009)
5. Kodaganallur, V.: Incorporating language processing into Java applications: A JavaCC tutorial. IEEE Software 21, 70–77 (2004)
6. Martins, J.: SmART: An application reconfiguration framework. Master's thesis, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa (2009)
7. Parekh, R., Honavar, V.: Learning DFA from simple examples. Machine Learning 44(1/2), 9–35 (2001)
8. Su, Y.Y., Attariyan, M., Flinn, J.: AutoBash: improving configuration management with operating system causality analysis. SIGOPS Oper. Syst. Rev. 41(6), 237–250 (2007)
9. Thin Crust: Thin crust main page, http://www.thincrust.net/
10. W3C: Extensible Markup Language (XML), http://www.w3.org/XML/
11. Whitaker, A., Cox, R.S., Gribble, S.D.: Configuration debugging as search: finding the needle in the haystack. In: OSDI 2004: Proceedings of the 6th conference on Symposium on Operating Systems Design & Implementation, Berkeley, CA, USA, pp. 77–90 (2004)
Searching the Best (Formulation, Solver, Configuration) for Structured Problems Antonio Frangioni and Luis Perez Sanchez
Abstract. i-dare is a structure-aware modeling-reformulating-solving environment based on Declarative Programming, which allows the construction of complex structured models. The main aim of the system is to produce models that can be automatically and algorithmically reformulated in order to search for the "best" formulation. This article describes how the information is organized to make the search in the (formulation, solver, configuration) space possible with several different exploration techniques. In particular, we propose a way to combine general machine learning (ML) mechanisms and ad-hoc methods, where available, in order to effectively compute the "objective function" of the search. We also discuss how this mechanism can take upon itself part of the exploration, the one in the sub-space of configurations, thus simplifying the task of the rest of the system by reducing the dimensionality of the search space it has to traverse. Finally, we present some practical results using the MCF structure and the SVR ML technique.
1 Introduction
Mathematical Modeling (MM) is commonly used for countless industrial applications. However, some of the most striking discoveries of science and mathematics in the last century revealed that just creating a MM does not mean being able to solve it; conversely, "most" MMs are very difficult (if at all possible) to solve algorithmically.
Antonio Frangioni
Dipartimento di Informatica, Università di Pisa, Polo Universitario della Spezia, Via dei Colli 90, 19121 La Spezia, Italy
e-mail: [email protected]
Luis Perez Sanchez
Dipartimento di Informatica, Università di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy
e-mail: [email protected]
When a model of a practical industrial/scientific application is built, a choice is often made a priori about which structure of the model is the most prominent from the algorithmic viewpoint. This decision is mostly driven by the previous expertise of the modeler, by the set of tools (bag of tricks) he has available, and by his understanding of the intricate relationships between the choices made during the modeling phase and the effectiveness/availability of the corresponding solution procedures. All this leads to selecting (more or less arbitrarily) a class of MM into which the one at hand is moulded, in order to be able to apply some (usually general-purpose) solver for the selected class, a process not without significant consequences both in terms of accuracy and of solvability of the model. Using a solver designed for a general class of problems often leads to poor performance, as algorithmically relevant forms of structure present in the model are ignored. In fact, while an enormous literature is available about which algorithms are best for solving classes of problems with specific structures, it is the user who has to realize that the structure is there, and that a solver capable of exploiting it is available. To make matters worse, the structures that make it possible to apply specialized approaches are typically not "naturally" present in the MMs, and must be purposely created by weird tricks of the trade such as preconditioning techniques [19], domain reduction in CP [3], specialized row- and column-generation in MILP [10], and many others. In other words, the original MM has to be reformulated to reveal (create) the relevant structures. Finding reformulations is a painstaking process, up to now firmly in the hands of specialized experts with little to no support from modeling tools. There have been efforts to define automatic reformulation techniques [18, 4, 14, 15], most of them dealing with particular and restricted cases, or without defining proper algorithmic approaches, or defined mainly at an algebraic level.
Once a formulation is chosen, one also needs to determine which solver will be applied and how to configure it. It is well known that the configuration process may be difficult, and it is crucial for the solver's performance. Solvers that may be either extremely fast or extremely slow, depending on the configuration, are usually deemed unsuitable for general use.
Because modeling, reformulating and solving a problem is such a complex task, a system capable of streamlining these operations, much like Integrated Development Environments do for computer programming, would be very useful. A few commercial solutions exist for this, but they are strongly tied to a specific solver, and a fortiori to a specific problem class. Even OS efforts like the OSI [16] project, which allow some degree of solver independence, are tied to a specific problem class; this contrasts with the need of experimenting with different formulations, and therefore problem classes. Furthermore, the decision of which solver to use and how to configure it has to be made by the user, which implies a high level of expertise.
The i-dare system introduced in [11] aims at moving forward by performing on behalf of the user the selection process of three fundamental aspects:
formulation, solver and configuration. The system has been designed to be able to deal with a vast set of MMs, from large-scale structured LP or MILP, to many different classes of structured NLP or MINLP, up to extremely challenging problems like PDE-constrained ones [13]. In order to allow this vast applicability, it is based on a very broad concept of structure, intended as whatever characteristic of the problem can be exploited for algorithmic purposes.
A fundamental feature of the system is that the search is performed on all three aspects simultaneously, as choosing a reformulation immediately impacts which solver(s) can be used, and therefore the specific options that may be chosen in a solver. For instance, if a model is reformulated to a MILP with a specific structure, for which specific cuts can be generated, then a solver attached to that structure will have the option to generate (or not) these cuts. Moreover, for MINLP problems some algorithms may use different combinations of solvers to solve the LP and NLP problems [9]; the i-dare environment not only allows the choice of different combinations, but also automates the selection of the "best" choice for any given instance, which can significantly increase the performance of the approach. This requires a framework where solution techniques are able to communicate with each other in order to solve a problem, by means of a general solver interface which permits plugging in different existing or new solvers.
The user is expected to provide a model picking from the set of available structures; the system will then compute possible reformulations of the model, using available reformulation rules, and figure out, transparently to the user, which formulation allows for the best algorithmic approach, taking into account the issue of configuration, which comprises delicate choices such as the balancing between solution time and accuracy and the choice of the most cost-effective architecture. Such a framework may favor the creation of competitive solvers concentrated on particular forms of (combinations of) structure.
A fundamental prerequisite for such a system is that all the needed information is available to properly define the search space(s). Even when this is done, the problem of designing the proper algorithms to search for the "best" (formulation, solver, configuration) has to be solved. This paper briefly describes in §2 a unified way of handling search spaces (for more information see [12]). In §3 we propose an interface to accommodate all possible decision algorithms and present a general machine learning control mechanism that provides a framework for the implementation of potential decision algorithms based on machine learning techniques. Finally, in §4 we experiment with some of the proposed ideas using a simple combinatorial problem to investigate the potential impact of the automatic selection process on the expected performances.
2 Search Spaces
i-dare (via the queries provided by FLORA-2 [21]) retrieves structured data about the current instance in a very effective way. This data can be used to
characterize and explore the search space, which is composed of three main sub-spaces: (i) Formulation + Instance = Extended Model; (ii) Solvers; (iii) Configurations. For each sub-space, an extensible set of predefined queries and methods is available to consult the data. These queries provide any control mechanism (cf. §3) with the information it needs for effectively guiding the search throughout the whole space.
A large number of queries are available in i-dare to retrieve information about variables, constants, dimensions, and components of an Extended Model (EM), and about how these components are related to each other within a formulation. These queries allow one to obtain a complete description of any "static" EM; however, the main goal of i-dare is to allow reformulating the models. This is obtained by applying atomic reformulation rules (ARR) [11] to specific components inside the formulation.
i-dare defines a general interface for solver plug-ins. Each solver must register itself to one structure and define its configuration template. i-dare automatically generates a FLORA-2 file containing solver and configuration data. Each structure may have more than one solver registered. As previously mentioned, each solver must define how it must be configured. For this purpose i-dare defines Configuration Templates (CT). A CT is a hierarchical structure defining the relevant algorithmic parameters and the possible range of their values. Hence, CTs can be easily used to describe single configurations by simply forcing each parameter to have a single-valued domain. Two descriptions of CTs are available: an "external" and an "internal" one. The external one is an XML file that specifies parameters and their domains. CTs currently support four base parameter types: integer, double, choice and vector. When a solver is exported to the FLORA-2 file, it also exports its CT. Therefore, the "internal" representation of the CT in FLORA-2 is automatically constructed [12].
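As an illustration only (this is not the i-dare implementation), a Configuration Template can be thought of as a small tree of typed parameters mirroring the four base types.

import java.util.ArrayList;
import java.util.List;

// Sketch of a Configuration Template (CT): a hierarchical description of the
// algorithmic parameters of a solver and of their admissible values.
abstract class CTParameter {
    final String name;
    CTParameter(String name) { this.name = name; }
}
class IntParameter extends CTParameter {
    final int min, max;
    IntParameter(String name, int min, int max) { super(name); this.min = min; this.max = max; }
}
class DoubleParameter extends CTParameter {
    final double min, max;
    DoubleParameter(String name, double min, double max) { super(name); this.min = min; this.max = max; }
}
class ChoiceParameter extends CTParameter {
    final List<String> alternatives;
    ChoiceParameter(String name, List<String> alternatives) { super(name); this.alternatives = alternatives; }
}
class VectorParameter extends CTParameter {
    final List<CTParameter> components = new ArrayList<>();   // nesting gives the hierarchy
    VectorParameter(String name) { super(name); }
}
// A single configuration is then just a CT in which every domain is reduced to one value.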
3 i-dare(control): Controlling the Search in the (Formulation, Solver, Configuration) Space
i-dare(control) defines the most delicate and innovative feature of the system: given a structured instance, automatically selecting the "best" combination in the space of the possible (re)formulations, solvers and configurations. This is clearly a very complex process, for which several different techniques may be used. The problem is made particularly difficult by the fact that even predicting the performance of a given (set of) algorithm(s) and configuration(s) on a given formulation is far from being a trivial task. We have chosen to provide a rather general and abstract setting for performing the search, so as to allow different search mechanisms to be compared and contrasted. The whole search is controlled by the i-dare(control) module, which may be any control mechanism conforming to the simple interface
d_control [
  process(d_InstanceWrapper) -> [d_InstanceWrapper, _term]
].
The interface declares a method that, given an extended model, returns the selected "best" reformulation of the model along with the solver tree and the corresponding configuration. Of course the initial and final model may be the same, in which case i-dare(control) "only" selects the best solver and configuration for the given instance. This is already a rather difficult problem in itself, for which little is known in practice; indeed, it is important to remark that even predicting the running time of a given algorithmic approach on a given data input is problematic. This seems to essentially require the use of Machine Learning (ML) techniques (e.g. [6]), which may be the only approach capable of automatically devising suitable approximations of the function which estimates the efficiency (and, possibly, the effectiveness) of an algorithmic approach when applied to the solution of a given instance. Remarkably, the use of ML tools for the selection of algorithm parameters has recently been advocated in [8], although in a much more limited context, with promising initial results. Thus, while the i-dare system does not specify the exact strategy used by i-dare(control) to search the (Formulation, Solver, Configuration) space, it must provide any actual implementation with enough information to effectively drive the process. While i-dare(control) has full access to all the characteristics of the instance, the previous discussion highlights the need for further mechanisms that allow an efficient comparison between different points of the space. These are described in the next sections.
3.1 Objective Function Computation
The fundamental mechanism needed for driving the search is an effective and efficient way of evaluating the quality of a (Formulation, Solvers, Configuration) choice; we will consider this the "objective function" of the search, and denote it by ψ. This performance metric should be designed to reflect the trade-off between running time and solution accuracy; for instance, ψ may measure the running time it takes the solver to obtain a solution with the prescribed optimality, if that can be done within the time limit, and a weighted sum of time limit and final objective function gap otherwise. Alternatively, accuracy of the solution may be treated as a parameter (cf. "Fixed Features" below). In general, one should not expect that an arithmetic or algorithmic description of ψ will be available for all possible formulations, solvers and configurations, although this may indeed happen in some cases. Therefore, we propose the application of ML techniques to approximate such a function based on known observations.
Features
As usual in ML, one critical point is the definition of the set of features that represent each data point in the learning set of the method. The complexity (and practical performance) of several optimization algorithms can be shown to depend on characteristics of the instances: for Linear Programs, some of the main features are the number of variables and constraints together with the density of the constraint matrix. However, the relevant set of features should be very different for different problem classes, and even for different algorithms for the same problem class; again in the LP case, degeneracy of the vertices of the polyhedron affects Simplex approaches but is next to irrelevant for interior-point ones. Therefore, defining a unique set of features for a problem does not seem reasonable. On the other hand, the responsibility of defining the right set of features cannot be delegated to a general mechanism, so each solver will be required to define them. Thus, we define a layer over the existing i-dare solver interface, called Solver Wrapper (SW), that will provide the list of relevant features used to parametrize ψ. All SWs must inherit from the following interface

Listing 1. "Solver Wrapper Interface"
d_solverWrapper [
  solver => d_solver,
  retrieve(?EM, ?CT) => [_list, CT],
  [internal]
].
Given the current EM, the retrieve() method returns the feature list and a list of possible configurations, represented by a CT. The meaning of the method is somewhat different according to the value of the optional property internal. When internal is not present, the evaluation of ψ is delegated to the general mechanism described later on. In this case, the SW "only" has the responsibility to extract the feature set from the EM. The second return value of retrieve() is a CT that describes all possible configurations (compatible with the fixed choices, see below) of the solver for this particular instance type. When internal is present instead, the evaluation of ψ for the given solver is done inside the wrapper. In this case, there will be only one (or few) features, consisting in the (estimated) value(s) of ψ. The second return value is then meant to contain the single configuration that produces the estimated value of ψ. It is intended in this case that the SW will choose the (estimated) best configuration, if more than one is available. There may be intermediate scenarios between these two extreme ones (e.g. the SW may internally compute some performance figures, out of which predicting the actual running time may be much easier, and/or return a CT containing only a subset of the possible configurations). This general mechanism allows, on the one hand, the use of a general ML mechanism (described below) for the case where nothing relevant is
known about predicting the performance of a solver. Note that the SW may use, internally, a specialized ML approach to select the best configuration (cf. e.g. [8]).

The Generic Machine Learning Sub-system
When a SW does not compute its ψ value internally, the Generic Machine Learning Sub-system (GMLS) can be invoked to try to estimate it. In ML parlance, the SW produces several data points, each one formed by the unique feature set of the EM and one among the different configurations from the template; in other words, the actual feature set of the ML is a pair (features of the instance, configuration of the algorithm). This information can be used with any of the several possible ML approaches to try to estimate ψ. In order not to tie the i-dare system to any specific ML technology, i-dare(control) defines a general interface to ML algorithms, described by the following class

Listing 2. "Machine Learning Interface"
d_machineLearning [
  evaluate(_list) => _list,
  => train(_list, _list)
].
where train() trains the ML using a set of data points, i.e., (features, configuration) pairs, and a list of known ψ-values (one for each point), and evaluate() computes ψ for the specified data point. Each concrete class inheriting from d_machineLearning will define an actual ML technique (Neural Networks, Support Vector Machines, Decision Trees, ...); the GMLS sub-system will associate each SW with one (possibly the "most appropriate", cf. Meta Learning) concrete ML in charge of computing ψ for the corresponding solver.

Machine Learning as a Search Mechanism
Clearly, the above ML approach provides one way to automate the search in the configuration space. Provided that the configurations are "few", one may simply list them all and compute ψ for each; then, the configuration with the best value is retained as the selected one. Provided that the possible solvers for a given structure are not too many either (which looks like a reasonable assumption), an effective ML approach to computing ψ would provide all the tools for performing the search in the (Solver, Configuration) sub-space, leaving "only" the (re-)formulation space to be explored. In general, however, the set of configurations may be rather large. One might thus devise ML approaches capable of working with "meta" data points, i.e., pairs (features of the instance, configuration template). These approaches might for instance still rely on standard ML techniques at their core, but coupled with smart sampling techniques that avoid computing all possible data points, somewhat in the spirit of active learning techniques [17].
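As a simple illustration of the enumeration-based selection described above, the following sketch predicts ψ for every candidate configuration and keeps the best one. The predictor interface and the flat double[] encodings of features and configurations are our own assumptions, not part of i-dare.

import java.util.List;

// Enumeration-based search in the (Solver, Configuration) sub-space: for each
// candidate configuration, a trained model predicts psi and the best one is kept.
interface PsiPredictor {
    double predict(double[] instanceFeatures, double[] configuration);
}

class ConfigurationSelector {
    static int selectBest(PsiPredictor model, double[] instanceFeatures,
                          List<double[]> configurations) {
        int best = -1;
        double bestPsi = Double.POSITIVE_INFINITY;      // psi: lower is better (e.g. time)
        for (int i = 0; i < configurations.size(); i++) {
            double psi = model.predict(instanceFeatures, configurations.get(i));
            if (psi < bestPsi) { bestPsi = psi; best = i; }
        }
        return best;                                    // index of the chosen configuration
    }
}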
More generally, one may devise ML approaches aimed not just at predicting ψ for a given configuration, but rather at predicting the configuration which produces the best value of ψ within a given CT.

Fixed Features
The retrieve() method of the SW has a second parameter ?CT, whose use has not been discussed so far. It is intended to be a partial CT, whose use is to constrain the possible configurations to be generated by the SW. This allows the caller of a SW to instruct it to avoid considering some configurations that are not feasible, or not "interesting". There are at least three important cases that may require such a mechanism: handling of (i) accuracy of the solution, in terms of constraint satisfaction or of quality of the solution; (ii) maximum resource usage (CPU time) in the solver; and (iii) architecture, i.e., the fact that the same solver may be executed on different parallel hardware. These aspects may be considered as included in the CT of a solver. However, depending on the actual form of ψ, they may not be freely chosen by the SW in its quest for the smallest ψ value. In fact, the accuracy of the overall solution and/or the maximum total allotted running time will typically be set by the final user. In turn, a block of the formulation that has some sub-blocks may want to explore their accuracy/time frontier to seek the most appropriate setting, e.g. settling for less accurate solutions in exchange for a reduced running time; this is, for instance, the setting that is most often chosen for separation algorithms in Mixed-Integer Programs when they require the solution of a hard subproblem. However, in other cases a "master" problem may require solutions of its subproblems with a higher degree of accuracy. A SW may want to explore the possibility of allocating its subproblems to different computational nodes to exploit their complementary strengths (see e.g. [7]); on the other hand, some solvers may not be available on some architectures, or the target architecture may be severely limited by the user. All these cases can be handled with the general mechanism of externally constraining the set of available configurations. Note that if the SW is not able to generate at least one configuration that satisfies the constraints imposed by ?CT, it will fail by returning an empty CT, thus signaling that it cannot be used under that set of conditions; basically, this amounts to producing an infinite value of ψ.
3.2 Training and Meta-learning
Training
The fundamental assumption underlying any ML approach is that the machine be fed with an appropriate set of samples. This is known as training. The GMLS sub-system will therefore have to execute a learning process before the ML is ready for actual use in the search. The learning process consists in
solving the instances in the training database with all available algorithms and all available configurations, thereby producing the data to be fed to the train method of d_machineLearning.

Meta Learning
It is obvious that the effectiveness of the prediction of ψ, upon which the whole search process ultimately relies, can be significantly affected by the choice of the concrete ML in charge of computing ψ, together with its several possible learning parameters (topology for NNs, parameters of SVMs, ...) [6]. Choosing the "most appropriate" ML is therefore, itself, a difficult (yet fundamental) task. Thus, the GMLS will also have to implement a meta-learning process, whereby the results of the same learning phase for a given solver are fed into different MLs, and the "best" one is selected as the one which minimizes some appropriate discrepancy measure between the actual values and the predictions. Actually executing the solvers needs to be done only once; provided that the results are properly stored, they can be re-used by all MLs during subsequent meta-learning phases.
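As an illustration, the meta-learning step can be sketched as a plain model-selection loop over candidate ML techniques. The interfaces below are ours, and mean absolute error stands in for the unspecified discrepancy measure.

import java.util.List;

// Meta-learning sketch: every candidate ML technique is trained on the same stored
// (features, configuration, psi) observations, and the one with the smallest
// discrepancy on a held-out set is retained for that solver.
interface Learner {
    void train(List<double[]> points, List<Double> psiValues);
    double evaluate(double[] point);
}

class MetaLearning {
    static Learner selectLearner(List<Learner> candidates,
                                 List<double[]> trainPoints, List<Double> trainPsi,
                                 List<double[]> testPoints, List<Double> testPsi) {
        Learner best = null;
        double bestError = Double.POSITIVE_INFINITY;
        for (Learner learner : candidates) {
            learner.train(trainPoints, trainPsi);
            double error = 0.0;
            for (int i = 0; i < testPoints.size(); i++) {
                error += Math.abs(learner.evaluate(testPoints.get(i)) - testPsi.get(i));
            }
            error /= testPoints.size();                 // mean absolute error
            if (error < bestError) { bestError = error; best = learner; }
        }
        return best;
    }
}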
3.3 The Overall Search Process
The GMLS sub-system thus defined provides a sound basis for implementing any general search procedure in the (formulation, solver, configuration) space. It may also directly take care of the selection of the latter two components (solver and configuration), leaving to i-dare(control) "only" the task of traversing the (re)formulation space using the available ARRs to reformulate parts of the whole structured model, for which different approaches are possible, from complete enumeration to (more likely) heuristic searches. Figure 1 shows a diagram that outlines the overall search process in i-dare(control), highlighting the fundamental role of the GMLS. It is worth mentioning that each time a final EM is selected and actually solved, all solution data is sent back to i-dare to be added to the testing database. This way, the testing database is automatically enriched with real problems, strengthening the observation set and therefore allowing the GMLS to compute a better approximation of ψ in the future.
4 Experiments
In order to test the soundness of the design decisions of the i-dare system, we performed some preliminary experiments with a relatively simple (yet quite powerful) model: the Min-Cost Flow (MCF) problem. For this we selected two solvers: the primal/dual RelaxIV [5] and MCFSimplex, a recent implementation of the classical network simplex algorithm [2], both distributed by the MCFClass project [1]. Besides being rather different in nature, the solvers have several algorithmic parameters which impact their
Fig. 1. GMLS diagram
performances. For MCFSimplex these are the choice of primal or dual simplex, and the size of the candidate list and of the hot list in the all-important pricing rule. For RelaxIV one can decide whether the auction/shortest-paths initialization procedure is used, the number of single-node iterations attempted before the first multinode iteration is allowed, the threshold parameters to stop the scanning process after a multinode price change, the bound to decide when another multinode iteration must be performed, and the number of passes. Thus, a large number of possible different configurations can be used besides the default one (Dconf), which is typically hard-coded in the solver and is only ever touched by the most adventurous users.
These solvers were applied to 144 different graphs using different configurations. The graphs were created using the following generators: complete networks (10-3000 nodes / 106-11252189 arcs), gridgen (255-65535 / 2048-1048576) and netgen (256-16384 / 2048-1048600). After having executed each configuration of each solver on each instance, we tested the effectiveness of the ML approach as the basis for the search in the (Solver, Configuration) sub-space. For this, a k-fold validation mechanism was applied: we randomly partitioned the set of graphs into 4 equally sized chunks; the ML was trained using 3 chunks and tested for accuracy on the remaining one, and this process was repeated four times, ensuring all chunks were part of the testing process (average results over all trials are reported). Given that MCF is a polynomially solvable problem, the ψ function only measured the obtained running time.
In order to show that a ML approach makes sense, we first report some data showing that choosing the best solver and configuration indeed makes
a difference. In the following table we report, for each solver, the number of different configurations that were found to be the best one (denoted by Aconf) for at least one instance ("nbest"), the average ratio between the running time of Aconf and that of Dconf ("b/d ratio", variance in parentheses), and the percentage of instances in which the best configuration for that solver was better than the best configuration for the other solver ("% best").

Solver        nbest   b/d ratio     % best
RelaxIV         80    0.77 (0.22)    70.8
MCFSimplex      34    0.86 (0.17)    29.2
We also mention that the best solver requires on average around 60% of the running time of the worst solver. We then assessed the effectiveness of the ML approach for the selection of the best (Solver, Configuration) pair using the support vector regression (SVR) tools provided by the SHOGUN library [20]. For our tests we analyzed different feature combinations, taken from a set of 24 features: nodes (0), arcs (1), min/max/average/variance of node degree (2,3,4,5), arc capacities (6,7,8,9), arc costs (10,11,12,13), node deficits (14,15,16,17), and length of min-hop path (19,20,21,22), the ratio of average node deficit and arc capacity, and an approximation of the graph diameter (23). The results are shown in the following table; in particular, "e. err" is the average displacement of the estimated time for the best configuration according to the ML (MLconf) with respect to the actual time for that configuration, "b/ml ratio" is the ratio between the running time of Aconf and that of MLconf, "ml/d ratio" is the ratio between the running time of MLconf and that of Dconf, and "opt. features" is the subset of features that have been found to provide the most accurate results.

Solver        e. err          b/ml ratio      ml/d ratio      opt. features
RelaxIV       0.542 (0.871)   0.770 (0.205)   0.966 (0.305)   [0, 1, 4, 8, 12, 16, 24]
MCFSimplex    0.500 (0.723)   0.855 (0.182)   0.919 (0.158)   [0, 1, 18, 21, 23]
We emphasize that different feature subsets led to worse results; a few representative ones are reported below (only a few, due to space limitations):

Solver        e. err   b/ml ratio   ml/d ratio   features
RelaxIV       0.548    0.775        0.969        [0, 1]
              0.479    0.755        0.975        [0, 1, 18, 21, 23]
              1.224    0.768        0.996        [0, 1, 3, 7, 11, 15, 20, 23]
MCFSimplex    0.513    0.862        0.923        [0, 1]
              0.720    0.867        0.928        [0, 1, 4, 8, 12, 16, 24]
              1.472    0.852        0.953        [0, 1, 3, 7, 11, 15, 20, 23]
Using the selected features for each solver, we applied the ML to determine the solver to be used. In the following table we report the percentage of solver mis-selections ("s. err"), the ratio between the running time of the actual best solver (with its best configuration) and that of the (Solver, Configuration) pair chosen by the ML ("b/ml ratio"), and the ratios between the latter and the default configurations of the two solvers ("ml/dR ratio" and "ml/dS ratio", respectively).

s. err   b/ml ratio      ml/dR ratio     ml/dS ratio
0.145    0.787 (0.208)   0.933 (0.342)   0.791 (0.450)
5 Conclusions
In this paper we described the set of architectural choices in the i-dare system that have been designed to make the search effective, while avoiding tying the system to specific search strategies that may not ultimately prove effective enough. In particular, we discussed the fundamental role of the GMLS, which makes it possible to integrate general-purpose ML approaches with specialized methods for the nontrivial task of computing ψ. This task is "naturally" extended to that of selecting the best algorithmic configuration of the available solvers, thereby providing the i-dare(control) sub-system with a powerful tool to streamline the search. This requires a sophisticated ML (meta) process that is continuously running and keeps modifying the assessment of each reformulation with respect to given algorithms. Although the use of ML techniques to select algorithmic parameters has very recently been advocated elsewhere, the scale of our proposal is, to the best of our knowledge, unheard of.
The results attained with the experiments show that, even if the training and testing sets were rather small, the usage of the ML technique, applied to select the solver and configuration, indeed improved the average time with respect to the default configuration. This shows that the automatic selection process may provide significant benefits w.r.t. just selecting one configuration which happens to work "reasonably well" in most cases. For this to work, however, the features used in the ML process should not be the same for each solver, even for solvers attached to the same structure. Of course, this performance improvement may come at a significant expense in terms of training and meta-learning processing time, which depends on the size and complexity of the problems being solved for generating the training points, the number of instance features and configuration fields of each point, and the number of parameters the MLs may have. However, the outcome of this sophisticated process may well be a very significant improvement of the efficiency experienced by the "average" (non-expert) user in the solution of her models, thereby significantly contributing to the overall scientific and technological progress.
References
1. The MCFClass Project, http://www.di.unipi.it/di/groups/optimize/Software/MCF.html
2. Ahuja, R.K., Magnanti, T.L., Orlin, J.B.: Network Flows: Theory, Algorithms, and Applications. Prentice-Hall, Englewood Cliffs (1993)
3. Apt, K.R.: Principles of Constraint Programming. Cambridge University Press, Cambridge (2003)
4. Audet, C., Hansen, P., Jaumard, B., Savard, G.: Links between linear bilevel and mixed 0-1 programming problems. Journal of Optimization Theory and Applications 93(2), 273–300 (1997)
5. Bertsekas, D.P., Tseng, P.: Relax-iv: a faster version of the relax code for solving minimum cost flow problems. Technical report, Dept. of Electrical Engineering and Computer Science, MIT (1994)
6. Bishop, C.M.: Pattern recognition and machine learning. Springer, New York (2006)
7. Cappanera, P., Frangioni, A.: Symmetric and Asymmetric Parallelization of a Cost-Decomposition Algorithm for Multi-Commodity Flow Problems. INFORMS Journal on Computing 15(4), 369–384 (2003)
8. Cassioli, A., Di Lorenzo, D., Locatelli, M., Schoen, F., Sciandrone, M.: Machine Learning for Global Optimization. Technical Report 2360, Optimization Online (2009)
9. D'Ambrosio, C., Frangioni, A., Liberti, L., Lodi, A.: Experiments with a feasibility pump approach for nonconvex minlps. In: Festa, P. (ed.) SEA 2010. LNCS, vol. 6049, pp. 350–360. Springer, Heidelberg (2010)
10. Desaulniers, G., Desrosiers, J., Solomon, M.M. (eds.): Column generation. Springer, Heidelberg (2005)
11. Frangioni, A., Perez Sanchez, L.: Artificial intelligence techniques for automatic reformulation of complex problems: the i-dare project. Technical report, TR0913, Dipartimento di Informatica, Università di Pisa (2009)
12. Frangioni, A., Perez Sanchez, L.: Searching the best (formulation, solver, configuration) for structured problems. Technical report, TR-0918, Dipartimento di Informatica, Università di Pisa (2009)
13. Hinze, M., Pinnau, R., Ulbrich, M., Ulbrich, S.: Optimization with PDE Constraints. Springer, Heidelberg (2009)
14. Liberti, L.: Reformulations in mathematical programming: Definitions and systematics. RAIRO-RO 43(1), 55–86 (2009)
15. Liberti, L., Cafieri, S., Tarissan, F.: Reformulations in mathematical programming: a computational approach. In: Abraham, A., Hassanien, A.-E., Siarry, P., Engelbrecht, A. (eds.) Foundations of Computational Intelligence, vol. 3. Studies in Computational Intelligence, vol. 203, pp. 153–234. Springer, Berlin (2009)
16. Ralphs, T., Saltzman, M., Ladányi, L.: The COIN-OR Open Solver Interface: Technology Overview (May 2004)
17. Settles, B.: Active Learning Literature Survey. Computer Sciences Technical Report 1648, University of Wisconsin–Madison (2009)
18. Sherali, H.: Personal communication (2007)
Information Model for Model Driven Safety Requirements Management of Complex Systems R. Guillerm, H. Demmou, and N. Sadou
Abstract. The aim of this paper is to propose a rigorous and complete design framework for complex systems based on system engineering (SE) principles. The SE standard EIA-632 is used to guide the approach. Within this framework, two aspects are presented. The first one concerns the integration of safety requirements and their management into the system engineering process. The objective is to help designers and engineers in managing the safety of complex systems. The second aspect concerns model driven design through the definition of an information model. This model is based on SysML (System Modeling Language) and addresses requirements definition and their traceability towards the solution and the Verification and Validation (V&V) elements.
1 Introduction
Systems engineering processes are becoming more and more critical and complex. A fundamental characteristic of modern systems is their inherent complexity. Complexity implies that different parts of the system are interdependent, so that changes in one part may have effects on other parts of the system. These complex systems give rise to the emergence of multiple functions or behaviors that were not possible before, and they are expected to satisfy additional constraints, especially constraints of reliability, safety and security, which are specifically addressed in this paper. R. Guillerm · H. Demmou LAAS-CNRS, 7 avenue du Colonel Roche, F-31077 Toulouse, France University of Toulouse; UPS, INSA, INP, ISAE; LAAS, F-31077 Toulouse, France e-mail: {guillerm,demmou}@laas.fr N. Sadou SUPELEC / IETR Avenue de la Boulais, F-35511 Cesson-Sevigne e-mail: [email protected]
Safety [1] of complex systems relies heavily on the emergent properties that result from the complex interdependencies that exist among the involved components, subsystems or systems and their environments. It is obvious that the safety properties of complex systems must be addressed in an overall study, within a global framework, early in the design phase. The weaknesses of the current safety processes can be summarized in the following points [2]:
• Safety analysis involves some degree of intrinsic uncertainty, so there is a degree of subjectivity in the identification of safety issues.
• Different groups need to work with different views of the system (e.g. the systems engineers' view and the safety engineers' view). This is generally a benefit, but it can be a weakness if the views are not consistent.
• The definition of the safety requirements, their formalization, and their traceability can be ambiguous or not fully considered.
• System models are developed in electronic form, but no use is made of this for safety/reliability analysis. Ideally there should be a common repository of all requirements, design and safety information.
Some of these points are due to the absence of a global safety approach. Indeed, safety must be addressed as a global property, and safety requirements [3] must be formulated not only in the small (sub-system level) but also in the large (system level). One of the SE processes is requirements engineering (RE) [4]. RE is generally considered in the literature as the most critical process within the development of complex systems [5], [6]. Safety requirements engineering is our particular concern here. A common classification proposed for requirements in the literature classifies requirements as functional or non-functional [7]. Functional requirements describe the services that the system should provide, including the behavior of the system in particular situations. Non-functional requirements are related to emergent system properties such as safety attributes, response time or costs. Generally these non-functional properties cannot be attributed to a single system component; rather, they emerge as a result of integrating system components. Furthermore, non-functional requirements are also considered as quality requirements and are fundamental to determining the success of a system. Requirements engineering can be divided into two main groups of activities [8]:
1. Requirements development: this activity includes the processes of elicitation [9], documentation, analysis and validation of requirements.
2. Requirements management: this activity includes the processes of maintenance management, change management and requirements traceability [3], [10].
Like the other system engineering processes that must be concerned by safety evaluation, requirements engineering is no exception. Inadequate or misunderstood requirements have been recognized as the major cause of safety-related catastrophes. The work presented in this paper is divided into two parts. The first part concerns the integration of safety management in the system engineering process. The objective is to help engineers in the safety management of complex systems by proposing a new approach based on system engineering best practices which can
be shared between safety and design engineers. The proposed approach is based on the system engineering standard EIA-632. The second part presents an information model based on the SysML language to address requirements definition and their traceability [3], [10] towards the solution elements and the V&V (Verification and Validation) elements. Safety requirements are integrated into the RE activities, including the management activities related to maintenance, traceability and change management. The paper is structured into five sections. The second section introduces the design framework. The integration approach is presented in the third section. In the fourth section, the information model is proposed for efficient management of safety requirements. The last section gives some conclusions.
2 System Engineering Approach
The development process highlights the necessary activities, their sequencing and the obtained products. Two approaches for system design have been studied and defined: the V-cycle approach and its variants, and the processes approach. The processes approach is based on the observation that, whatever the strategy used to develop a system, the development activities remain the same. The technical processes are based on the different activities of system engineering. They are divided into two categories, system definition processes and Verification and Validation (V&V) processes, and they are defined by system engineering standards (IEEE 1220, EIA-632, ISO 15288). The processes approach is more flexible than the V-cycle development; it fits better with complex systems. Moreover, the processes vision does not constrain the sequence of development activities, in contrast to development based on a particular development cycle. This difference is another motivation for adopting a processes approach to systems engineering. In this work the processes approach is based on the EIA-632 SE standard.
2.1 System Engineering Approach
System Engineering is an interdisciplinary approach which provides concepts to build new applications. It is a collaborative and interdisciplinary problem-solving process, supported by knowledge, methods and techniques resulting from science and experience. System engineering is a framework which helps to define the wanted system, one which satisfies the identified needs and is acceptable for the environment, while seeking to balance the overall economy of the solution over all aspects of the problem in all phases of the development and the life of the system. SE concepts are specifically adequate for complex problems, to which ongoing research can bring solutions [11]. In system engineering best practice, we have the following chain: Processes → Methods → Tools. These entities, namely processes, methods and tools, are the conceptual basis of our approach, taken from system engineering best practice. In the first step, the processes can be identified with respect to the accumulated know-how, and can also
be taken from standards, like the thirteen generic processes proposed in the EIA-632 standard [12] [13]. The second step concerns the methods to be used. The methods can either be newly developed or existing ones, but only if they reflect the whole semantics of the process. No taxonomy has yet been developed for corresponding processes and methods. The third step concerns the tools, which correspond not to the processes but to the methods; hence, in this approach, a tool cannot be used to implement a process without first identifying the associated methods.
2.2 EIA-632 Standard
One well-known standard, currently used in the industrial and military fields, is EIA-632. This standard covers the product life cycle from the capture of needs to the transfer to the user. It gives a system engineering methodology through 13 interacting processes grouped into 5 groups, covering the management, supply/acquisition, design and requirements, realization, and verification/validation processes.
3 Integration Approach
Managing requirements, and especially safety requirements, at the early stages of system development becomes more and more important as system complexity continuously grows. The safety of complex systems relies heavily on the emergent properties that result from the complex interdependencies that exist among the involved systems or sub-systems and their environments. System Engineering (SE) is the ideal framework for the design of complex systems. The need for systems engineering arose with the increase in complexity of systems and projects. A system engineering approach to safety starts with the basic assumption that safety properties can only be treated adequately in their entirety when taking into account all the involved variables and the relations between the social and the technical aspects [14]. This basis for system engineering has been stated as the principle that a system is more than the sum of its parts. Safety management must follow all the steps of SE, from the requirements definition to the verification and validation of the system. The starting point of the work presented in this paper is the following note provided in the EIA-632 standard: Note: Standard does not purport to address all safety problems associated with its use or all applicable regulatory requirements. It is the responsibility of the user of this Standard to establish appropriate safety and health practices and to determine the applicability of regulatory limitations before its use [13]. The next section aims to help designers in addressing safety problems. It describes briefly, for each process, how safety will be considered. It illustrates the proposed approach in terms of processes, which must be defined independently of methods and/or tools (other projects focus on the methods and tools; see, e.g., [15], [16]).
3.1 Integration Approach
The integration of safety must concern all system engineering processes. This paper focuses only on the System Design processes and the Technical Evaluation processes. The implementation of the approach consists in identifying and indicating in which way safety must be considered for each sub-process of EIA-632. In other words, the sub-processes of the EIA-632 standard are translated or refined in terms of safety and included in the system design process.
3.2 System Design Processes
The System Design Processes are used to convert agreed-upon requirements of the acquirer into a set of realizable products that satisfy acquirer and other stakeholder requirements. The safety requirements must be taken into account in the requirements definition process, which allows the formulation, definition, formalization and analysis of these requirements. Then a traceability model [5] must be built to ensure that the requirements are considered throughout the development cycle of the system.
3.2.1 Requirement Definition Process
The goal of the requirements definition process is to transform the stakeholder requirements into a set of technical requirements. If the distinction between functional and non-functional requirements is not possible or not relevant at the requirement elicitation level, the analyst may make it later in order to categorize the requirements. In the EIA-632 standard, three types of requirements are defined: the Acquirer Requirements, the Other Stakeholder Requirements and the System Technical Requirements. Concerning the Acquirer Requirements and Other Stakeholder Requirements, the developer shall define a validated set of acquirer requirements for the system, or portion thereof. Safety requirements generally correspond to constraints on the system. It is necessary to identify and collect all constraints imposed by the acquirer to obtain a safe system. A hierarchical organization associates weights with the safety requirements according to their criticality. Safety requirements can be derived from certification or quality requirements, or can be explicitly expressed by the acquirer or other stakeholders. The developer shall define a validated set of system technical requirements from the validated sets of acquirer requirements and other stakeholder requirements. For safety requirements, the system technical requirements express system performance; this consists of defining safety attributes (risk tolerability, SIL level, MTBF, MTBR, failure rate, for example). Technical requirements can be derived from a preliminary hazard analysis. Some standards are available to guide designers in defining safety requirements. For example, safety-critical systems within the civil aerospace sector are developed subject to the recommendations outlined in ARP-4754 [17] and ARP-4761 [18]. These standards give guidance on the 'determination' of requirements, including requirements capture, requirement types and derived requirements. When requirements
are defined, it is possible to define some attributes to facilitate their management, for example by expressing the requirements using SysML. This allows the requirements to be linked to the design solution.
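As an illustration of how such attributes might be attached to a safety requirement for later management and traceability, the short Python sketch below defines a simple requirement record. The attribute set (SIL level, failure rate, criticality, source) follows the examples mentioned above, but the class itself and its field names are assumptions made only for this illustration; they are not prescribed by the EIA-632 standard or by any particular tool.

```python
# Minimal sketch of a safety requirement record (illustrative assumption, not a standard schema).
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignElement:
    name: str                       # a function or component of the design solution

@dataclass
class SafetyRequirement:
    req_id: str
    text: str
    source: str                     # acquirer, other stakeholder, preliminary hazard analysis, ...
    criticality: int                # weight reflecting the hierarchical organization
    sil: int                        # Safety Integrity Level
    failure_rate: float             # e.g. failures per hour
    satisfied_by: List[DesignElement] = field(default_factory=list)

# Example: a technical safety requirement derived from a preliminary hazard analysis.
r1 = SafetyRequirement(
    req_id="SR-012",
    text="Loss of the braking function shall occur less often than 1e-9 per hour.",
    source="preliminary hazard analysis",
    criticality=1,
    sil=4,
    failure_rate=1e-9,
)
r1.satisfied_by.append(DesignElement("Braking control unit"))   # link towards the design solution
```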
3.2.2 Solution Definition Process
The Solution Definition Process is used to generate an acceptable design solution. For the logical solution representations, the developer shall define one or more validated sets of logical solution representations that conform to the technical requirements of the system. The recommendation is to use semi-formal or formal models for the solution modeling; the use of formal models allows the automation of verification and analysis. In this process, safety analysis techniques will be used to determine the best logical solution. The physical solution representations are derived from the logical solution representations and must respect all requirements, particularly the safety requirements. The same safety analyses may be performed for the physical solution representations, and the same recommendations as for the logical solution apply.
3.3 Technical Evaluation Processes
The Technical Evaluation Processes are intended to be invoked by one of the other processes for engineering a system. Four processes are involved: Systems Analysis, Requirements Validation, System Verification and End Products Validation.
3.3.1 System Analysis Process
In the system analysis process, the developer shall perform risk analysis to develop risk management strategies, support the management of risks and support decision making. The risk analysis step can generate safety requirements other than those defined by the acquirer and other stakeholders. These new requirements must be taken into account.
3.3.2 Requirements Validation Process
Requirements Validation is critical to successful system product development and implementation. Requirements are validated when it is certain that they describe the input requirements and objectives such that the resulting system products can satisfy them. In this process, great attention is given to traceability analysis, which verifies all the links among Acquirer and Other Stakeholder Requirements, Technical and Derived Technical Requirements, and Logical Solution Representations. Like other requirements, safety requirements must be validated; their validation contributes to designing a safe system. To facilitate this step, semi-formal languages such as UML [19] or SysML [20] can be used for a good formulation of the requirements. Indeed, the many different people involved in a system design project may have limited knowledge of the structure of the future system, which is why industry-scale requirements engineering projects are so hard; the use of UML or SysML, with their different diagrams, can therefore be helpful.
3.3.3 System Verification Process
The System Verification Process is used to ascertain that the generated system design solution is consistent with its source requirements, in particular the safety requirements. Traceability models allow the procedures for verifying the safety requirements to be defined; these procedures are planned when the safety requirements are defined. Simulation is a commonly used method to achieve system verification; other methods such as virtual prototyping, model checking and testing can also be used.
4 Information Model
4.1 Requirements Management
Requirements management is a crucial activity for the success of a project [5]. Indeed, a large number of documents can be produced in the system definition phase. Without requirements management, it seems impossible to ensure the consistency and the quality necessary for success. Statistical studies show that the success or failure of a project depends, for about 40%, on the definition and the management of requirements. Requirements management allows one to:
• collect requirements and facilitate their expression,
• detect inconsistencies between them,
• validate them,
• manage requirements changes and ensure their traceability.
It must also ensure that each requirement is properly broken down, allocated, monitored, satisfied, verifiable, verified and justified. Figure 1 presents an overview of the requirements management of the EIA-632 standard; the proposed information model is inspired by this pattern. We see that the other stakeholder requirements, when added to the acquirer requirements, make up a set of stakeholder requirements that are transformed into system technical requirements. The logical and physical solution representations are derived from the technical requirements. The design solution and the specified requirements are defined by completing the Solution Definition Process.
4.2 Supporting the Design
An information model can be used to:
• guide the design,
• manage requirements changes,
• evaluate project progress,
• or simply help to understand the system development, on the basis of a common and understandable language.
Indeed, modelling is important for the following reasons:
• it is a support for system analysis and design,
• it can be used for sharing knowledge,
• it is used to capitalize knowledge.
Modelling allows the transformation of needs into the system definition. In fact, during this transformation, we gradually go from abstract concepts to a rigorous definition of the system. In modelling, there are two separate areas: the problem area and the possible solutions area. At the beginning of the project, the representation of the problem area is more important than the representation of the possible solutions area. As the design progresses, the representation of the possible solutions area is enriched to achieve the strict definition of the system. In parallel, the overall representation of the problem area is enriched to better define the expectations of the system (needs/requirements) until it is stabilized. The transition between the problem domain and the solution domain is a very delicate point of system engineering. It must be expressed by allocating requirements/properties/constraints to possible solutions. These allocations generate traceability links, which are crucial for the system verification and validation steps. We propose an information model that is compatible with the requirements of the EIA-632 standard, while adding aspects of safety and risk management. We use the SysML language to establish this information model, thanks to its different available diagrams which establish SysML as the language for systems engineering.
Fig. 1 Requirements management of the EIA-632
4.3 Requirements Modeling and Management for Safety
SysML is a systems modeling language that supports specification, analysis, design, verification and validation of a broad range of complex systems. The language is an evolution of UML 2.0 and is defined for systems that may include hardware, software, information, processes and personnel. It aims to facilitate the communication between heterogeneous teams (mechanical, electrical and software engineers for instance) who work together. The language is effective in specifying requirements, structure, behaviour, allocations of elements to models, and constraints on system
properties to support engineering analysis. SysML, through a unique environment integrating requirements, allows modeling and supports different views:
• The requirements: Requirements diagram, Use Case diagram,
• The structure: Block diagram (internal/external),
• The behaviour: Statechart, Activity diagram, Sequence diagram,
• The constraints: Parametric diagram.
So, SysML seems to be an excellent candidate for a common language. It allows the specifications of a complex system to be shared between different disciplines, in our case between design engineers and safety engineers. Among other benefits of SysML, we can cite:
• identification of risks and the creation of a common analytical basis for all participants of the project;
• easier management of complex projects, and better scalability and maintainability of complex systems;
• documentation and capitalization of the knowledge of all disciplines in a project.
SysML provides the opportunity to express the requirements using the requirements diagram. It also defines some relationships that link a given requirement to other requirements or to elements of the model. It becomes possible to define a hierarchy between requirements, requirement derivation, requirement satisfaction by a model element, requirement verification by a test case (TestCase), or requirement refinement. So, this language forms a good basis for our information model. Indeed, in the system definition process, it is necessary to establish a relationship between the identified requirements and the system functions and/or components. The traceability models linking requirements to the system components make it possible to perform impact analyses of requirement changes or modifications. Thus, it is possible to assess the consequences of a requirement change on system safety using the network built between requirements, functions and components.
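To show the kind of impact analysis such a traceability network makes possible, the following Python sketch stores trace links as triples and computes the set of model elements affected by a change. It is only an illustration of the principle: the link names mirror SysML relationships (deriveReqt, satisfy, verify, allocate), but the element names and the helper function are invented for this example and are not part of the authors' model.

```python
# Hedged sketch: traceability links stored as (source, link_type, target) triples.
from collections import defaultdict

links = [
    ("AcqReq-1",    "deriveReqt", "TechReq-7"),      # technical req. derived from an acquirer req.
    ("TechReq-7",   "satisfy",    "Function-F3"),    # satisfied by a logical-solution element
    ("Function-F3", "allocate",   "Component-C2"),   # allocated to a physical component
    ("TechReq-7",   "verify",     "TestCase-T4"),    # verified by a test case
]

graph = defaultdict(list)
for src, _, dst in links:
    graph[src].append(dst)
    graph[dst].append(src)            # impact propagates in both directions

def impact_set(changed_element):
    """Return every model element reachable from the changed one via trace links."""
    seen, stack = set(), [changed_element]
    while stack:
        node = stack.pop()
        for nxt in graph[node]:
            if nxt not in seen and nxt != changed_element:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(impact_set("Component-C2"))     # elements to re-examine if the component changes
```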
4.4 Proposition
In this part, we propose a system-level approach to improve requirements management for safe system design. This approach is based on a SysML information model, following the SE processes of the EIA-632 standard. This information model is the "system" knowledge base of the design project, allowing data sharing between all engineering disciplines (mechanical, hydraulic, thermal, electrical, ...). Therefore, the model is intended to model the "system" level, showing the interactions with the environment and the connections between the various subsystems. The information model must be seen as a means of knowledge sharing, including three components: requirements, design solution and V&V. It is considered as the interconnection level between all the different disciplines. The safety authorities impose a separation of system design concepts: the requirements, the design solution and the V&V parts must be developed independently, and we must be able to distinguish these different concepts clearly. Based on this observation, the proposed approach allows the
expression of all the concepts, with a separation between them but with traceability links created between them, in order to facilitate understanding and impact analysis. With SysML it is easy, and possible, to mix all the concepts in a single diagram. We therefore propose an extension of SysML and an information meta-model that allow the elements of the design project to be structured while respecting the separation of concepts. In other words, our approach allows a rigorous organization of the design project: different diagrams manage different concepts.
4.5 The Information Model
The information model (Figure 2) that we propose is adapted to the EIA-632 standard, making a clear distinction between the different requirement classes (acquirer, other stakeholder, technical or specified). To achieve this meta-model in SysML, we have extended the language. Firstly, we define new stereotypes for requirements and add new attributes to them. Then we define a new link type (specify) linking the specified requirements to model elements.
Fig. 2 Information model
In this model, we have reduced the number of requirement classes. Indeed, we consider that "systemTechnicalReqt" represents the system technical requirements, but also the system technical requirements not allocated to the logical solution and the derived technical requirements coming from the logical and physical solutions.
The acquirer and other stakeholder requirements are represented; the field 'requirement source' must be consistent with the stereotype and, in the case of "OtherStakeholderReqt", indicates more precisely which stakeholders are concerned. Note: readers who are not familiar with all the above concepts (system technical requirements, logical solution, physical solution) are invited to refer to the EIA-632 standard [13]. All traceability links requested by the EIA-632 are considered in this model, and the distinction between the logical solution (functional part) and the physical solution (component part) appears. In this model, we highlight the definition of interfaces, which are components themselves and which link several components together. The concept of interface is essential for a proper system design; indeed, interfaces are one source of problems encountered during development. The last important element included in this model, which is neither a requirement nor a design solution element, is the "TestCase". These V&V elements are included in the model so that they can be directly connected to the requirements they satisfy. Concerning safety requirements and the consideration of safety in design, a "risk" block is defined and linked to the safety requirements, which can be derived from risk analysis (see Figure 3). In fact, the identification of risks is the starting point of many studies on security, but also on reliability. Thus, defining a "risk" block in the information model, together with its link to the safety requirements, allows on the one hand the understanding of the system to be improved and the requirements to be justified, and on the other hand it shows that all the identified risks are taken into account. Impact analyses also benefit from the presence of risks in the information model, because the risks that could be challenged by a change of a model element (requirement, function, component) can be viewed directly.
Fig. 3 From risk to safety requirements
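To make the structure of Figures 2 and 3 more concrete, the hedged Python sketch below encodes the simplified requirement classes, the 'specify' link, the 'TestCase' element and the risk block as plain classes. The class and field names follow the text (e.g., SystemTechnicalReqt, OtherStakeholderReqt), but the implementation itself is an illustrative assumption; it is not generated from the proposed SysML meta-model.

```python
# Illustrative encoding of the information model (assumption: not the authors' SysML meta-model).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Requirement:
    req_id: str
    text: str
    source: str = ""                     # 'requirement source' field of the model

@dataclass
class AcquirerReqt(Requirement): pass

@dataclass
class OtherStakeholderReqt(Requirement): pass

@dataclass
class SystemTechnicalReqt(Requirement):
    derived_from: List[Requirement] = field(default_factory=list)

@dataclass
class SpecifiedReqt(Requirement):
    specifies: Optional[object] = None   # the new 'specify' link to a model element

@dataclass
class Risk:
    risk_id: str
    description: str
    covered_by: List[Requirement] = field(default_factory=list)   # link of Figure 3

@dataclass
class TestCase:
    name: str
    verifies: List[Requirement] = field(default_factory=list)

def uncovered_risks(risks: List[Risk]) -> List[str]:
    """A risk is covered when at least one safety requirement is linked to it."""
    return [r.risk_id for r in risks if not r.covered_by]

hazard = Risk("RSK-3", "Loss of the braking function")
print(uncovered_risks([hazard]))         # -> ['RSK-3'] until a safety requirement covers it
```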
5 Conclusion
Our contribution in this paper can be summarized in the following points: firstly, we illustrated the importance of a global approach for safety evaluation and management. Considering this point, we proposed a system engineering approach based on the EIA-632 standard. Then we focused our presentation on the integration of safety management in the system engineering processes, from the requirements definition process
to the verification and validation process. Requirements engineering is a crucial activity for the success of a complex system design and development project, and effective requirements management is necessary. So, in the second part we introduced an information model based on the SysML language. We proposed an extension of this language to define a meta-model, adding new stereotypes and new attributes for requirements. We also defined a new type of link (specify) which links specified requirements to the elements of the model. The proposed model allows the expression of all the handled concepts, and the creation of traceability links between these concepts to facilitate comprehension and impact analysis.
References 1. Avizienis, A., Laprie, J.-C., Randell, B., Landwehr, C.: Basic Concepts and Taxonomy of Dependable and Secure Computing. IEEE Transactions on Dependable and Secure Computing 1, 11–33 (2004) 2. Rasmussen, J.: Risk Management in a Dynamic Society: A Modelling Problem. Safety Science 27(2/3), 183–213 (1997) 3. Gotel, O., Finkelstein, A.: An Analysis of the Requirements Traceability Problem. In: 1st International Conference on Requirements Engineering (ICRE 1994), Colorado Springs, April 1994, pp. 94–101 (1994) 4. Sommerville, I.: Software Engineering (Update), 8th edn. International Computer Science. Addison-Wesley Longman Publishing Co., Inc., Boston (2006) 5. Juristo, N., Moreno, A.M., Silva, A.: Is the European Industry Moving Toward Solving Requirements Engineering Problems? IEEE Software 19(6), 70–77 (2002) 6. Komi-Sirvio, S., Tihinen, M.: Great Challenges and Opportunities of Distributed Software Development - An Industrial Survey. In: Proceedings of the Fifteenth International Conference on Software Engineering & Knowledge Engineering (SEKE 2003), pp. 489– 496 (2003) 7. Robertson, S., Robertson, J.: Mastering the Requirements Process, 2nd edn. AddisonWesley Professional, Reading (2006) 8. Parviainen, P., Tihinen, M., Lormans, M., van Solingen, R.: Requirements Engineering: Dealing with the Complexity of Sociotechnical Systems Development. In: Mate, J.L., Silva, A. (eds.) Requirements Engineering for Sociotechnical Systems, ch. 1, pp. 1–20. IdeaGroup Inc. (2004) 9. Goguen, J., Linde, C.: Techniques for requirements elicitation. In: 1st IEEE International Symposium on Requirements Engineering, San Diego, January 4-6, pp. 152–164 (1993) 10. Sahraoui, A.-E.-K.: Requirements Traceability Issues: Generic Model, Methodology And Formal Basis. International Journal of Information Technology and Decision Making 4(1), 59–80 (2005) 11. Sahraoui, A.E.K., Buede, D., Sage, A.: Issues in systems engineering research. In: INCOSE congress, Toulouse (2004) 12. Spitzer Cary, R.: Avionics Development and Implementation, 2nd edn. CRC Press, Boca Raton (2007) 13. EIA-632: processes for engineering systems 14. Kotovsky, K., Hayes, J.R., Simon, H.A.: Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology 17 (1985) 15. Bozzano, M., Cavallo, A., Cifaldi, M., Valacca, L., Villafiorita, A.: Improving Safety Assessment of Complex Systems: An Industrial case study. In: FM 2003, Pisa, September 8-14 (2003)
16. Akerlund, O., Bieber, P., Boede, E., Bozzano, M., Bretschneider, M., Castel, C., Cavallo, A., Cifaldi, M., Gauthier, J., Griffault, A., Lisagor, O., Lüdtke, A., Metge, S., Papadopoulos, S., Peikenkamp, T., Sagaspe, L., Seguin, C., Trivedi, H., Valacca, L.: ISAAC, a framework for integrated safety analysis of functional, geometrical and human aspects. In: European Congress on Embedded Real-Time Software (ERTS 2006), Toulouse, January 25-27 (2006) 17. Certification considerations for highly-integrated or complex aircraft systems. Society of Automotive Engineers (December 1994) 18. Guidelines and methods for conducting the safety assessment process on civil airborne systems and equipment. Society of Automotive Engineers (August 1995) 19. Booch, G., Rumbaugh, J., Jacobson, I.: The Unified Modeling Language User Guide. Addison-Wesley, Reading (1998) 20. SysML: Source Specification Project, http://www.sysml.org/
Discrete Search in Design Optimization Martin Fuchs and Arnold Neumaier
Abstract. It is a common feature of many real-life design optimization problems that some design components can only be selected from a finite set of choices. Each choice corresponds to a possibly multidimensional design point representing the specifications of the chosen design component. In this paper we present a method to explore the resulting discrete search space for design optimization. We use the knowledge about the discrete space represented by its minimum spanning tree and find a splitting based on convex relaxation.
1 Introduction
In design optimization the typical goal is to find a design with minimal cost while satisfying functionality constraints. In some cases there are further objectives relevant to assess the quality of the design, thus leading to multicriteria optimization. We focus on the case that the objective is 1-dimensional and we look for the optimal design θ ∈ T ⊂ R^{n_0}, where the components of T = T^1 × ··· × T^{n_0} are intervals of continuous variables (e.g., the thickness of a car's body) or integer variables (e.g., the choice between different motor types). The integer variables arise from the reformulation of discrete multidimensional sets in terms of subsets of ℤ. That means the original problem contains constraints like z ∈ Z := {z_1, ..., z_N}, z_k ∈ R^n, where z_i contains the specifications of the choice i, e.g., mass, performance, and cost of different motors. The constraint z ∈ Z can be reshaped to an integer formulation of the search space by using N binary variables b_1, ..., b_N, b_i ∈ {0, 1}, and the constraints z = ∑_{i=1}^N b_i z_i and ∑_{i=1}^N b_i = 1. Martin Fuchs CERFACS, 31057 Toulouse, France e-mail: [email protected] Arnold Neumaier University of Vienna, Faculty of Mathematics, 1090 Wien, Austria e-mail: [email protected]
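As a concrete illustration of this reshaping, the sketch below encodes a small catalogue constraint z ∈ Z with binary selection variables b_i and evaluates z = ∑_i b_i z_i under ∑_i b_i = 1. The catalogue values are invented for the example; they are not taken from the paper.

```python
# Hedged sketch of the binary reformulation of a catalogue constraint z in {z_1, ..., z_N}.
import numpy as np

catalogue = np.array([[4.0, 0.0],    # z_1: e.g. (mass, cost) of motor 1 -- values invented
                      [6.0, 1.5],    # z_2
                      [5.0, 2.0]])   # z_3
N, n = catalogue.shape

def decode(b):
    """Given binary selection variables b (one per catalogue entry), return z = sum_i b_i z_i."""
    b = np.asarray(b)
    assert b.sum() == 1 and set(b.tolist()) <= {0, 1}, "exactly one entry must be selected"
    return catalogue.T @ b

print(decode([0, 1, 0]))             # selects z_2 -> [6.  1.5]
```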
Depending on the objective function and the constraints, design optimization belongs to one of the following problem classes of mixed integer programming: both objective function and constraints are all linear, i.e., mixed integer linear programming (MILP); at least one constraint or the objective function is nonlinear, i.e., mixed integer nonlinear programming (MINLP); at least one constraint or the objective function is given as a black box, i.e., black box optimization. For all these types of problems there exist algorithms that employ a splitting of the search space T as a crucial part of their solution technique. A splitting technique finds a subdivision of the original problem into two or more subproblems such that the associated optimization algorithm can decide how to proceed solving these subproblems, which may possibly include further splitting. The subdivision must ensure that the optimal solution of the original problem can be found as one of the solutions of the subproblems. For a study of efficient methods using splitting in branch and bound for MILP see, e.g., [3, 11]. Additionally, methods for MINLP also employing branch and bound can be found, e.g., in [10, 14]. Branching in black box problems with continuous variables only is studied, e.g., in [7, 9]. Branching rules in mixed integer programming (mainly MILP) are presented, e.g., in [1]. For a survey on discrete optimization, including branch and bound, see [13]. The special class of design optimization problems can be studied, e.g., in [2, 4, 12]. If the design optimization problem is formulated using integer choice variables, it can be tackled heuristically without branching, e.g., by separable underestimation [5]. An optimization algorithm using a branching method on the integers does not exploit the knowledge about the structure of Z. Using this knowledge, however, may have significant advantages, since the constraints that model the functional relationships between different components of the design depend on the values of z ∈ Z rather than on the values of the integer choices. In this paper we present a splitting strategy for discrete search spaces such as Z described above. We use only local information about the functional constraints. Thus the method is applicable in all problem classes of mixed integer programming, even for black box optimization, which frequently occurs in design optimization. We use the knowledge about the structure of the space Z represented by its minimum spanning tree [15]. We represent a solution of an auxiliary optimization problem as a convex combination of the points z_1, ..., z_N, and use the coefficients of the combination to determine a splitting across an edge of the minimum spanning tree of Z, cf. [6]. An implementation of our method in MATLAB can be found at www.martin-fuchs.net/downloads.php. This paper is organized as follows. In Section 2 we introduce the design optimization problem formulation and notation. In Section 3 we describe how to determine a splitting based on convex relaxation of the discrete constraints. A simple solver strategy that employs our splitting technique is sketched in Section 4. Results for a real-life example in 10 dimensions are given in Section 5.
2 Design Optimization

Let F : R^{n_z} → R be a scalar objective function which typically is a black box model, e.g., for the cost of the design, containing the functional relationships between different design components. We assume that the design optimization problem is formulated in the following form:

\[
\begin{aligned}
\min_{\theta,\,z} \quad & F(z) \\
\text{s.t.} \quad & z = Z(\theta), \\
& \theta \in T,
\end{aligned}
\tag{1}
\]

where Z : R^{n_0} → R^{n_z}, θ = (θ^1, θ^2, ..., θ^{n_0})^T is a vector of choice variables, ^T means transposed, and T is the domain of possible choices of θ. We assume that the choices can be discrete or continuous. By I_d we denote the index set of choice variables that are discrete, and we denote the index set of choice variables that are continuous by I_c. We have I_d ∪ I_c = {1, 2, ..., n_0}, I_d ∩ I_c = ∅. The selection constraints θ ∈ T specify which choices are allowed for each choice variable, i.e., T = T^1 × ··· × T^{n_0}, T^i = {1, 2, ..., N_i} for i ∈ I_d, and T^i = [\underline{θ}^i, \overline{θ}^i] for i ∈ I_c.

The mapping Z assigns an input vector z to a given choice vector θ. In the discrete case, i ∈ I_d, a choice variable θ^i determines the value of n_i components of the vector z ∈ R^{n_z} which is the input for the model F. Let 1, 2, ..., N_i be the possible choices for θ^i, i ∈ I_d; then the discrete choice variable θ^i corresponds to a finite set of N_i points in R^{n_i}. Usually this set is provided in an n_i × N_i table τ^i, i.e., Z^i(θ^i) is the θ^i-th column of τ^i (see, e.g., Table 1). The mapping Z^i, i ∈ I_d, can be regarded as a reformulation of a multidimensional discrete sub search space consisting of Z^i(1), ..., Z^i(N_i) into the integer choices 1, ..., N_i. In the continuous case, i ∈ I_c, the choice variable θ^i belongs to an interval [\underline{θ}^i, \overline{θ}^i], i.e.,

\[
Z^i(\theta^i) := \theta^i \quad \text{for } \theta^i \in [\underline{\theta}^i, \overline{\theta}^i]. \tag{2}
\]

Via the concatenation of the Z^i(θ^i) we now define

\[
Z(\theta) := z := (Z^1(\theta^1), \ldots, Z^{n_0}(\theta^{n_0})). \tag{3}
\]

Note that any vector z = Z(θ) has the length n_z = ∑_{i ∈ I_d} n_i + |I_c|. We call Z a table mapping as the nontrivial parts of Z consist of the tables τ^i.

Toy example. Let T = T^1 × T^2 × T^3 be the choices for the design of a car: let T^1 = {1, ..., 7} be seven choices for the motor of the car, let T^2 = [0, 2] be the continuous choice of the thickness of the car's body, and let T^3 = {1, ..., 10} be ten choices for the length of the car's axes, which are only available in integer units. Thus we have I_d = {1, 3}, I_c = {2}. Let the associated table τ^1 be as shown in Table 1 with N_1 = 7,
n_1 = 2, describing two-dimensional characteristics of each motor. The table τ^3 is simply given by τ^3 = (1, 2, ..., 10)^T. Let the objective function F be given by

\[
F(z) = F(v^1_1, v^1_2, v^2_1, v^3_1) = \left(v^1_1 + \tfrac{1}{2}\right)^2 + \left(v^1_2 - \tfrac{3}{4}\right)^2 + \exp(v^2_1) + (v^3_1 - 10)^2, \tag{4}
\]

modeling the cost of the designed car, where v^i := Z^i(θ^i). The optimal solution of (1) with these specifications is given as θ = (5, 0, 10) since Z^1(5) = (−4, 0)^T is closest to (−1/2, 3/4)^T and the two choices θ^2 and θ^3 are obvious. Note that the objective in fact depends on Z(T), and not on the integer values in T^1 or T^3. This is the crucial message of this example: in design optimization discrete choices are typically associated with multidimensional discrete spaces. Subdividing the search space should thus subdivide Z(T) instead of T. The rest of the example is constructed to be sufficiently simple to illustrate the method presented in the next sections.

Table 1 Tabulated data Z^1(T^1)

θ^1     1   2   3   4    5    6    7
v^1_1   4   4   6   5   −4   −8   −4
v^1_2   0   1   0   3    0    0    2
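The toy example can be reproduced with the following Python sketch (the authors' implementation is in MATLAB; this fragment is only an illustration). It encodes the table mapping Z and the objective (4) and confirms the optimum θ = (5, 0, 10) by enumeration; for the purpose of this check the continuous choice θ² is simply sampled on a coarse grid of [0, 2].

```python
# Toy example of Section 2 as a small Python sketch (illustration only).
import math
from itertools import product

# Table tau^1: columns are the 7 motor choices, rows are (v^1_1, v^1_2).
tau1 = {1: (4, 0), 2: (4, 1), 3: (6, 0), 4: (5, 3),
        5: (-4, 0), 6: (-8, 0), 7: (-4, 2)}

def Z(theta):
    t1, t2, t3 = theta
    v11, v12 = tau1[t1]              # discrete choice 1 yields two components
    return (v11, v12, t2, t3)        # continuous t2 and integer t3 map to themselves

def F(z):
    v11, v12, v21, v31 = z
    return (v11 + 0.5)**2 + (v12 - 0.75)**2 + math.exp(v21) + (v31 - 10)**2

# Enumerate the discrete choices; theta^2 is sampled on a coarse grid of [0, 2].
best = min(product(range(1, 8), [0.0, 0.5, 1.0, 1.5, 2.0], range(1, 11)),
           key=lambda th: F(Z(th)))
print(best, F(Z(best)))              # -> (5, 0.0, 10), objective value 13.8125
```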
The method that we use to solve (1) is a splitting strategy based on convex relaxation of the discrete search spaces, cf. [6]. We solve (1) as a special case of the following optimization problem:

\[
\begin{aligned}
\min_{\theta,\,x} \quad & c^T x \\
\text{s.t.} \quad & F(Z(\theta)) \le Ax, \\
& \theta \in T := T^1 \times \cdots \times T^{n_0},
\end{aligned}
\tag{5}
\]

where F : R^{n_z} → R^{m_F}, possibly m_F > 1, c, x ∈ R^{m_x}, A ∈ R^{m_F × m_x}. With c = A = 1, and by eliminating x and introducing a new intermediate variable z in (5), we see that (1) is a special case of (5).
3 Convex Relaxation Based Splitting Strategy

The idea behind the method presented in this section is that a representation of a continuous relaxed solution ẑ = (ẑ_1, ..., ẑ_{n_z}) of (5) by convex combinations of the points Z^i(θ^i), θ^i ∈ T^i, i ∈ I_d, gives an insight about the relationship between the solution ẑ and the structure of the discrete search space in each component i ∈ I_d. The split divides this space into two branches, where the contribution of each branch to the relaxed solution of (5) is as balanced as possible. Thus we can exploit
the knowledge of the structure of Z^i(T^i) – represented by its minimum spanning tree – towards finding a natural subdivision of T. A convex combination for ẑ = (v̂^1, ..., v̂^{n_0}) is given by

\[
\hat{v}^i = \sum_{\theta^i = 1}^{N_i} \hat{\lambda}^i_{\theta^i} Z^i(\theta^i) \quad \text{for } i \in I_d, \tag{6}
\]

with ∑_{j=1}^{N_i} λ̂^i_j = 1, λ̂^i_j ≥ 0, i.e., a convex combination of the finitely many tabulated vectors in R^{n_i} given in a table τ^i. We will see how the coefficients λ̂^i = (λ̂^i_1, ..., λ̂^i_{N_i}), i ∈ I_d, of the convex combination in the ith coordinate impose a splitting of the search space components T^i, i ∈ I_d.

To compute a convex relaxation of (5) we reformulate the problem as follows. Assume that we have an initial set of N_0 starting points z_1, ..., z_{N_0} coming from a relaxation Z^1_rel × ··· × Z^{n_0}_rel of the discrete constraints. That means the discrete sets Z^i(T^i) ⊂ R^{n_i}, i ∈ I_d, are relaxed to interval bounds Z^i_rel = Z^i_{rel,1} × ··· × Z^i_{rel,n_i}, where Z^i_{rel,k} = [ℓ, u] with ℓ = min_{v^i ∈ Z^i(T^i)} v^i_k, u = max_{v^i ∈ Z^i(T^i)} v^i_k. Let F_1 = F(z_1), ..., F_{N_0} = F(z_{N_0}) be the function evaluations of the model F at the starting points which are used to approximate F. We solve the following problem:

\[
\begin{aligned}
\min_{z,\,x,\,\mu,\,v,\,\lambda} \quad & c^T x + \varepsilon \|\mu\|_p \\
\text{s.t.} \quad & \sum_{j=1}^{N_0} \mu_j F_j \le Ax, \\
& z = \sum_{j=1}^{N_0} \mu_j z_j, \qquad \sum_{j=1}^{N_0} \mu_j = 1, \\
& z = (v^1, \ldots, v^{n_0}), \\
& v^i = \sum_{j=1}^{N_i} \lambda^i_j Z^i(j) \quad \text{for } i \in I_d, \\
& \sum_{j=1}^{N_i} \lambda^i_j = 1 \quad \text{for } i \in I_d, \\
& \lambda^i_j \ge 0 \quad \text{for } i \in I_d,\ 1 \le j \le N_i, \\
& v^i \in [\underline{\theta}^i, \overline{\theta}^i] \quad \text{for } i \in I_c.
\end{aligned}
\tag{7}
\]

Here we approximate F at the given evaluation points, i.e., F(z) ≈ ∑_{j=1}^{N_0} μ_j F_j for z = ∑_{j=1}^{N_0} μ_j z_j, ∑_{j=1}^{N_0} μ_j = 1. We require the solution to be a convex combination of the tabulated points Z^i(j) in the discrete case i ∈ I_d, i.e., z = (v^1, ..., v^{n_0}), v^i = ∑_{j=1}^{N_i} λ^i_j Z^i(j), ∑_{j=1}^{N_i} λ^i_j = 1, λ^i_j ≥ 0, 1 ≤ j ≤ N_i. And we require the solution to respect the bound constraints on the continuous choices, i.e., v^i ∈ [\underline{θ}^i, \overline{θ}^i]. The constant ε can be considered as a regularization parameter, adjusted externally. The objective function in (7) is convex, the constraints are linear.
Remark 1. In case of design optimization with 1-dimensional F, the problem (7) is typically unbounded for p = 1 and low ε. After increasing ε it typically changes towards a binary solution μ_i = 1 for some i and μ_k = 0 for k ≠ i. A binary μ would simply mean that if there are starting points among the z_j in the Cartesian product of the convex hulls of Z^i(T^i), then the solution ẑ =: z_start is the best of these points. Choosing p = 2 and increasing ε from 0 towards ∞ in numerical experiments, the solution of (7) typically changes from unbounded to a binary solution and then converges to μ = (1/N_0, ..., 1/N_0), which may produce an alternative solution ẑ ≠ z_start. Hence in case that m_F = 1 we solve (7) twice, with p = 1 and p = 2. Thus we get two solutions ẑ_1 and ẑ_2, respectively. Then we compare the two solutions by evaluating F, i.e., F̂_1 := F(ẑ_1) and F̂_2 := F(ẑ_2). If F̂_2 < F̂_1 we use in the remainder of our method the convex combination λ̂ found by (7) with p = 2; otherwise we use the convex combination λ̂ found by (7) with p = 1.

Remark 2. One could also approximate F nonlinearly in (7). Hence (7) becomes a nonlinear programming problem which requires a different formulation.

The solution of (7) gives the values of the coefficients λ̂^i = (λ̂^i_1, ..., λ̂^i_{N_i}), i ∈ I_d, of the convex combinations for v̂^i. These values are now used to determine a splitting of T^i. Consider the ith coordinate θ^i ∈ T^i = {1, 2, ..., N_i}, i ∈ I_d. One computes the minimum spanning tree for the points Z^i(T^i) ⊂ R^{n_i}, see, e.g., Fig. 1. For a fixed edge k in the graph belonging to the minimum spanning tree of Z^i(T^i) we denote by Z^i_{k1} the set of all points on the right side of k, and we denote by Z^i_{k2} the set of points on the left side of k (e.g., in Fig. 1 let k be the edge (1—5), then Z^1_{k1} = {1, 2, 3, 4} and Z^1_{k2} = {5, 6, 7}).

For every edge k in the minimum spanning tree of Z^i(T^i) one computes the weight w^i_{k1} of all points in Z^i_{k1} and the weight w^i_{k2} of all points in Z^i_{k2} by

\[
w^i_{k1} = \sum_{\{j \mid Z^i(j) \in Z^i_{k1}\}} \hat{\lambda}^i_j, \tag{8}
\]

\[
w^i_{k2} = \sum_{\{j \mid Z^i(j) \in Z^i_{k2}\}} \hat{\lambda}^i_j. \tag{9}
\]

We split T^i into T^i_1 and T^i_2 across the edge k̂ = argmin_k |w^i_{k1} − 1/2|, i.e., the edge where the weight on the one side and the weight on the other side are closest to 50%.

Remark 3. Interpreting the weight on one side as the contribution to the relaxed solution, we thus split T^i as balanced as possible. Though a balanced splitting is not necessarily the best strategy in general, it is motivated quite naturally. If there is one
leaf of the minimum spanning tree close to the relaxed solution, it has a weight close to 1, and we split off just this leaf, thus strongly reducing the search space. If the weights are rather uniform we do not have sufficiently precise information to split off just a small subset of the minimum spanning tree, so we split in a rather uniform way, i.e., into two partitions of similar weight.

Remark 4. In total we find up to 2|I_d| possible branches. The decision on which variable to split – and whether or how to join the branches in an optimization algorithm in order to find a division of the search space after computing T^i_1, T^i_2 – may seriously affect the performance of the algorithm and depends on how the algorithm handles the resulting subproblems. Section 4 presents one possibility of a branching strategy.

Remark 5. Using the minimum spanning tree is not scaling invariant, as the results depend on the distances between the discrete points Z^i(T^i). Hence the user of the method should use a scaling of the variables where distances between the discrete points have a reasonable meaning.

A particular strength of our approach is that we do not require information about F except for the function evaluations F_1, ..., F_{N_0}; hence black box functions F, which occur frequently in real-life applications, can also be handled. An implementation of the method can be found online at www.martin-fuchs.net/downloads.php.

Toy example. To solve our example problem we apply the method with N_0 = 20, c = A = 1, and ε = 10^2. We look for splittings of T^i, i ∈ I_d = {1, 3}. The graph of the minimum spanning tree of Z^1(T^1) is shown in Fig. 1.
Fig. 1 Graph of the minimum spanning tree of Z^1(T^1) (horizontal axis v^1_1, vertical axis v^1_2)
Relaxing the discrete search space to a continuous space, the solution of (5) is obtained at ẑ = (−1/2, 3/4, 0, 10). The convex combination with fixed λ̂^1 = (1/8, 1/8, 1/8, 1/8, 1/6, 1/6, 1/6) would give

\[
\sum_{j=1}^{7} \hat{\lambda}^1_j Z^1(j) = (-0.29, 0.83), \tag{10}
\]

which is close to (ẑ_1, ẑ_2), so our method should find a weighting similar to λ̂^1. This weighting would apparently lead to a split of Z^1(T^1) across the edge k = (1—5) since ∑_{j ∈ {1,2,3,4}} λ̂^1_j = ∑_{j ∈ {5,6,7}} λ̂^1_j = 0.5. Thus it is not surprising that our implemented method actually splits T^1 into T^1_1 = {1, 2, 3, 4} and T^1_2 = {5, 6, 7} in most experiments. Sometimes it splits off the leaf 7, i.e., T^1_1 = {1, 2, 3, 4, 5, 6} and T^1_2 = {7}, or it finds T^1_1 = {1, 3, 5, 6, 7}, T^1_2 = {2, 4}, depending on the approximation and the convex combination found from (7), determined by the function evaluations F_1, ..., F_{N_0}. The split of T^3 is expected to split off the leaf 10 since F is strictly monotone decreasing in v^3 = z_4 ∈ [1, 10], resulting in a weight λ̂^3_{10} close to 1. Except for few cases where we find T^3_1 = {1, 2, 3, 4, 5, 6, 7, 8}, T^3_2 = {9, 10}, we find the expected splitting of T^3 into T^3_1 = {1, 2, 3, 4, 5, 6, 7, 8, 9} and T^3_2 = {10} with our implemented method.
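The splitting of T^1 just described can be reproduced with a small self-contained sketch. The minimum spanning tree routine below is a plain Prim implementation written only for this illustration (the authors' MATLAB code is available at the address given above), and the coefficients λ̂^1 are the fixed values of the toy example rather than an actual solution of (7).

```python
# Hedged sketch: split a discrete component across the MST edge with the most balanced weights.
import numpy as np

def minimum_spanning_tree(points):
    """Prim's algorithm; returns the MST as a list of edges (i, j), 0-based."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: dist[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges

def balanced_split(points, lam):
    """Cut the MST edge whose two sides have weights (8)-(9) closest to 50% each."""
    edges = minimum_spanning_tree(points)
    best = None
    for cut in edges:
        comp, stack = {cut[0]}, [cut[0]]        # component containing cut[0] after removing 'cut'
        while stack:
            u = stack.pop()
            for a, b in edges:
                if (a, b) != cut:
                    for x, y in ((a, b), (b, a)):
                        if x == u and y not in comp:
                            comp.add(y)
                            stack.append(y)
        w1 = sum(lam[k] for k in comp)          # weight of one side, cf. (8)
        if best is None or abs(w1 - 0.5) < best[0]:
            best = (abs(w1 - 0.5), comp)
    side1 = sorted(best[1])
    side2 = sorted(set(range(len(points))) - best[1])
    return side1, side2

# Points of Z^1(T^1) and the coefficients lambda-hat of the toy example (0-based indices).
Z1 = np.array([[4, 0], [4, 1], [6, 0], [5, 3], [-4, 0], [-8, 0], [-4, 2]], float)
lam = [1/8, 1/8, 1/8, 1/8, 1/6, 1/6, 1/6]
print(balanced_split(Z1, lam))   # ([0, 1, 2, 3], [4, 5, 6]), i.e. {1,2,3,4} vs {5,6,7} in 1-based numbering
```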
4 A Simple Solver
We have implemented the method in a simple solver with the following branching strategy: we round the relaxed solution ẑ of (7) to the nearest feasible point of (5), z_round := argmin_{z ∈ Z(T)} ‖z − ẑ‖_2, and start from z_round a local search in T, i.e., an integer line search for the discrete choice variables, followed by multilevel coordinate search (MCS) [7] for the continuous choice variables, and an iteration of this procedure until satisfaction. The function evaluations during the local search are used in two ways. First, we use them to determine the coordinate i of θ = (θ^1, θ^2, ..., θ^{n_0}) for which devmax(i) is maximal, where devmax(i) is the maximum deviation of the function values while varying θ^i in the local search. Then we split the original T = T^1 × ··· × T^{n_0} only in this coordinate and get two branches T_1, T_2, i.e., T_1 = T^1 × ··· × T^i_1 × ··· × T^{n_0}, T_2 = T^1 × ··· × T^i_2 × ··· × T^{n_0}, where T^i_1, T^i_2 are the results from our splitting method in Section 3. Second, we select T_1 for the next iteration step if the best point found during the local search comes from T_1; otherwise we select T_2 for the next iteration step. Having selected a branch for the next step, we iterate branching and local search until satisfaction. As a stopping criterion one may choose, e.g., that the optimal solution found by the local search has not been improved N_iter times, or a maximum total number of iterations. This simple strategy is already suited to demonstrate the usefulness of our splitting routine in Section 5. As soon as our method has been implemented in an enhanced version of the solver, we will also provide a comparison with further solvers on more test cases. Toy example. We have used the solver strategy described above to find the optimum θ = (5, 0, 10) of our example problem. In the first iteration we split T^3 and find the branch T_new = T^1 × T^2 × T^3_new, T^3_new = {10}, for the next iteration step. In the following iterations T^1 is reduced to {5, 6, 7}, then to {5, 6}, and finally to {5}, and local search confirms the optimum θ = (5, 0, 10).
5 A Real-Life Application
We have applied our method to a problem of optimization under uncertainty in spacecraft system design, described in the study [12], and we compare the results with the existing method used in that study. After reasonable simplification, the problem can be formulated as in (1), where θ ∈ R^10 is a 10-dimensional design point and F(Z(θ)) is a MATLAB routine computing the worst case of the total mass of the spacecraft at the design point θ under all admissible uncertainties. That means one looks for the design with the minimal total mass, taking possible uncertainties into account. In [12] a heuristic based on SNOBFIT [8] was used which suggested a candidate for the optimal solution in each iteration step. From this candidate one performs a local search and iterates until no improvement of the optimal solution has been found 4 times in a row. This SNOBFIT-based search is done 20 times independently with 20 different random starting ensembles to check the reliability of the putative global optimum found. However, SNOBFIT is not designed to deal with integers, so the integer variables θ^i, i ∈ I_d, are treated as continuous variables and rounded to the nearest integer values. Hence the optimum candidates suggested by SNOBFIT are suboptimal, and the local search gives the most significant improvement towards the optimal solution. The global optimum was found in 3 out of 20 runs. On average one run required about 2500 evaluations of F. With the solver strategy described in Section 4 we can confirm the resulting global optimum. The reliability of our approach, however, is significantly better: in 5 independent runs we found the optimum 4 times. One run failed because there was no feasible point in the set of initial function evaluations. One run again required about 2500 function evaluations on average. Hence at the same level of reliability we have found the solution with much less effort.
References 1. Achterberg, T., Koch, T., Martin, A.: Branching rules revisited. Operations Research Letters 33(1), 42–54 (2005) 2. Alexandrov, N., Hussaini, M.: Multidisciplinary design optimization: State of the art. In: Proceedings of the ICASE/NASA Langley Workshop on Multidisciplinary Design Optimization, Hampton, Virginia, USA (1997) 3. Floudas, C.: Nonlinear and Mixed-Integer Optimization: Fundamentals and Applications. Oxford University Press, Oxford (1995) 4. Fuchs, M., Girimonte, D., Izzo, D., Neumaier, A.: Robust and automated space system design. In: Robust Intelligent Systems, pp. 251–272. Springer, Heidelberg (2008) 5. Fuchs, M., Neumaier, A.: Autonomous robust design optimization with potential clouds. International Journal of Reliability and Safety 3(1/2/3), 23–34 (2009) 6. Fuchs, M., Neumaier, A.: A splitting technique for discrete search based on convex relaxation. Journal of Uncertain Systems, Special Issue on Global Optimization and Intelligent Algorithm (2009), accepted, preprint available on-line at: http://www. martin-fuchs.net/publications.php
7. Huyer, W., Neumaier, A.: Global optimization by multilevel coordinate search. Journal of Global Optimization 14(4), 331–355 (1999) 8. Huyer, W., Neumaier, A.: SNOBFIT – Stable Noisy Optimization by Branch and Fit. ACM Transactions on Mathematical Software 35(2), Article 9, 25 (2008) 9. Jones, D., Perttunen, C., Stuckman, B.: Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications 79(1), 157–181 (1993) 10. Leyffer, S.: Deterministic methods for mixed integer nonlinear programming. Ph.D. thesis, University of Dundee, Department of Mathematics & Computer Science (1993) 11. Nemhauser, G., Wolsey, L.: Integer and combinatorial optimization. Wiley Interscience, Hoboken (1988) 12. Neumaier, A., Fuchs, M., Dolejsi, E., Csendes, T., Dombi, J., Banhelyi, B., Gera, Z.: Application of clouds for modeling uncertainties in robust space system design. ACT Ariadna Research ACT-RPT-05-5201, European Space Agency (2007) 13. Parker, R., Rardin, R.: Discrete optimization. Academic Press, London (1988) 14. Tawarmalani, M., Sahinidis, N.: Global optimization of mixed-integer nonlinear programs: A theoretical and computational study. Mathematical Programming 99(3), 563– 591 (2004) 15. Weisstein, E.: Minimum spanning tree. MathWorld – A Wolfram Web Resource (2008), http://mathworld.wolfram.com/MinimumSpanningTree.html
Software Architectures for Flexible Task-Oriented Program Execution on Multicore Systems Thomas Rauber and Gudula Rünger
Abstract. The article addresses the challenges of software development for current and future parallel hardware, which will be dominated by multicore and manycore architectures. Within a few years, desktop computers will provide many computing resources with more than 100 cores per processor, and using these multicore processors for cluster systems will create systems with thousands of cores and a deep memory hierarchy. A new generation of programming methodologies is needed for all software products to efficiently exploit the tremendous parallelism of these hardware platforms. The article aims at the development of a parallel programming methodology exploiting a two-level task-based view of application software for the effective use of large multicore or cluster systems. Task-based programming models with static or dynamic task creation are discussed, and suitable software architectures for designing such systems are presented.
1 Introduction Future cluster and multicore systems will soon offer ubiquitous parallelism not only for high-performance computing but for all software developers and application areas. However, the mainstream programming model is still sequential and von Neumann oriented, and sophisticated programming techniques are required to access the parallel resources available. Therefore, a change in programming and software development is imperative for making the capabilities of the new architectures available to programmers and users of all kinds of software systems. Thomas Rauber University Bayreuth e-mail: [email protected] Gudula Rünger Chemnitz University of Technology e-mail: [email protected]
The article aims at the software development for future parallel systems with a large number of compute units (cores). Although many parallel programming models, including MPI, OpenMP, and Pthreads, have been developed in the past (especially targeting HPC and scientific computing), none of them seems to be appropriate for mainstream software development for large-scale complex industrial systems since the level of abstraction provided is too low [22]. The trend in hardware technology towards large multicore systems requires a higher level of abstraction for software development to reach productivity and scalability. To provide such programming abstractions for parallel executions is a key challenge of future software development. In this article, we propose a parallel programming methodology exploiting task-based views of software products, so that future multicore and cluster systems can be used efficiently and productively without requiring too much effort by the programmer. The main goal is to deliver a hybrid, flexible and abstract parallel programming model at a high level of abstraction (in contrast to low-level state-of-the-art models like MPI or Pthreads) and a corresponding programming environment. A main feature of the approach is a decoupling of the specification of a parallel algorithm from the actual execution of the parallel work units identified by the specification on a given parallel system. This allows the programmer to concentrate on the characteristics of his algorithm and relieves him from low-level details of the parallel execution. In particular, we propose to extend the standard model of a task-based execution to multi-threaded tasks that can be executed by multiple cores and in parallel with other multi-threaded tasks of the same application program. The use of an appropriate specification mechanism allows the expression of algorithms from different application areas on an abstract level and relieves the programmer from an explicit mapping of computations to processors as well as from the specification of explicit data exchanges. This facilitates the development of parallel programs significantly and, thus, makes parallel computing resources available to a large community of software developers. A coordination language provides support for the specification of tasks and for expressing dependencies between tasks. In the following sections, we present an overview of a new approach for task-oriented programming in Section 2, a description of the corresponding software architectures for task execution in Section 3, and a runtime evaluation in Section 4. Section 5 discusses related work and Section 6 concludes.
2 Programming Models with Tasks In this section, we give a short overview of task-based programming, which we propose to use for multicore systems with a large number of cores. Task-based programming is based on a decomposition of the application program into tasks that can cooperate with each other. Task creation can be determined statically at compile time or at program start, but can also evolve dynamically during program execution. An important property is that, for an arbitrary execution platform, task creation is separated from task execution, i.e., tasks do not need to
be executed immediately after their creation, and the specification of tasks does not necessarily fix details of their execution. Moreover, the task definitions may vary significantly depending on the special needs of the execution. In the following, task-oriented programming models are classified according to different criteria, including static or dynamic task creation, sequential or parallel execution of a single task, or the specification of dependencies between tasks.
2.1 Task Decomposition An application program is decomposed into tasks such that the application can be represented by a tuple (T, C) with a set of tasks T = {T1, T2, ...} and a coordination structure C defining the interactions between tasks. A task encapsulates a specific computation in a piece of program code. Usually, a task captures a logical unit of the application and can be defined with different granularities. The coordination structure describes possible cooperations between the tasks of a specific application program (a minimal data-structure sketch of such a pair is given after the lists below). For the execution mode of tasks, we can distinguish between:
• Sequential execution of a task on a single core;
• Parallel execution of a task on several cores with a shared address space, using a multi-threaded execution with synchronization;
• Parallel execution of a task on several cores or execution units employing a distributed address space, performing intra-task communication by message passing;
• Parallel execution of a task on several cores with a mixed shared and distributed address space, containing both synchronization and communication.
Moreover, one can distinguish between task executions for which the number of cores used is fixed when the task is deployed, and task executions for which the number of cores can be changed during execution. The latter case is particularly interesting for long-running tasks for which an adaptation to the specific execution situation of the target platform is beneficial. For parallel platforms with multicore processors, a model with a parallel execution of a single task using a shared address space is particularly interesting, since it allows the mapping of a single task to all cores of a multicore processor, thus exploiting the memory hierarchy efficiently. In this case, the single tasks are implemented in a multi-threaded way using shared variables for information exchange. The cooperation between the tasks can also be specified in different ways, leading to the following main characteristics for task cooperations:
• Tasks cooperate by specifying input-output relations, i.e., one task outputs data that is used by another task as input; in this case, the tasks must be executed one after another;
• Tasks are independent of each other, allowing a concurrent execution without interactions;
• Tasks can cooperate during their execution by exchanging information or data, thus requiring a concurrent execution of the cooperating tasks.
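To make the decomposition concrete, the following is a minimal sketch (in Python, with illustrative names that are not taken from the article) of how the pair (T, C) and the execution modes listed above could be recorded; it is only meant to fix ideas and is not the authors' implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ExecMode(Enum):
    SEQUENTIAL = auto()            # one core
    SHARED_PARALLEL = auto()       # several cores, shared address space
    DISTRIBUTED_PARALLEL = auto()  # several cores, message passing
    MIXED_PARALLEL = auto()        # shared and distributed

@dataclass
class Task:
    name: str
    mode: ExecMode
    inputs: set = field(default_factory=set)    # external input variables
    outputs: set = field(default_factory=set)   # external output variables

@dataclass
class Application:
    tasks: dict        # name -> Task  (the set T)
    precedences: set   # pairs (producer, consumer): the coordination structure C

# Example shaped like the stage-vector computation discussed later in Sect. 2.4.
t1 = Task("StageVector1", ExecMode.SHARED_PARALLEL, inputs={"y"}, outputs={"v1"})
t2 = Task("StageVector2", ExecMode.SHARED_PARALLEL, inputs={"y"}, outputs={"v2"})
t3 = Task("ComputeApprox", ExecMode.SHARED_PARALLEL, inputs={"v1", "v2"}, outputs={"ynew"})
app = Application({t.name: t for t in (t1, t2, t3)},
                  {("StageVector1", "ComputeApprox"), ("StageVector2", "ComputeApprox")})
```

The precedence pairs play the role of the coordination structure C; they can also be derived automatically from the declared input and output variables.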
The different execution modes combined with the different cooperation modes result in several different task-based programming models, which require different implementation support for a correct and efficient execution. An additional criterion is the time when the tuple (T, C) is defined. This can be done statically, such that the application program defines the tasks and their interactions in the source program. In this case, all tasks and their interactions are known at compile time before the actual program execution, and appropriate scheduling and mapping techniques can be used to yield the best task execution on a given platform. In contrast, the tasks may also be created dynamically at runtime. In this case, the tasks and their interactions are not known at program start and, thus, task deployment must be planned at runtime. In the following, we consider a task-oriented programming model with parallel tasks using a shared address space and a coordination structure based on input-output dependencies, i.e., there are no interactions between concurrently running tasks during their execution. Application programs coded according to this model are suitable for single multicore processors as well as for clusters with multicore processors. Both the static and the dynamic case are considered.
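For the dynamic case, a very small sketch (plain Python, illustrative only and not the execution environment described later) of a work pool in which an executing task may create further tasks at runtime:

```python
from collections import deque

def run_dynamic(initial_tasks):
    """Minimal dynamic-task-creation loop: each task is a callable that may
    return newly created child tasks, which are appended to the work pool."""
    pool = deque(initial_tasks)
    while pool:
        task = pool.popleft()
        children = task() or []
        pool.extend(children)      # tasks created at runtime join the pool

# Example: a root task that spawns two children, which spawn none.
def leaf(name):
    return lambda: print(f"executed {name}") or []

root = lambda: [leaf("child-1"), leaf("child-2")]
run_dynamic([root])
```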
2.2 Task Execution and Interaction The set of tasks T consists of parallel tasks that can be executed on multiple cores of a multicore processor or on the nodes of a multicore cluster consisting of several multicore processors. For each task, a parameterized number of p cores can be used. Each task is implemented as a multi-threaded program using shared variables for information exchange. For each task, we distinguish between internal variables and external variables. The internal variables are only visible within the task and can be accessed by all threads executing that task. For the threads executing a task, a synchronization mechanism for a coordinated access to the internal variables has to be used. Such synchronization mechanisms, e.g., lock variables, are available in current languages or libraries for thread-based programming of a shared address space, such as Pthreads, OpenMP, or Java threads. The external variables of a task T are visible to other tasks T′ ∈ T of the source program. The visibility is restricted to an input-output relation between tasks, which means that an external variable A produced by task T can be used as input by another task T′. In such a case, the execution of T has to be finished before the execution of T′ can start. Tasks T and T′ can be executed on the same set of cores, but different sets of cores can also be used. In the latter case, synchronization or communication has to be used between the execution of T and T′ to make the output variables of T available to the cores executing T′. Thus, there has to be a coordinated access to a variable A at task level. Considering internal and external variables, a synchronization structure on two separate levels results. The advantage of this approach is that the synchronization of variables is restricted to smaller parts of the program and that only specific interactions via shared variables are allowed: interactions within a task via internal variables and interactions between tasks through input-output relations
via external variables. Another advantage of this two-level structure is an increase of parallelism due to a parallel execution of a single task as well as a concurrent execution of independent tasks.
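A small Python threading sketch (an illustration using standard-library threads only, not any environment from the article) of one multi-threaded task whose internal variable is protected by a lock and whose external output becomes visible to a successor task only after the task has completed:

```python
import threading

def parallel_task(num_threads, external_in):
    """A multi-threaded task: internal variables are shared only among the
    threads executing this task; the external output is returned (and thus
    made visible to successor tasks) only on completion."""
    internal = {"partial_sum": 0}          # internal variable IV_T
    lock = threading.Lock()                # coordinates access to IV_T

    def worker(chunk):
        local = sum(chunk)                 # thread-private computation
        with lock:                         # synchronized update of IV_T
            internal["partial_sum"] += local

    chunks = [external_in[i::num_threads] for i in range(num_threads)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                           # task-internal barrier
    return internal["partial_sum"]         # external output variable in EV_T

# Input-output relation T -> T': T' may only start once T has returned.
out_T = parallel_task(4, list(range(100)))
out_Tprime = parallel_task(4, [out_T] * 10)
```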
2.3 Internal and External Variables For a task T ∈ T, let IV_T denote the set of internal variables and EV_T the set of external variables. Each external variable is either an input variable provided for T before the actual execution starts, or an output variable that is computed by T and is available after the execution of T has been finished. For the internal and external variables of the tasks T ∈ T of one specific task-based parallel program, the following requirements have to be fulfilled for the two-level shared-memory task structure:
• For any two tasks T, T′ ∈ T, the sets of internal variables have to be disjoint, i.e., IV_T ∩ IV_T′ = ∅. This can be achieved by a local definition of data variables and a visibility restricted to the specific task code.
• For any two tasks T, T′ ∈ T, the sets of external variables can either be disjoint, i.e., EV_T ∩ EV_T′ = ∅, or can be accessed in a strictly predefined order as specified by the task program. For example, if a variable v ∈ EV_T ∩ EV_T′ = SV_TT′ ≠ ∅ exists and T uses v as an output variable and T′ uses v as an input variable, then T has to be executed before T′. In particular, a temporally interleaved access to v ∈ EV_T ∩ EV_T′ by both T and T′ is not allowed.
To guarantee these constraints for external variables, tasks of a program may have to be executed according to a predefined execution order as specified by the input-output relations between the tasks. When task T has to be executed before task T′, this is denoted by T → T′. Tasks that are not connected by an input-output relation can be executed in any order, and they can also be executed concurrently to each other, see Fig. 1 for an illustration. The execution order resulting from the input-output relations is a relation between two single tasks. The entire set of relations between the tasks of a task-based program can be represented as a graph structure G = (V, E) where V = T and E captures the input-output relations between the tasks. This graph structure is also denoted as a task graph. A task program is a valid task program if the task graph is a directed graph without cycles (DAG). Efficient execution orders of task graphs are considered in Section 3.
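The validity condition can be checked mechanically. The following Python sketch (illustrative only, not part of the described environment) derives the input-output edges from declared external input and output variables and verifies that the resulting task graph is acyclic:

```python
from collections import defaultdict, deque

def build_task_graph(tasks):
    """tasks: dict name -> (inputs, outputs) over external variables.
    Returns the input-output edges T -> T' (T' reads a variable T writes)."""
    producers = defaultdict(set)
    for name, (_ins, outs) in tasks.items():
        for v in outs:
            producers[v].add(name)
    edges = set()
    for name, (ins, _outs) in tasks.items():
        for v in ins:
            for p in producers[v]:
                if p != name:
                    edges.add((p, name))
    return edges

def is_valid_task_program(tasks):
    """Valid iff the task graph is a DAG (checked with Kahn's algorithm)."""
    edges = build_task_graph(tasks)
    indeg = {n: 0 for n in tasks}
    succ = defaultdict(list)
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
    queue = deque(n for n, d in indeg.items() if d == 0)
    seen = 0
    while queue:
        n = queue.popleft()
        seen += 1
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return seen == len(tasks)
```

Here `tasks` maps a task name to its pair of external input and output variable sets; a cycle would indicate an inconsistent specification.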
2.4 Coordination Language For the specification of the interactions between tasks, a coordination language can be used. The coordination language provides operators for specifying dependencies or independencies between tasks. For two tasks, the operator || is used to express independence; for more than two independent tasks, the operator parfor is provided. The operator ◦ expresses a dependence between two tasks.
[Fig. 1 Execution situation for three tasks T1, T2, T3 with EV_T1 ∩ EV_T2 = ∅, EV_T1 ∩ EV_T3 = SV_T1T3 and EV_T2 ∩ EV_T3 = SV_T2T3: T1 and T2 run concurrently on disjoint core groups, each accessing its internal variables and its shared set with T3; a barrier and a flush of SV_T1T3 and SV_T2T3 precede the execution of T3.]
In the declaration of a task, the external variables are declared in the form of input and output variables along with their corresponding types. We illustrate the use of the coordination language for two methods from numerical analysis. Figure 2 shows a coordination program for the iterated Runge-Kutta (RK) method for solving systems of ordinary differential equations (ODEs). The method performs a series of time steps. In each time step, an s-stage iterated RK method performs a fixed number m of iterations to compute s stage vectors v_1, ..., v_s iteratively and uses the result of the last iteration to compute the next approximation vector ynew. In the figure, the stage vectors are computed in ItComputeStagevectors, and the next approximation vector is computed in ComputeApprox. The new stepsize hnew and the new x-value xnew for the next iteration step are computed in StepsizeControl. The ODE system to be solved is represented by the parameter f. The computation of the stage vectors is performed by a sequential loop with m iterations where each iteration is a parallel parfor loop with s independent activations of the task StageVector, which is not shown in detail. The parameter list contains IN and OUT variables, which form the set of external variables for that function. Figure 3 shows a coordination program for extrapolation methods for solving ODEs. In each time step, different approximations are computed with different stepsizes, and these are then combined in an extrapolation table to obtain an approximation solution of higher order. In the figure, the task EulerStep performs the single micro-steps with the different step-sizes. The micro-steps of one time step can be executed in parallel, expressed by a parfor operator.
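As a rough illustration of how the ◦ and parfor operators might be interpreted, the following Python sketch defines sequencing and parallel-activation combinators over functions that read and produce named variables; the numerical bodies are placeholders, and the combinators are an assumed analogy, not the coordination language itself:

```python
from concurrent.futures import ThreadPoolExecutor

def seq(*steps):
    """The ° operator: run the given callables one after another,
    threading each step's output dictionary into the next step."""
    def run(env):
        for step in steps:
            env = {**env, **step(env)}
        return env
    return run

def parfor(make_steps):
    """The parfor operator: activate independent tasks concurrently
    and merge their (disjoint) output variables."""
    def run(env):
        steps = make_steps(env)
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(lambda s: s(env), steps))
        merged = dict(env)
        for r in results:
            merged.update(r)
        return merged
    return run

# Shape of one iterated-RK time step, mirroring Fig. 2 (bodies are stubs):
stage = lambda l: (lambda env: {f"v{l}": env["y"] + 0.1 * l})
step = seq(
    parfor(lambda env: [stage(l) for l in range(1, 4)]),          # independent stage vectors
    lambda env: {"ynew": sum(env[f"v{l}"] for l in range(1, 4))},  # ComputeApprox (stub)
    lambda env: {"x": env["x"] + env["h"]},                        # StepsizeControl (stub)
)
print(step({"x": 0.0, "y": 1.0, "h": 0.01}))
```

The composed time step mirrors the structure of the iterated RK specification in Fig. 2: independent stage-vector activations under parfor, followed by ComputeApprox and StepsizeControl under ◦.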
3 Software Architectures of Task-Based Programs The execution of task-based programs requires support at several levels. For the execution of a single task, the task is mapped to a set of cores together with its internal variables. In addition, the corresponding external variables have to be made available. The coordination structure expresses the precedence relations between the tasks that must be considered for a correct execution. The coordination structure does not specify an exact execution order of the tasks, but leaves some degree of freedom when tasks are independent of each other, i.e., when the set SV_TT′ is empty for two tasks T and T′.
External task declarations:
StageVector(IN f: scal × vec(n) → vec(n), x: scal, y: vec(n), s: scal, A: mat(s × s), h: scal, V: list[s] of vec(n); OUT vnew: vec(n))
ComputeApprox(IN f: scal × vec(n) → vec(n), x: scal, y: vec(n), s: scal, b: vec(s), h: scal, V: list[s] of vec(n); OUT ynew: vec(n))
StepsizeControl(IN y: vec(n), ynew: vec(n); OUT hnew: scal, xnew: scal)
Task definitions:
ItRKmethod(IN f: scal × vec(n) → vec(n), x: scal, xend: scal, y: vec(n), s: scal, A: mat(s × s), b: vec(s), h: scal; OUT X: list[] of scal, Y: list[] of vec(n)) =
  while (x < xend) {
    ItComputeStagevectors(f, x, y, s, A, h; V) ◦ ComputeApprox(f, x, y, s, b, h, V; ynew) ◦ StepsizeControl(y, ynew; xnew, hnew) }
ItComputeStagevectors(IN f: scal × vec(n) → vec(n), x: scal, y: vec(n), s: scal, A: mat(s × s), h: scal; OUT V: list[s] of vec(n)) =
  InitializeStage(y; V) ◦ for (j=1,...,m) parfor (l=1,...,s) StageVector(f, x, y, V; Vnew)
Fig. 2 Specification program of the iterated RK method
In this case, T and T′ can be executed at the same time on different, disjoint sets of cores if this is beneficial for the resulting execution time. This leads to a task scheduling problem: for a given coordination structure, how can the tasks be mapped to the cores such that a minimum overall execution time results? To solve the scheduling problem, a task scheduler is integrated into the execution environment. Thus, the specification of a task program provided for an application program is separated from the actual assignment of tasks to cores for execution.
3.1 Task Scheduler The task scheduler accepts correct task programs in the form of a specification (T, C). According to the coordination structure C and the state of the program execution, there are usually several tasks that can be executed next at each point of the program execution. At program start, these are the tasks at the roots of the DAG representing C. In later steps, there is a set of tasks that are ready for execution when the tasks on which they depend are finished. The scheduler has knowledge about idle cores of the multicore platform and selects tasks for execution from the set of ready tasks. For the selection, the scheduler has several decisions to make. In particular, the number and the set of tasks that are executed next must be defined, and the number of cores used has to be determined for each of the tasks selected.
External task declarations:
BuildExtrapTable(IN Y: list[r] of vec(n); OUT y: vec(n))
ComputeMicroStepsize(IN j: scal, H: scal, r: scal; OUT hj: scal)
EulerStep(IN f: scal × vec(n) → vec(n), x: scal, y: vec(n), h: scal; OUT ynew: vec(n))
StepsizeControl(IN y: vec(n), ynew: vec(n); OUT hnew: scal, xnew: scal)
Task definitions:
ExtrapMethod(IN f: scal × vec(n) → vec(n), x: scal, xend: scal, y: vec(n), r: scal, H: scal; OUT X: list[] of scal, Y: list[] of vec(n)) =
  while (x < xend)
    parfor (j=1,...,r) MicroSteps(j, f, x, y, H; yj) ◦ BuildExtrapTable((y1,...,yr); ynew) ◦ StepsizeControl(y, ynew; Hnew, xnew)
MicroSteps(IN j: scal, f: scal × vec(n) → vec(n), x: scal, y: vec(n), H: scal; OUT ynew: vec(n)) =
  ComputeMicroStepsize(j, H, r; hj) ◦ for (i=1,...,j) EulerStep(f, x, y, hj; ynew)
Fig. 3 Specification program of the extrapolation method
This selection depends both on the number of cores that are available and on the size of the set of ready tasks. Moreover, information about the task graph structure can be taken into consideration. For example, preference could be given to tasks on the critical path so that these tasks do not delay the overall program termination. When only a small number of tasks is ready for execution, it is usually advantageous to select fewer tasks and to assign each of these tasks a larger number of cores for execution. If it is known that some tasks create more child tasks than other tasks, the former tasks should be executed first so that more tasks can be made available for execution, thus ensuring effective load balancing. We refer to [24, 10, 16] for more details on appropriate scheduling algorithms. A single task is a multi-threaded piece of code that can be executed by a number of threads. These threads can either be created for each task on free cores, or the threads can be kept alive between the execution of different tasks. In the following, we assume the latter case. The assignment of the threads to cores is done by a thread scheduler. In summary, a two-level scheduling system results, which is a suitable and flexible execution scheme for an adaptive execution of task-parallel programs, see Fig. 4. The execution of task-based programs with parallel tasks is supported by a software environment containing several components working either statically or dynamically. Main parts of this software environment are the front-end with a correctness checker and a dynamically working back-end two-level scheduler as described before, see Fig. 5.
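The following Python sketch illustrates one possible scheduling step of the kind described above (which tasks are ready, which are selected, and how many cores each receives); it uses a deliberately simple policy for illustration and is not the scheduler of the presented environment:

```python
def schedule(tasks, edges, total_cores, finished=()):
    """One scheduling step: pick ready tasks and split the idle cores
    among them (a deliberately simple policy for illustration)."""
    finished = set(finished)
    blocked = {b for (a, b) in edges if a not in finished}
    ready = [t for t in tasks if t not in finished and t not in blocked]
    if not ready:
        return {}
    # Prefer tasks with many successors (they unlock more work),
    # a crude stand-in for critical-path information.
    succ_count = {t: sum(1 for (a, _b) in edges if a == t) for t in ready}
    ready.sort(key=lambda t: -succ_count[t])
    chosen = ready[: max(1, min(len(ready), total_cores))]
    per_task = max(1, total_cores // len(chosen))
    return {t: per_task for t in chosen}

# Example: diamond-shaped task graph on a 16-core node.
tasks = ["A", "B", "C", "D"]
edges = {("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")}
print(schedule(tasks, edges, 16))                  # {'A': 16}
print(schedule(tasks, edges, 16, finished={"A"}))  # B and C get 8 cores each
```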
[Fig. 4 Two-step scheduling system with task assignment to thread groups of different size and the dynamic mapping of threads to free cores of the execution platform.]
After processing the coordination specification and the declarations of parallel tasks, the correctness checker checks important properties such as the correctness of the DAG or the correctness of the types and declarations of parallel tasks. The resulting intermediate representation is created statically and can then be handled dynamically by the scheduler. The scheduler requires feedback about the status of the execution platform concerning idle cores and about the status of the execution of tasks, e.g., whether the execution of a submitted task is finished. Based on this information, the scheduler assigns new tasks to groups of cores, and the internal thread scheduler handles the actual multi-threaded execution of a task on one set of cores. In summary, the execution environment combines static compiler-based components for the translation of the specification into an intermediate format with dynamic parts for assigning parallel tasks to execution platforms at runtime. Originally, both environments have been designed for parallel message-passing programs. The specific feature of the execution environment presented here is the two-level shared-memory environment with internal and external variables. This gives rise to a two-level synchronization mechanism between threads on the one hand (supported by the thread scheduler in multi-threaded languages) and parallel tasks on the other hand.
4 Runtime Experiments Task-based executions not only reduce the programming effort, they are also able to provide competitive runtime results compared to traditional parallel programming techniques. This is especially useful for compute-intensive applications. We illustrate this for an application from numerical analysis, the iterated RK method from Section 2.4, on two different execution platforms with different execution characteristics.
[Fig. 5 Interaction between scheduler, execution platform and dependence checker for the construction of the sets SV_TT′ and the dependence analysis between tasks: the coordination specification C and the set of tasks T are processed by the dependence checker into a task graph with external variables, which the scheduler maps to the parallel multicore machine using the dynamic status of the execution platform and the status of the task execution.]
The first platform is a Xeon cluster consisting of two nodes with two Intel Xeon E5345 "Clovertown" quad-core processors each. The processors run at 2.33 GHz, and the nodes are connected by an InfiniBand network. The second cluster is the Chemnitz High Performance Linux Cluster (CHiC), which is built up of 538 nodes, each consisting of two AMD Opteron 2218 dual-core processors with a clock rate of 2.6 GHz. Figure 6 (left) shows the execution times of one time step of an iterated RK method with four stage vectors on the Xeon cluster with 16 cores. As ODE system, a spatial discretization of the 2D Brusselator equation is used, which describes the reaction of two chemical substances with diffusion. In particular, the figure compares a traditional data-parallel implementation with a task-based implementation using four parallel tasks. In this configuration each parallel task is executed by four threads, which can be mapped in different ways to the cores. For a consecutive mapping, the threads are mapped to the cores of one processor. For a scattered mapping, cores of different processors are used. Figure 6 (right) illustrates the speedups achieved for different realizations on the CHiC. The ODE system solved arises from a Galerkin approximation of a Schrödinger-Poisson system that models a collisionless electron plasma. In particular, the figure compares hybrid execution schemes which use OpenMP for the parallel tasks within one node with pure MPI execution schemes. For the data-parallel version, much higher speedups are achieved by using the OpenMP programming model within the cluster nodes. This hybrid parallel version even outperforms the orthogonal program version with an optimized task mapping. The main source of this impressive improvement is the reduction of the number of MPI processes to one quarter. The best results are obtained with the program version with parallel tasks using OpenMP within each node.
5 Related Work Task-based approaches have been considered at several levels and with different programming models in mind. Language extensions have been proposed to express the execution of tasks, including Fortran M [11], Opus [6], Braid [25], Fx [21], HPF 2.0 [13], Orca [12], and Spar/Java [23].
[Fig. 6 Left: Execution times of one time step of the IRK method using the four-stage RadauIIA7 method on the Xeon cluster (16 cores) for one M-task (data parallel) and four M-tasks (task parallel) with different mappings to cores, plotted over the system size. Right: Comparison of the speedups of pure MPI realizations and a hybrid MPI+OpenMP implementation of the IRK method for a Schrödinger equation with system size 128002 on the CHiC cluster, plotted over the number of cores.]
The HPCS language proposals, Sun's Fortress [2], IBM's X10 [7], and Cray's Chapel [5], also contain some support for the specification of tasks. Moreover, skeleton-based approaches like P3L [18], LLC [9], Lithium [1], and SBASCO [8], as well as library-based and coordination-based approaches, have been proposed. The compiler-based static specification of parallel tasks is supported by the TwoL approach [19], which transforms a task-based specification of a parallel program step-by-step into an executable parallel program based on MPI. Task-internal communication is also expressed by MPI. The dynamic creation and deployment of parallel MPI tasks is supported by the Tlib library [20]. Both approaches are in principle suited for multicore systems or clusters, but they rely on the fact that the MPI implementation uses the memory hierarchy of the execution platform efficiently. The dynamic deployment of tasks is supported by several libraries. An example is the TPL library that has been developed for .NET [15], which supports the specification of tasks at loop level. Tasks arising from unstructured parallelism are supported via futures. All tasks are executed sequentially; the parallel execution of a single task is not supported. Load distribution is done through work stealing, see [4] for an analysis of this technique. Support for sequential tasks is also included in many other environments, including Cilk [3] and Charm++ [14]. Support for task-based execution is also available in OpenMP 3.0, but tasks are executed sequentially by a single thread [17].
6 Conclusions The specification of parallel programs as a set of tasks that can cooperate with other tasks via input-output relations is a useful abstraction, since it relieves the programmer from the need to specify many low-level details of the parallel
execution. In particular, the programmer does not need to specify an exact mapping of the computations to threads or processes for execution. Instead, these computations are assigned to tasks according to their natural algorithmic decomposition. A runtime system then brings the tasks to execution by dynamically selecting tasks when free execution resources are available. The use of tasks has been particularly useful for expressing irregular applications, including particle simulation methods or computer graphics computations like radiosity or ray tracing, because of their dynamically evolving computation structure. In the traditional approach, single-processor tasks have been used that are executed by a single execution resource. The abstraction with tasks is also useful for programming multicore systems or multicore clusters. In this context, single-processor tasks can still be used, but it is more efficient and flexible to extend the approach to parallel tasks where a single task can be executed by multiple execution resources. This allows the mapping of one task to all cores or to a part of the cores of a node of a multicore system. The resources executing one task have to access a shared memory. In this article, we have outlined this approach and have demonstrated its usefulness. In particular, we have shown how programs based on this approach can be expressed and how an execution environment can be organized. The use of parallel tasks leads to good execution times on large parallel systems.
References [1] Aldinucci, M., Danelutto, M., Teti, P.: An advanced environment supporting structured parallel programming in Java. Future Generation Computer Systems 19(5), 611–626 (2003) [2] Allen, E., Chase, D., Hallett, J., Luchangco, V., Maessen, J.-W., Ryo, S., Steele Jr., G.L., Tobin-Hochstadt, S.: The Fortress Language Specification, Version 1.0. Technical report, Sun Microsystems, Inc. (March 2008) [3] Blumofe, R.D., Joerg, C.F., Kuszmaul, B.C., Leiserson, C.E., Randall, K.H., Zhou, Y.: Cilk: An efficient multithreaded runtime system. Journal of Parallel and Distributed Computing 37(1), 55–69 (1996) [4] Blumofe, R.D., Leiserson, C.E.: Scheduling multithreaded computations by work stealing. J. ACM 46(5), 720–748 (1999) [5] Chamberlain, B.L., Callahan, D., Zima, H.P.: Parallel Programmability and the Chapel Language. Int. J. High Perform. Comput. Appl. 21(3), 291–312 (2007) [6] Chapman, B., Haines, M., Mehrota, P., Zima, H., Van Rosendale, J.: Opus: A coordination language for multidisciplinary applications. Sci. Program. 6(4), 345–362 (1997) [7] Charles, P., Grothoff, C., Saraswat, V., Donawa, C., Kielstra, A., Ebcioglu, K., von Praun, C., Sarkar, V.: X10: an object-oriented approach to non-uniform cluster computing. In: OOPSLA 2005: Proc. of the 20th ACM Conf. on Object-oriented Programming, systems, languages, and applications, pp. 519–538. ACM, New York (2005) [8] Diaz, M., Romero, S., Rubio, B., Soler, E., Troya, J.M.: An Aspect Oriented Framework for Scientific Component Development. In: Proc. of the 13th Euromicro Conf. on Parallel, Distributed and Network-Based Processing (PDP 2005), pp. 290–296. IEEE, Los Alamitos (2005)
[9] Dorta, A.J., Gonz´alez, J.A., Rodr´ıguez, C., de Sande, F.: llc: A Parallel Skeletal Language. Parallel Processing Letters 13(3), 437–448 (2003) [10] Dutot, P.-F., N’Takpe, T., Suter, F., Casanova, H.: Scheduling Parallel Task Graphs on (Almost) Homogeneous Multicluster Platforms. IEEE Transactions on Parallel and Distributed Systems 20(7), 940–952 (2009) [11] Foster, I.T., Chandy, K.M.: Fortran M: A Language for Modular Parallel Programming. J. Parallel Distrib. Comput. 26(1), 24–35 (1995) [12] Ben Hassen, S., Bal, H.E., Jacobs, C.J.H.: A task- and data-parallel programming language based on shared objects. ACM Transactions on Programming Languages and Systems (TOPLAS) 20(6), 1131–1170 (1998) [13] High Performance Fortran Forum. High Performance Fortran Language Specification 2.0. Technical report, Center for Research on Parallel Computation, Rice University (1997) [14] Kale, L.V., Bohm, E., Mendes, C.L., Wilmarth, T., Zheng, G.: Programming Petascale Applications with Charm++ and AMPI. In: Bader, D. (ed.) Petascale Computing: Algorithms and Applications, pp. 421–441. Chapman & Hall / CRC Press, Boca Raton (2008) [15] Leijen, D., Schulte, W., Burckhardt, S.: The design of a task parallel library. In: OOPSLA 2009: Proceeding of the 24th ACM SIGPLAN conference on Object oriented programming systems languages and applications, pp. 227–242. ACM, New York (2009) [16] N’takp´e, T., Suter, F., Casanova, H.: A Comparison of Scheduling Approaches for Mixed-Parallel Applications on Heterogeneous Platforms. In: Proc. of the 6th Int. Symp. on Par. and Distrib. Comp. IEEE, Los Alamitos (2007) [17] OpenMP Application Program Interface, Version 3.0 (May 2008), http://www.openmp.org [18] Pelagatti, S.: Task and data parallelism in P3L. In: Rabhi, F.A., Gorlatch, S. (eds.) Patterns and skeletons for parallel and distributed computing, pp. 155–186. Springer, London (2003) [19] Rauber, T., R¨unger, G.: A Transformation Approach to Derive Efficient Parallel Implementations. IEEE Transactions on Software Engineering 26(4), 315–339 (2000) [20] Rauber, T., R¨unger, G.: Tlib - A Library to Support Programming with Hierarchical Multi-processor Tasks. Journ. of Parallel and Distrib. Comput. 65(3), 347–360 (2005) [21] Subhlok, J., Yang, B.: A new model for integrated nested task and data parallel programming. In: Proc. of the 6th ACM SIGPLAN symposium on Principles and practice of parallel programming, pp. 1–12. ACM Press, New York (1997) [22] Sutter, H., Larus, J.: Software and the Concurrency Revolution. ACM Queue 3(7), 54– 62 (2005) [23] van Reeuwijk, C., Kuijlman, F., Sips, H.J.: Spar: a Set of Extensions to Java for Scientific Computation. Concurrency and Computation: Practice and Experience 15, 277–299 (2003) [24] Vydyanathan, N., Krishnamoorthy, S., Sabin, G.M., Catalyurek, U.V., Kurc, T., Sadayappan, P., Saltz, J.H.: An Integrated Approach to Locality-Conscious Processor Allocation and Scheduling of Mixed-Parallel Applications. IEEE Transactions on Parallel and Distributed Systems 20(8), 1158–1172 (2009) [25] West, E.A., Grimshaw, A.S.: Braid: integrating task and data parallelism. In: FRONTIERS 1995: Proceedings of the Fifth Symposium on the Frontiers of Massively Parallel Computation (Frontiers 1995), p. 211. IEEE Computer Society, Los Alamitos (1995)
Optimal Technological Architecture Evolutions of Information Systems Vassilis Giakoumakis, Daniel Krob, Leo Liberti, and Fabio Roda
Abstract. We discuss a problem arising in the strategic management of IT enterprises: that of replacing some existing services with new services without impairing operations. We formalize the problem by means of a Mathematical Programming formulation of the Mixed-Integer Nonlinear Programming class and show it can be solved to a satisfactory optimality approximation guarantee by means of existing off-the-shelf software tools.
1 Introduction For any information system manager, a recurrent key challenge is to avoid creating more complexity within the existing information system through the numerous IT projects that are launched in order to respond to the needs of the business. Such an objective thus typically leads to the necessity of co-optimizing both the creation and the replacement/destruction (usually called kills in IT language) of parts of the information system, and of prioritizing the IT responses to the business accordingly. This important question is well known in practice and quite often addressed in the IT literature, but basically only from an enterprise architecture or an IT technical management perspective [2, 3, 10]. Architectural and managerial techniques, Vassilis Giakoumakis MIS, Université d'Amiens, Amiens, France e-mail: [email protected] Daniel Krob · Leo Liberti · Fabio Roda LIX, École Polytechnique, 91128 Palaiseau, France e-mail: {dk,liberti,roda}@lix.polytechnique.fr
Partially supported by École Polytechnique-Thales "Engineering of Complex Systems" Chair and ANR grant 07-JCJC-0151 "ARS". Corresponding author.
however, are often only parts of the puzzle that one has to solve to handle these optimization problems. On the basis of budget, resource and time constraints given by the enterprise management, architecture provides the business and IT structure of these problems. This is, however, not sufficient to model them completely or to solve them. In this paper we move a step towards the integration of architectural, business and IT project management aspects. We employ optimization techniques in order to model and numerically solve a part of this general problem. More precisely, we propose an operational model and a Mathematical Programming formulation expressing a generic global prioritization problem occurring in the limited, but practically rather important, context of a technological evolution of an information system (i.e. the replacement of an old IT technology by a new one, without any functional regression from the point of view of the business). This approach promises to provide valuable help for IT practitioners.
2 Operational Model of an Evolving Information System 2.1 Elements of Information System Architecture Any information system of an enterprise (consisting of a set D of departments) is classically described by two architectural layers:
• the business layer: the description of the business services (forming a set V) offered by the information system;
• the IT layer: the description of the IT modules (forming a set U) on which the business services rely.
In general, the relationship A ⊆ V × U between these two layers is not one-to-one. A given business service can require any number of IT modules to be delivered and, vice versa, a given IT module can be involved in the delivery of several business services, as shown in Fig. 1.
[Fig. 1 A simple two-layer information system architecture: business services service1, service2, ..., serviceM in the business layer rely on IT modules M1, M2, ..., Mn in the IT layer.]
2.2 Evolution of an Information System Architecture From time to time, an information system may evolve in its entirety due to the replacement of an existing software technology by a new one (e.g. passing from several independent/legacy software packages to an integrated one, migrating from an existing IT technology to a new one, and so on). These evolutions invariably have a strong impact at the IT layer level, where the existing IT modules U^E = {M1, ..., Mn} are replaced by new ones in a set U^N = {N1, ..., Nn} (in the sequel, we assume U = U^E ∪ U^N). This translates to a replacement of existing services (sometimes denoted ES) in V by new services (sometimes denoted NS) in W, ensuring that the impact on the whole enterprise is kept low, to avoid business discontinuity. This also induces a relation B ⊆ W × U^N expressing the reliance of new services on IT modules. Note also that in this context, at the business level, there exists a relation (in V × W) between existing services and new services which expresses the fact that a given existing service shall be replaced by a subset of new business services. We note in passing that this relation also induces another relation in U^E × U^N expressing the business covering of an existing IT module by a subset of new IT modules (see Fig. 2).
[Fig. 2 Evolution of an information system architecture: existing services ES1, ..., ES|V| in the business layer require the existing IT modules M1, ..., Mn, while the new services NS1, ..., NS|W| require the new IT modules N1, ..., Nn.]
2.3 Management of Information System Architecture Evolutions Mapping the above information system architecture on the organization of a company, it appears clearly that three main types of enterprise actors are naturally involved in the management of these technological evolutions, described below.
1. Business department managers: they are responsible for creating business value (within the perimeter of a business department in the set D) through the new business services. This value might be measured by the amount of money they are ready to invest in the creation of these services (business services are usually bought internally by their users within the enterprise).
2. IT project managers: they are responsible for creating the new IT modules, which is a prerequisite to creating the associated business services. The IT project manager has a project schedule usually organized in workpackages, each having a specific starting time and a global budget (see Fig. 2).
3. Kill managers: they are responsible for destroying the old IT modules in order to avoid duplicating the information system, and therefore its operating costs, when achieving its evolution. Kill managers have a budget for realizing such "kills", but they must ensure that any old IT module i is only killed after the new services replacing the old ones relying on i are put into service.
In this context, managing the technological evolution of an information system means being able to create new IT modules within the time and budget constraints of the IT project manager in order to maximize both the IT modules' business value brought by the new services and the associated kill value (i.e. the number of old services that can be killed).
2.4 The Information System Architecture Evolution Management Problem The architecture evolution of the IT system involves revenues, costs and schedules over a time horizon tmax, as detailed below.
• Time and budget constraints of the IT project manager. Each new IT module i ∈ U has a cost a_i and a production schedule.
• IT module business value. Each department ℓ ∈ D is willing to pay q_ℓk monetary units for a new service k ∈ W from a departmental production budget H_ℓ = ∑_{k:(ℓ,k)∈F} q_ℓk; the business value of the new service k is c_k = ∑_{ℓ:(ℓ,k)∈F} q_ℓk. We assume that this business value is transferred in a conservative way via the relation B to the IT modules. Thus, there is a business contribution β_ik over every (i, k) ∈ B such that for each k we have c_k = ∑_{i:(i,k)∈B} β_ik; furthermore, the global business value of module i is ∑_{k:(i,k)∈B} β_ik. We also introduce a set N ⊆ U of IT modules that are necessary to the new services.
• Kill value. Discontinuing (or killing) a module i ∈ U has a cost b_i due to the requirement, prior to the kill, of an analysis of the interactions between the module
and the rest of the system architecture, in order to minimize the chances of the kill causing unexpected system behaviour. The evolution involves several stakeholders. The department heads want to maximize the value of the required new services. The module managers want to produce the modules according to an assigned schedule whilst maximizing the business value of the new services to be activated. The kill managers want to maximize the number of deactivated modules within a certain kill budget. Thus, the rational planning of this evolution requires the solution of an optimization problem with several constraints and criteria, which we shall discuss in the next section.
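As a toy illustration (with invented numbers, used only to make the bookkeeping concrete) of the relations c_k = ∑_ℓ q_ℓk and c_k = ∑_{i:(i,k)∈B} β_ik, in Python:

```python
# Hypothetical payments q[l][k] of department l for new service k,
# giving c_k = sum_l q[l][k]; c_k is then split over the supporting
# IT modules via beta[(i, k)].
q = {"dept1": {"k1": 60, "k2": 40},
     "dept2": {"k1": 30}}
B = {("m1", "k1"), ("m2", "k1"), ("m2", "k2")}   # module i supports service k

c = {}
for dept, payments in q.items():
    for k, amount in payments.items():
        c[k] = c.get(k, 0) + amount               # c = {'k1': 90, 'k2': 40}

# One admissible conservative split of c_k over supporting modules (equal shares):
beta = {}
for k, value in c.items():
    supporters = [i for (i, kk) in B if kk == k]
    for i in supporters:
        beta[(i, k)] = value / len(supporters)

assert all(abs(sum(b for (i, kk), b in beta.items() if kk == k) - c[k]) < 1e-9
           for k in c)
```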
3 Mathematical Programming Based Approach Mathematical Programming (MP) is a formal language used for modelling and solving optimization problems [12, 9]. Each problem is modelled by means of a list of index sets, a list of known parameters encoding the problem data (the instance), a list of decision variables, which will contain appropriate values after the optimization process has taken place, an objective function to be minimized or maximized, and a set of constraints. The objective and constraints are expressed as functions of the decision variables and the parameters. The constraints might include integrality requirements on the decision variables. MPs are classified into Linear Programs (LP), Mixed-Integer Linear Programs (MILP), Nonlinear Programs (NLP), and Mixed-Integer Nonlinear Programs (MINLP) according to the linearity of the objective and constraints and to the integrality requirements on the variables. MILPs and MINLPs are usually solved using a Branch-and-Bound (BB) method, explained at the beginning of Sect. 3.2. A solution is an assignment of numerical values to the decision variables. A solution is feasible if it satisfies the constraints. A feasible solution is optimal if it optimizes the objective function. As explained above, an enterprise in our context consists of a set D of departments currently relying on existing services in V and wishing to evolve to new services in W within a time horizon tmax. Each service relies on some IT module in U (the set N ⊆ U indexes those IT modules that are necessary). The relations between services and modules and, respectively, departments and services, are denoted as follows: A ⊆ V × U, B ⊆ W × U, E ⊆ D × V and F ⊆ D × W. If an IT module i ∈ U is required by a new service, then it must be produced (or activated) at a certain cost a_i. When an IT module i ∈ U is no longer used by any service, it must be killed at a certain cost b_i. Departments can discontinue using their existing services only when all new services providing the functionalities have been activated; when this happens, the service (and the corresponding IT modules) can be killed. Departments have budgets dedicated to producing and killing IT modules, which must be sufficient to perform their evolution to the new services; for the purposes of this paper, we suppose that departmental budgets are interchangeable, i.e. all departments credit and debit their costs and revenues to two unique enterprise-level budgets: a production budget H_t and a kill budget K_t indexed by the time period t. A new service k ∈ W has a value c_k, and an IT module i ∈ U contributes β_ik to the
value of the new service k that relies on it. We use the graph G = (V_G, E_G) shown in Fig. 3 to model departments, existing services, new services, IT modules and their relations. The vertices are V_G = U ∪ V ∪ W ∪ D, and the edges are E_G = A ∪ B ∪ E ∪ F. This graph is the union of the four bipartite graphs (U, V, A), (U, W, B), (D, V, E) and (D, W, F) encoding the respective relations. We remark that E and F collectively induce a relation between existing services and new services with a "replacement" semantics (an existing service can be killed if the related new services are active).
[Fig. 3 The bipartite graphs used to model the problem: departments D are related to existing services V (with variables v_j) by E and to new services W (with variables w_k) by F; existing services are related to IT modules U (with variables u_i, z_i) by A, and new services by B.]
Although in Sect. 3.1 we omit, for simplicity, the explicit constraints for the production schedule of IT modules, these are not hard to formulate (see e.g. [4]) and do not change the computational complexity of the solution method we employ.
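Before stating the formulation, here is one possible, purely illustrative way to encode these sets and relations in Python, including the sets W_j and V_k that appear in constraints (12) and (16) below:

```python
from dataclasses import dataclass, field

@dataclass
class EvolutionInstance:
    """Containers for the sets and relations of Sect. 3 (illustrative sketch)."""
    U: set = field(default_factory=set)   # IT modules
    V: set = field(default_factory=set)   # existing services
    W: set = field(default_factory=set)   # new services
    D: set = field(default_factory=set)   # departments
    A: set = field(default_factory=set)   # pairs (j, i) in V x U: ES j relies on module i
    B: set = field(default_factory=set)   # pairs (k, i) in W x U: NS k relies on module i
    E: set = field(default_factory=set)   # pairs (l, j) in D x V
    F: set = field(default_factory=set)   # pairs (l, k) in D x W

    def W_of(self, j):
        """W_j: new services that must be active before ES j can be killed."""
        depts = {l for (l, jj) in self.E if jj == j}
        return {k for (l, k) in self.F if l in depts}

    def V_of(self, k):
        """V_k: existing services tied to the departments relying on NS k."""
        depts = {l for (l, kk) in self.F if kk == k}
        return {j for (l, j) in self.E if l in depts}
```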
3.1 Sets, Variables, Objective, Constraints We present here the MP formulation of the evolution problem. We recall that NS stands for new service and ES for existing service.
1. Sets:
• T = {0, ..., tmax}: set of time periods (Sect. 2.4);
• U: set of IT modules (Sect. 2.1);
• N ⊆ U: set of IT modules that are necessary for the NS (Sect. 2.4);
• V: set of existing services (Sect. 2.1);
• W: set of new services (Sect. 2.2);
• A ⊆ V × U: relations between ES and IT modules (Sect. 2.1);
• B ⊆ W × U: relations between NS and IT modules (Sect. 2.2);
• D: set of departments (Sect. 2.1);
• E ⊆ D × V: relations between departments and ES (Sect. 3);
• F ⊆ D × W: relations between departments and NS (Sect. 3).
2. Parameters:
• a_i = cost of producing IT module i, for all i ∈ U (Sect. 2.4);
• b_i = cost of killing IT module i, for all i ∈ U (Sect. 2.4);
• H_t = production budget in time period t, for all t ∈ T (Sect. 3);
• K_t = kill budget in time period t, for all t ∈ T (Sect. 3);
• β_ik = monetary value given to NS k by IT module i, for all (i, k) ∈ B (Sect. 2.4).
∀i ∈ U,t ∈ T uit = ∀i ∈ U,t ∈ T zit = ∀ j ∈ V,t ∈ T v jt = ∀k ∈ W,t ∈ T wkt =
1 if IT module i is used for a ES at time t 0 otherwise;
(1)
1 if IT module i is used for a NS at time t 0 otherwise;
(2)
1 if existing service j is active at time t 0 otherwise;
(3)
1 if new service k is active at time t 0 otherwise.
(4)
4. Objective function. Business value contributed to new services by IT modules. This is part of the objective of the module managers, which agrees with the objective of the department heads. max
u,v,w,y,z
∑ t∈T
βik zit wkt .
(5)
(i,k)∈B
5. Constraints. • Production budget (cost of producing new IT modules; this is another objective of the module managers): ∀t ∈ T {tmax }
∑ ai (zi,t+1 − zit ) ≤ Ht ,
(6)
i∈U
where the term zi,t+1 − zit is only ever 1 when a new service requires production of an IT module — we remark that the next constraints prevent the term from ever taking value −1.
144
V. Giakoumakis et al.
• Once an IT module is activated, do not deactivate it. ∀t ∈ T {tmax }, i ∈ U
zit ≤ zi,t+1 .
(7)
• Kill budget (cost of killing IT modules; this is part of the objective of the kill managers): ∀t ∈ T {tmax } ∑ bi (uit − ui,t+1 ) ≤ Kt , (8) i∈U
where the term uit − ui,t+1 is only ever 1 when an IT module is killed — we remark that the next constraints prevent the term from ever taking value −1. • Once an IT module is killed, cannot activate it again. ∀t ∈ T {tmax }, i ∈ U
uit ≥ ui,t+1 .
(9)
• If an existing service is active, the necessary IT modules must also be active: ∀t ∈ T, (i, j) ∈ A uit ≥ v jt .
(10)
• If a new service is active, the necessary IT modules must also be active: ∀t ∈ T, (i, k) ∈ B : i ∈ N
zit ≥ wkt .
(11)
• An existing service can be deactivated once all departments relying on it have already switched to new services; for this purpose, we define sets W j = {k ∈ W | ∃ ∈ D ((, j) ∈ E ∧ (, k) ∈ F)} for all j ∈ V : ∀t ∈ T, j ∈ V
∑ (1 − wkt ) ≤ |W j |v jt .
(12)
k∈W j
• Boundary conditions. To be consistent with the objectives of the module and kill managers, we postulate that: – at t = 0 all IT modules needed by existing services are active, all IT modules needed by new services are inactive: ∀i ∈ U ui0 = 1 ∀ j ∈ V v j0 = 1
∧ ∧
zi0 = 0; ∀k ∈ W wk0 = 0.
(13) (14)
– at t = tmax all IT modules needed by the existing services have been killed: ∀i ∈ U
uitmax = 0.
(15)
These boundary conditions are a simple implementation of the objectives of module and kill managers. Similar objectives can also be pursued by adjoining further constraints to the MP, such as for example that the number of IT modules serving ES must not exceed a given amount. The formulation above belongs to the MINLP class, as a product of decision variables appears in the objective function and all variables are binary; more precisely, it
is a Binary Quadratic Program (BQP). This BQP can either be solved directly using standard BB-based solvers [1, 11, 7], or reformulated exactly (see [8] for a formal definition of reformulation) to a MILP by means of the ProdBin reformulation [9, 5] prior to solving it with standard MILP solvers. A few preliminary experiments showed that the MILP reformulation yielded longer solution times compared to solving the BQP directly.
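A sanity-check sketch of the model in Python, reusing the EvolutionInstance container sketched earlier: it only evaluates the objective (5) and verifies constraints (6)-(15) for a given 0/1 plan, and is in no way a solver or the AMPL model used by the authors.

```python
def objective(beta, z, w, T):
    """Eq. (5): business value contributed by active modules to active NS."""
    return sum(beta[i, k] * z[i, t] * w[k, t] for (i, k) in beta for t in T)

def feasible(inst, a, b, H, K, N, z, u, v, w, T):
    """Checks constraints (6)-(15) for one candidate 0/1 plan (illustrative only)."""
    tmax = T[-1]
    for t in T[:-1]:
        if sum(a[i] * (z[i, t + 1] - z[i, t]) for i in inst.U) > H[t]:   # (6)
            return False
        if sum(b[i] * (u[i, t] - u[i, t + 1]) for i in inst.U) > K[t]:   # (8)
            return False
        if any(z[i, t] > z[i, t + 1] or u[i, t] < u[i, t + 1] for i in inst.U):
            return False                                                 # (7), (9)
    for t in T:
        if any(u[i, t] < v[j, t] for (j, i) in inst.A):                  # (10)
            return False
        if any(z[i, t] < w[k, t] for (k, i) in inst.B if i in N):        # (11)
            return False
        for j in inst.V:                                                 # (12)
            Wj = inst.W_of(j)
            if sum(1 - w[k, t] for k in Wj) > len(Wj) * v[j, t]:
                return False
    return (all(u[i, 0] == 1 and z[i, 0] == 0 and u[i, tmax] == 0 for i in inst.U)
            and all(v[j, 0] == 1 for j in inst.V)                        # (13)-(15)
            and all(w[k, 0] == 0 for k in inst.W))
```

Such a check is convenient for validating the output of a solver run on a small instance.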
3.2 Valid Cuts from Implied Properties The BB method for MPs with binary variables performs a binary tree-like recursive search. At every node, a lower bound to the optimal objective function value is computed by solving a continuous relaxation of the problem. If all integral variables happen to take integer values at the optimum of the relaxation, the node is fathomed with a feasible optimum. If this optimum has a better objective function value than the feasible optima found previously, it replaces the incumbent, i.e. the best current optimum. Otherwise, a variable x_j taking a fractional value x̄_j is selected for branching. Two subnodes of the current node are created by imposing the constraints x_j ≤ ⌊x̄_j⌋ (left node) and x_j ≥ ⌈x̄_j⌉ (right node) on the problem. If the relaxed objective function value at a node is worse than the current incumbent, the node is also fathomed. The step of BB which most deeply impacts its performance is the computation of the lower bound. To improve the relaxation quality, one often adjoins "redundant constraints" to the problem whenever their redundancy follows from the integrality constraints. Thus, such constraints will not be redundant with respect to the relaxation. An inequality is valid for an MP if it is satisfied by all its feasible points. If an inequality is valid for an MP but not for its relaxation, it is called a valid cut. We shall now discuss two valid inequalities for the evolution problem. The first one stems from the following statement: if a new service k ∈ W is inactive, then all existing services linked to all departments relying on k must be active. We formalize this statement by defining, for all k ∈ W, the sets
V_k = {j ∈ V | ∃ℓ ∈ D ((ℓ, j) ∈ E ∧ (ℓ, k) ∈ F)}.
The statement corresponds to the inequality: for all t ∈ T and k ∈ W,
∑_{j∈V_k} (1 − v_jt) ≤ |V_k| w_kt. (16)
Lemma 1. Whenever (v, w) are part of a feasible solution of the evolution problem, (12) implies (16). Proof. We proceed by contradiction: suppose (12) holds and (16) does not. Then there must be t ∈ T, k ∈ W, j ∈ Vk such that wkt = 0 and v jt = 0. By (12), v jt = 0 implies ∀h ∈ W j (wht = 1). By definition of Vk and W j , we have that k ∈ W j , and hence wkt = 1 against the assumption. Thus, (16) is a valid inequality for the evolution problem.
The second inequality is a simple relation between v and w. First, we observe that the converse of Lemma 1 also holds; the proof is symmetric to that of Lemma 1: it suffices to swap j with k, W_j with V_k, v with w, and (12) with (16). Hence, (12) ⇔ (16) for all feasible (v, w).
Proposition 1. For all t ∈ T, j ∈ V and k ∈ W such that ∃ℓ ∈ D ((ℓ, j) ∈ E ∧ (ℓ, k) ∈ F), the inequalities
v_jt + w_kt ≥ 1 (17)
are valid for the evolution problem.
Proof. Suppose (17) does not hold: hence there are t ∈ T, j ∈ V, k ∈ W and ℓ ∈ D with (ℓ, j) ∈ E and (ℓ, k) ∈ F such that v_jt + w_kt = 0. Since v_jt, w_kt ≥ 0, this implies v_jt = w_kt = 0. It is easy to verify that if this is the case, (12) and (16) cannot both hold, contradicting (12) ⇔ (16). Eq. (17) states that at any given time period no pair (ES, NS) related to a given department may both be inactive (otherwise the department cannot be functional). We can add (16) and (17) to the MP formulation of the evolution problem, and hope that they will improve the quality of the lower bound obtained via the LP relaxation. We remark that other valid inequalities similar to (16), (17) can be derived from the problem constraints; these will be studied in further work.
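To illustrate the search mechanics just described (incumbent, bounding, fathoming, branching), here is a generic depth-first branch-and-bound skeleton over binary variables in Python; for a maximization, an optimistic bound supplied by the caller plays the role of the relaxation bound. This is only a didactic sketch, not the CPLEX algorithm or the authors' code:

```python
def branch_and_bound(n_vars, value, upper_bound):
    """Maximization BB over x in {0,1}^n (illustrative skeleton).
    value(x)        -> objective of a complete assignment, or None if infeasible
    upper_bound(px) -> optimistic bound for a partial assignment px (a dict)."""
    best_val, best_x = float("-inf"), None
    stack = [dict()]                       # root node: nothing fixed yet
    while stack:
        node = stack.pop()
        if upper_bound(node) <= best_val:  # fathom by bound
            continue
        if len(node) == n_vars:            # leaf: all variables fixed
            val = value(node)
            if val is not None and val > best_val:
                best_val, best_x = val, dict(node)   # new incumbent
            continue
        j = len(node)                      # next variable to branch on
        for bit in (0, 1):                 # two subnodes: x_j fixed to 0 or 1
            child = dict(node)
            child[j] = bit
            stack.append(child)
    return best_val, best_x
```

Adding valid cuts such as (16) and (17) tightens the bound supplied to such a search, which lets more nodes be fathomed early.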
4 Computational Results We aim to establish to what extent the proposed methodology can be used to solve realistically sized instances of our problem. We first solve a set of small instances to guaranteed optimality and then a set of larger instances to within an approximation guarantee. We look at the behaviour of the CPU time and of the approximation guarantee as a function of the instance size, and use these data to assess the suitability of the method. We used the AMPL modelling environment [6] and the off-the-shelf CPLEX 10.1 solver [7] running on a 64-bit 2.1 GHz Intel Core2 CPU with 4 GB RAM. Ordinarily, CPLEX's Quadratic Programming (QP) solver requires QPs with Positive Semidefinite (PSD) quadratic forms only. Although in our case this may not be true (depending on the values of β), CPLEX can reformulate the problem exactly to the required form because all variables are binary. We consider a set of small instances, to be solved to guaranteed optimality, and one of larger instances where the BB algorithm is stopped either at BB termination or after 30 minutes of CPU time (whichever comes first). All instances have been randomly generated from a model that bears some similarity to data coming from an actual service industry. We consider three parameter categories: cardinalities (vertex set), graph density (edge creation probability) and monetary values. Each of the 64 instances in each set corresponds to a triplet (cardinality, edge creation probability, monetary value), each component of which ranges over a set of four elements. In order to observe how CPU time scales when solving to guaranteed optimality, we present 12 plots referring to the small set, grouped by row.
[Fig. 4 CPU time when solving small instances to guaranteed optimality. Panels: 'card_5', 'card_10', 'card_15', 'card_20'; 'prob_0.2', 'prob_0.4', 'prob_0.6', 'prob_0.8'; 'budget_2', 'budget_4', 'budget_6', 'budget_8'.]
[Fig. 5 Optimality gap when solving large instances within 30 minutes of CPU time. Panels: 'card_25', 'card_30', 'card_35', 'card_40'; 'prob_0.2', 'prob_0.4', 'prob_0.6', 'prob_0.8'; 'budget_10', 'budget_12', 'budget_14', 'budget_16'.]
We plot seconds of user CPU time: for each fixed cardinality, as a function of edge creation probability and monetary value (Fig. 4, first row); for each fixed edge creation probability, as a function of cardinality and monetary value (Fig. 4, second row); for each fixed monetary value, as a function of cardinality and edge creation probability (Fig. 4, third row). The largest "small instance" corresponds to the triplet (20, 0.8, 8). The plots show that the proposed methodology can solve a small instance to guaranteed optimality within half an hour; it is also possible to notice that denser graphs and smaller budgets yield more difficult instances. Sudden drops in CPU time might correspond to infeasible instances, which are usually detected quite fast. Fig. 5 is organized by rows as Fig. 4, but we plot the optimality gap (an approximation ratio) at termination rather than the CPU time, which is in this case limited
to 30 minutes. The largest "large instance" corresponds to the triplet (40, 0.8, 16). The optimality gap, expressed as a percentage, is defined as 100 |f* − f̄| / |f* + 10^{-10}| %, where f* is the objective function value of the best feasible solution found within the time limit, and f̄ is the tightest overall lower bound. A gap of 0% corresponds to the instance being solved to optimality. The plots show that the proposed methodology is able to solve large instances to a gap of at worst 14% within half an hour of CPU time, and to an average gap of 1.18% within an average CPU time of 513 s (just over 8 minutes). It took 6 h of user CPU time to reach a 15% gap on the real instance, which roughly corresponds to a triplet (80, 0.2, 10). We managed, however, to reach a satisfactory 20% gap within 513 s (the actual solution value improvement was < 0.01%).
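The gap just defined is straightforward to compute from the incumbent value and the best lower bound; the following sketch merely restates the formula (the function name is ours).

```python
def optimality_gap(f_incumbent, f_lower_bound):
    """Percentage gap: 100 * |f* - f_bar| / |f* + 1e-10|, as defined above."""
    return 100.0 * abs(f_incumbent - f_lower_bound) / abs(f_incumbent + 1e-10)

# A gap of 0 means the instance was solved to optimality;
# e.g. optimality_gap(105.2, 104.0) is about 1.14 (in percent).
```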
References
1. Belotti, P., Lee, J., Liberti, L., Margot, F., Wächter, A.: Branching and bounds tightening techniques for non-convex MINLP. Optimization Methods and Software 24(4), 597–634 (2009)
2. Bernus, P., Mertins, K., Schmidt, G.: Handbook on Architectures of Information Systems. Springer, Berlin (2006)
3. Caseau, Y.: Performance du système d'information – Analyse de la valeur, organisation et management. Dunod, Paris (2007) (in French)
4. Davidović, T., Liberti, L., Maculan, N., Mladenović, N.: Towards the optimal solution of the multiprocessor scheduling problem with communication delays. In: MISTA Proceedings (2007)
5. Fortet, R.: Applications de l'algèbre de Boole en recherche opérationelle. Revue Française de Recherche Opérationelle 4, 17–26 (1960)
6. Fourer, R., Gay, D.: The AMPL Book. Duxbury Press, Pacific Grove (2002)
7. ILOG: ILOG CPLEX 10.0 User's Manual. ILOG S.A., Gentilly, France (2005)
8. Liberti, L.: Reformulations in mathematical programming: Definitions and systematics. RAIRO-RO 43(1), 55–86 (2009)
9. Liberti, L., Cafieri, S., Tarissan, F.: Reformulations in mathematical programming: A computational approach. In: Abraham, A., Hassanien, A.E., Siarry, P., Engelbrecht, A. (eds.) Foundations of Computational Intelligence, vol. 3. Studies in Computational Intelligence, vol. 203, pp. 153–234. Springer, Berlin (2009)
10. Luftman, J.: Competing in the Information Age. Oxford University Press, Oxford (2003)
11. Sahinidis, N., Tawarmalani, M.: BARON 7.2.5: Global Optimization of Mixed-Integer Nonlinear Programs, User's Manual (2005)
12. Williams, H.: Model Building in Mathematical Programming, 4th edn. Wiley, Chichester (1999)
Practical Solution of Periodic Filtered Approximation as a Convex Quadratic Integer Program Federico Bizzarri, Christoph Buchheim, Sergio Callegari, Alberto Caprara, Andrea Lodi, Riccardo Rovatti, and Gianluca Setti
Abstract. The problem addressed in this paper comes from electronics and arises in the development of pulse coders for actuation, signal synthesis and, prospectively, audio amplification. The problem is formalized and efficient optimization techniques are considered for its solution. Since the standard approach used in electronics is based on modulators, a comparison between the reach of these techniques and that of ad-hoc exact and heuristic solvers is proposed for some significant sample cases, considering both performance and computational effort.
1 Introduction The problem addressed in this paper comes from electronics and arises in the development of pulse coders for actuation, signal synthesis and, prospectively, audio amplification.
Federico Bizzarri, ARCES, University of Bologna, Italy e-mail: [email protected]
Christoph Buchheim, TU Dortmund, Germany e-mail: [email protected]
Sergio Callegari · Riccardo Rovatti, ARCES and DEIS, University of Bologna, Italy e-mail: [email protected], [email protected]
Alberto Caprara · Andrea Lodi, DEIS, University of Bologna, Italy e-mail: [email protected], [email protected]
Gianluca Setti, ENDIF, University of Ferrara, Italy and ARCES, University of Bologna e-mail: [email protected]
The aim is to synthesize arbitrary analog waveforms by either bipolar or tripolar pulse codes to be passed through proper filters. To understand its importance, it is worth noticing that the two main alternatives for analog signal synthesis employ either direct analog generation, or some digital coding, D/A conversion and filtering. Obviously, the second approach is nowadays much more appealing. Conventionally, digital coding has been practiced by Pulse Code Modulation (PCM), where the desired waveform is sampled at the so-called Nyquist rate^1 and each sample is individually coded as a digital value. The samples are then stored in a memory, to be re-played at need on a D/A converter, whose output is then filtered through a smoothing filter [8] and amplified. The setup is illustrated in Figure 1a. As such, it has some liabilities, mainly related to the fact that PCM is by definition a high-depth code. In other words, samples need to be stored at a relatively large number of levels, otherwise the difference between the desired and the achieved waveforms gets too large.^2 In turn, a high-depth code implies the use of a fully fledged D/A converter and a continuous-value amplifier.
(a) Conventional PCM/high-depth digital signal synthesis approach.
(b) Low-depth digital signal synthesis approach. Fig. 1 Comparison of digital signal synthesis approaches
An alternative approach consists of storing a higher-rate, lower-depth sequence, trading sampling-rate for resolution. If the depth can be reduced to 2 or 3 levels, a bridge of switches can be substituted for the D/A and the amplifier altogether, as illustrated in Figure 1b. This simplification is very appealing in applications where the synthesis overhead must be low (e.g., built-in analog testing). Additionally, there can be advantages in power efficiency. Thus, the approach is now greatly utilized, from audio power amplifiers to drivers for electric machines, from testing-frameworks to novel audio formats, such as Direct-Stream Digital (DSD) [11]. 1 2
In fact, at a frequency only slightly superior to the minimum established by the NyquistShannon sampling theorem [1]. The ratio between the power of the desired waveform and the difference signal can be interpreted as a Signal to Noise Ratio (SNR). PCM there is a straightforward link between the number of bits-per-sample NB and the SNR (under common conditions, SNR = 1.76 dB + 6.02 dB × NB ), which explains why a high-depth is needed.
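As a minimal arithmetic illustration of the rule of thumb quoted in footnote 2 (the helper names are ours, and the relation only holds under the common full-scale sinusoid assumption):

```python
import math

def pcm_snr_db(n_bits):
    """Ideal PCM SNR for a full-scale sinusoid, per the rule of thumb quoted above."""
    return 1.76 + 6.02 * n_bits

def bits_for_snr(target_snr_db):
    """Smallest bit depth whose ideal SNR reaches the target."""
    return math.ceil((target_snr_db - 1.76) / 6.02)

# e.g. bits_for_snr(96.0) == 16: a ~96 dB target already requires 16-bit samples,
# which is why PCM is said to be a high-depth code.
```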
A key point is how to obtain suitable low-depth sequences. Electronic engineers exploit modulators that have proved quite effective in practice. Commonly used modulations include Pulse Width Modulation (PWM) [10] and ΔΣ modulation [13] (see Section 2.2). By contrast, here the problem is also considered in its fundamental optimization terms. From an electronic point of view, this is sensible for two reasons. First, conventional modulators can only provide a solution as a side effect of their inherent properties. Conversely, an explicit optimization approach may prove more effective, or capable of considering merit factors and constraint combinations that modulators cannot. Secondly, even if optimization techniques proved too computation intensive for direct application, they would still offer insight into the gap existing between the current best modulators and a true optimum, and thus hint at how to improve current solutions, or at whether an effort in this direction is worthwhile. By contrast, experts in operations research can gain more than a mere application for their methods. Once it is established that optimization techniques can solve a problem conventionally solved by modulators, the point of view may be reversed to see whether modulators can be regarded as heuristic solvers for a certain class of optimization problems, thus adding a new heuristic technique to their toolbox [2]. In the following sections, the problem is formalized and efficient optimization techniques are proposed for its solution. Furthermore, a comparison between modulators, global optimizers and heuristic optimizers is proposed for some significant sample cases, considering both performance and computational effort.
2 Background Concepts 2.1 Problem Formalization In signal processing, the problem of finding a Discrete-Time (DT), discrete-valued signal that, once passed through a filter, is as close as possible to a given waveform is known as Filtered Approximation (FA). Mathematically, given: (i) a target sequence w(n) with n ∈ N and w(n) ∈ R; (ii) a DT causal filter H(z); (iii) an evaluation interval [n_1, n_2] ∩ N with n_1, n_2 ∈ N; and (iv) some A ⊂ R with finite cardinality, it consists in finding a sequence x(n) with x(n) ∈ A so that the difference w(n) − x(n), once filtered through H(z) to give an error sequence e(n), has minimum average power W = (1/N) ∑_{n=n_1}^{n_2} (e(n))^2 in the evaluation interval [6]. Note that the mathematical formalization above corresponds almost perfectly to the challenge of finding an appropriate low-depth sequence for the architecture in Figure 1b. Indeed, the target there is a Continuous-Time (CT) signal (let it be indicated as w̃(t)) and the smoothing filter is similarly CT. However, one can think of w(n) as a sampled version of w̃(t) and of H(z) as a DT filter approximating the smoothing one H̃(iω). As long as the sampling frequency is sufficiently high, almost full equivalence can be practically assumed. Here, a restricted version of the FA problem is considered, namely Periodic Filtered Approximation (P-FA), where w(n) is periodic with some period N, n_2 − n_1 = N (i.e., the evaluation interval spans exactly one period) and the filter is assumed
to have reached a steady state at n_1. From an application point of view, P-FA is interesting, for instance, for the synthesis of test signals and of excitations for electric motor drives. Notably, P-FA allows one to express w(n), x(n) and e(n) by N-element vectors w, x, and e [6]. P-FA corresponds to the challenge of finding an appropriate low-depth sequence for the architecture in Figure 1b when the signal to be synthesized is periodic. Once again, assume that w(n) derives from the sampling of a periodic w̃(t). Assume, for simplicity, that the sampling period T is an integer fraction of the period T_w̃ of w̃(t) and that N = T_w̃ / T. With this, N/2 can be regarded as an oversampling index. From an implementation point of view, it is convenient to keep it as low as possible. However, it is intuitive that the larger N is, the more degrees of freedom are available to keep W low. Application-related specifications will most likely set an upper bound on W, and the challenge will be to pick a low N such that W can be kept within it. This is the reason why it is quite important to be capable of choosing x(n) in an optimal or quasi-optimal way. For the same reason, one generally needs to tackle not just a single P-FA problem, but a whole family of them, all coming from the same w̃(t) sampled at increasing rates so as to have increasing N values. Following [3, 6], P-FA can be naturally recast as the following Convex Quadratic Integer Program (CQIP):
arg min x^T Q x + L^T x + c   s.t. x ∈ A^N   (1)
where Q ∈ R^{N×N} is a positive definite symmetric matrix, L ∈ R^N, and c ∈ R. Obviously, A^N ⊆ R^N is a finite cardinality convex set for which membership can be tested in polynomial time. Specifically, W = x^T Q x + L^T x + c, with Q, L and c given by
q_{j,k} = (1/N) ∑_{i=0}^{N−1} |H(e^{i 2π i/N})|^2 e^{−i 2π i (j−k)/N},   L = −2 w^T Q,   c = w^T Q w.   (2)
The positive definiteness of Q derives from the physical meaning of xT Qx. Indeed, the latter can be interpreted as the average power of the filtered version of x(n) [6] that, due to its energetic nature, is compelled to be positive. Furthermore, the positive definiteness of Q guarantees the strict convexity of xT Qx + LT x + c.
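For illustration, Eqs. (1)–(2) can be transcribed numerically as follows. This is a sketch under our own assumptions: it takes the filter through its frequency-response samples at the N DFT bins, exploits the fact that the resulting Q is circulant, and uses function names of our own choosing.

```python
import numpy as np

def build_QLc(w, H_samples):
    """Q, L, c of Eq. (2); H_samples[m] approximates H(e^{i 2 pi m/N}) at the N DFT bins.
    Q is circulant: q_{j,k} depends only on (j - k) mod N."""
    w = np.asarray(w, dtype=float)
    N = len(w)
    mag2 = np.abs(np.asarray(H_samples)) ** 2
    qvec = np.fft.fft(mag2).real / N          # qvec[d] = (1/N) sum_m |H_m|^2 e^{-i 2 pi m d / N}
    d = np.mod(np.subtract.outer(np.arange(N), np.arange(N)), N)
    Q = qvec[d]                               # q_{j,k} = qvec[(j - k) mod N]
    L = -2.0 * w @ Q                          # L = -2 w^T Q
    c = float(w @ Q @ w)                      # c = w^T Q w
    return Q, L, c

def average_power(x, Q, L, c):
    """W = x^T Q x + L^T x + c, the cost of a candidate low-depth sequence x."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x + L @ x + c)
```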
2.2 Modulation Algorithms As already mentioned, in electronics FA problems are typically solved by modulators. Two types are mostly employed, those based on PWM and those based on ΔΣ modulation. PWM modulators are mainly adopted because of their extremely easy implementation and of a relatively low switching density (namely, the ratio N_C/(n_2 − n_1), N_C being the number of changes in x(n) in an interval [n_1, n_2] ∩ N). They work by a very straightforward principle. First of all, a segment length S ∈ N is defined, so that in w(n) the sequence index n is said to belong to segment j if j = ⌊n/S⌋. Then, a quiescent level X_Q valid for all segments is chosen among the values in A.
Finally, for each segment j, x(n) is defined so that it is always equal to X_Q, except for a consecutive lot of length A^{(j)} that is set at an active value X_A^{(j)} ∈ A. The consecutive lot of active values is typically at the center of the segment (symmetric PWM), but in some cases it can lie at the beginning or at the end (asymmetric PWM). For each segment j the value X_A^{(j)} and the length A^{(j)} are chosen so that ∑_{i=0}^{S−1} x(Sj + i) ≈ ∑_{i=0}^{S−1} w(Sj + i). In other words, the on-segment average value of x(n) is made as close as possible to that of w(n).^3 This explains the PWM name, since w(n) modulates the activation width A^{(j)} in each segment j. Figure 2a shows a sample 3-level symmetric case.
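A minimal sketch of the symmetric 3-level PWM scheme just described (assuming a quiescent level X_Q = 0, active values ±1, and a simple rounding rule for A^{(j)}; the function name and these simplifications are ours):

```python
import numpy as np

def pwm_3level(w, S):
    """Symmetric 3-level PWM: quiescent level 0, active lot of width A^(j) centred in each
    segment j, chosen so that the on-segment sums of x and w roughly match."""
    w = np.asarray(w, dtype=float)
    x = np.zeros_like(w)
    for j in range(len(w) // S):
        seg = slice(j * S, (j + 1) * S)
        target = w[seg].sum()
        active = 1.0 if target >= 0 else -1.0
        width = int(round(min(abs(target), S)))     # A^(j): number of active samples
        start = j * S + (S - width) // 2            # centre the active lot (symmetric PWM)
        x[start:start + width] = active
    return x
```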
(a) Sample signals for PWM modulation.
(b) Sample signals for ΔΣ modulation.
Fig. 2 Comparison of PWM and ΔΣ modulations
Though it is intuitive, PWM is a rather rough scheme. Typically, W can remain quite far from the theoretical optimum and be acceptable only when w(n) is very slowly varying. Furthermore, the scheme is not very flexible, since it only works for Low-Pass (LP) filters H(z). Thus, it has been presented for historical completeness, but it will not be discussed any further. ΔΣ modulators are a more modern concept. They work by taking advantage of the basic architecture in Figure 3a. Here, the target sequence w(n) is fed to a loop structure incorporating two filters FF(z) and FB(z) and a quantizer. The quantizer is chosen so that it rounds its input to the closest value in A .
(a) Sample architecture for ΔΣ modulation. (b) Linearized approximation of the architecture. Fig. 3 Sample ΔΣ modulation architecture and its linearized approximation
3 In practice the problem of correctly modulating the activation width is solved by comparing the target waveform to a set of triangular waves with period S [10].
The architecture is generally analyzed by means of its linear approximation in Figure 3b, where an adder is substituted for the quantizer, injecting a signal ε(n) representing the sequence of quantization errors. The approximation consists of considering ε(n) independent of w(n). From this model, one can immediately derive two transfer functions, one from w(n) to x(n), namely the Signal Transfer Function (STF), and one from ε(n) to x(n), namely the Noise Transfer Function (NTF). Mathematically:
STF(z) = c FF(z) / (1 + FF(z) FB(z)),   NTF(z) = 1 / (1 + FF(z) FB(z)).   (3)
To solve FA problems, the architecture is designed by taking STF(z) = 1 and making the NTF complementary to H(z). With this, x(n) is required to be as similar as possible to w(n), apart from an error that gets spectrally distributed so that most of its power lies where H(z) attenuates most [3, 13]. Actually, there are many subtleties involved that mainly derive from a design methodology based on an approximated model [13]. Consequently, to practically achieve good results, some constraints need to be respected in the choice of the NTF. Furthermore, whenever the modulator input is too regular, some dithering (purposely created noise) must be superimposed on w(n) at the input. For a low-pass H(z), the obtained behavior is similar to that pictured in Figure 2b, justifying the fact that ΔΣ codes for low-pass signals are also often referred to as Pulse Density Modulations (PDMs).
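For concreteness, a minimal first-order instance of the loop of Figure 3a can be sketched as follows, with FF(z) taken as an ideal integrator, FB(z) = 1, a ternary quantizer, and optional input dithering. This choice of loop filters is our own illustrative assumption; real designs use higher-order NTFs, e.g. obtained with the DELSIG toolbox [12].

```python
import numpy as np

def delta_sigma_1st_order(w, alphabet=(-1.0, 0.0, 1.0), sigma=0.0, seed=0):
    """Minimal first-order delta-sigma loop (FF = integrator, FB = 1), ternary quantizer.
    `sigma` is the standard deviation of an optional white Gaussian input dither d(n)."""
    rng = np.random.default_rng(seed)
    levels = np.asarray(alphabet, dtype=float)
    x = np.zeros(len(w))
    integrator, prev_out = 0.0, 0.0
    for n, wn in enumerate(w):
        u = wn + sigma * rng.standard_normal() - prev_out     # input (+ dither) minus feedback
        integrator += u                                       # FF(z) = 1 / (1 - z^-1)
        x[n] = levels[np.argmin(np.abs(levels - integrator))] # round to the closest value in A
        prev_out = x[n]
    return x
```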
3 A ΔΣ Heuristic Algorithm The use of ΔΣ modulation to solve P-FA problems requires particular care, for two reasons. First of all, any periodic input is certainly to be considered a rather regular signal for a ΔΣ modulator, so that a dithering d(n) needs to be superimposed on w(n). Such dithering can be taken to be white Gaussian noise characterized by its standard deviation σ. Secondly, the output of a ΔΣ modulator, and particularly that of a dithered one, is generally aperiodic, since the modulator can be seen as a tracking machine continuously trying to correct the errors it previously made. How should the finite length x be extracted from it? A reasonable approach is to let the modulator run for a relatively long time (e.g. n_max cycles) and then to scan its output by means of a sliding window N symbols long. At the end of the scanning, one picks the data window whose symbols, once properly arranged in a vector x, result in the lowest W according to Eqn. (2).^4 The scanning phase ensures that among all the possible subsequences delivered by the modulator, one picks the one that can be wrapped around best in a periodic x(n). This is important, since the modulator is itself agnostic in relation to the periodic nature of the P-FA problem.
4 In building x from the window, care must be taken in copying the window elements into x with the appropriate phase [6].
Additionally, the scanning allows one to take maximum advantage of the input dithering. In other words, dithering enables the modulator to have some exploration ability: during the n_max cycles, the modulator produces n_max − N + 1 sub-sequences (i.e., candidate solutions), all different from one another. Their "distance" in terms of the quality index W increases as the dithering is increased. Thus, the larger σ, the larger the exploration. By contrast, the scanning phase can be regarded as being in charge of the exploitation ability. An important matter is thus to choose the correct dithering σ and the appropriate n_max, so as to proficiently balance exploration and exploitation. As can be expected, a larger σ requires a larger n_max, otherwise the modulator performance can vary too much from run to run. In turn, a larger n_max implies a higher computational effort. However, a larger σ (when matched by an appropriate n_max) may help to deliver solutions closer to the true optimum. We regard the above original interpretation of dithering and scanning respectively as exploration and exploitation enablers as particularly interesting, since it allows one to directly tie the modulator operation to that of more common heuristic optimizers. At this point, it is worth noting that ΔΣ modulators are commonly implemented in hardware. Incidentally, the resulting ability to solve a P-FA problem on the bare circuits surely has a certain appeal. However, in order to compare the performance of a modulator-based approach to that of heuristic and non-heuristic optimizers, it is necessary to reason in terms of software implementations. This way, it becomes possible to put side by side not just performance alone but, more fairly, time-performance relationships, where time is taken as an indicator of the applied computational resources. Interestingly, by quantizing w(n), the modulator can be turned into a digital-to-digital converter admitting an efficient software implementation.^5 Only marginal care needs to be taken in assuring that the quantization happens at a sufficiently high resolution to correctly model the applied dithering. In the tests reported in Section 5, a floating-point representation has been used for w(n) and the state variables within the filters. Furthermore, the DT filters themselves have been implemented in terms of circular buffers, as is common in digital signal processing [1].
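The dither-and-scan procedure described above can be sketched as follows. This is a hedged sketch: `run_modulator` stands for any software ΔΣ implementation (for instance the first-order loop sketched earlier), and Q, L, c are the data of Eq. (2); it assumes n_max ≥ N.

```python
import numpy as np

def delta_sigma_scan(w, Q, L, c, run_modulator, n_max, sigma):
    """Run the modulator for n_max cycles on the periodically repeated target, then slide
    an N-symbol window over its output and keep the window with the lowest W (Eq. (2))."""
    N = len(w)
    w_long = np.tile(w, int(np.ceil(n_max / N)))[:n_max]   # periodic target, n_max samples
    y = run_modulator(w_long, sigma=sigma)
    best_x, best_W = None, np.inf
    for start in range(n_max - N + 1):
        window = y[start:start + N]
        x = np.roll(window, start % N)                     # align the window phase with w
        W = float(x @ Q @ x + L @ x + c)
        if W < best_W:
            best_x, best_W = x, W
    return best_x, best_W
```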
4 An Exact Branch-and-Bound Algorithm A natural approach to P-FA is to use the mathematical programming approaches available for CQIP, as is done in [5]. In that paper, it is discussed how adapting the existing approaches, mainly designed for the case of binary variables (i.e., for the special case |A| = 2), leads to fairly poor results. In particular, even if |A| = 3, which appears to be the most important case in our application, among the available methods the best one turns out to be (by far) the direct application of the IBM-Cplex MIQP solver [7] to the original problem, without any variable transformation.
5 This is fundamental to the discussion, since having to simulate an analog circuit would clearly add an unacceptable overhead, hindering any comparison.
However, the approach proposed in [5] performs much better. In this section, we briefly sketch it. The main observation is that the computation of the minimum of the objective function (neglecting all constraints) simply requires solving a system of linear equations.
Remark 1. The unique minimum of f(x) = x^T Q x + L^T x + c, in case Q is positive definite, is attained at x̄ = −(1/2) Q^{-1} L and has value c − (1/4) L^T Q^{-1} L. In particular, f(x̄) is a lower bound for Problem (1).
In [5], the focus is on how to get stronger bounds by exploiting the fact that the variables are discrete. The full algorithm is a branch-and-bound with a depth-first enumeration strategy. Branching consists of fixing a single variable to a value in A, as follows. Assume that the next variable to be fixed is x_j. We consider the value x̄_j of x_j in the continuous minimum computed in the current subproblem. We fix x_j to values in A by increasing distance from x̄_j. In order to enumerate subproblems as quickly as possible, one can perform the most time-consuming computations in a preprocessing phase. In particular, having fixed d variables, we get the reduced objective function f̄ : R^{N−d} → R of the form f̄(x) = x^T Q̄_d x + L̄^T x + c̄, where the matrix Q̄_d is obtained from Q by deleting d rows and columns, and therefore is positive definite and does not depend on the values at which the variables are fixed. For Q̄_d, we need the inverse matrix and possibly the eigenvalues. For this reason, we do not change the order in which variables are fixed, i.e., we always fix the first unfixed variable according to an order that is determined before starting the enumeration. This implies that, in total, we only have to consider N different matrices Q̄_d, which we know in advance as soon as the fixing order is determined. (If the variables to be fixed were chosen freely, the number of such matrices could be exponential.) Every subproblem in the enumeration tree can easily be processed in O(N^2) time, the bottleneck being the computation of the continuous minimum given the precomputed inverse matrix Q̄_d^{-1}. Moreover, with some adjustments, the running time to process a subproblem of depth d in our case can be decreased to O(N − d). This allows for the exploration of a very large number of subproblems within a small computing time. Generally speaking, the running time of this branch-and-bound algorithm is exponential, so one has to decide on some stopping criterion and also to guarantee that at termination a (hopefully good) solution is available along with the lower bound on the optimal value provided by the continuous relaxation. Note that a feasible solution is found every time either all variables are fixed, or the continuous optimum turns out to satisfy all the constraints. In addition, a primal heuristic, based on the genetic algorithm for quadratic 0–1 programming presented in [9], is applied in the preprocessing phase. If x̄ is the continuous minimum, let ⌊x̄_j⌋_A and ⌈x̄_j⌉_A be the two values in A closest to x̄_j. We consider the set
B = {⌊x̄_1⌋_A, ⌈x̄_1⌉_A} × · · · × {⌊x̄_N⌋_A, ⌈x̄_N⌉_A} and try to minimize f over B. The idea is that the local minimum of f around x̄ is hopefully not far away from the global minimum of f. It is easy to see that this problem can be transformed into a binary quadratic problem.
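A much-simplified sketch of such a branch-and-bound follows: depth-first, fixing variables in index order and bounding each subproblem by the continuous minimum of Remark 1. It omits the precomputation of the inverse matrices and the primal heuristic, so it is not the algorithm of [5], only an illustration of its core idea.

```python
import numpy as np

def bb_ternary(Q, L, c, A=(-1.0, 0.0, 1.0)):
    """Exact minimization of x^T Q x + L^T x + c over x in A^N (Q positive definite)."""
    best = {"val": np.inf, "x": None}

    def reduced(Q, L, c, val):
        """Fix the first free variable to `val` and return the reduced problem data."""
        Qr = Q[1:, 1:]
        Lr = L[1:] + 2.0 * val * Q[1:, 0]
        cr = c + Q[0, 0] * val ** 2 + L[0] * val
        return Qr, Lr, cr

    def recurse(Q, L, c, fixed):
        if len(L) == 0:                                  # all variables fixed: a leaf
            if c < best["val"]:
                best["val"], best["x"] = c, list(fixed)
            return
        xbar = -0.5 * np.linalg.solve(Q, L)              # continuous minimum (Remark 1)
        lower_bound = c + 0.5 * float(L @ xbar)          # = c - (1/4) L^T Q^{-1} L
        if lower_bound >= best["val"]:
            return                                       # prune this subtree
        for val in sorted(A, key=lambda a: abs(a - xbar[0])):   # closest value first
            recurse(*reduced(Q, L, c, val), fixed + [val])

    recurse(np.asarray(Q, float), np.asarray(L, float), float(c), [])
    return best["val"], best["x"]
```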
5 Experimental Evaluation of the Two Approaches To evaluate the performance of both the ΔΣ heuristic algorithm and the exact approach we performed a series of computational experiments. We considered a set of instances characterized by different values of N in the range [32, 384]; more precisely, N = 32 + 8k with k = 1, . . . , 44. The instances are generated along the following lines: a. A continuous-time w̃(t) = 0.6 cos(2πt) is taken as a starting point, together with a continuous-time smoothing filter H̃(iω) of order 2, cut-off frequency at 1.5 and a Butterworth arrangement of its poles. b. A sampling time T = 1/N is chosen for the specific N being considered. Accordingly, w(n) is taken as w̃(nT) and H(z) is taken as a discretization of H̃(iω) determined by a bilinear transformation. c. From H(z), Q, L and c are determined according to Equation (2) (a numerical sketch of steps a–c is given below). For the ΔΣ tests, the NTF(z) is chosen as a second-order high-pass filter with a cut-off frequency set at 2 (i.e., approximately complementary to the inverse of H(z)), designed using the DELSIG toolbox [12]. Three different sets of experiments were run, which are described in the following. All tests were executed on a XEON 2 GHz PC running Linux and computing times are expressed in CPU seconds. Ternary versus Binary approximation. In order to show how much we can gain by tackling P-FA with x ∈ {−1, 0, +1}^N (i.e., A = {−1, 0, 1}), instead of as a simple binary optimization problem with x ∈ {−1, +1}^N (i.e., A = {−1, 1}), we have compared the optimal solution values obtained by the branch-and-bound for N ≤ 120 in both cases. This is shown in Figure 4a. The figure shows that the approximation one can obtain by considering the ternary case instead of the binary one is clearly superior. If the optimal solution is required, this of course comes at a certain cost in terms of computational effort, as shown in Figure 4b. However, the increase in running time is not dramatic and, if one is concerned with heuristic solutions only, it does not play a significant role. Quality of the ΔΣ heuristic with respect to the optimal solution. For values of N up to 120 the branch-and-bound algorithm is able to compute the optimal solution of the ternary problem within reasonable computing times. Such an optimal solution is used to assess the quality of the heuristic solution provided by the ΔΣ algorithm. This is shown in Figure 5a.
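A possible numerical transcription of steps a–c follows. This is a sketch under our own assumptions: the SciPy routines used, the interpretation of the cut-off 1.5 as a frequency in Hz (i.e. 3π rad/s), and the reuse of the build_QLc helper sketched in Section 2.1 are all our choices, not details taken from the paper.

```python
import numpy as np
from scipy import signal

def make_instance(N, amplitude=0.6, cutoff_hz=1.5, order=2):
    """Steps a-c: sample w~(t) = 0.6 cos(2 pi t) over one period with T = 1/N, discretise a
    2nd-order Butterworth low-pass via the bilinear transform, and build Q, L, c."""
    T = 1.0 / N
    n = np.arange(N)
    w = amplitude * np.cos(2.0 * np.pi * n * T)                  # w(n) = w~(nT)
    b, a = signal.butter(order, 2.0 * np.pi * cutoff_hz, btype='low', analog=True)
    num_d, den_d, _ = signal.cont2discrete((b, a), T, method='bilinear')
    _, H = signal.freqz(np.ravel(num_d), den_d, worN=2.0 * np.pi * n / N)
    Q, L, c = build_QLc(w, H)                                    # helper from Section 2.1
    return w, Q, L, c
```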
[Fig. 4 Ternary versus Binary approximation comparison. (a) Solution quality: optimal binary solution vs. optimal ternary solution, for N between 40 and 120. (b) Computational effort: running time binary vs. running time ternary, for N between 40 and 120.]
[Fig. 5 ΔΣ modulation heuristic versus exact algorithm, with and without a time limit. (a) Quality of the ΔΣ heuristic with respect to the optimal solution in the ternary case, for N between 40 and 120. (b) Using the branch-and-bound heuristically: exact algorithm with time limit vs. heuristic algorithm, for N between 50 and 350.]
The results in Figure 5a show that the approximate value computed by the ΔΣ heuristic is reasonably close to the optimal one. This is obtained with a limited computational effort, with computing times growing with N at a rate between linear and quadratic,^6 up to 70 seconds for N = 384. Computing times of this order are practical for the application at hand. For instance, in a few minutes a whole collection of sequences for the synthesis of different waveforms at different amplitudes can be obtained even on an embedded system. By allowing n_max to grow with N and by testing different NTFs even better results can be reached [3]. For N > 120 we can still get a sense of how far the ΔΣ solution value is from the optimal one by computing a lower bound on such a value through the truncated branch-and-bound. However, as discussed, the branch-and-bound explores
6 The ΔΣ modulator operation takes a time that is approximately linear in n_max, while the scanning phase requires a time quadratic in N. For the N values under consideration, the quadratic nature is still not dominant.
the tree by depth-first search, and the lower bound it provides, if truncated after a short time limit, is generally not much better than the one computed at the root node. Not surprisingly, the root node bound deteriorates rather rapidly for N > 200 and its comparison with the ΔΣ solution value is not particularly meaningful. We are in the process of investigating ways of improving the lower bound at the root node (with a higher computational effort) for the sole purpose of computing good lower bounds. Using the branch-and-bound heuristically. As discussed above, the computing times of the ΔΣ algorithm are considered practical for P-FA. Thus, as a final experiment, we decided to run our branch-and-bound algorithm in a truncated version, using the running time of the ΔΣ algorithm as a time limit. In this way the branch-and-bound is used as a heuristic algorithm and we compare its solutions with those of the ΔΣ heuristic. This is shown in Figure 5b. The results show that the truncated version of the branch-and-bound algorithm works better than the ΔΣ approach for N up to slightly larger than 250. However, for larger values of N, the performance degrades significantly and the ΔΣ takes over. The above results suggest that in principle some hybrid approaches combining truncated branch-and-bound and ΔΣ could potentially lead to interesting and practical results for P-FA. This is currently under investigation. We end the section by noting that the above tests were conducted in the spirit of using the branch-and-bound approach to solve P-FA in practice, by either providing an optimal solution, for small values of N, or providing both lower and upper bounds in practical computing times. In [5] the branch-and-bound has been tested with a different perspective, i.e., with the aim of assessing its performance as an exact solver for general unconstrained convex quadratic ternary programs. To this end, it has been compared with several other exact approaches including commercial solvers such as IBM-Cplex 12 [7] and open-source software such as Bonmin [4]. The branch-and-bound was the clear winner of these comparisons. From an application point of view, these results are interesting, since they indicate cases where heuristic optimizers can act as substitutes for modulators and they provide a quantification of the gap between the current performance that can be expected from modulators and the ideal case. Also, they offer an original interpretation of the modulator operation, in terms relating it to a heuristic optimizer.
Acknowledgments Research work supported by the University of Bologna through the financing of project “OpIMA”.
References
1. Antoniou, A.: Digital Signal Processing: Signals, Systems, and Filters. McGraw-Hill Professional, New York (2005)
2. Bizzarri, F., Callegari, S.: A heuristic solution to the optimisation of flutter control in compression systems (and to some more binary quadratic programming problems) via ΔΣ modulation circuits. In: Proc. of ISCAS 2010, Paris, FR (2010)
3. Bizzarri, F., Callegari, S., Rovatti, R., Setti, G.: On the synthesis of periodic signals by discrete pulse-trains and optimisation techniques. In: Proc. of ECCTD 2009, Antalya (TR), pp. 584–587 (2009)
4. Bonami, P., Lee, J.: BONMIN Users' Manual (2009), http://www.coin-or.org/Bonmin/
5. Buchheim, C., Caprara, A., Lodi, A.: An effective branch-and-bound algorithm for convex quadratic integer programming. In: Eisembrand, F., Shepherd, B. (eds.) IPCO 2010. LNCS, vol. 6080, pp. 285–298. Springer, Heidelberg (2010)
6. Callegari, S., Bizzarri, F., Rovatti, R., Setti, G.: On the approximate solution of a class of large discrete quadratic programming problems by ΔΣ modulation: the case of circulant quadratic forms. IEEE Transactions on Signal Processing (submitted, 2009)
7. IBM: IBM ILOG CPLEX, High-performance mathematical programming engine, version 12.0, http://www.ibm.com/software/integration/optimization/cplex/
8. Lipshitz, S.P., Vanderkooy, J.: Pulse-code modulation – an overview. Journal of the Audio Engineering Society 52(3), 200–215 (2004)
9. Lodi, A., Allemand, K., Liebling, T.M.: An evolutionary heuristic for quadratic 0–1 programming. European Journal of Operational Research 119(3), 662–670 (1999)
10. Nielsen, K.: A review and comparison of Pulse Width Modulation (PWM) methods for analog and digital input switching power amplifiers. In: Proc. of the 102nd Convention of the Audio Engineering Society (1997)
11. Reefman, D., Nuijten, P.: Why Direct Stream Digital is the best choice as a digital audio format. In: Proc. of the 110th Convention of the Audio Engineering Society (2001)
12. Schreier, R.: The Delta-Sigma Toolbox. Analog Devices, 7.3 edn. (2009), http://www.mathworks.com/matlabcentral/fileexchange/, also known as "DELSIG"
13. Schreier, R., Temes, G.C.: Understanding Delta-Sigma Data Converters. Wiley-IEEE Press, Chichester (2004)
Performance Analysis of the Matched-Pulse-Based Fault Detection Layane Abboud, Andrea Cozza, and Lionel Pichon
1 Introduction The problem of fault detection and location in wire networks has gained increasing importance in the last few years [1]. Wired networks are found in all modern systems, and are used to transmit different signals (control, alarm, etc.). That is why the issue of safe and reliable wiring systems is among the primary concerns of researchers and government agencies today [2]. There are several methods for wire testing, such as visual inspection, impedance testing [1], and reflectometry methods, which are widely used today to help detect and locate wire faults. These methods send a predefined testing signal down the wire network to be examined. They include Time Domain Reflectometry (TDR), which uses a fast rise-time step or pulsed signal as the testing signal, Frequency Domain Reflectometry [3], which uses multiple sinusoidal signals, sequence TDR [4], which uses pseudo noise, etc. Generally, hard faults (open and short circuits) are detectable through standard reflectometry, while soft faults (damaged insulation, etc.) are more critical to detect, especially when dealing with complex wire network configurations. In [5], we introduced the Matched-Pulse approach (MP), based on the properties of Time Reversal [6], as an improvement of the existing standard TDR. The MP method proposes to adapt the testing signal to the network under test, instead of using a predefined testing signal, as opposed to reflectometry methods. We have
Layane Abboud · Andrea Cozza, Département de Recherche en Électromagnétisme, SUPELEC, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette, France e-mail: [email protected], [email protected]
Lionel Pichon, Laboratoire de Génie Électrique de Paris, LGEP - CNRS / SUPELEC - 91192 Gif-sur-Yvette, France e-mail: [email protected]
shown that this method results in a higher echo energy from the fault to be detected, when compared to TDR. In this paper, we propose a sensitivity analysis of the MP approach, i.e., we evaluate the effect of the network topology (junctions, discontinuities, etc.) on the performance of this approach when compared to standard TDR. Through topological analysis, we will illustrate the role of the different network elements governing the performances of both the TDR and MP. A mathematical analysis then follows in order to compare our approach to the TDR. The whole study will allow us to predict, for a given system configuration, to what extent the MP would be more beneficial compared to TDR, and to identify the configurations where our approach presents major advantages. The paper is organized as follows: first we give the general assumptions we considered in our work, and then give a brief reminder about the MP approach, followed by a topological study of the influence of the network elements on the TDR performance. A mathematical analysis following this study allows us to establish a comparison criterion for the MP and TDR, and finally simulation results allow a validation of the discussed ideas.
2 The Wire Network In this paper, we will consider uniform lossless transmission lines. For the sake of simplicity, we also consider resistive parallel loads and do not use reactive components, so that there is no dispersion. We will also consider that the testing signal (i.e., the injected signal) is the input to the system, and the reflected voltage wave is the output. In the presence of an eventual fault, we can either analyze the reflected signal directly or take the difference of the reflected signals obtained in the presence and in the absence of the fault, so that the echo corresponding to the fault is more easily detected. In this paper, given that the systems we are studying are linear time invariant, we will be considering the difference of the reflected signals. Let the transfer function of the reference system without the fault be noted H_0(f), and the one corresponding to the system with the fault be noted H_F(f). Consequently, analyzing the difference signal is equivalent to analyzing the output of a system whose transfer function H(f) is
H(f) = H_F(f) − H_0(f)   (1)
We will refer to this system as the difference system.
3 The MP Approach Before proceeding with our topological study, we recall the basic concept of the MP approach. Instead of injecting a predefined testing signal into the network to be analyzed, as for existing reflectometry methods, the idea is to adapt the injected signal to the system, so that the energy of the echo from the fault is maximized,
thus increasing its detection probability. Such a tailor-made signal, or MP, can be synthesized considering the matched filter approach used in signal processing [7]. According to this theory, a matched filter is obtained by correlating a known signal, or template, with an unknown signal to detect the presence of the template in the unknown signal. This is equivalent to convolving the unknown signal with a time-reversed version of the template. The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) in the presence of additive white noise. To summarize, in the MP approach the testing signal is synthesized as follows: we first inject the TDR testing signal in the network under test. The received echo is then time reversed; this signal is now the MP testing signal. We clearly notice that this signal changes according to the network's architecture, as opposed to the standard TDR. In this paper, we propose an analysis of the performance of both TDR and MP, in order to better understand and characterize this approach. Through topological analysis and a mathematical study, we will be able to understand the reason why, in some cases, the MP presents major advantages over the TDR, and to predict, for a given network architecture, the performance of the TDR and MP.
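In software terms, the MP synthesis and the difference-system output described above reduce to a few lines; the following is a sketch with names of our own choosing.

```python
import numpy as np

def mp_testing_signal(tdr_echo):
    """MP synthesis as described above: the testing signal is the time-reversed TDR echo
    (i.e. the matched-filter template for the network under test)."""
    return np.asarray(tdr_echo)[::-1]

def difference_signal(echo_with_fault, echo_without_fault):
    """Output of the 'difference system' H(f) = H_F(f) - H_0(f) used for detection."""
    return np.asarray(echo_with_fault) - np.asarray(echo_without_fault)
```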
4 Topological Study After a brief description of the general assumptions considered in this paper, along with reminding the general idea of the MP approach, we now propose to analyze the impact of different network elements on the performances of both the TDR and MP approaches. But first let us introduce the topological representation we chose to better illustrate the discussed ideas.
4.1 Equivalent Topological Representation We represented the physical model by means of a topological one, in order to highlight the elements influencing the wave propagation in the network under test, while keeping the model simple to understand. We modeled the uniform lossless transmission lines with delay lines, which means that those lines are not physical objects,
[Fig. 1 An example of an equivalent topological representation (labels: incident wave, reflection, transmission).]
[Fig. 2 Equivalent topological representation of a wire network; the transmission lines are represented with delay lines, the source with a rectangle and the discontinuities in the system with circles. The fault's position is indicated with two parallel lines.]
[Fig. 3 Topological representation of a wire network illustrating the reflection coefficients Γ_F and Γ'_F.]
they only introduce time delays. We also represented the discontinuities in the system (i.e., junctions, branches, mismatched loads, etc.) with nodes, to illustrate the reflection and the transmission of the incident voltage waves. If we observe the example of Figure 1, we notice how the physical elements are modeled; furthermore, we represented, on the topological model, the reflection and transmission phenomena corresponding to the junction: when a voltage wave arrives at this position, a part is reflected back, and another is transmitted. Another example is illustrated in Figure 2, where a network is represented by means of its topological model. A final important point to discuss in this section is the variation of the amplitude of the peaks in the received TDR echo. In fact, let us consider the example of Figure 3, where the fault is modeled with two parallel lines, τ is the transmission coefficient of the junction, and Γ_F and Γ'_F are respectively the reflection coefficients at the fault position and at the source position. We propose to illustrate how the amplitude of a certain peak changes in terms of the number of interactions with the system discontinuities. Therefore we chose a simple trajectory of the wave, namely the one from the source, through the first junction, towards the fault and then all the way back. So, if we inject a pulse into the system, its amplitude will be modified by a factor τ when passing through the junction. This voltage wave will then reflect on the fault and its amplitude will once more be modified by a factor Γ_F. The reflected wave will then follow the reverse path, and when arriving at the
source position, the modification in its amplitude compared to the amplitude of the initial wave is τ^2 Γ_F. This example illustrates the idea that whenever a wave passes through several junctions, its amplitude decreases each time due to the reflection phenomena. This fact will be useful in Section 5 when analyzing mathematically the performances of the TDR and MP.
[Fig. 4 An example illustrating a wire network: a source injecting i(t) and receiving e(t), loads of 140 Ω and 200 Ω, an open termination, and positions z = 0, 5, 14, 18, 21 m. The fault's value is 600 Ω.]
4.2 Position of the Network Elements In the standard TDR case, when using the difference system, the first peak in the TDR echo we obtain corresponds to the first interaction with the fault. This idea is illustrated in the example of Figure 4. The network to be analyzed contains a fault hidden from the source by a discontinuity, at 14 m from the source. We consider that the fault is a parallel resistive load whose value is 600 Ω. The echo obtained without the presence of the fault is shown in Figure 5(a), as the blue line. We notice how the different peaks correspond to the interaction of the pulse with the system discontinuities; in particular, the first peak we obtain corresponds to the position of the 140 Ω discontinuity (the time value on the x axis is the round-trip time to the discontinuity). Furthermore, when examining the echo in the presence of the fault, we notice how its presence introduced the first variation at about t = 0.9 μs from the origin, corresponding to the round-trip time to the position of this fault. Of course the peak corresponding to the 140 Ω discontinuity remains unchanged, while the peaks corresponding to the discontinuities behind the fault (seen from the source position) are affected by the fault's presence, as we can clearly see when taking the difference signal shown in Figure 5(b). When analyzing this signal, we notice how the first variation corresponds to the fault's position; that is why we look for this peak in the detection process in TDR. Furthermore, we notice that this peak does not have the highest amplitude, as clearly shown in the figure.
[Fig. 5 (a) The reflected TDR signals in the presence and absence of the fault. (b) The difference of the two reflected signals. Voltage amplitude versus t (s); the studied system is illustrated in Figure 4.]
Based on this example, we can notice that the network can be divided into two main parts with respect to the fault's position: the first one upstream of the fault (between the source and the fault) and the second downstream of the fault. When examining the reflected signal in the difference system, the peaks corresponding to the first part eventually disappear because they are the same with and without the fault (as we already noticed in the previous example); as for the peaks corresponding to the discontinuities downstream of the fault, they are modified by the presence of the fault, thus leaving several peaks in the difference signal. Another important point is that any change in the elements upstream of the fault would affect all the peaks, including the first peak (in the difference signal), whilst any change in the elements downstream of the fault will not affect this peak.
5 Detection Gain After discussing the influence of the different network elements on the performances of both TDR and MP, we need a criterion allowing us to compare those two approaches. Therefore, we propose to define a gain G, which we call the detection gain. Let i(t) be the injected pulse in the time domain, and e(t) the received signal. We note that α_i is the amplitude of peak number i in the echo of the difference system, and t_i its position. Given that we do not have any dispersion, and that the system we are studying is a cable network, we can write the impulse response of the system as
h(t) = ∑_i α_i δ(t − t_i)   (2)
where δ(t) is the Dirac pulse. Consequently, the reflected echo is
e(t) = i(t) ∗ h(t) = ∑_i α_i i(t − t_i)   (3)
Knowing that, in the detection process, we are only interested in the peak corresponding to the fault, let us first calculate the energy of the received signal. It can be written as
E = ∫ e(t)^2 dt = ∫ ( ∑_i α_i i(t − t_i) )^2 dt   (4)
When we calculate this expression, we find that
E = ∑_i α_i^2 ∫ i^2(t − t_i) dt + ∑_{i≠j} α_i α_j ∫ i(t − t_i) i(t − t_j) dt   (5)
If Δt_ij = t_j − t_i, let us denote by φ(Δt_ij) the autocorrelation function of the injected signal:
φ(Δt_ij) = ∫ i(t − t_i) i(t − t_j) dt = ∫ i(s) i(s − t_j + t_i) ds   (6)
In the TDR case, we assume that the injected pulse is normalized in terms of its energy. The energy of the echo can thus be written as
E = ∑_i α_i^2 + ∑_{i≠j} α_i α_j φ(Δt_ij)   (7)
We know that the coefficients α_i are influenced by the discontinuities in the system (as discussed in Section 4); their values decrease according to the number of interactions with the system discontinuities (the evolution of these values is a geometric series). We also note that Equation (7) is composed of two terms: the first one is the average energy of the signal, ∑_i α_i^2, and the second one is the mutual energy of the peaks of this signal, ∑_{i≠j} α_i α_j φ(Δt_ij).
When examining the expression of the mutual energy, we can notice that it contains two important parts: the first one, α_i α_j, which depends on the number of interactions, and the second one, the correlation function φ(Δt_ij). If the temporal support of this function is narrow, then this expression can be neglected and the dominant part is that of the average energy. In fact, we know that the first peaks we observe have the highest amplitudes, and when injecting a pulse into the system, those peaks are separated from one another. We also know that the peaks appearing after those first peaks (in time) have small amplitudes and the mutual energy between them can be neglected. So, we can reasonably assume that
∑_{i≠j} α_i α_j φ(Δt_ij) ≪ ∑_i α_i^2   (8)
In this case, the energy of the first peak (the one we consider when detecting the fault) would be
E^F_TDR = α_1^2   (9)
In the MP case, the same procedure is applied, but here the injected signal is the time-reversed version of the TDR echo. The MP echo is written as
e(t) = i(t) ∗ h(t) ∗ h(−t)   (10)
Defining g(t) = h(t) ∗ h(−t), we can write
g(t) = ∑_k β_k δ(t − t_k)   (11)
where β_k is the amplitude of peak number k. In fact, if we calculate g(t) in terms of α_i, t_j and t_i, we find
g(t) = ∑_i ∑_j α_i α_j δ(t + t_j − t_i)   (12)
Comparing Equations (11) and (12) we find that
β_k = ∑_{i,j : k = j−i} α_i α_j   (13)
and also
t_k = t_j − t_i   (14)
Note that when we time-reverse the TDR echo to obtain the injected signal in the MP case, we are effectively doing a time shifting operation (i.e., g(t) = h(t) ∗ h(t1 − t)). The value of the peak at t = 0 corresponds to the fault; this peak has an amplitude equal to the average energy of the injected signal, that is the TDR echo in this case. Under the same assumption that the initial pulse (i.e., the injected pulse in the TDR) is normalized in energy, the energy of the MP peak is
E^F_MP = ( ∑_i α_i^2 )^2   (15)
Let E_TDR and E_MP be respectively the energies of the testing signals in the TDR and MP cases. In order to be able to compare the TDR and MP approaches, we need to have the same injected energy. That is why we propose to normalize according to the injected energy in both cases. The detection gain G can thus be defined as the ratio of the normalized energy of the MP peak to the normalized energy of the TDR peak:
G = (E^F_MP / E_MP) / (E^F_TDR / E_TDR)   (16)
Based on the previous analysis, when the different peaks do not interfere with one another, G can be expressed as follows:
G = ( ∑_{i=1}^{∞} α_i^2 ) / α_1^2   (17)
We can clearly notice that G ≥ 1, and consequently the MP approach always presents an advantage over the TDR; this advantage is greater in the case where we have multiple peaks, resulting in a higher gain. In practice, we consider the first N peaks when calculating the gain, so that
G ≈ ( ∑_{i=1}^{N} α_i^2 ) / α_1^2   (18)
In the next section we propose to illustrate the previously discussed ideas through simulation results.
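For reference, the gain of Eqs. (16)–(18) can be evaluated directly from the peak amplitudes of the difference-system impulse response; the sketch below uses a function name of our own choosing.

```python
import numpy as np

def detection_gain(alphas, n_peaks=None):
    """Detection gain of Eqs. (17)-(18) from the peak amplitudes alpha_i of the
    difference-system impulse response (alpha_1 is the peak due to the fault)."""
    a = np.asarray(alphas, dtype=float)
    if n_peaks is not None:
        a = a[:n_peaks]                 # truncated sum of Eq. (18): a lower bound on G
    return float(np.sum(a ** 2) / a[0] ** 2)

# e.g. detection_gain([0.2, 0.15, 0.1, 0.05]) = 0.075 / 0.04 = 1.875
```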
6 Simulation Results 6.1 Analyzed Configurations We consider the configurations illustrated in Figures 6 and 7, where the fault is in front of the source in the first case, and masked from the source by several discontinuities in the second case. We chose those configurations because they represent two extreme cases; it therefore seems interesting to compare the performances of the MP and TDR in both of them.
6.2 Numerical Results We simulated the voltage propagation in the configurations of Figures 6 and 7 using transmission line theory as presented in [8]. The characteristic impedance of the lines is chosen to be 75 Ω (such as for some coaxial cables). We will verify the obtained values of the detection gain according to the general formula (Equation 16)
[Fig. 6 The analyzed network in the first case, where the fault is in front of the source; branches 1–3, loads of 120 Ω and 60 Ω, and an open termination. All lengths are in meters.]
and the simplified formula (Equation 18). In the latter case, we examine a limited number of peaks, thus obtaining a lower bound on the gain (referred to as a threshold in Table 1). In the first case (Figure 6), we chose a fault value equal to 600 Ω, corresponding to a soft fault. The lower bound on G in this case is 1.55 (we considered N = 5). When we calculate its exact value, we find G = 1.73. In the second case (Figure 7), where the fault is masked from the source by several discontinuities, we chose several values of the fault, listed in Table 1. In this configuration, the fault is separated from the source by a discontinuity at 5 m and a junction at 14 m. The numerical values obtained when calculating the lower bound on the detection gain, along with its exact values, are given in Table 1. When examining those results, we notice that the softer the fault (i.e., the smaller the reflection coefficient at the source position), the more effective the MP becomes when compared to standard TDR. If we also compare the two studied configurations (Figures 6 and 7) when the fault's value is 600 Ω, we notice that when the fault is embedded in the system, the gain's value is greater than when the fault is directly in front of the source. In fact, in the former case we have a greater number of peaks than in the latter; so, as predicted by Equation 18, we obtain a greater value of the detection gain.
Fig. 7 The analyzed network in the second case, where the fault is embedded in the system. All lengths are in meters
Table 1 Numerical results obtained for the configuration of Figure 7

Fault value (Ω)   Predicted gain (threshold)   Calculated gain
Short             2.33                         2.6
30                3.4                          4
600               7.7                          9.22
7 Conclusion

In this paper, we evaluated the impact of the network topology on the performance of the TDR and MP approaches. A topological study first allowed us to identify the factors that most influence the effectiveness of the TDR; a mathematical analysis then proved the advantage of the MP method over standard TDR. The discussed ideas were finally verified through simulation results. This study allows a better understanding of the factors influencing the TDR performance, thus making it possible, for any configuration, to predict the effectiveness of the MP approach compared to standard TDR.
References

1. Furse, S., Haupt, R.: Down to the wire. IEEE Spectrum 38, 34–39 (2001)
2. Review of federal programs for wire system safety. National Science and Technology Council, White House, Final Report (2000)
3. Furse, C., Chung, Y.C., Dangol, R., Mabey, M.N.G., Woodward, R.: Frequency domain reflectometry for on-board testing of aging aircraft wiring. IEEE Transactions on Electromagnetic Compatibility 45, 306–315 (2003)
4. Smith, P., Furse, C., Gunther, J.: Analysis of spread spectrum time domain reflectometry for wire fault location. IEEE Sensors Journal 5, 1469–1478 (2005)
5. Abboud, L., Cozza, A., Pichon, L.: Utilization of matched pulses to improve fault detection in wire networks. In: 9th International Conference on ITS Telecommunications (2009)
6. Fink, M.: Time reversal of ultrasonic fields – part 1: basic principles. IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control 39, 555–566 (1992)
7. Papoulis, A.: Signal Analysis. In: Nalle, P., Gardner, M. (eds.). McGraw-Hill, New York (1977)
8. Paul, C.R.: Analysis of multiconductor transmission lines. In: Chang, K. (ed.). Wiley-Interscience, Hoboken (1994)
A Natural Measure for Denoting Software System Complexity

Jacques Printz
1 Introduction

The problem of complexity measurement is as old as programming, which became a major problem for the software industry in the sixties; the fact is clearly attested in the two NATO reports on software engineering [A14]. Von Neumann himself gave a lot of attention to complexity in the last decade of his shortened lifetime. The problem of measurement is a very classical one in physical science, and everyone aware of the history of science (epistemology) knows that it is an extremely difficult subject. With information measurement we enter a kind of no man's land, full of pitfalls, where it is difficult to separate the observer (the user of the information) from the phenomenon of information itself. In this communication I propose to define complexity measurement as follows: program A or software system A is more complex than program B or software system B if the cost of development/maintenance of A is greater than the cost of B. This measure is just common sense, based on the fact that for a project manager the cost (that is, a compound of human cost, development time and expected quality) is the only thing worthy of his attention. To define the complexity measure more precisely, we need to define:
• what is "cost",
• what is "development".
Jacques Printz
Chair of Software Engineering, Conservatoire National des Arts et Métiers, 2 Rue Conté, PARIS Cedex 75141 – France
e-mail: [email protected]
NB: To simplify, we will use the term complexity in a broad sense, making no difference between the intrinsic complexity of the problem and the complexity (i.e. complication) of the solution, which aggregates technologies and project organization. For an in-depth discussion, see ref. [A8]. Since the very beginning of software engineering, software cost has been associated with programming effort, generally expressed in man-months or man-days, while development was defined as the sum of activities such as: program definition / requirements (with customers and end-users), software design, programming (development / maintenance) plus unit testing, and integration [see ref. A14, A20]. Programming effort is the amount of programmer energy needed to deliver a program or a software system which satisfies the level of quality required by the target organization which will use it. The term "programmer energy" refers to the human part of the programming activity, which is basically a translation, in a linguistic sense, of customer requirements into an executable program on a specific computer platform. We know that programmer performance may vary in a large range, from 1 to 15, as reported in the literature (NB: I can testify to variations from 1 to 10). So we need to define more formally what we mean by programmer performance. In one of my books [A1], I have defined what I call the "Perfect Isolated Programmer", or PIP. A PIP is a kind of human robot, or a Maxwell's daemon, with well formalized capacities (in the same sense as we define a "perfect gas" in thermodynamics), able to perform the translation SYSTEM REQUIREMENTS → WORKING SOFTWARE. By definition a PIP is not subject to environmental disturbances and human impedimenta. PIP effort is expressed in time units (work-hours), which is similar to an "energy". If several PIPs act together to create a program, one has to take into account the effort needed for the communications between them in order to maintain coherence, for example the effort needed to establish contracts between them to define the interfaces and to verify that they respect the contracts. The PIP programmer effort will be split into two parts:
• A part for writing source code and test code to validate the program, i.e. the programming effort as understood by the eXtreme programming and agile communities, which promote the Test-Driven Development approach.
• Another part for writing documentation to communicate, whether face to face, in group discussions, by Internet mailing, etc., i.e. the communication effort.
The PIP programmer's activity may be depicted as in Figure 1. It creates three types of text: programs, tests and documentation.
Fig. 1 PIP Programmer at work
When we observe real programmers' productivity at large scale, one notices a remarkable stability of around 4,000 instructions per man-year (with a year of about 220 working days of 8 hours), or around 20 source line instructions per normal working day. Of course, instruction counting must be done with extreme care, as defined in [A2]. When we observe the phenomenon at small scale, with toy programs, it simply disappears. Instruction counting is the foundation of widely used cost estimation models such as COCOMO or Function Points. So programmer productivity is a statistical measure, exactly as temperature is. At the atomic level, temperature is meaningless, because the measurement apparatus itself disturbs the temperature measurement too greatly (cf. Heisenberg's uncertainty relations). Programs are developed chunk by chunk by individual programmers or small groups of 2-3 programmers who are members of a programming team of around 7±2 developers and a team leader. For large or very large projects, up to 50-60 teams may be active at the same time. In that case, team organization becomes the main risk, due to the interactions between teams; see [A18]. In large projects, a significant amount of effort must be dedicated to communicating, establishing and validating the interface contracts. Contract information may use constraint languages such as OCL, now part of UML, or more specialized languages like PSL (IEEE Standard N°1850).
Returning to the definition, we will now examine some situations to validate the size of tests as a good candidate for an effective measure of complexity. Programming activity in well managed projects may be depicted as in Figure 2. Teams are organized to produce elementary pieces of code (called building blocks, or BB) ready to integrate.
Fig. 2 Building-block programming
2 Cyclomatic Number Measurement Considered Harmful

The cyclomatic number, also called the McCabe complexity measure [A3], is one of the most popular complexity measures, implemented in most CASE/quality products to "measure" quality; but measuring programming quality, i.e. semantic aspects, is another story. When we consider the McCabe measure from the project manager's point of view, it really has no meaning. Of course everybody agrees that, from the graph theory [A4] point of view, the cyclomatic number means something important, but the problem is: what does it mean from the programming effort point of view? With very simple examples, which are true counter-examples (in the sense of K. Popper's "falsification"; see ref. [A5]), one can easily show why cyclomatic number complexity has no meaning. The following picture shows the simplest design pattern, the sequential execution of N modules M-1, M-2, …, M-N. Think of a man-machine interface (MMI) where screens are captured in a fixed order.
Fig. 3 Sequential flow of control
The cyclomatic number of this pattern is 1. The next picture shows another pattern where the end-user can fill the screens in any order. Basically the tests are the same.
Fig. 4 Parallel flow of control
In that case the cyclomatic number is N. The difference with the previous pattern is the loop from END to BEGIN, a very few instructions indeed. But the tests to validate the two patterns are almost the same and they have basically the same cost. Moreover, in Figure 3, the cyclomatic number gives an appearance of simplicity despite the fact that the programmers may have created a large data context to exchange information between modules. On the contrary, in Figure 4, the programmers have organized the data in order to use a transaction programming style. Data are well isolated from each other; the modules work as ACID transactions.
Despite the fact that the cyclomatic number N is much greater, the software architecture is much better. Thus the cyclomatic number alone gives a "complexity" measure that is totally inappropriate and deceptive. In order to take into account the real complexity, based on the number of tests, the programmer has to consider the source_code×data matrix (cross references), which depends mainly on the data architecture, which is basically the same in both cases. Another misleading situation comes from the event programming style now allowed by languages like Java, a risky style for beginners. Figure 5 illustrates this situation.
Fig. 5 Invisible edges with event programming
The dotted lines represent invisible edges, which will be caught neither by quality tools nor by compilers, which are thus unable to optimize the generated code. Any sequence of code able to raise the event Ex will create an additional edge. The computed cyclomatic number will be much smaller than in the real execution. It is like having invisible GOTO statements hidden in the source code. Thus, again, the cyclomatic number gives a "complexity" measure that is totally inappropriate and deceptive. Worse, it may encourage the event programming style, which is the most difficult to master and needs a lot of experience in real-time and system programming. You will notice that the number of lines of code, in that case, will probably decrease, but the number of tests will increase dramatically.
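To make the counter-example concrete, here is a minimal Python sketch (not from the paper) that evaluates the standard graph-theory formula CN = e − n + 2p (quoted again in Section 5) for the two patterns above; the module count N = 5 is arbitrary, and the END-to-BEGIN loop of Figure 4 is ignored so that the result matches the value N stated in the text.

```python
# A minimal sketch computing McCabe's cyclomatic number CN = e - n + 2p for the
# patterns of Figures 3 and 4, assuming a single connected graph (p = 1).

def cyclomatic_number(edges, nodes, p=1):
    return len(edges) - len(nodes) + 2 * p

N = 5  # number of modules M-1 .. M-N (arbitrary illustrative value)
modules = [f"M-{i}" for i in range(1, N + 1)]
nodes = ["Begin"] + modules + ["End"]

# Figure 3: strictly sequential flow Begin -> M-1 -> ... -> M-N -> End
seq_edges = list(zip(nodes, nodes[1:]))

# Figure 4: Begin fans out to every module, every module goes to End
par_edges = [("Begin", m) for m in modules] + [(m, "End") for m in modules]

print(cyclomatic_number(seq_edges, nodes))  # -> 1
print(cyclomatic_number(par_edges, nodes))  # -> N
```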
3 Program Text Length Measurement

In the early age of programming, in the sixties, managers in charge of large programming teams observed another remarkable fact: a relation between the length/size of a program, expressed in kilo source lines (KSL) effectively written by the programmers [see ref. A2, A20], and the effort, generally expressed in man-months, to deliver it to a customer. Numerous statistical studies [see A6 for details] have shown that this relation most of the time has the form of a power law:
\mathrm{Effort} = k \, (\mathrm{KSL})^{1+\alpha}
where α is a positive number (> 0), in practice in the range [0.01, 0.25], and k is a constant which depends on programmer capability and experience, and also on the overall project environment, such as documentation. The effect of k on the effort is linear, and for this reason k is called the cost factor. The most interesting parameter in this experimental relation is the number α. But what does α exactly mean? What are the conditions which allow one to say that α has such or such a value? For example, the classification of program types [such as: a) simple programs, as in business applications, b) algorithm-driven programs, such as compilers, data mining or complex data transformations, c) real-time and event programming; see ref. A6] plays an important role, but so does team maturity, in the sense of the capability to work together, as explained in the CMMI approach. The number α says something about the complexity of the interactions between modules within the software system, as well as the interactions between human actors, at the individual or team level, within the development organization. For this reason α is called the scale factor. In my book [A1], I have demonstrated that α largely depends on the hierarchical structure of the interactions. When the system architect and the programmers strictly apply Parnas's modularity principles [A7], the whole program organization is hierarchical, and the Effort/KSL relation holds. This shows the importance of a rigorous classification of functional blocks at design time. The PIP programmer of the COCOMO estimation model is supposed to master Parnas's modularity principles. With regard to our definition, a software system which is composed from the integration of N modules M1, M2, ..., Mn with lengths KSL1, KSL2, ..., KSLn, that is to say of a total length of KSL1 + KSL2 + ... + KSLn, is more costly than the simple sum of each module taken independently. The difference is given by:
D = k\,(KSL_1 + KSL_2 + \dots + KSL_n)^{1+\alpha} - k\,\left[ (KSL_1)^{1+\alpha} + (KSL_2)^{1+\alpha} + \dots + (KSL_n)^{1+\alpha} \right]
The question is: what is the meaning of D? If our initial relation is true (i.e. experimentally verified), then D denotes a) the effort to break the whole software into N modules, plus b) the effort to establish interface contracts between the modules, plus c) the effort to integrate the N modules, that is to say to verify that the contracts have been enforced. In other words, the number α says something about the complexity of integration, whose cost D depends on the number of interactions between modules. Thus, counting the relations between modules is a key element of the measurement process. For more details see ref. [A1, A18].
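As an illustration, the following Python sketch (not part of the paper) evaluates the effort power law and the integration overhead D for a hypothetical set of module sizes; the values of k and α are arbitrary, chosen only to lie in the ranges discussed above.

```python
# Minimal sketch of the effort power law Effort = k * KSL^(1+alpha) and of the
# integration overhead D; k, alpha and the module sizes are illustrative only.

def effort(ksl, k=2.5, alpha=0.12):
    """Effort (e.g. in man-months) to develop a chunk of 'ksl' kilo source lines."""
    return k * ksl ** (1 + alpha)

def integration_overhead(module_sizes, k=2.5, alpha=0.12):
    """D = effort of the whole system minus the sum of the efforts of its modules."""
    whole = effort(sum(module_sizes), k, alpha)
    parts = sum(effort(s, k, alpha) for s in module_sizes)
    return whole - parts

modules_ksl = [2, 3, 5, 8, 12]  # hypothetical module sizes in KSL
print(effort(sum(modules_ksl)))            # effort for the integrated system
print(integration_overhead(modules_ksl))   # extra effort attributable to integration
```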
Now we have to examine the question of the legitimacy of using program length to denote program complexity. In the algorithmic information theory established by A. Kolmogorov and G. Chaitin [A12], the complexity of a problem is expressed by the length of the "program" that solves it. So in this theory we have a clear indication that textual length is well founded and means something important. We do not have enough space in this communication to establish a more abstract link between the effort relation and its mathematical form as a power law. Power laws are very common in nature, especially in the social sciences, as has been established by several authors [A9]. It is not surprising to find them in this highly human activity that is programming [A10]. For a more detailed discussion, see [A8, chapter 16]. Despite the well-founded legitimacy of program length as a measurement of complexity, something important is still missing from the project manager's point of view. We have known since the very beginning of programming that a program always has two faces:
1. a static one, which is effectively well denoted by the program length and structure,
2. a dynamic one, which is, for the moment, denoted by nothing, or only in a weak sense by the scale factor α, which takes into account the combinatorial aspect of the integration tree.
To go further, we need to revisit briefly the nature of programming activity.
4 Programmers' Activity Revisited

Every programmer knows that programming effort covers two types of activities:
• the first one, and the best known, is the writing of programs in one or several programming languages,
• the second one is the testing activity, in its classical sense of validation and verification.
Testing has long been the hidden face, not to say the shameful face, of programming.
With eXtreme programming and agile development methods, emphasis has been put on the so-called "test driven development" approach. K. Beck [A13] has strong words: « you don't get to choose whether or not you will write tests – if you don't, you aren't extreme: end of discussion »; this is an interesting position statement, quite new in the academic and industrial software engineering landscape. In the real world, materials have defects and can break (strength of materials is a pillar of the engineering sciences), electromagnetic waves are noisy, a circle is never perfect, and so on. In real-world programming, real programs, written by real programmers, also contain programming and design flaws, because programmers work with an error rate, like everybody else. We have statistics about the number of residual defects during software operation.
So testing is now recognized as being as important as programming, not to say more important. From a logical point of view, the program affirms something about the real-world information processing that it models (a kind of theorem), but the tests establish the validity of the affirmation (a kind of proof). For a mathematician, a theorem without a proof has no value, and for him the demonstration is more important and often more interesting than the theorem itself. A good example is the recent demonstration by A. Wiles of Fermat's last theorem (more than 200 pages of demonstration for a "one line" theorem). In physics we have the same thing with physical laws and the experiments which justify them (think of the LHC equipment at CERN in Geneva, built to show the existence of the Higgs boson). We will not enter in this paper into a philosophical discussion about the epistemological status of what we call the "proof" of the program. What is important here is to notice that beside the program text, there exists another text which establishes its validity, even if this validity is in essence statistical, as in reliability theory. With regard to testing, the programmer's activity may be described as in Figure 6.
Fig. 6 Program and test working together
The test program is written in some specialized language, such as a debugger language or a test language like TTCN or PSL, or, most of the time, in the same language as the program under test. Test data must be carefully selected, according to the test objective. In fact, the test program and the corresponding test data are the specification of an experiment whose successful execution, under the control of the programmer/tester, will establish the validity of a part of the program under test. Data displayed by the program under test during execution, or accepted by the test program, may be written in some universal data language such as ASN.1 or XML. To validate the whole program under test, several test-program "experiments" must be written. Different methods exist for designing "good" tests; model-based methods seem the most promising for integration (for more details see [A23, A24]). The overall test process can be depicted as in Figure 7.
Fig. 7 Overall test development process
It is clear that the test programs and the activities around them reflect the dynamic aspect and the behaviour of the program under test, with enough data to explore the combinatorics of the mapping between input and output states. Test data may be quite difficult to compute. For more details on testing, see [A17].
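As a toy illustration of the idea that a test program is the specification of an experiment (Figure 6), the following Python sketch (hypothetical, not from the paper) prepares and verifies an input state, drives a small program under test, and checks the output state against an expected result, i.e. the oracle.

```python
# A minimal sketch of a test program seen as the specification of an experiment:
# verify the input state, execute the program under test, verify the output state.

def transfer(accounts, src, dst, amount):
    """Program under test: hypothetical money transfer between two accounts."""
    accounts[src] -= amount
    accounts[dst] += amount
    return accounts

def test_transfer():
    accounts = {"A": 100, "B": 50}                 # test data
    assert accounts["A"] + accounts["B"] == 150    # verification of the input state
    result = transfer(accounts, "A", "B", 30)      # execution of the experiment
    assert result == {"A": 70, "B": 80}            # expected result (oracle)
    assert result["A"] + result["B"] == 150        # invariant on the output state

test_transfer()
print("experiment passed")
```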
5 Classification of Building Blocks

From the project manager's and architect's point of view, there are, roughly speaking, two types of building blocks (BB).
The first type of BB are those that we can call Functional BB, or FBB, because they code the semantic part of the software system. FBB correspond exactly to what D. Parnas called a module in his famous paper [A7]. FBB interact with each other to form services which have a meaning from the business process point of view.
The second type of BB, or Service BB (SBB), represent more abstract functions which are used as an enhancement of an existing programming language. Early languages such as FORTRAN and COBOL made extensive use of that type of language extension, with built-in functions like SORT or SIN, etc. Operating system functions or protocol functions are good examples of such extensions. APIs (Application Programming Interfaces) play the same role. Today Java programmers may use a large variety of SBB to enhance their productivity; an expert programmer knows hundreds of APIs. SBB are used as services or macro-instructions to code the FBB. The peculiarity of SBB is that they may be called from anywhere in the software system.
From the project manager's point of view, FBB and SBB have quite different statuses. By definition, SBB are abstract functions independent of each other. The development cost of an SBB is totally independent of its usage; it is truly context-free. The cost of using an SBB is the same everywhere, whether in a small program or in a large one. When an SBB is mature enough, its usage cost from a programmer's point of view is the same as that of any built-in function of the programming language: writing y=SIN(x) has the same programming cost as writing y=x+1 or y=SQRT(x). Thus complexity, from the project manager's point of view, comes only from the interactions between FBB, and not from the interactions between FBB and SBB. SBB are in fact a factor of simplicity, despite the numerous references to them in the FBB. Again, this is another good example of the lack of meaning of the cyclomatic number. Figure 8 explains the situation.
Fig. 8 Minimisation of edges
The cyclomatic complexity number CN is obtained with the well-known graph theory formula CN = e − n + 2p, where e is the number of edges, n is the number of nodes, and p is the number of non-connected sub-graphs. SBB_1 is a black box, like any language instruction or built-in function. The link between an invocation point in an FBB and the SBB is not a normal edge from the tester's point of view. Thus the number of edges to take into account to compute the cyclomatic complexity is not e but e − k, where k is the number of edges connecting the FBBs to the SBBs; this number may be computed using the caller×called matrix. Moreover, all the nodes and edges belonging to the SBBs must also be subtracted, because the corresponding complexity is hidden by the interfaces to the SBBs (again, see D. Parnas's modularity principles in [A7]). In the same way, the data shared between the FBBs have no influence on the SBBs as long as the programmers strictly respect the interface contracts (to be checked by formal reviews, under the responsibility of the integration team leader). Concerning the data, note that if m data items within the FBB have dependencies with n data items from the SBB, the cardinality of the set of data×data relations is Card = m × n (at least). If an interface has been defined, the cardinality becomes Card = m + n. Since the SBB is tested independently, the number of data paths to validate for the integration of the FBB with the service blocks SBB is just m. The simplification brought by modularity is well reflected by our complexity measure, because modularity means fewer tests to develop. Identification of SBB is the result of the abstraction process which is the essence of good design for reuse, and also the rigorous foundation for DSLs; see ref. [A26]. The effort to develop such a software system, using the Effort/KSL relation, is less than the effort to develop the same software system when the SBB have not been identified. The ratio of SBB code in a program is an indicator of good design. According to our definition, a software system using SBB is less complex than any other because fewer tests are needed. So the definition still works.
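As an illustration of the adjustment described above, here is a minimal Python sketch (the call graph is hypothetical, not from the paper) that computes the cyclomatic number once naively and once after removing the edges towards the SBBs and the SBB nodes themselves, starting from a caller×called relation.

```python
# Hypothetical caller-called relation between building blocks; FBBs are the
# functional blocks, SBBs the service blocks (library-like, tested separately).
fbbs = {"FBB_1", "FBB_2", "FBB_3"}
sbbs = {"SBB_1", "SBB_2"}
calls = [
    ("FBB_1", "FBB_2"), ("FBB_2", "FBB_3"), ("FBB_1", "FBB_3"),
    ("FBB_1", "SBB_1"), ("FBB_2", "SBB_1"), ("FBB_3", "SBB_2"),
]

def cyclomatic(edges, nodes, p=1):
    return len(edges) - len(nodes) + 2 * p

# Naive measure: every call is an edge, every block a node.
naive = cyclomatic(calls, fbbs | sbbs)

# Adjusted measure: the k edges into SBBs and the SBB nodes themselves are
# removed, since their complexity is hidden behind their interfaces.
fbb_calls = [(a, b) for (a, b) in calls if a in fbbs and b in fbbs]
adjusted = cyclomatic(fbb_calls, fbbs)

print(naive, adjusted)
```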
6 Costs of Interfaces

Once the design of the software system is completed, the situation is as follows:
• SBB have been identified either in the software system itself, or reused from an already existing system, or taken from a specialized library; in the SOA approach this is the role of WSDL.
• Interactions between FBB are specified by contracts, according to Figure 9.
FBB are connected to each other in order to form business services. The flow of control between FBB may be represented using UML activity diagrams such as flow graphs, or, much better, by finite state machines. However, the flow of control is just one way to define interfaces between FBB. Interfaces may also be defined by coupling between FBB through shared data and/or events, or by remote procedure call (RPC) invocations through the network. One of the most difficult aspects of good design is to document them all. If some of the couplings have been forgotten, it means that covert channels exist somewhere in the software system, documented nowhere; consequently, they will escape systematic integration tests. Such a situation occurs when two or more programmers negotiate an interface without informing the software architect. Prevention of such design flaws is the role of quality assurance (inspections, design reviews, …). For more details about interfaces see [A8, chapter 15] and the various kinds of matrix representation, such as the N2 matrix [see A21, A8 chapter 2]. Note that the most common N2 matrices are FBB×FBB (caller/called matrix), FBB×Data and FBB×Events, but that Data×Data (functional dependencies) and Events×Events (event management by abstraction level) must also be considered. More complex matrices of higher order are necessary to describe the relations between contracts, implementation and technologies. All these matrices have an impact on complexity and on the cost of integration. The cost of an interface, in a first approximation, may be depicted as in Figure 9. The cost is split into elementary costs such as:
• Cost of the interface specification, which needs explicit agreement between the programmers and the actors who will use it (the end-users for a man-machine interface).
• Cost of quality assurance during the programming of the building blocks (inspections, reviews), to ensure that the rules have been correctly applied in order to detect and remove defects early.
• Cost of test program development (in parallel with BB programming), to be run at integration time in order to validate the integration of the FBB.
More complex situations may be observed with man-machine interfaces (MMI) [see A15] or with system interoperability in the System of Systems (SoS) approach, now common in enterprises and administrations. The sum of all these costs corresponds to the integration effort. We can now refine our initial definition as follows:
Integration of the building blocks of system A is more complex than integration of the building blocks of system B if the cost of integration of A is greater than the cost of integration of B. The cost of integration depends on the number of FBB and on the number of relations between them to validate. The complexity of the system is measured by the cost of integration of its FBB. Now, the last problem is to enumerate and classify all the various interactions and relations that exist between the FBB, and the way to track them.
Fig. 9 Cost of interfaces
The specification of the interface and the proof of concept are part of the contract, as is the test of the interface which, in the end, will validate the contract implementation.
7 The Complexity of Integration

The complexity of integration depends on:
• the number of building blocks to integrate,
• the number and nature of the interfaces between blocks (synchronous or asynchronous flow of control, shared data, messaging, raising and catching of events, remote service invocations, return codes of service invocations),
• the size of an integration step, that is to say the number of BB to be integrated to form a new BB; we will call this important number the "SCALE" of the integration.
For the integration step, human ergonomics constraints recommend integrating with a SCALE of 7±2 BB at a time. With n interacting blocks, errors must be searched for in the set of parts (2^n − 1) formed by the interacting blocks (NB: that shows why "big bang" integration is just stupid!). Note that by definition the service blocks SBB may be validated independently of the context. Consequently, an efficient integration strategy is to start the integration process a) with the in-depth validation of the SBB, using a simulation test environment under the control of the integration team, and then b) to move to the nominal integration environment using the FBB. In this way, one has the assurance that, in case of errors with the SBB, the fault is in the calling context, not in the SBB itself.
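A back-of-the-envelope sketch (not from the paper) makes the argument against "big bang" integration concrete: with n blocks integrated at once, a failure may involve any of the 2^n − 1 non-empty subsets of interacting blocks.

```python
# When n blocks are integrated at once and a failure appears, the fault may lie
# in any non-empty subset of interacting blocks, i.e. 2**n - 1 candidate sets.

def candidate_fault_sets(n_blocks: int) -> int:
    return 2 ** n_blocks - 1

for n in (7, 9, 30, 60):
    print(n, candidate_fault_sets(n))
# With SCALE = 7±2 the search space stays in the hundreds; a "big bang"
# integration of 60 blocks faces about 1.15e18 combinations.
```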
The integration process is depicted in figure 10.
Fig. 10 Integration tree layout
At the beginning of the integration process (step 0) each BB is tested from the integration point of view. Tests are written according to the information contained in the contracts between the development teams and the integration team. They are mainly black-box tests, based on the interfaces known to the integration team. Robustness is an important aspect of pre-integration, in order to check the behaviour of the BB when wrong data are injected, and more generally when interfaces are violated. The number of wrong cases to consider for these tests may be very high, depending on the programming style. At each step, additional integration tests must be provided to validate the new BB constructions. The height of the tree varies like the logarithm of the width of the tree (the base of the logarithm is the scale, 7±2, if one respects human ergonomics and communication constraints). For example, a software system of 500 KSL constructed from 250 BB of 2 KSL each on average will have a height of 3. The number of additional test sets to provide will be 30+4+1=35. At each step there are fewer tests to provide, but the size of the test programs (and the effort to develop them) will be larger. In total, the number of test sets (test cases) for integration will be: 250 (step_0, initial) + 30 (step_1) + 4 (step_2) + 1 (the whole system) = 285.
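The following Python sketch (not from the paper) reproduces this integration-tree arithmetic; the 30+4+1 figures in the text are rounded, and the sketch, which groups ceil(N/scale) blocks per step with an assumed scale of 8, yields very similar numbers.

```python
import math

# Minimal sketch of the integration-tree arithmetic: at each step, groups of
# 'scale' building blocks are integrated into one larger block, up to the root.

def integration_levels(n_blocks: int, scale: int = 8):
    """Return the number of test sets needed at each step above step 0."""
    levels = []
    while n_blocks > 1:
        n_blocks = math.ceil(n_blocks / scale)
        levels.append(n_blocks)
    return levels

n_bb = 250                            # 500 KSL system built from 250 BB of ~2 KSL
levels = integration_levels(n_bb)     # -> [32, 4, 1] for scale = 8
height = len(levels)                  # -> 3, as in the text
total_test_sets = n_bb + sum(levels)  # -> 287, close to the 285 quoted above
print(levels, height, total_test_sets)
```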
The strict application of Parnas's modularity principles forbids the construction of degenerate integration trees such as the one in Figure 11.
Fig. 11 Degenerated integration tree
The effort at each step may be defined qualitatively as follows:
Step_0: effort for the N pre-integration tests of the BB ready to integrate;
Step_1: effort for the N_1 newly constructed BB (a percentage of the step_0 effort), with N_1 = N / scale;
Step_2: effort for the N_2 newly constructed BB (a percentage of the step_0 + step_1 effort), with N_2 = N_1 / scale;
and so on, until the root.
Again, we have a power law if and only if the tree is rigorously hierarchical. This approach could be associated with what are called "recursive graphs" in [A19]; the weight to be used is what we called "cost". Avoiding covert channels depends on the programming style, and also on the capability and experience of the programmers. Let's take a glimpse at the nominal structure of a BB.
Fig. 12 Nominal structure of a building block
To master the BB environment, the integration manager must know:
• The flow graph between BB, i.e. the activity diagram connecting the BB to each other, also called the caller/called matrix, or N2 matrix in system engineering [see A21].
• The data graph between BB and the referenced data (private and shared), also called the data dictionary, indicating the type of the reference (CRUD: Create, Retrieve, Update, Delete). Flow graph and data graph together allow data inconsistencies to be detected, for example a wrong R or U operation on a data item that has previously been D.
• The event graph, in order to know which BB RAISE and/or CATCH which events, and to filter the authorizations for SEND/RECEIVE network operations.
Private data and shared data must have the property of transactional memory (ACID properties) in order to avoid data inconsistencies when the BB execution is interrupted for any reason. On transactional memory, see [A8, A16]. Transactional memory means that the source code which modifies the memory must be programmed in the transaction programming style; on transactions, see ref. [A27]. Consequently, if the programmers are not aware of the architectural rules to be applied, and if the application of the rules is not rigorously verified (quality assurance), the tree will not be hierarchical. Covert channels will be unavoidable.
We know from distributed system architecture that n processes, each having m states, will when interleaving create a global state space with m^n elementary states to check [see A22]. The m states are clearly related to the existence of shared data. If Parnas's principle is rigorously applied, i.e. m = 0 shared states, the global state is 1, ∀n. This is supporting evidence for strictly controlling the size of the shared data and for using a transaction processing style. Notice that this holds for SBB, by definition of SBB. Let's now see how the composition process works.
Fig. 13 Building blocks composition
What was private data remains private, but what was shared may become private to the new composed BB. In doing so (this is an emerging architecture property), one gives the new BB the capacity to control its environment. Notice that the new BB is now in conformance with Parnas's principles. It is a way to master the combinatorial explosion. The usage of shared data generally has two main causes:
1. the first one is performance or safety (recovery from a failure), sometimes for good reasons;
2. the second one comes, almost always, from programmer incompetence, especially when programmers are not aware of distributed system pitfalls, are ignorant of transaction processing, or are tempted by event programming.
Let's have a look at the performance problem illustrated in Figure 14.
Case N°2 is in conformance with Parnas's principles, but several messages, or large messages, may be necessary to transfer the information context between BB_1 and BB_2, thus creating a potential saturation of the network which will eventually create performance problems. The decision of the architect depends on the perception of risks. In all cases, access to the shared and private data must be carefully controlled.
Fig. 14 Shared data and performance
If the architect decides that, for performance reasons, a context has to be defined, extreme caution must be impressed upon the programmers. Consider a set of BB sharing the same context CNTXT. The programmers of a BB_x who want to be sure that CNTXT is sound are encouraged to use a defensive programming style; consequently, before using the context they have to check it with on-line tests (semantic invariants, pre- and post-conditions, etc.). The size of the on-line tests depends on the size of the shared data and on the properties associated with the data, but also on the history of the successive modifications of CNTXT. If something wrong occurs, a diagnostic must be sent to the system administrator. This shows how carefully the design of shared data must be done. If the programming team is not aware of the dangers of shared data, a quality catastrophe is unavoidable.
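As a hypothetical illustration of the defensive style recommended above (not taken from the paper), the following Python sketch checks a shared context CNTXT against a simple semantic invariant before using it, and emits a diagnostic when the invariant is violated; the context layout and the invariant are invented for the example.

```python
import logging

# Hypothetical shared context used by several building blocks.
CNTXT = {"order_count": 3, "orders": ["o1", "o2", "o3"]}

def context_is_sound(ctx) -> bool:
    # Semantic invariant: the counter must match the actual number of orders.
    return ctx.get("order_count") == len(ctx.get("orders", []))

def bb_x_process(ctx):
    if not context_is_sound(ctx):               # pre-condition / on-line test
        logging.error("CNTXT invariant violated; diagnostic for the administrator")
        return None
    result = [o.upper() for o in ctx["orders"]]  # the BB's own processing
    assert context_is_sound(ctx)                 # post-condition: context untouched
    return result

print(bb_x_process(CNTXT))
```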
8 Interoperability

The case of interoperability is interesting because it is a classical pattern in information systems and systems of systems (SoS); see [A19, A25]. It shows how a "good" technical solution with a "software bus" may be entirely corrupted if the architecture of the data has not been done correctly.
The "theory" of interoperability is summarized in Figure 15.
Fig. 15 Interoperability, from business architecture to functional architecture
The flow graph of the functional architecture is simpler than that of the business architecture (linear against quadratic). It corresponds to the deployment of a software bus, with adaptors to translate the messages exchanged. If we look at the data belonging to each BB, and if we try to organize them according to a private/public classification, the situation is as follows: each BB has its own private data, but for the shared data we have to consider what is common to each pair (i.e. n(n − 1)/2 pairs), common to each triplet, and so on, until we find what is common to all. This time we are facing the set of parts, i.e. 2^n − 1, which is exponential. That shows why, if the deployment of the EAI is not accompanied by an in-depth reengineering of the data architecture for the whole system, and also by a reorganization of the applications, the situation will probably be worse. Consequently, to take advantage of the introduction of an EAI or ESB, a complete reengineering of the various BB is mandatory in order to analyse the data model of the software system. The architecture with the EAI looks sound to someone not aware of the pitfalls, but if we measure it by the tests needed to validate it, it still remains a very complex one. If we try to integrate systems which have redundancy, the shared data will automatically be large; integrating them will oblige us to decide who is responsible for what. All these examples show the extreme importance of the data reference graph and its impact on testing strategy as far as complexity is concerned. Data architecture must be done first. With events, complexity also evolves in the wrong way. In an on-line distributed architecture which offers facilities like PUBLISH/SUBSCRIBE, the natural tendency, if not rigorously controlled by the architect, will be to publish to all. Doing that, one creates invisible links between the publisher and many subscribers for which the message may have no meaning. The arrival of a message at a receiver which is not the right one may provoke inappropriate reactions with respect to the real situation. A "PUBLISH to all" command will normally generate appropriate tests in all BB that are potential receivers. Again, the size of the tests to provide for validating the PUBLISH command is a much better complexity indicator than the flow graph.
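The counts discussed in this section can be evaluated directly; the following small Python sketch (not from the paper) compares, for a hypothetical number of applications n, the quadratic number of point-to-point links, the linear number of bus adaptors, and the exponential number of possible shared-data groupings.

```python
from itertools import combinations

# For n applications on a software bus: point-to-point links grow quadratically,
# bus adaptors linearly, and the possible shared-data groupings as 2**n - 1.

def point_to_point_links(n): return n * (n - 1) // 2
def bus_adaptors(n):         return n
def data_sharing_groups(n):  return 2 ** n - 1   # non-empty subsets of BBs

n = 5  # illustrative number of applications BB_1 .. BB_5
print(point_to_point_links(n), bus_adaptors(n), data_sharing_groups(n))  # 10 5 31

# The pairs alone (what is common to each pair of BBs):
bbs = [f"BB_{i}" for i in range(1, n + 1)]
print(list(combinations(bbs, 2)))
```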
9 Temporary Conclusion

Measuring, or trying to measure, the complexity of a software system by the size of the tests needed to validate it has several advantages.
• It is a real measure. Size of tests versus quality is tractable: if well managed, more tests mean better quality. More complexity means more tests: this is common sense that everybody can understand. Each time new features are required by the customer, the requirements must first be translated in terms of "how many tests" are to be added and re-executed to validate the new features, from the customer's point of view, in order to warrant the system SLA.
• It focuses the attention of the software architect on what will facilitate the testability of the software system. It will create a virtuous circle around the "design to test" concept, as our hardware colleagues have done since the very beginning of the computer age.
• It focuses the attention of the software architect on dangerous software constructions which have the appearance of simplicity, i.e. a few lines of code, as in event programming, but which will create invisible connections and coupling between unexpected chunks of code; consequently, new test programs, costly to write, will be necessary to warrant quality.
• It gives a rigorous meaning to what the agile and eXtreme programming communities call Test-Driven Development when they claim: "Do tests before the programs". A program is similar to a theorem, but as in mathematics, the proof/test is the only valuable thing. What would be the status of a program/theorem without a proof/test? Just magic, not engineering.
• It is a way to make the risk visible to the customers with simple reasoning like this: "your requirements will add complexity; complexity means more tests; tests are costly and difficult to design. If you don't accept to pay for more tests, you will have to accept more risks", and so on.
• It shows why integration is an intrinsic part of software architecture and must not be separated from it: testable architecture, or design to test, are key aspects of software engineering.
To conclude with a comparison with other fields of engineering such as aeronautics: nobody would accept to fly on an aircraft whose behaviour has not been proven correct and safe with all available means, human and technical; in civil engineering, the architect's responsibility is engaged for ten years. With the software engineering state of the art, a lot of techniques are now available to improve quality and warrant SLAs, as stated by maturity models like CMMI, but too often general management does not yet accept to pay for them. The risk is then transferred to the customer and to the public, in total contradiction with what is claimed by quality management.
References

[A1] Printz, J.: Productivité des programmeurs, Hermès (2001)
[A2] Park, R.: Software size measurement: a framework for counting source statements, Technical report, CMU/SEI-92-TR-20
[A3] McCabe, T.J.: A complexity measure. IEEE TSE SE-2(4) (December 1976)
[A4] Berge, C.: Théorie des graphes et ses applications, Dunod (1963)
[A5] Popper, K.: The logic of scientific discovery (En français chez Payot) (1968)
[A6] Boehm, B.: Software engineering economics, 1981, and Software cost estimation with COCOMO II (2000)
[A7] Parnas, D.L.: On the criteria to be used in decomposing systems into modules. Communications of the ACM 15(12) (1972)
[A8] Printz, J.: Architecture logicielle, Dunod (2009)
[A9] Zipf, G.K.: The principle of least effort. Hafner Publishing Co. – Mandelbrot, B.: Les objets fractals, Flammarion (1965)
[A10] Schooman, M.: Software engineering. McGraw Hill, New York (1988)
[A11] Arndt, C.: Information measures – Information and its description in science and engineering. Springer, Heidelberg (2004)
[A12] Chaitin, G.: Information randomness and incompleteness. World Scientific, Singapore (1987); see also Delahaye, J.-P.: Information, complexité et hasard, Hermès, 1994 – Handbook of theoretical computer science, ch. 4, vol. A. MIT Press, Cambridge
[A13] Beck, K.: eXtreme programming explained, 2000; and also: Test driven development (2003)
[A14] NATO Science Committee, Software engineering, Garmisch, 1968 and Rome (1969)
[A15] Horrocks, I.: Constructing the user interface with statecharts. Addison-Wesley, Reading (1999)
[A16] Larus, J., Kozyrakis, C.: Transactional memory. Communications of the ACM 51(7), 07/08; Shavit, N.: Transactions are tomorrow's loads and stores. Communications of the ACM 51(8), 08/08; Harris, T.: Composable memory transaction. Communications of the ACM 51(8), 08/08; Cascaval, C.: Software transactional memory: why is it only a research toy? Communications of the ACM 51(11), 11/08
[A17] Pradat-Peyre, J.-F., Printz, J.: Pratique des tests logiciel, Dunod (2009)
[A18] Printz, J.: Ecosystème des projets informatiques, Hermès (2006)
[A19] Caseau, Y., Krob, D., Peyronnet, S.: Complexité des systèmes d'information: une famille de mesure de la complexité scalaire d'un schéma d'architecture, pôle System@tic
[A20] Brooks, F.: Mythical man-month. Addison-Wesley, Reading (1975) (new edition in 2000)
[A21] INCOSE System Engineering Handbook, version 3.2 (January 2010)
[A22] Clarke, E., et al.: Model checking. MIT Press, Cambridge (1999)
[A23] Utting, M., Legeard, B.: Practical model-based testing. Morgan Kaufmann, San Francisco (2007)
[A24] Legeard, B., et al.: Industrialiser le test fonctionnel, Dunod (2009)
[A25] Caseau, Y.: Urbanisation, SOA et BPM, Dunod (2008)
[A26] Kelly, S., Tolvanen, J.: Domain-specific modeling. Wiley, Chichester (2008)
[A27] Gray, J., Reuter, A.: Transaction processing: concepts and techniques. Morgan Kaufmann, San Francisco (1993)
Appendix

To go further we have to define a programme of work, qualitatively and quantitatively. Qualitatively, we have to check with the academic and industrial community that measuring complexity by the length of tests is pertinent. Quantitatively, it is not so easy, because industrial statistical data are needed. A first direction could be to capture test effort, for example integration test effort with regard to the complexity of the integration tree based on the system design. It should be easy with software organizations claiming to use architectural patterns like MVC, EAI/ESB or SOA. As recommended by the CMMI, effective quality may be measured by the number of residual defects discovered during a given unit of time of system use (as for reliability measurement). Measuring quality by residual defects was used intensively by the NASA SEL many years ago and is still pertinent. Counting effort is a basic measurement for project management, fully mastered at level 2 of the CMMI scale. Thus working with organizations at level 2 could provide pertinent data, but level 3 organizations would be better. A second direction could be to define a unit for test length measurement, in the same way that units of programming effort have been defined for cost estimation models. For example, COCOMO uses a unit which is the effort to develop a chunk of 1,000 lines of code (one instruction per line, without comments; see ref. [A2]). This is a statistical measure extracted from project data analysis; the same holds for Function Point measurement, based on data transformations. Test length, as previously seen, means:
• Length of the test programs. Length is easier to define if the tests are expressed in a test language or in a classical programming language, in particular if the programs under test have been instrumented; otherwise, it could be difficult.
• Length of the test data used by the test programs. Such data are easy to express in a data language like ASN.1 or XML.
• Length of the programs developed to compute pertinent data.
Notice that organizations claiming to achieve CMMI level 4 or 5 should have such metrics available on demand for evaluation purposes. We are currently in the process of creating a working group on "Integration and complexity" within CESAMES (see the web site at http://www.cesames.net), an association chaired by Professor Daniel Krob, to validate this new approach. Thanks to those who have made comments and helped to improve the third revision of this paper.
Flexibility and Its Relation to Complexity and Architecture

Joel Moses
Abstract. We discuss the system property of flexibility and its relation to complexity. This relationship is affected by the generic architectures used in a system. We discuss the generic hierarchies based on tree structures and layers. We also discuss networks as a generic architecture. Since layered systems are less well understood than the other two generic architectures, we give some examples of layered systems and the flexibility inherent in them. In particular, we discuss the organization of the US health care system and the organization of its higher education system. Lateral alignment in hierarchical systems, which is related to layering, is discussed in the context of the US military.
Joel Moses
Engineering Systems Division and Electrical Engineering and Computer Science Department, MIT, Cambridge, Massachusetts, USA
e-mail: [email protected]

1 Flexibility

Flexibility is a term that has many meanings, even if we constrain the context to engineering and computer science. A key notion underlying flexibility in engineering and computer science is that in a flexible system it is relatively easy to make certain classes of changes. Flexibility is used here in a sense that is different from adaptability. Adaptability usually includes the ability to make continuous changes, as in a thermostat. We assume that flexibility involves discrete changes, as in making a choice among a number of alternatives or adding a new alternative. What do we mean by "relatively easy to make … changes?" We mean that the cost of making the changes is relatively low by relying on machinery and constraints built into a flexible system. The comparison is usually to making the changes from scratch. There is no magic here. That is, if one expects a person to fly due simply to his or her flexibility, then one is misusing the term flexibility or else one is misusing the notion of a person. A flexible system can handle certain classes of changes easily, but not all classes of changes can be easily made in a flexible system.
Making a change involves certain costs. These costs include time and money. In addition to the cost of making a change by switching among built-in alternatives, there is also the cost of operation of the system with the changed alternatives. It is often assumed that flexibility will result in reduced operating performance. There are, however, cases where there is no reduction in performance after design changes that increase flexibility have been made. The IBM 801 computer was the first Reduced Instruction Set Computer (RISC). It was specifically designed for compilers of high level languages. Thus while earlier, relatively complex computer architectures could be exploited by programmers to increase performance of the resulting code, this was no longer true of RISC microprocessors (Cocke and Marstein 2000). Flexibility is usually used to change the function of a system, namely what the system does. Thus using a digital tuner to change radio stations changes the radio's function as a result of changing among the alternatives presented by the tuner. Changing gears in a bicycle changes the bicycle's performance as a result of flexibility, although we believe this is a less common use of flexibility. Flexibility is related to robustness or resilience. When there is a failure in a system, a well-designed flexible system would allow one to take an alternative path and restore some or nearly all of the prior function and performance. Sheffi discusses cases where resilience is usually a result of flexibility (Sheffi 2005). Due to the many relationships of flexibility to system concepts or 'ilities,' we consider flexibility to be the queen of the ilities. Thus we have indicated several issues related to flexibility:
1) Time when the change is made: during initial design (D), during redesign of an existing system (R), or during operation of the system (O)
2) Impact of changes: on the function (F) or performance (P).
Below are a few examples of flexibility related to these issues. We indicate the flexibility-oriented issues close to each example.
1) Using a digital tuner in a radio to switch stations – O, F
2) Changing gears in a car or bicycle – O, P
3) Adding connections in infrastructures in order to increase flexibility and robustness – R, P
4) Switching roads to avoid congestion or accidents – O, P
5) Adding rules in a spread sheet program – O, R, F
6) Creating a new layer of software on top of or in between existing layers – D, R, F, possibly P
Real option theory has been used to understand the value of flexibility (de Neufville et al 2006). This application of real options is usually used in design, often in the redesign of an existing system. In software or infrastructures flexibility can also be used during the operation of an existing system. Real options are mostly used in cases where there are relatively few alternatives. In contrast, software systems can have a very much larger number of alternatives, even if we ignore the possibility of loops or feedback.
2 Generic Architectures, Flexibility and Complexity

Modifying a system to increase its flexibility has an impact on the system’s complexity (Moses 2004). This is a cost of flexibility that is in addition to the usual ones, such as time and money. It is rare that one is able to reduce the complexity of a system by making changes in it without reducing its function or performance. On the other hand, there are ways of designing a system so that certain classes of changes in it have relatively little impact on the overall complexity. This brings us to the relationship between increased flexibility and increased complexity. We believe that a key to such relationships is the generic architectures used in the design of the system. The generic architectures that we consider here are:

A) Tree structured hierarchies
B) Layered hierarchies
C) Networks (other than the hierarchies in A and B)

Each of the generic architectures has advantages and disadvantages. Tree structured systems are relatively inflexible. That is, increasing the flexibility of a tree structured system will tend to increase complexity a great deal, increase the size of the system a great deal or violate the structural rules of tree structured systems. Tree structured systems are relatively centralized, which can at times be a significant advantage. Layered structures can be far more flexible than tree structures. They can also be centralized. One disadvantage of layered systems is that many systems cannot be designed as layered structures or even nearly layered structures. Networks that are not hierarchical can be extremely flexible. They may, however, be difficult to control. The weather and the national economy can be modeled via large scale grid networks. Each of these systems can be difficult to control or possess behavior that is difficult to predict at times.

Before we discuss in greater detail the various generic architectures and how they relate to flexibility, let us consider an approach to measuring flexibility. We define the measure of flexibility in a system to be the number of alternative paths in its implementation which start with a root node and end in a leaf node, but count cycles just once. We are aware that this definition is related to Shannon’s measure of information in the structure of a system when one considers the logarithm of the number of paths. If node A in a system has m alternative connections to other nodes, and node B has n alternatives, and there are no cycles, then the system has at least mn alternatives, each of which is a path that begins with node A. Counting the number of paths in a system is a way of measuring the number of its overall alternatives, and hence its flexibility. Cycles are important constructs of alternatives in a system, but can lead to an infinite number of paths. Thus we simplify our analysis by ignoring cycles.

We claim that tree structured systems are relatively inflexible given our flexibility measure. This claim is based on the following argument: Each node in a pure tree structured system, other than the top node, has exactly one parent. Suppose you wanted to modify the system by adding a new alternative subsystem
at some node. If the alternative subsystem already exists in the system, then making a connection to it will violate the purity of the tree structure and unduly increase the complexity of the resulting system [Figure 1]. If the alternative already exists as a subsystem, but you avoid the issues just mentioned by copying the subsystem, then you unduly increase the size of the system and increase its overall complexity. The other generic architectures can permit new alternatives without as much of an increase in complexity or size.

In a pure tree structure, without non-standard interconnections, the number of paths is equal to the number of bottom nodes. If the total number of nodes is n, the flexibility of a pure tree structure is O(n) or less. This may appear to be a large figure for large values of n, but in fact it is much lower than the comparable figures for the alternative generic architectures, as we shall see.
Fig. 1 A non-standard tree structure with some increase in flexibility but with an increase in complexity
Consider a grid network where each node can be connected to, say, two or more neighboring nodes. The number of paths, and hence the flexibility, in such a network can grow exponentially as a function of the number of nodes without violating the rules of the network structure. Since there are often few limits on the allowable interconnections in a network, the complexity can also grow quite a bit, and control over the system’s behavior can be a serious issue.

Layered systems have a flexibility measure which is intermediate between trees and grid networks. Figure 2 indicates that in a layered system a node can have more than one parent node and also have lateral connections within a layer. The flexibility measure of a layered system grows geometrically with the number of layers. All nodes at a given layer are assumed to be at the same level of abstraction, a notion that is absent in the other two generic architectures. Layered systems are hierarchical, but need not have a single node in the top layer. A layered system with d layers, each of k nodes, can have O(k^(d-1)) paths from nodes at the top layer to nodes at the bottom layer. This figure would be even higher if one permitted lateral connections, as one would in human organizations.
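To make the path-counting measure concrete, the following minimal sketch (ours, not the author’s; the small graphs are illustrative and cycle-free, matching the simplification made above) counts root-to-leaf paths for a pure tree and for a fully connected layered structure.

# Minimal sketch of the path-counting flexibility measure described above.
# The graphs below are illustrative examples (not from the paper); cycles are
# assumed absent, matching the simplification made in the text.

def count_paths(graph, node, memo=None):
    """Count paths from `node` to any leaf (node with no successors)."""
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    successors = graph.get(node, [])
    if not successors:
        memo[node] = 1  # a leaf terminates exactly one path
    else:
        memo[node] = sum(count_paths(graph, s, memo) for s in successors)
    return memo[node]

# A pure tree with 7 nodes: flexibility equals the number of leaves (4).
tree = {"r": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}

# A layered structure with 3 layers of 2 nodes each, every node connected
# to both nodes of the layer below: k^(d-1) = 2^2 = 4 paths per top node.
layered = {"t1": ["m1", "m2"], "t2": ["m1", "m2"],
           "m1": ["b1", "b2"], "m2": ["b1", "b2"]}

print(count_paths(tree, "r"))       # 4
print(count_paths(layered, "t1"))   # 4, and 8 in total over both top nodes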
When new connections need to be made in a layered system, they can often be made to nodes at the layer above or below or to a node in the same layer. These connections will not violate the rules underlying a layered structure, and thus will not unduly increase the system’s complexity. Layered systems are common in large scale hardware/software systems. For example, a personal computer will have a layer for the microprocessor, several layers for the operating system including a user interface layer, possible layers for a database system, and additional layers for application software.

Layered systems are examples of levels of abstraction, a key concept in pure mathematics, especially abstract algebra. The hierarchy and the abstraction levels usually make layered systems easier to understand and control than systems having similar size and complexity that use the other generic architectures.
Fig. 2 A layered structure with three layers and eleven nodes. The interconnection pattern is an example of many possible interconnection patterns between nodes in human organizations
As noted above, layered systems play an important role in pure mathematics, especially abstract algebra. Consider the set of integers Z, the set of polynomials in x with integer coefficients, Pz(x), and the set of polynomials in y whose coefficients are polynomials in x with integer coefficients, Pz(x,y). These three sets form a tower of abstractions or layers (Figure 3). Polynomials in the top layer can be connected to several polynomials in x in the middle layer which act as their coefficients. Similarly, polynomials in x in the middle layer can be connected to several integers in the bottom layer.

Layers in mathematics have similarities to and differences from layers in engineering systems or human organizations. Elements in mathematical sets that act as layers are at the same level of abstraction. This is informally true in systems and organizations that are structured as layers. Mathematical sets can have an
infinite number of members, which is clearly not true of systems or organizations. Human layered organizations tend to emphasize horizontal links between members of the same layer. This is less common in layered systems and is usually not true in algebra.
Fig. 3 A tower of three layers – such as integers at the bottom, polynomials in x with integer coefficients in middle layer and polynomials in y at the top with coefficients that are polynomials in middle layer
Consider the polynomial xy^2 + 3y − x^3 + 5. This polynomial can be written as

((1)x)y^2 + ((3))y + ((−1)x^3 + (5))

Here the level of parentheses indicates the layer in which the term is located.

Layered human organizations tend to have three layers. Some layered organizations have 5, 7 or even 9 layers. Due to human processing limitations most large scale layered organizations are actually hybrid ones, often using tree structures globally and layers or teams locally. Figure 4 presents an example of such a hybrid structure. Sometimes members of a team interact with members of other teams at a comparable level of abstraction. This has some of the interconnection structure of pure layered organizations. Such lateral interconnections increase the flexibility of the organization. Universities can have such a hybrid structure with provosts/rectors at the top, deans in the level below them, department chairs at a level below deans and the rest of the faculty just below the department heads. Successful administrators should get their staff at the next lower layer to work relatively closely together, which leads to horizontal interconnections in the organization. This hybrid structure can have advantages in promoting interdisciplinary research and education. Some companies can have many more employees than large universities, and a hybrid structure for such firms, were it to exist, will tend to have more levels than the small number of layers we have been discussing.
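As an illustration of this layered representation (a sketch of our own, not taken from the paper), a polynomial in y over the polynomials in x can be stored as a list of coefficient lists, one layer nested inside the other:

# Illustrative sketch (not from the paper): representing the tower of layers
# Z -> polynomials in x over Z -> polynomials in y whose coefficients are
# polynomials in x.  A polynomial in x is a list of integers indexed by the
# power of x; a polynomial in y is a list of such lists indexed by the power of y.

# xy^2 + 3y - x^3 + 5, written layer by layer:
#   coefficient of y^2 : x          -> [0, 1]
#   coefficient of y^1 : 3          -> [3]
#   coefficient of y^0 : -x^3 + 5   -> [5, 0, 0, -1]
p = [[5, 0, 0, -1], [3], [0, 1]]   # index = power of y

def eval_in_x(coeffs, x):
    """Evaluate a middle-layer polynomial (list of integers) at x."""
    return sum(c * x**i for i, c in enumerate(coeffs))

def eval_in_xy(poly, x, y):
    """Evaluate a top-layer polynomial by first collapsing each coefficient."""
    return sum(eval_in_x(c, x) * y**j for j, c in enumerate(poly))

# Check against the original expression at x = 2, y = 3:
assert eval_in_xy(p, 2, 3) == 2 * 3**2 + 3 * 3 - 2**3 + 5   # 18 + 9 - 8 + 5 = 24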
Fig. 4 A hybrid organization, globally a tree structure and locally teams
3 Layered Human Organizations and Industries – Health Care

Layered human organizations and industries differ from some technical ones, such as the Internet. The Internet is based on a layered structure where the intelligence in the middle layer (TCP/IP) is intentionally limited (Clark et al 1984). Human organizations normally have no such limitations. Here we present several examples of layered human organizations and industries.

The current overall architecture of health care delivery in the US can be described as a layered hierarchy having two layers. The bottom layer is that of primary care practices. The top layer is that of hospitals and specialists. There are other health care organizations, such as nursing homes, but we view these as separate from the main parts of the system. In many other countries the overall health care system is best described as a hierarchy with three layers. This includes a bottom layer which is composed of community clinics, largely run by nurses. Individuals expect to go to the local clinic for many of their symptoms and checkups, and expect to be referred to a physician or the emergency department of a local hospital if the situation so warrants. Nurses in these community clinics are empowered to deal with a sizable percentage of cases that are presented to the health care system. Some limitations on the ability of nurses to treat patients are clearly useful and needed, but careful empowerment of them can improve the overall performance of the health care system in terms of both cost and quality of outcomes.

There are advantages and disadvantages to having a bottom layer of community clinics. Doctor’s offices are usually closed at night, whereas the community clinics can be open at late hours. Primary care doctors in the US have increasingly tended to avoid visiting patients in their homes. Nurses could do that and check on elderly patients and those who are chronically ill. Many of the presenting symptoms to a primary care physician tend to be fairly straightforward and could be safely handled by a nurse practitioner, but the income to the physician of such visits is important. Primary care physicians in the US tend to be paid less than many specialists, and this adds to their frustration with the job. The nature of the
job and the low compensation relative to that of many specialists are among the reasons it is difficult to attract young doctors to careers in primary care in the US. From a national perspective, a heavy reliance in the US on physicians as the bottom layer in health care leads to an overly expensive system.

A layered health care system with three layers will be more flexible than a layered one with just two layers in several ways. Clinics can be geographically located in many different places, each connected to various primary care practices, thus providing flexibility. Furthermore, clinics should have shorter waiting times than emergency rooms in hospitals. Advances in electronic health records and their increased use will help make the transitions from clinics to other parts of the health care system relatively smooth.

Disadvantages of having nurse practitioner-based clinics as the first layer of the health care system include concern over patient safety. This can be alleviated by restricting the cases that are handled in such clinics, as well as having physician oversight of the clinics. Increasing reliance on nurse practitioners will require an increase in the number of nurses and will place great strain on the nursing education system. This situation is recognized in the health care law that was recently passed in the US. On the other hand, the number of new primary care physicians is decreasing in the US for the reasons noted above, and a significant increase in the number of clinics is consistent with that reduction.
4 Higher Education as a Layered System

The public sector plays an important role in education as well as health care. In these fields individuals usually want to have the highest level of service, service that will often be heavily paid for by the government. The government, however, must pay attention to overall costs. Hence it will tend to emphasize obtaining services at the lowest applicable levels.

A layered approach to education in the US shows up at the college level, in particular in California. Community colleges are the lowest layer of the system. The California State institutions form the middle layer, and the University of California campuses form the upper layer. The major private universities, such as Stanford and Caltech, are at the upper layer as well. In the upper layer professors show their mastery of the educational material by doing research in that area or one closely related to it. The hiring process of these faculty members also assures a level of mastery. The admissions process determines to a large degree into which layer a student enters. The cost to the state on a per student basis usually becomes higher as one moves up the layers. Parents can spend additional money to have their children go to private institutions, which presumably offer a premium level of education or at least provide an increased cachet at any given layer.

Ideally, undergraduate education at the lowest layer, namely community colleges, should be of high quality, although possibly not at the breadth of liberal arts schools nor offered by research faculty members. It is not clear that this level of quality has been or can be achieved in the US at this time.
Flexibility shows up in this educational system when students educated at the local community colleges are later admitted as undergraduates to institutions at the two higher layers. Much additional movement within or between layers occurs when students enter graduate schools. Community colleges have in many cases been able to offer new professionally oriented programs with relative ease. Increased reliance on community colleges will likely have a beneficial effect on the US economy.
5 Hybrid Organizations – Lateral Alignment

In a tree structured organization there is a tendency for different parts of the organization to compete with each other for resources. One way of combating such a tendency is to create teams of people at roughly the same rank in the hierarchy that laterally cross the tree structure and attempt to align the goals of the different parts. Such lateral teams give the overall organization aspects of layering.

One example of such lateral alignment is the organization of the US military in Iraq in 2003. The US Air Force and the US Army have not always worked well with each other. The air force would ideally like to do strategic bombing well ahead of army positions, thus getting much credit for their operations. The army instead would like the air force to provide close air support to army units. Dickmann shows the development of increased lateral alignments between air force and army personnel at strategic, operational and tactical levels between the first Gulf war in 1991 and the second one in 2003 (Dickmann 2009). The increased flexibility of this change in organization allowed the US to require fewer than half as many soldiers in 2003 as before. Net centric technologies clearly enabled lateral alignment, among other aspects of the second Iraq war, but the organizational changes were, we believe, necessary as well. Unfortunately, when the major fighting finished in 2003 the US did not have a clear idea how to proceed with some form of nation building.
6 Summary

Flexibility is a key property of systems. It is related to complexity, robustness, and resilience. It is also related to the type of generic architecture used in the organization or system. We discuss flexibility in the context of three major generic architectures: tree structures, layered structures and networks. Tree structures are relatively inflexible using our measure of flexibility. Networks can be extremely flexible, but may be hard to control. Layered structures have properties that are intermediate between the other two generic architecture types. We present examples of organizations that rely on layering. In particular we discuss the US health care system and higher education systems. We also discuss the evolution of lateral alignment in the US military, an organizational approach related to layering.
References

Cocke, J., Markstein, V.: The evolution of RISC technology at IBM. IBM Journal of Research and Development 44, 48 (2000)
Sheffi, Y.: The Resilient Enterprise: Overcoming Vulnerability for Competitive Advantage. MIT Press, Cambridge (2005)
de Neufville, R., Scholtes, S., Wang, T.: Real Options by Spreadsheet: Parking Garage Case Example. J. Infrastructure Systems 12, 107–111 (2006)
Moses, J.: Foundational Issues in Engineering Systems: A Framing Paper. In: ESD Symposium (2004), http://esd.mit.edu/symposium/pdfs/monograph/framing.pdf
Clark, D.D., Reed, D., Saltzer, J.H.: End-To-End Arguments in System Design. ACM Transactions on Computer Systems 2, 277–288 (1984)
Dickmann, J.: Operational Flexibility in Complex Enterprises: Case Studies from Recent Military Operations. PhD thesis, MIT (2009)
Formalization of an Integrated System/Project Design Framework: First Models and Processes J. Abeille, T. Coudert, E. Vareilles, L. Geneste, M. Aldanondo, and T. Roux
Abstract. This paper proposes first integrated models dealing with the management of the coupling between the system design environment and the project planning one. A benchmark conducted with fifteen companies belonging to the world competitiveness cluster Aerospace Valley has highlighted a lack of models, processes and tools for aiding the interactions between the two environments. An integrated model taking into account design and planning requirements as well as the management of coupling is proposed in compliance with existing project and design standards. A process of coupling, carrying out design and project management in the case of innovative design, is presented. It is based on the generic formalization of the interactions and the propagation of decisions taken within one environment to the other one.
1 Introduction

This article presents the first integrated models about the coupling of the system design environment and the project planning environment. System design and project planning are two well-defined processes and many studies have been done on these topics, leading to adapted and complete computer aided design and computer aided planning methods and tools. However, few studies are interested in the interaction between these two processes or in integrated tools. Nevertheless, a decision made in the system design environment can have important effects on the project planning environment (e.g., choosing a technology can require a longer delay, or particular resources that are not available). Reciprocally, a decision made in project planning can have a strong influence on system design (e.g., a short delay or a lack of resources may not allow adapting a component to a specific function).

J. Abeille · T. Roux
Pulsar Innovation SARL – 31000 Toulouse France
J. Abeille · T. Coudert · L. Geneste
Université de Toulouse – ENIT – Laboratoire Génie Production – 65016 Tarbes France
J. Abeille · E. Vareilles · M. Aldanondo
Université de Toulouse – Mines Albi – Centre Génie Industriel – 81013 Albi France
Therefore, the coupling between these two processes concerns the ability to propagate decisions made within one environment into the other one. The formalization of this problem has been done by interviewing fifteen companies of the world competitiveness cluster Aerospace Valley (Abeille et al. 2009). This task is a part of the ATLAS project, which involves five academic institutions and two companies, funded by the French Government (ANR project). The most important results of this benchmark can be summarized as follows: all the interviewed companies are confronted with this coupling problem but they have not implemented specific tools to support this process. Most of the time, the coupling is performed by means of non-formalized human interactions, even if some companies use procedures or make decisions based on human experience. However, only 18% of companies use software or collaborative tools. The majority of companies (50%) make integrated decisions during meetings involving the different stakeholders. Standards or reference scenarios are also used by the most advanced of them. This concerns the use of generic models for designing different categories of systems or the reuse of capitalized design solutions stored in databases. Furthermore, the complexity of systems and projects is increasing. Indeed, in a distributed multi-national context, the design of a system is often realized on several sites with several partners. So, the use of adapted and integrated tools to manage these complex design projects is becoming a requirement for them. These tools have to be adapted to multi-responsibility projects.

In order to state the global context of our study, it is considered that a project (associated with a system to design) is under the responsibility of a project manager, i.e. the highest person in the hierarchy, as shown in Fig. 1. The project manager interacts with (i) a design manager who works within a system design environment and (ii) a planning manager who works within a project planning environment. The difficulty of designing the system as well as the complexity of the associated project leads to decomposing them linearly and hierarchically. In such a case, systems can be decomposed into sub-systems, leading to the decomposition of the associated development projects into associated (more exactly coupled) sub-projects. The corollary is that complex projects can be decomposed into sub-projects, leading to the decomposition of the coupled system in the same manner. Therefore, at each level of the hierarchy, the interactions illustrated in Fig. 1 can be observed. In this context, the project manager, at his level, can be seen as a “coupling manager” who gives orientations, makes decisions and defines decision frames for the two other parties, taking into account integrated information on dashboards.
Fig. 1 The integrated and coupled design/project environment
The objective of this article is to present, on the one hand, the integrated model that supports coupling between the system design and project planning environments and, on the other hand, to formalize an adapted process dedicated to this coupling. In the second section, the background of this study is presented with respect to existing methods and standards. In the third section, the proposed integrated model able to support coupling is described and, in the fourth section, the generic process for system design is illustrated by considering an innovative design context.
2 Background

2.1 Definition of Design and Planning Processes

The “design process” proposed in this article is structured in four parts: (i) the definition and/or the specification of the requirements, (ii) the identification of the technical solutions which can fulfill these requirements, (iii) the association of requirements and solutions and, (iv) according to the complexity, the decomposition of the design process down to a certain level of abstraction (as shown in the left part of Fig. 2). According to the level of detail of these activities, the proposed design process is compliant with the typology of (Pahl and Beitz 1996), in a "Conceptual / Embodiment / Detailed” design context. The recursive decomposition of the design process complies with a top-down cycle that "zigzags" between requirements and solutions in compliance with the recommendations of the "axiomatic design" proposed by (Suh 2001). The result of the design process is then considered as a set of associations (i.e. specified requirements coupled to technological solutions) structured in a hierarchical way. Indeed, specifications of requirements lead to some technological solutions and, when a system is decomposed into many subsystems, a technological solution for a system leads to the specifications of requirements for its sub-systems.

Considering the project planning side, a project is considered as a set of activities or tasks performed by resources (technological, human). The different tasks run from the first stages of the design process (specification of the requirements at the beginning of preliminary design) to the last tasks of realization of the product. These last ones differ depending on whether the product is a unitary one (tasks of component supply, production and delivery to the customer for example) or made in series (tasks of production of the first validated series for example).
Fig. 2 Top-down approach and axiomatic design
A planning process is composed of the following activities: (i) the definition of the tasks of the design project, (ii) the estimation of durations and resource needs, (iii) the organization of these tasks and their monitoring, and (iv) the recursive decomposition of some tasks into sub-tasks down to a certain level of abstraction (as shown in the right part of Fig. 2). The proposed planning process (see section 3.2) is directly inspired by the “Project Time Management” process defined by the "Project Management Institute" (PMI 2004), which gathers six activities: identification, sequencing, estimation of resources and durations, organization or elaboration of the schedule, followed by the monitoring or updating of the schedule. It is also based on systems engineering standards and more particularly on the EIA-632 standard (AFIS 2005). This standard identifies five major processes and a top-down approach that refines the design in a recursive way using "building blocks" corresponding to the association of requirements and solutions. The result of the planning process can be considered as a set of associations (tasks, resources) structured in a hierarchical way. Indeed, a task can be decomposed into many sub-tasks.
2.2 Interaction between Design and Planning Processes

Axiomatic design and the previous standards allow the identification of four interacting domains: (i) the requirements or specifications, (ii) the solutions, (iii) the tasks or activities and, (iv) the resources. The first two domains are relative to the system design process and the last two domains to the project planning process. Although there are few studies about this coupling problem, one can mention:

(i) the studies initiated at M.I.T. (Eppinger et al. 1991) about the use of methods and techniques from product design in order to facilitate project design. They are at the origin of scientific developments around DSM (Design Structure Matrix), such as those of (Lindemann 2007). The interactions between the four identified domains are defined;

(ii) in the same way, axiomatic design, proposed by (Suh 2001), identifies various domains (Customer Needs, Functional Requirements, Design Parameters and Process Variables) and makes them interact. An example of implementation is presented in (Goncalves-Coelho 2004). The interactions between domains are clearly defined: design towards planning but also planning towards design;

(iii) another approach, introduced by (Gero 1990), proposes models based on three domains: Function, Behavior and Structure (FBS). The aim of this study is to take into account the product behavior (expected and effective) and to inventory in a formal way eight sub-processes of design. However, tools for interactions between processes are not considered explicitly;

(iv) a study very close to the problem addressed here has been proposed by (Stewart and Tate 2000), who were interested in the coupling of axiomatic design with project planning in the case of software engineering. Their idea was to associate design variables with tasks of the development process. This approach was implemented with an ad hoc development coupled with the Microsoft Project software package and tested in the case of software engineering.

All these studies indeed confirm the four retained domains (requirements, solutions, tasks and resources) and the existence of causal links that involve interactions between these four domains. On the other hand, except in (Stewart and Tate 2000), there is a lack of tools to support or aid interactions between both design and planning processes.
3 Proposition of an Integrated Model

The integrated (meta-)model proposed in this section is inspired by the EIA-632 engineering standard (AFIS 2005). It is proposed with the objective of developing an integrated computer aided design/planning tool (the EIA-632 meta-model is, moreover, a high level model describing processes and entities). It consists of three modules: a system design module, a project planning module and a coupling and monitoring module. In order to simplify links between system design entities and project planning entities, we consider that an entity from the system design module is linked, via the coupling module, to one and only one entity from the project planning module and vice versa. The three modules are described in the three following sections using the UML formalism.
3.1 System Design Module

The main entity of the system design module is the system, which is associated with a system concept. A system is associated with (at least) two entities: (i) the system requirement and (ii) one (or many) system alternative(s), as shown in Fig. 3. A system concept permits the characterization of a system. A set of system concepts permits the building of the domain ontology, i.e. a hierarchical classification of concepts. The most general concept is the "Universal" one. The ontology is defined using a tree of system concepts. The lowest concepts of the hierarchy are the most specialized ones and the highest the most general. A concept of the ontology is described by a set of variables used to characterize a system. A concept is associated with its own variables and it also inherits those of its ascendants. The association of a concept to a system permits appropriate design variables to be associated automatically with this system in order to: (i) define the requirements and (ii) characterize the solutions. Therefore, a concept is also associated with a system alternative entity.

The system requirement entity gathers all the technical requirements derived from needs (the textual expression of the stakeholders’ requirements, or the specifications stemming from the upper level if it exists). A technical requirement is defined by a variable, either coming from the concept or defined by the designer, and a unary constraint. For instance, the need corresponding to “the component C must be as light as possible” can be translated into the system requirement R1: weight of C in [10gr, 20gr].
Fig. 3 Simplified system design model
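A minimal sketch of these notions (our own illustration, with hypothetical concept and variable names not taken from the paper): a concept tree whose variables are inherited by descendant concepts, and a technical requirement expressed as a unary interval constraint on one of those variables.

# Illustrative sketch (hypothetical names): system concepts organized as a tree,
# each concept carrying its own variables and inheriting those of its ascendants;
# a technical requirement is a unary constraint (an interval) on one variable.

class Concept:
    def __init__(self, name, variables, parent=None):
        self.name, self.own_variables, self.parent = name, variables, parent

    def variables(self):
        inherited = self.parent.variables() if self.parent else []
        return inherited + self.own_variables

universal = Concept("Universal", ["cost"])
component = Concept("Component", ["weight"], parent=universal)

class Requirement:
    """Unary constraint: the variable value must lie in [lower, upper]."""
    def __init__(self, variable, lower, upper):
        self.variable, self.lower, self.upper = variable, lower, upper

    def is_satisfied_by(self, value):
        return self.lower <= value <= self.upper

# "Component C must be as light as possible" -> R1: weight of C in [10gr, 20gr]
r1 = Requirement("weight", 10, 20)
assert "weight" in component.variables()   # the variable comes from the concept
assert r1.is_satisfied_by(12) and not r1.is_satisfied_by(25)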
A system alternative represents one solution for fulfilling the system requirements. It is composed of a logical solution and a physical solution. The logical solution describes the principles of functioning of the associated system and permits the hierarchical decomposition if needed. When decomposition is required, a logical solution (and thus an alternative) is composed of at least two sub-systems. When the difficulty of the design is considered acceptable, the decomposition is stopped (no sub-systems). The physical solution describes the physical components needed for the alternative. It is defined by a list of pairs (design variable, value) describing the solution and a list of physical components that can be built, supplied or sub-contracted. The variables come from the concept of the alternative and from the system requirements. The values given to these variables can be either singleton values, if the solution is certain and totally known, or intervals, if the solution is not complete or is uncertain. For instance, if the material chosen for the component C is carbon, its weight will be between 10gr and 12gr depending on its shape S1: weight of C in [10gr, 12gr].
3.2 Project Planning Module

A generic project process (as shown in Fig. 4a), which is then defined and planned, has been extracted from the EIA-632 standard (AFIS 2005).
Fig. 4a Generic design project process
The main entity of this generic project is the project management task, which is associated with a project concept. This task is associated with (i) a System Requirements Search task (SRS) and (ii) one or many alternative development task(s), as shown in Fig. 4b. The project management task is driven by the project manager and corresponds to the definition of the objectives and constraints, and to the management of the whole design project. It is therefore active all along the design process. The project concepts are similar to the system concepts. A project concept permits the characterization of a given kind of project (a plane design project for instance) by a common set of variables, such as the duration of the task, its cost and the associated risk. The project concepts are gathered into an ontology of hierarchical project concepts defining the project domain. Each concept in the hierarchy inherits the variables of its ascendants. Therefore, each project requirement is defined using a
unary constraint (one variable and its upper and lower bounds). Then, as the project planning is performed, project variables get numeric values corresponding to the progress of the project. The System Requirements Search task (SRS) corresponds to the selection of the system concept, to the recording of the needs and requirements and to the search for the different design alternatives. This task is associated with a project concept in order to be characterized with specific indicators.
Fig. 4b Simplified project planning model
One alternative development task corresponds to the management of the design of one system alternative. For the same reason, it is also associated with a project concept. That task can be carried out in two different ways depending on the complexity of the corresponding system: if the system is simple enough and does not need to be split into sub-systems, the integrated system design path is chosen; otherwise, the modular system design path is chosen. In this second case, the modular system design task becomes a macro-task that is decomposed into at least two sub-projects according to the same decomposition of the system into sub-systems. When all the components are designed, their integration has to be performed, followed by the validation of the whole system.
3.3 Coupling and Monitoring Module

The coupling and monitoring module has been created in order to facilitate interactions between the system design and the project planning modules. These interactions are directly associated with the level of available knowledge. In this paper, only methodological knowledge, based in our case on the PMI and EIA-632 standards, is available: we place ourselves in a case of innovative design where neither information nor knowledge concerning the design project exists. The first goal of the coupling and monitoring module is to ensure the coupling of design and planning entities. In this article, we make the assumption that a system entity is associated with one and only one planning entity and reciprocally, and that a system alternative has its own alternative development task and vice-versa. These assumptions permit us to automate the creation of an entity when another is created in the other environment (e.g. a system is automatically created when a project is created and reciprocally). This coupling module creates specific IDs for each entity and matches them: (i) a system id with a project management task id,
(ii) a system requirement id with an SRS task id and, (iii) a system alternative id with an alternative development task id, as shown in Fig. 5a.
Fig. 5a Coupling module
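A minimal sketch of this behavior (our own illustration, with hypothetical class and identifier names): the coupling module keeps a one-to-one mapping between design and planning IDs and creates the missing counterpart automatically when an entity appears on one side.

# Illustrative sketch (hypothetical names): a coupling module that enforces a
# one-to-one mapping between design entities and planning entities and creates
# the missing counterpart automatically, as assumed in the text.

import itertools

class CouplingModule:
    _ids = itertools.count(1)

    def __init__(self):
        self.design_to_planning = {}   # system id        -> project task id
        self.planning_to_design = {}   # project task id  -> system id

    def create_system(self):
        """Called when a system is created; the coupled project task follows."""
        system_id = f"SYS-{next(self._ids)}"
        task_id = f"PMT-{next(self._ids)}"
        self.design_to_planning[system_id] = task_id
        self.planning_to_design[task_id] = system_id
        return system_id, task_id

    def counterpart(self, entity_id):
        return self.design_to_planning.get(entity_id) or \
               self.planning_to_design.get(entity_id)

coupling = CouplingModule()
sys_id, task_id = coupling.create_system()
assert coupling.counterpart(sys_id) == task_id   # bijection between the two sides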
These links between system and project entities make it possible, in the case of innovative design, to memorize the different associations in order to facilitate the monitoring of design processes. When the project has been carried out and the design is finished, everything is capitalized in the database and it can be reused for a new design project. When many designs have been performed, it becomes possible to derive generic rules about a specific type of project and design. A representation as a constraint satisfaction problem is proposed for the case of routine design in (Aldanondo et al. 2009) and (Vareilles et al. 2008).

The monitoring concerns three kinds of variables: (i) system variables (cf. section 4.1), corresponding to the variables of the system ontology, (ii) project variables (cf. section 4.2), corresponding to the variables of the project ontology and, (iii) monitoring variables that permit the monitoring of the system design and the project planning by giving an idea of the progress of both sides.
Fig. 5b Instance of a coupling dashboard
These variables and their values (expected or obtained) can be used to build a coupling dashboard that gathers system design and planning information, as shown in Fig. 5b. This dashboard can be used by the three actors in a verification way (checking compliance against the requirements: have you done the job
right?), in a validation way (checking the satisfaction of stakeholders: have you done the right job?), in a controlling way (what are the progress reports in design and planning?) and in a selecting way in the case of system alternatives or alternative development tasks (which alternative is the best according to me and to the other side?).
4 Proposition of a Simple System Creation Process

This section illustrates our propositions of methodological coupling. In the case of an ex nihilo system creation (see Fig. 6a), there is no information available about the design project. First of all, a project manager has to be appointed, and he has to appoint the design manager and the planning manager. (A) He has to give them the orientations of the project and to define their decision frames, for instance the global budgets (design and project ones), the delay allowed for the global project and the quantity of available resources. (B) Then, the planning manager instantiates a planning from the generic one by setting a delay, allocating the resources to each task and planning his project. When the planning matches all his project constraints, he has to inform the designer, via the coupling module, that his staff can start working. A system (including system requirements and a system alternative) is then automatically created and the design manager is informed about this creation.
Fig. 6a System creation and Alternative creation sequence diagrams
(C) At this moment, the design manager can choose to investigate different solutions or system alternatives (see Fig. 6b): (i) the design manager informs the planning manager that he wants to explore another design alternative, (ii) the planning manager validates (or invalidates) it by creating a new alternative development task, defines it, plans it and confirms that it fulfills the project requirements (delay and availability of resources). After the confirmation, a system alternative is added to the system module and the design job for the investigated solution can start.
Fig. 6b Sub-system creation sequence diagram
(D) Sometimes, it appears that a system alternative is too complex to be designed as a whole, so the design manager has to split it into several sub-systems. In this case, (i) the design manager informs the planning manager that he wants to decompose his system into x sub-systems, (ii) the planning manager validates or not this request by analyzing his project constraints (time and availability of resources). A discussion between project and design managers and the use of the coupling dashboard can be helpful for this kind of decision. If it is possible to create sub-projects, the planning manager becomes the project manager for these sub-projects: he has to appoint all the sub-planning and sub-design managers (as many as there are sub-systems) and to give them all orientations and their decision frames for this new level, as previously explained in sub-section (A). A complete coupling process (instantiating the project, planning it, investigating different solutions, adding system alternatives) can then restart.
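Continuing in the same illustrative spirit (hypothetical names and thresholds, not the authors' tool), the decomposition step (D) can be pictured as a request/validation exchange in which the planning side checks its time and resource constraints before the sub-projects, and hence the coupled sub-systems, are created.

# Illustrative sketch (hypothetical names) of the decomposition exchange (D):
# the design side requests a split into sub-systems; the planning side accepts
# only if its time and resource constraints allow it, then sub-projects are
# created and the coupled sub-systems follow automatically.

def request_decomposition(n_subsystems, remaining_days, free_resources,
                          days_per_subproject=10, resources_per_subproject=1):
    """Return the list of (sub-system id, sub-project id) pairs, or None if refused."""
    needed_days = n_subsystems * days_per_subproject
    needed_resources = n_subsystems * resources_per_subproject
    if needed_days > remaining_days or needed_resources > free_resources:
        return None  # the planning manager invalidates the request
    return [(f"SUBSYS-{i}", f"SUBPROJ-{i}") for i in range(1, n_subsystems + 1)]

print(request_decomposition(3, remaining_days=40, free_resources=5))  # accepted
print(request_decomposition(3, remaining_days=20, free_resources=5))  # None: refused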
5 Conclusion and Further Studies

The aim of this article has been to propose an architecture able to support a coupling between the design process and the planning process. We have first presented the context of our study, and then its background. We have defined what we mean by design process and planning process and we have highlighted the first studies about the coupling of design process and project planning. Based on these studies and some standards, we have proposed our definitions of system design and project planning through the use of ontologies. We have then presented a way of coupling these two processes by making the assumption of a complete bijection between the design entities and the project entities. We have also introduced the notion of a coupling dashboard which gathers information from the design and the project sides in order to help both parties make the better decision. The supporting coupling model is finally presented and one way of using our
tool is illustrated on a simple example where only methodological coupling knowledge is available. The development of the tool based on these assumptions is going to start in a few months. The complete tool will be able to link system design and project planning by using different types of knowledge: methodological knowledge as presented here, contextualized knowledge stored in a database and usable via a Case-Based Reasoning tool, and formalized knowledge stored as a constraint-based model (Aldanondo et al. 2009).
Acknowledgments

The authors wish to thank their partners in the ATLAS project, the French National Research Agency (ANR) and the 7th Strategic Activity Domain (Architecture and Integration) of Aerospace Valley for their involvement in this project.
References

AFIS: Processes for engineering a system, EIA-632 (2005), http://www.afis.fr/doc/normes.normes3.htm
Aldanondo, M., Vareilles, E., Djefel, M., Gaborit, P., Abeille, J.: Coupling product configuration and process planning with constraints. In: INCOM 2009 (2009)
Brown, D.C., Chandrasekaran, B.: Expert systems for a class of mechanical design activity. In: Knowledge Engineering in Computer-Aided Design, pp. 259–282. North-Holland, Amsterdam (1985)
Eppinger, S., Whitney, D., Smith, R., et al.: Organizing the tasks in complex design projects. In: MIT Workshop on CAOPD. Springer, New York (1991)
Gero, J.S.: Design prototypes: a knowledge representation schema for design. AI Magazine 11(4) (1990)
Goncalves-Coelho, A.: Axiomatic Design and the Concurrent Engineering Paradigm. In: Proc. of COSME, Brasov, Romania (2004)
Lindemann, U.: A vision to overcome “chaotic” design for X processes in early phases. In: Proc. of the Int. Conference on Engineering Design (ICED), Paris, France (2007)
Pahl, G., Beitz, W.: Engineering Design: A Systematic Approach. Springer, Heidelberg (1996)
PMI: A Guide to the Project Management Body of Knowledge (PMBOK Guide). Project Management Institute (2004)
Stewart, D., Tate, D.: Integration of Axiomatic Design and project planning. In: Proc. of the First Int. Conference on Axiomatic Design, Cambridge, USA (2000)
Suh, N.: Axiomatic Design: Advances and Applications. Oxford University Press (2001)
System Engineering Approach Applied to Galileo System Steven Bouchired and Stéphanie Lizy-Destrez
Steven Bouchired
Thales Alenia Space, Toulouse, France

Stéphanie Lizy-Destrez
Institut Supérieur de l’Aéronautique et de l’Espace, Toulouse, France
1 Introduction

Developing a localization system with better performance than GPS, one that guarantees European autonomy, is a complex challenge that ESA and a large number of European economic actors of the space industry decided to meet. Designing and managing such a huge system would have been impossible without applying System Engineering best practices, through fundamental activities, multidisciplinary teams and dedicated tools. This paper gives an overview of the System Engineering approach applied to design and develop Galileo, the European Satellite Radio-Navigation System. The Galileo system scope is so wide that we have decided to focus on particular steps of the System Engineering processes: Requirements Engineering and Architecture. Throughout this paper, examples are given to illustrate the additional difficulties that have made Systems Engineering more and more complex.
2 Outline

This paper deals with:
1. Galileo system presentation
2. Requirement Engineering
3. Architectural design
3 Galileo System Presentation

The Galileo System is the European Radio-Navigation Satellite System currently under deployment. The System will offer to end users all around the world a range of
services including accurate positioning and timing, integrity guarantee for no-visibility landing, and search and rescue. The system design is highly constrained by the demanding service performances: 4m (2-sigma) horizontal accuracy, 8m (2-sigma) vertical accuracy, 99.5% availability, and the ability to detect and inform the users about hazardous misleading information in less than 6 seconds. The Galileo Core System is composed of four main segments: the Space Segment, the Launch Service Segment, the Ground Segment and the User Segment.
3.1 The Space Segment

It provides the satellites which will constitute the Galileo Constellation. The Galileo constellation will comprise thirty satellites in medium-Earth orbit (MEO): twenty-seven satellites deployed in a Walker 27/3/1 pattern plus three in-orbit spares. Each satellite will broadcast four ranging signals carrying clock synchronization, ephemeris, integrity and other data.
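To make the Walker 27/3/1 notation concrete, the following sketch (ours, using the standard Walker delta-pattern convention t/p/f: t satellites, p equally spaced planes, phasing parameter f; none of the values below come from this paper beyond the 27/3/1 figure) generates the nominal slot geometry of the operational constellation.

# Illustrative sketch: nominal geometry of a Walker delta pattern t/p/f
# (t satellites, p equally spaced planes, phasing parameter f).  For Galileo's
# 27/3/1 this gives 9 satellites per plane; the RAAN spacing is 120 deg and the
# inter-plane phase offset is f*360/t = 13.33 deg.  Inclination and altitude are
# not modelled here.

def walker(t, p, f):
    """Return (plane index, RAAN deg, mean anomaly deg) for each satellite slot."""
    s = t // p                      # satellites per plane
    slots = []
    for plane in range(p):
        raan = plane * 360.0 / p
        for k in range(s):
            anomaly = (k * 360.0 / s + plane * f * 360.0 / t) % 360.0
            slots.append((plane, raan, round(anomaly, 2)))
    return slots

constellation = walker(27, 3, 1)
print(len(constellation))    # 27 slots
print(constellation[:3])     # first three slots of plane 0
print(constellation[9])      # first slot of plane 1, offset by 13.33 deg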
3.2 The Launch Service Segment

It is in charge of deploying the satellites into their orbits (Launch, LEOP: Launch and Early Orbit Phase, In-Orbit Tests).
3.3 The Ground Segment

It is composed of two parts: the Ground Control Segment (GCS) and the Ground Mission Segment (GMS).

The Ground Control Segment
It is in charge of maintaining the satellites on their accurate orbits (5m 1-sigma radial orbit accuracy over 24 hours). The GCS is composed of two redundant control centers in Europe and of five TTC stations allowing S-Band TT&C communication with the Galileo satellites all around the Earth.
3.4 The Ground Mission Segment

It is in charge of managing the Galileo Mission. This includes:
• Generating mission data (satellite ephemeris, clock corrections, etc.) on the basis of the continuous observation of the Galileo satellites. The update rate of the navigation data depends on the desired positioning accuracy.
• Dissemination of the mission data to the satellites, with the constraint that each satellite shall be provided with recent enough data to allow meeting the service performance.
• Disseminating data coming from external service providers (Search-and-Rescue Return Link, Commercial Service, Regional Integrity).
The GMS is composed of two redundant control centers collocated with the GCS ones, of nine C-Band uplink stations (ULS) to upload the data to the satellites and of a network of around 40 sensor stations (GSS) in charge of monitoring the Satellite signals and sending the observables to the control centers.
3.5 The User Segment

It is not considered as part of the Galileo Core System, but its specification is under the responsibility of the Galileo System Prime. Test User Receivers are developed to validate and qualify the System.

Fig. 1 Galileo System description
The Galileo System is a large and complex industrial system (from both a technical and a political point of view). Its main particularities lie in its technological innovations (accurate performances are expected) and its European organization: a huge number of European industries are involved, increasing the interaction difficulties. Consequently, interface consolidation is one of the critical points. Designing and managing the development of such a sprawling System would not be possible without applying System Engineering good practices, methods and tools. The following figure sums up the global approach from needs elicitation to architecture.
4 Requirement Engineering

The main objectives of Requirements Engineering are:
• To precisely determine the Stakeholders’ needs
• To define the System boundaries
• To collect, refine and analyze the technical requirements
• To write the System specification.
4.1 About Galileo Lifecycle and Stakeholders

The complex industrial organization of the Galileo program, as well as its long duration, has made the definition of the stakeholders’ needs and of the system boundaries a particularly tough exercise. Reaching the decision to start the Galileo program took around a decade. Consequently the program has been characterized by its succession of short phases and by the high granularity of its contractual breakdown. The duration and short phases induce frequent human turnover. In such a context, following a strict System Engineering framework is more necessary than ever but also more difficult than in any other type of project. For example, while focusing on the early steps of the Galileo life-cycle1, the three main Galileo phases have been identified: the “Preparatory” phases, the In-Orbit Validation (IOV) phase and the Full Operation Configuration (FOC) phase.

The “Preparatory” Phases (A, B, B1, B2, B2B, C0, C0 Rider) cover the period from 2000 to 2005. They correspond to phases A and B of the project. The European Space Agency (ESA) is the main Stakeholder on behalf of the European Commission (EC), while industrial companies perform feasibility and early design analyses in a succession of short contracts. The System PDR took place at the end of the B2B Phase, at the end of 2003.

The In-Orbit Validation Phase, starting in 2006, seals a stronger commitment of Europe towards Galileo. It aims at consolidating the design of the full Galileo System and at deploying half of the Ground Segment and 4 satellites in order to validate key system concepts. During this phase, the System Prime and the System boundaries have changed. Indeed, ESA took over the System Prime responsibility in 2008. The System CDR of this phase started in September 2009, i.e. 6 years after the System PDR.

The Full Operation Configuration (FOC) phase will complement the system to reach full operability and service provision by 2014. This phase is being negotiated at the time of writing this article.
4.2 System Prime Perimeter Evolution

The System CDR boundaries are summarized in the following figure. The figure shows that the System Prime perimeter increased when ESA2 took over the responsibility from the industrial consortium European Satellite Navigation Industries (ESNIS). These System boundaries are still increasing while entering the FOC phase, with the System Prime or the Segments taking over the responsibility for some external entities.
1 The Galileo program is an ESA program and is compliant with the ECSS, so the names of the different phases are coherent with the European standards.
2 The Galileo System Engineering activity is now under the responsibility of ESA.
Fig. 2 System Prime Perimeter Evolution in IOV Phase
Before ESA took over the System Prime role, the Stakeholders’ needs were considered to be described in an ESA document called the GSRD (Galileo System Requirement Document). Industrial suppliers performed the System Design and the allocation to the Segments under the strict supervision of ESA. In the current context, the Stakeholders’ needs are described in the MRD (Mission Requirement Document) managed by the Galileo Supervision Authority (GSA) on behalf of the European Commission (EC). The role of the GSRD has de facto evolved: it has become the System Technical Specification Document. This evolution has been made without significantly changing the GSRD document. Indeed, changing the GSRD requirements at such an advanced stage in the project is not an easy task, in particular for traceability maintenance reasons. The GSRD therefore remains a high-level specification document of around 340 requirements. The requirements derivation from the MRD is quite straightforward. The GSRD specifies:

• The Galileo Services performances and environment
• The high level functions of the Space, Ground and User segments
• Some general operational constraints, mostly related to the constellation deployment and maintenance
• High level safety definitions
The GSRD is derived into the Segment requirements as shown in Figure 3. The number of segment requirements is around 400 to 500 requirements per segment REQ document (i.e. GMSREQ, GCSREQ and SSREQ). Both Interface Requirement Documents (IRD) and Interface Control Documents (ICD) cover the interfaces with external entities like the Time Service Provider (TSP) or the SAR Return Link Service Provider (RLSP). As explained in the section on Architecture, this derivation is made through the Design Definition and Justification File (DDJF).
Fig. 3 Requirement documentation organization
The management of the large number of documents, requirements and interfaces was performed using the DOORS tool. The further derivation of the Segment Requirements by the Segment contractors is then imported into the database in order to guarantee a full top-down traceability. The DOORS capability to draw links between the document objects (requirements, data-flows, use cases) and to develop customized analysis scripts (DXL) has helped ensure the consistency of the design to a great extent.
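As an illustration of the kind of consistency check such scripts make possible (a sketch of our own that operates on a hypothetical export of the traceability links, not on the actual DOORS/DXL API), one can verify that every GSRD requirement is covered by at least one derived segment requirement:

# Illustrative sketch (hypothetical data and identifiers): checking that every
# high-level requirement is covered by at least one derived segment requirement,
# using a simple export of traceability links rather than the real DOORS database.

gsrd_requirements = {"GSRD-001", "GSRD-002", "GSRD-003"}

# (segment requirement id, GSRD requirement it satisfies)
links = [
    ("GMSREQ-010", "GSRD-001"),
    ("GCSREQ-024", "GSRD-001"),
    ("SSREQ-101", "GSRD-002"),
]

covered = {gsrd for _, gsrd in links}
uncovered = sorted(gsrd_requirements - covered)
print("Uncovered GSRD requirements:", uncovered)   # ['GSRD-003']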
4.3 Example of Boundary Evolution between System and Segment

The Galileo system is so complex that it can be considered a system of systems. Each sub-system can also be considered a system, and a System Engineering approach has to be applied at sub-system level. In particular, the boundaries of each sub-system have to be clearly defined. But boundary evolutions have also occurred between the System and the Segments. As an example, the responsibility for the Galileo Constellation has changed. In Phase B and during the first year of the IOV Phase, the design of the Galileo constellation and the provision of the constellation in orbit were fully allocated to the Space Segment (SSgt) contractor (see Figure 4). The only SSgt "external" interfaces at constellation level were those with the External Satellite Control Center (ESCC) needed for LEOP and those needed to hand over the satellites (once on their final orbit) to the GCS. In mid-2007, the SSgt boundaries were restricted to the satellite level. This offered the main advantage of better control over the deployment strategy. The constellation deployment strategy allows System trade-offs in terms of schedule and service performance with intermediate constellation geometries. From a pure System Engineering perspective, the SSgt
boundary evolution implied that several interfaces that were SSgt-internal, such as the interface between the satellites and the launchers, became "external" and therefore fell under the responsibility of the System Prime. As mentioned above, the Launch Service has now become a Segment in its own right.
[Figure 4 shows the pre-mid-2007 situation: the SSgt (SSREQ) encompasses the spacecraft (SATRD), the SATMAN, the IOT (IOTREQ) and the Launcher Service (LSREQ) as SSgt-internal interfaces, with the external interfaces limited to the ESCC and the GCS (GCSREQ, GCSExtIRD) via the GCS-SS, GCS-SATMAN and GCS-ESCC ICDs.]
Fig. 4 Space Segment boundaries before Mid-2007
[Figure 5 shows the current situation: the SSgt (SSREQ, SATRD) is restricted to the satellites; the Launcher Service (LSREQ), SATMAN, ESCC (LEOPREQ) and IOT (IOTREQ) are now external, with ICDs managed by ESA (GCS-SS, GCS-SATMAN, GCS-ESCC, GCS-IOT, GMS-SS, SIS-ICD, SS-ERIS and SS-SAR ICDs) towards the GCS (GCSREQ, GCSExtIRD).]
Fig. 5 Space Segment boundaries now
5 Architectural Design

After Requirements Engineering, the main step in System Engineering processes is to find the optimal solution that corresponds to the Stakeholders' needs. Finding
such an optimal solution requires designing the best architecture at System level. The System Engineers then have to match the functional architecture and the organic architecture, so as to verify that the proposed solution complies with the expressed needs. This activity is named allocation. This section addresses the allocation of the system specification to the system segments. The requirement derivation addressed in the previous section is the result of the System Design activity. Figure 6 shows that the derivation relation between the GSRD specification and the segment requirements and interfaces is indeed generated through the Design Definition File (DDF), the Justification File and the Performance Budget File (DJF and PBF respectively). The DJF Annex A provides, for each GSRD requirement, a short summary of the DDF and DJF/PBF objects that contribute to meeting the system requirement. DOORS scripts exploit the DDJF traceability to automatically reconstruct the derivation links between the GSRD and the Segment Requirements (X-Traceability).
[Figure 6 shows the DJF Annex A referring (1 to 1) to the GSRD, and DOORS scripts deriving the "Satisfies" (X-Traceability) links between the GSRD and the Segment Requirements and Interfaces through the DDF and the DJF/PBF Cluster Annex, which justify and allocate them.]
Fig. 6 Specification documentation management
5.1 Functional Architecture

The Functional Architecture constitutes the backbone of the Galileo System design. The Galileo System Functional Tree is an overlapping tree (i.e. branches can share leaves). The top of the tree is composed of the 15 main System Functional Chains:

SFC ID        System Functional Chain
SFC1          Provide and Maintain the Galileo Constellation
SFC2          Generate System Time and Geodetic Reference
SFC3          Maintain Overall System Synchronization
SFC4          Mission Services Provision
SFC5          Monitoring and Control
SFC6          Archiving
SFC7          Support Commercial Service (CS)
SFC8          Support Search And Rescue (SAR) Service
SFC9          Support External Regional Integrity Service (ERIS)
SFC11 (Cla)   Control Access to the Ground Infrastructures
SFC12 (Cla)   Security Protection Of Satellite Monitoring And Control
SFC13 (Cla)   Security Protection For Mission Data
SFC14 (Cla)   Security Protection Of Safety of Life (SoL) Service
SFC15 (Cla)   Security Protection Of Public Regulated Service (PRS)
The SFCs make use of Segment functions. The Segment functions are related to input and output signals and are specified in the Segment requirement documents.
[Figure 7 shows that the DDF-I SFCs are composed of DDF-I Segment Functions, which are mapped on the DDF-II Physical architecture; the SFCs, the Segment Functions, the physical architecture and the SignalDB (Data-Dictionary) link interfaces are each allocated by the Requirements and Interfaces (except Input Requirements).]
Fig. 7 Relations between the DDF Modules and the Requirement and Interface Modules
Figure 8 shows a DOORS-to-HTML export of a DDF section on the "Mission Planning" SFC (part of SFC5). In a format similar to a SysML activity diagram, the DDF section describes how GCS and GMS functions interact to fulfill the SFC objective. Several System Interfaces contribute to the SFC, such as the GCS-GMS interface. The diagram is also traced to the Segment Function descriptions and the involved logical signals. This allows the diagram to be indirectly linked to the Segment requirements and ICDs to which it is allocated.
Fig. 8 DDF System Functional Chain Activity Diagram
The segment functions are also organized in a tree grouping the functions by segment. The tree is modeled in SDL as shown in Figure 9. This functional tree goes quite deep into the segments' design. As already mentioned, a long "preparatory" phase can result in boundary evolutions between System and Segments. The functional tree of the Galileo System (and therefore its requirement documents) still bears the consequences of such boundary evolutions. The previous section gave an example of a boundary transfer from Segment to System regarding the constellation, and of the resulting major rework of segment requirement documents before the FOC phase. Transfers from System to Segments have also occurred. For instance, the responsibility for the Galileo algorithms was transferred from System to Segments around phase B2B. In such cases, the existing requirements are generally left in place to avoid what is felt to be unnecessary paperwork. For instance, as the Galileo System once had the responsibility for the uplink scheduling algorithms, a GMS function has been specified in the GMSREQ. This function generates the uplink schedule that allows the GMS to format the navigation messages and the ULS to track the satellites (see GMSF3.5 in Figure 8). This function is specified by more than 20 requirements. Some of them can be considered as resulting from GMS-internal performance allocation (e.g. "In case of tracking plan change, the ULS shall reacquire a satellite in a maximum of 12 min"). The real system need is indeed that the GMS uplinks data to the satellites under certain constraints linked to the services performances. The number of specified segment functions (e.g. 38 function leaves in the GMS part of the system functional tree) is probably greater than it could have been. This generates unnecessary complexity at System level, which sometimes has to be managed by justified Requests for Deviation (RFD) from the Segments. It is difficult to escape paperwork.
Fig. 9 SDL Snapshot of the Functional Tree
5.2 Physical Architecture (Interface Issues)

As explained above, the Galileo System is composed of 5 segments (N-1). The System Design Definition File part II (DDF2) describes the Galileo System organic architecture down to N-2 level (i.e. segment components) in order to match at least the level of detail contained in the interface documents. The physical architecture consolidation is mainly about interface resolution. The two main tools used to consolidate the interfaces have been:
- the Data-Dictionary resulting from the functional architecture,
- the "Use Cases" Database.
5.3 The Data-Dictionary

The SDL functional model was used to generate the System Data-Dictionary. The Data-Dictionary is the repository of all the data exchanged in the system. It can be seen as a tree whose top level is composed of the logical signals derived from the functional analyses (e.g. DISS_MsgSubFr is the signal generated by the GMS and uplinked to the satellites), and whose leaves are the elementary data carried on these signals (e.g. clkT0, a satellite clock reference time coded on 14 bits, carried over DISS_MsgSubFr but also over other signals). In practice, the Data-Dictionary is an XML file originally exported from the SDL model. The file was then complemented via a proprietary web-based tool to follow the interface consolidation process. The populated XML file is imported into DOORS in order to be traced to functions, requirements, ICDs and Use Cases.
The DOORS traceability allows checking that a parameter such as almAf1, part of the satellite almanacs sent to the users to adjust their ranging measurements, is properly encoded on the same number of bits (13) in all the ICDs in which it appears (see Figure 10). Such an application of the Data-Dictionary traceability has proved particularly useful to check the coherency between the C-Band uplink ICD from the GMS to the satellites and the L-Band downlink ICD from the satellites to the GMS and Users.
Fig. 10 Example of Data-Dictionary application
The number of signals transiting over the Galileo System interfaces is of the order of several hundred. However, the Data-Dictionary contains many more signals (above 500), resulting from the segment-internal function exchanges of the SDL model. The Data-Dictionary is indirectly made applicable to the segments via the ICDs for inter-segment signals, and via the segment requirements for segment-internal logical signals.
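As an illustration of how such a check can be automated, the short sketch below parses a data-dictionary export and flags parameters whose bit length differs between ICDs (as in the almAf1 example above). It is not the project's actual tool, and the XML layout it assumes is hypothetical.

```python
# Hypothetical layout of the Data-Dictionary export, e.g.
#   <dictionary><parameter name="almAf1">
#     <occurrence icd="C-Band uplink ICD" bits="13"/> ... </parameter></dictionary>
# The real export format is project specific; only the principle is shown.
import xml.etree.ElementTree as ET
from collections import defaultdict

def find_bit_mismatches(xml_path):
    mismatches = {}
    for param in ET.parse(xml_path).getroot().iter("parameter"):
        sizes = defaultdict(list)
        for occ in param.iter("occurrence"):
            sizes[int(occ.get("bits"))].append(occ.get("icd"))
        if len(sizes) > 1:   # same parameter encoded with different lengths
            mismatches[param.get("name")] = dict(sizes)
    return mismatches

# A result such as {'almAf1': {13: ['C-Band uplink ICD'], 14: ['L-Band SIS ICD']}}
# would flag an incoherency between the uplink and downlink ICDs.
```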
5.4 The "Use Cases" Database

The Galileo System "Use Cases" (UC) are sequences of interactions between the system components crossing at least one system interface. Although the main UCs were defined early in the design phase based on the functional architecture and the Stakeholders' needs specification, the UC Database has been expanded throughout the project in two directions: (1) horizontally – the number of use cases has been increased to take into account special scenarios coming from the operation needs or contingency cases identified by the RAMS analyses; (2) vertically – the use cases have been consolidated on the basis of the segments' design at each segment CDR review in order to ensure and demonstrate overall system consistency. As shown in Figure 11, the Use Cases have played a central role in all the System Engineering activities. As a matter of fact, the UCs are related to:
• The Operational Activities: the Operational Concept and the GSRD operational requirements identified the main scenarios to be addressed. In addition, the Operation Engineering activities have continuously expressed complementary needs while consolidating their operational procedures. The Use Cases have been the tool to ensure the design operability.
• The RAMS Analyses: through the Fault and Hazard Analyses (FHA) and the Failure Mode Effects and Criticality Analyses (FMECA), the RAMS activity identifies fault conditions to which the system must be resilient. The Use Cases contingency scenarios made it possible to demonstrate the sequence of events that maintains or returns the System to a safe and operational state.
• The Functions and Signals: the Use Cases Database is traced to the System Interface documents in order to ensure ICD coherency with the design.
• The System Integration and Verification Test Cases: the Use Cases Database has been used as a starting point for the System Test Cases definition.
[Figure 11 relates the DDF2 Use Cases to the GSRD (via the SYSBSLJF Annex A reference to GSRD section 8 "Operational Requirements"), to the SIV Test Cases (mapping table in the SIVP Annex), to the FMECA/RAMS analyses (mapping table in DJF Volume 4 on RAMS, not a DOORS link), to the MOCD operational concept and the N-1 OPS scenarios (GMS, GCS and SSgt OPS scenarios feeding the OPS Concept TNs, not a DOORS link), and to the SignalDB link interfaces, requirements and interfaces that allocate them.]
Fig. 11 Relation between the Use Cases and the other System Documents
6 Conclusion

This article has provided a rapid overview of the organization of the Galileo System Engineering activities in the IOV Phase. It has illustrated the importance of strictly following a solid System Engineering process in a large-scale program such as Galileo. Indeed, the hurdles are numerous: long phase durations involving staff turnover, system boundary evolutions, a complex industrial organization and, last but not least, a real technical challenge. The emphasis has been put on the system boundary evolutions along the project. Despite these programmatic difficulties, interesting methods and tools (such as the Data-Dictionary for domain analysis and the System Functional Chains for the functional architecture) were set up for Galileo and could be reused on other complex space system programs. Furthermore, as the verification and validation activities are still ongoing at the time of writing, no mature lessons learned can be drawn yet. A further analysis, completing the present one once the whole V-cycle has been covered, would be of interest and should lead to instructive conclusions on the application of System Engineering methods to such a truly complex system.
Acronyms List

Acronym     Definition
CDR         Critical Design Review
CMCF        Central Monitoring and Control Facility
CS          Commercial Service
CSIM        Constellation SIMulator
DDJF        Design Definition and Justification File
DDF         Design Definition File
EC          European Commission
ECSS        European Cooperation for Space Standardization
ERIS        External Regional Integrity Systems
ESA         European Space Agency
ESCC        External Satellite Control Center
ESNIS       European Satellite Navigation Industries
FDF         Flight Dynamics Facility
FMECA       Failure Mode, Effects, and Criticality Analysis
FHA         Fault and Hazard Analysis
FOC         Full Operational Capability
GACF        Ground Assets Control Facility
GCC         Ground Control Center
GCS         Ground Control Segment
GCS KMF     GCS Key Management Facility
GDDN        Galileo Data Delivery Network
GMS         Ground Mission Segment
GNMF        Galileo Network Monitoring Facility
GPS         Global Positioning System
GOC         Galileo Operating Company
GRSP        Geodetic Reference Service Provider
GSA         Galileo Supervision Authority
GSMC        Galileo Security Monitoring Center
GSRD        Galileo System Requirement Document
GSS         Galileo Sensor Station
ICD         Interface Control Document
ILS         Integrated Logistics Support
IRD         Interface Requirement Document
IOT         In Orbit Test
IOV         In-Orbit Validation
IPF         Integrity Processing Facility
IVVQ        Integration, Verification, Validation, Qualification
LEOP        Launch and Early Orbit Phase
MEO         Medium Earth Orbit
MGF         Message Generation Facility
MKMF        Mission Key Management Facility
MRD         Mission Requirement Document
MSF         Mission Support Facility
MTPF        Maintenance and Training Platform
MUCF        Monitoring and Uplink Control Facility
OPF         Operation Preparation Facility
OSPF        Orbit Synchronization Processing Facility
PDR         Preliminary Design Review
PKMF        Public Regulated Service Key Management Facility
PTF         Precise Timing Facility
RAMS        Reliability, Availability, Maintainability and Safety
RLSP        Return Link Service Provider
SAR         Search And Rescue
SATMAN      SATellite MANufacturer
SCCF        Satellite Central Control Facility
SCPF        Spacecraft Constellation Planning Facility
SFC         System Functional Chain
SIS         Signal In Space
SoL         Safety of Life
SSgt        Space Segment
TC          TeleCommand
TM          TeleMetry
TSP         Time Service Provider
TTC         Telemetry, Tracking & Control
ULS         UpLink Station
UMTS        Universal Mobile Telecommunications System
USNO        US Naval Observatory
A Hierarchical Approach to Design a V2V Intersection Assistance System

Hycham Aboutaleb*, Samuel Boutin, and Bruno Monsuez
Abstract. The key challenge in enhancing intersection safety is to identify vehicles that have a high potential to be involved in a collision as early as possible and to take preventive action accordingly. The design and implementation of such a system needs an analysis phase during which the system is analyzed and decomposed. Given that most large-scale complex engineering systems need to be simplified and layered before being designed, a hierarchical approach is necessary to ensure a global and structured understanding of the whole system, including the involved stakeholders, use cases and associated requirements. Despite the fact that use cases in themselves are quite intuitive, the process around them is a much bigger challenge since it usually varies from one situation to another. In this paper we analyze and model a cooperative intersection safety system using a hierarchical method to represent use cases. This approach simplifies the understanding of the intersection crossing problem by applying transformations that reduce its complexity. We also show that we obtain a first functional architecture of the system based on the use-case analysis.
Hycham Aboutaleb, System Engineer, Knowledge Inside; e-mail: [email protected]
Samuel Boutin, President and Chief Technical Manager, Knowledge Inside; e-mail: [email protected]
Bruno Monsuez, Associate Professor, UEI, Ensta ParisTech; e-mail: [email protected]
* Author to whom correspondence should be addressed.

1 Introduction

1.1 Context

The accident statistics show that intersections are still considered high-risk areas. Indeed, accidents at intersections represent between 40% and 60% of road
accidents in different countries. These accidents cost more than 100 billion euros per year in Europe and over 100 billion dollars in the U.S. Moreover, they account for 25% to 35% of the victims of road accidents [1]. To improve the cooperation between drivers, devices such as the horn, turn signals and brake lights were gradually added to all vehicles. This form of "communication" is now essential but no longer sufficient with respect to new needs, especially given the emerging technologies which can provide substantial aid. The revolution in the mobile telecommunications market and the successes of wireless communication systems have been two factors that have prompted researchers in the field of ITS to introduce this technology into the automotive field. The possibilities offered by telecommunications have reshaped thinking about driving, in which cooperation between vehicles now occupies a significant position in risk prevention. In this paper, only point-to-point (or ad hoc) networks are considered; in this type of network, communication is done directly between vehicles without going through a communication management infrastructure. This kind of network is called a VANET (Vehicle Ad Hoc Network).
1.2 Motivation and Scope

The problem of assisting the intersection crossing is very complex, due to the huge number of possible scenarios that can occur at a road junction. Therefore, we need to manage the combinatorial explosion of possible use cases by modeling the problem as faithfully as possible to ensure the trustworthiness and completeness of the model. By analyzing the causes of accidents at intersections, we note that the lack of information and/or of the driver's attention remains the primary cause of accidents. This lack of information is due to the topology of the intersection, where visibility between vehicles is possible only from a certain distance from it. First, we present our approach by defining the use-case problem and the methodology followed. A second part is dedicated to the case-study results we obtain by applying this approach to the intersection crossing problem. Finally, we discuss the advantages of this approach and the future work needed.
2 Methodology

2.1 Overview

Usually, the approach taken depends on how the results will be used. To optimize the design time, it is important to have a useful framework for analyzing complex systems and studying their evolution. The use of such a framework requires an understanding of the components of a given system, its representation, the evolution of its model and the ways of representing it [2][3]. To analyze the given system, we need to evaluate its complexity. The complexity of these systems can be classified into two categories:
• Structural complexity: it involves the physical and spatial description of the considered system (here, the whole intersection) and is consequently measured in physical and spatial dimensions.
• Behavioral complexity: it involves the description of the behavior that emerges from the manner in which sets of components of the considered system interact. It is thus described in the temporal dimension.

Indeed, the complexity of systems is often characterized, beyond the inherent complexity of the components and their variety, by the complexity of the interaction network, from which both intentional and unintentional behaviors emerge, which may be harmful and difficult to predict and control. To manage the complexity of a system, it is necessary to have a global approach based on decomposition and hierarchy; the goal is to break our problem down into a set of independent problems that are simpler and easier to deal with [4]. Thus we need to identify all the scenarios and to classify them hierarchically in the space and time dimensions. We proceed to the decomposition of our problem according to the use cases and their characteristics, which provides a multilevel scenario-oriented model. Therefore, there is an order for complexity reduction:
1. Structural Reduction
2. Dynamical Reduction
3. Behavioral Reduction
4. Decisional Reduction
2.2 Resulting Transformations

For the intersection crossing problem, we have a top-down strategy as follows:

1. Structural Reduction

a. Identify the system
The first step is to identify the considered system. In this paper we consider that the global system is the road network, composed of systems of a lower scale: roads and intersections. We define our system of interest as an intersection (which is a subsystem of the road network).

b. Manage topology diversity
To understand the origins of intersection risks and to better design a system that manages these risks, we analyze the intersections and their configurations in detail. Cross shape: according to the form of the branches (connections) that lead to the crossroads, the shape of the central area and the right of way, we can define three classes of crossroads: a cross-junction, a roundabout and a round-about. Number of branches (connections): a connection to an intersection is a way in which vehicles arrive. The higher the number of connections, the higher the flow of arriving vehicles and the more vigilant the driver must be. Moreover, the number of connections
gives a clue to the flux density of vehicles entering the intersection and identifies the possible trajectories. Number of lanes (per connection): the number of lanes also gives information on the density and the possible trajectories of the other vehicles. Type of signal: the signal is designed to impose a code common to all drivers and exists in three forms: traffic lights, stop sign or priority to the right. Area of conflict: we define the conflict zone of the intersection as the central hub where the trajectories intersect. The risk analysis is done primarily in this area [5].

2. Dynamical Reduction

a. Reduce the system to the involved elements
Not all the elements of the considered system are to be taken into account. Thus, it is necessary to identify the dynamic elements that we consider for our analysis. These elements are the vehicles that share the same arrival time at the intersection.

b. Reduce the complexity using symmetry
To simplify our understanding of the problem we decide to take the point of view of a vehicle arriving at the intersection. This vehicle will be called the Subject Vehicle (SV) and will always be our reference point. All other vehicles will be called Intruder Vehicles (IV). The choice of the SV and IV vehicles is completely arbitrary and does not determine the priority of each vehicle. In fact, any vehicle located at the intersection can be regarded as the SV at any time. Scenarios can be decomposed into a combination of parallel scenarios. In this way we can focus on 2-vehicle scenarios.

3. Behavioral Reduction

a. Reduce to vehicle behaviors
All possible scenarios for the considered pair of vehicles are identified. In order not to forget any critical scenario, we first set the SV direction and cover all the IV directions. We verify whether their expected paths intersect: if they intersect, then it is a critical scenario. We then change the SV direction and start again, until we have covered all possible directions for the SV. A simple sketch of this enumeration is given below.
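The paper itself provides no code; the following Python sketch only illustrates the enumeration just described, under simplifying assumptions of our own: a single-lane cross intersection modeled as a unit square, right-hand traffic, and each path approximated by a straight chord from its entry lane to its exit lane (real trajectories are arcs, and same-approach following traffic is ignored).

```python
import itertools

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def chords_conflict(p1, p2, q1, q2):
    # Non-strict segment intersection: touching (e.g. merging into the same
    # exit lane) counts as a conflict; degenerate collinear overlaps are not
    # specially handled in this sketch.
    d1, d2 = _cross(q1, q2, p1), _cross(q1, q2, p2)
    d3, d4 = _cross(p1, p2, q1), _cross(p1, p2, q2)
    return d1 * d2 <= 0 and d3 * d4 <= 0

def rot_cw(p, k):
    x, y = p
    for _ in range(k % 4):
        x, y = y, -x          # 90-degree clockwise rotation
    return (x, y)

APPROACHES = ["N", "E", "S", "W"]
# Canonical chords for a vehicle entering from N (driving on the right):
CANONICAL = {"straight": ((-0.3, 1.0), (-0.3, -1.0)),
             "right":    ((-0.3, 1.0), (-1.0,  0.3)),
             "left":     ((-0.3, 1.0), ( 1.0, -0.3))}

def chord(approach, maneuver):
    k = APPROACHES.index(approach)
    entry, exit_ = CANONICAL[maneuver]
    return rot_cw(entry, k), rot_cw(exit_, k)

def critical_scenarios(sv_approach, sv_maneuver):
    sv = chord(sv_approach, sv_maneuver)
    return [(a, m) for a, m in itertools.product(APPROACHES, CANONICAL)
            if a != sv_approach and chords_conflict(*sv, *chord(a, m))]

# All IV (approach, maneuver) pairs conflicting with an SV coming from the
# north and going straight:
print(critical_scenarios("N", "straight"))
```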
b. Identify scenario constraints
The target is to manage the transition of the two vehicles according to the priority of each. For this, it is necessary to know each vehicle's priority.

4. Decisional Reduction

a. Identify decisions for the scenario
Depending on the priority and the risk of collision, the criticality of the situation is estimated. The risk is based on the predicted velocity curve.

b. Identify actions for the scenario
Depending on the priority and the risk of collision, the actions to be undertaken will be determined.
3 Results

3.1 Top Level: Environment

Starting from the whole road network, we identify a road junction with all its vehicles as our system of interest. In this paper, we limit ourselves to a cross intersection with one lane per direction. We include in our model the objective of our system and justify the need to achieve the defined goal.
3.2 First Level: Selecting the Vehicles

It is relevant to restrict communications to the vehicles for which a collision is most likely, i.e. to restrict communications to vehicles that have the same arrival time at the intersection, taking into account changes that may occur during the moments preceding the arrival at the intersection, and updating accordingly.
If the time of arrival is below a certain predetermined value (e.g. one second), then the vehicle is in the first interval; if the arrival time is between one second and two seconds, then the vehicle is in the second interval; and so on. Hence, this method provides a tiling of time, which yields a tiling of space that is unique to each vehicle. The vehicles with the same arrival-time interval then communicate together.
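As a minimal sketch of this tiling (ours, not the paper's), the snippet below buckets vehicles by their estimated time of arrival at the intersection; the positions, speeds and one-second interval width are illustrative assumptions.

```python
# Minimal sketch of the arrival-time tiling described above. The vehicle
# positions, speeds and the 1 s interval width are illustrative values.
from collections import defaultdict

def time_to_intersection(distance_m, speed_mps):
    return float("inf") if speed_mps <= 0 else distance_m / speed_mps

def group_by_arrival_interval(vehicles, interval_s=1.0):
    """vehicles: dict id -> (distance_m, speed_mps). Returns interval -> [ids]."""
    groups = defaultdict(list)
    for vid, (d, v) in vehicles.items():
        t = time_to_intersection(d, v)
        if t != float("inf"):
            groups[int(t // interval_s)].append(vid)   # 0 = arriving within 1 s, etc.
    return groups

# Vehicles that fall in the same interval are the ones that open a V2V
# communication session; the grouping is re-evaluated as positions update.
vehicles = {"SV": (40.0, 13.9), "IV1": (38.0, 12.5), "IV2": (120.0, 13.9)}
print(group_by_arrival_interval(vehicles))
```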
3.3 Second Level: Selecting the Pairs of Vehicles

To simplify our understanding of the problem we decided to take the point of view of a vehicle arriving at the intersection. This vehicle will be called the Subject Vehicle (SV) and will always be our reference point. All other vehicles will be called Intruder Vehicles (IV). The choice of the SV and IV vehicles is completely arbitrary and does not determine the priority of each vehicle. In fact, any vehicle located at the intersection can be regarded as the SV at any time. Scenarios can be decomposed into a combination of parallel scenarios. In this way we can focus on 2-vehicle scenarios.
3.4 Third Level: Identifying All the Scenarios for Each Pair of Vehicles

All possible scenarios are identified. Here we limit ourselves to scenarios where the SV goes straight. A change of direction implies a change of use case. This is represented in the diagram below by transition arrows (green or red) between use cases.
3.5 Fourth Level: Managing Priorities for Each Scenario

We now know exactly which scenario is taking place. The target is to manage the transition of the two vehicles according to the priority of each. For this, it is necessary to know each vehicle's priority, which can be variable (traffic lights) or fixed (stop sign, priority to the right, ...). A change of priority implies the passage from one state to another.
3.6 Fifth Level: Acting and Deciding

We now know whether the SV has priority or not. Depending on the priority and the risk of collision, the actions to be undertaken are determined. The risk is based on the predicted velocity curve: if the current speed is greater than the expected speed, then the risk is higher.
Vehicle speed decreases when approaching an intersection, as shown in the curves featured for the case where the SV has priority:
- Green zone: far from the intersection, high speed, decreasing
- Orange zone: close to the intersection, low speed, decreasing, higher risk
- White zone: intersection crossed safely, speed increasing.
In this case, the system should either:
- inform the driver (to draw his attention to the fact that he is at an intersection and must be vigilant), or
- warn the driver of an imminent collision.
Vehicle speed decreases when approaching an intersection, as shown in the curves featured for the case where the SV does not have priority:
- Green zone: far from the intersection, high speed, decreasing
- Orange zone: close to the intersection, low speed, decreasing, higher risk
- Red zone: at the intersection, the speed should be equal to zero, otherwise an imminent collision is possible
- White zone: intersection crossed safely, speed increasing.
In this case, the system should either:
- inform the driver (to draw his attention to the fact that he is at an intersection and must be vigilant),
- warn the driver of an imminent collision, or
- act: e.g. strongly amplify the driver's braking demand.
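The following sketch illustrates one possible way to encode this decision logic; the zone thresholds and the expected-speed profile are invented placeholders, since the paper only states the principle of comparing the current speed with the predicted velocity curve.

```python
# Hedged sketch of the decision logic of Section 3.6. The zone boundaries and
# the expected-speed profile below are assumptions, not values from the paper.
def expected_speed(distance_m, has_priority):
    # Toy deceleration profile: slow down towards the conflict zone, down to
    # 0 m/s at the stop line when the SV does not have priority.
    floor = 8.0 if has_priority else 0.0
    return min(14.0, floor + 0.3 * distance_m)

def decide(distance_m, speed_mps, has_priority, margin=2.0):
    if distance_m <= 0:
        return "none"        # intersection crossed (white zone)
    if not has_priority and distance_m < 5 and speed_mps > margin:
        return "act"         # red zone: e.g. amplify the braking demand
    if speed_mps > expected_speed(distance_m, has_priority) + margin:
        return "warn"        # orange zone: imminent-collision warning
    return "inform"          # green zone: draw the driver's attention

print(decide(distance_m=30.0, speed_mps=13.0, has_priority=False))  # -> "warn"
```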
4 Advantages of Our Approach

This is one of the first approaches whose main goal is to organize and reduce use cases. Usually, when using UML to represent use cases, there is no way to organize them in a way that simplifies the understanding of the system behavior; it is rather an elicitation of all possible (or required) scenarios in a linear form. Besides, unlike UML or SysML, where the functional architecture is totally independent from the use cases, which might lead to incoherencies that cannot be checked, our method establishes a strong and intuitive link between use cases and functional architecture [6]. At each level of the model, we identify the functional requirements, and thus the functions to implement; we can also add verification and validation tests. In this way we obtain the functional architecture and part of the dynamic architecture.
Graphical traceability is possible to check coherence. Finally, this approach can be used as a guideline for state reduction and simplification.
5 Conclusion and Future Works

There is little scientific literature about the use-case issue, although use cases greatly affect model-based system design. Despite the fact that use cases in themselves are quite intuitive, the process around them is a much bigger challenge since it usually varies from one application to another. We have introduced a new
approach for analyzing the use cases that are necessary to obtain a model-based design for a cooperative intersection collision avoidance system. The top-down approach that we followed leads to greater efficiency in complex tasks. It began with a thorough analysis of accidents at intersections, providing the main characteristics of these accidents. From the type and severity of accident scenarios, a classification of relevant scenarios is made. The model-based design is a result of this decomposition. It is expected that the resulting system will be applicable to a wider range of accident scenarios. Future work would be to verify the model through simulation (MIL, SIL, HIL), which requires modeling and simulating the user's behavior and his reactions to MMI messages in each situation. Another question to raise is the relationship between the problem and the transformations that were applied to reduce its complexity. It would be interesting to study whether a generalization of this approach is possible, and therefore to identify the system properties needed to be able to follow this approach.
References
[1] Requirements for intersection safety applications, Prevent Project (2005)
[2] Mostashari, A.: Stakeholder-Assisted Modeling and Policy Design Process for Engineering Systems (2005)
[3] Mostashari, A., Sussman, J.: Engaging Stakeholders in Engineering Systems Representation and Modeling (2004)
[4] Simon, H.A.: The Architecture of Complexity (1962)
[5] Ammoun, S.: Contribution des communications intervéhiculaires pour la conception de systèmes avancés d'aide à la conduite (2007)
[6] Salim, F.: A Context-Aware Framework for Intersection Collision Avoidance (2008)
Contribution to Rational Determination of Warranty Parameters for a New Product

Zdenek Vintr and Michal Vintr*
Abstract. The article presents a procedure for evaluating a survey of customers' behavior and the lessons learned from it, which are used to propose a statistical model that determines warranty parameters for a stated level of warranty costs when a two-dimensional warranty is granted. The possibilities of practical use of the proposed method are demonstrated on an example of the determination of warranty parameters for a lower-medium-class passenger vehicle produced in the Czech Republic.
Zdenek Vintr, Faculty of Military Technology, University of Defence, Kounicova 65, 662 10 Brno, Czech Republic; e-mail: [email protected]
Michal Vintr, Faculty of Mechanical Engineering, Brno University of Technology, Technicka 2896/2, 616 69 Brno, Czech Republic; e-mail: [email protected]

1 Introduction

It is nearly obvious in modern market economies that a warranty for quality is provided with delivered products to ensure that the customer receives only a quality product. A warranty outlines a guaranteed quality and the compensations to be provided to the customer when the established quality is not kept. On the other hand, it also usually outlines the operation and maintenance conditions under which the provided warranty will be valid and which must be respected by the customer. In the sale of consumer products, warranties play an important role, because they have recently become a significant competitive tool and also because providing warranties has significant economic impacts on the supplier. Providing a warranty is always accompanied by additional costs, called warranty costs. Any decision connected with establishing the scope of provided warranties should therefore be supported by an appropriate analysis. Currently, so-called two-dimensional warranties are used for a lot of products (for example, for virtually all vehicles on the market). The warranty termination point for two-dimensional warranties is defined in two ways, i.e. by a guaranteed calendar time of use and a guaranteed operating time. A warranty period
terminates when either of these values is reached. The way the warranty of a specific product terminates and the amount of operating time realized during the warranty period do not depend only on the given warranty parameters but also on the intensity of product usage by the particular customer. The article proposes a method that respects the above-described facts and makes it possible to determine the parameters of a two-dimensional warranty so that the maximum possible warranty extent can be provided at a given level of warranty costs. The method is based on the results of a statistical evaluation of customer behavior research.
2 Two-Dimensional Warranty

The procedures and methods presented in this article focus on the two-dimensional non-renewing free-replacement warranty, which is the most frequently used type of two-dimensional warranty. In this case, the warranty is characterized by a region in a two-dimensional plane, where one axis usually represents calendar time and the second axis represents operating time [2, 3]. For example, a vehicle warranty is characterized by a guaranteed calendar time of use t0 (measured from the date of purchasing the vehicle) and a guaranteed operating time u0 expressed by the number of kilometers covered. The warranty terminates when either of these values is reached. Any warranty repair or replacement costs of the product are fully borne by the vendor and do not affect the duration of the warranty period, which is fixed. The warranty region defined this way is shown in Fig. 1.
Fig. 1 Warranty region coverage and potential behavior of warranty period
However, the end of warranty of a specific product is not only determined by the established parameters of warranty, but it is especially influenced by behavior
of the customer and also by how intensively the product is used. And customer behavior can differ from one market to another. The product usage rate can generally be characterized by the operating time realized during a unit of calendar time. The product usage rate defined this way can be expressed by the relation:
x=
u t
(1)
where x = usage rate, u = operating time and t = calendar time of use. Depending on the product usage rate, there are in principle three types of warranty courses, which also determine the way the warranty is terminated (see Fig. 1). We assume that throughout the whole warranty period the product usage rate remains almost the same and can be considered constant. In the first case (see Fig. 1, line 1), at the end of the warranty both the guaranteed calendar time of use t0 and the guaranteed operating time u0 run out. This situation occurs when the product is used with an intensity x0 expressed by the relation:
x_0 = u_0 / t_0    (2)
which we denote the warranty usage rate. In the second case (see Fig. 1, line 2), the warranty terminates when the guaranteed operating time u0 is reached, but the guaranteed calendar time of use t0 has not been exhausted and the warranty terminates after an elapsed time t1 < t0. This situation may happen when the customer uses the product with a higher intensity than the warranty usage rate:
x > x_0    (3)
In the third case (see Fig. 1, line 3), the warranty terminates when the guaranteed calendar time of use t0 is reached; the guaranteed operating time u0 has not been spent and only an operating time u1 < u0 has been realized during the warranty. This situation occurs when the customer uses the product with a lower intensity than the warranty usage rate:
x < x_0    (4)
For consumer products, where two-dimensional warranties are usually used, individual consumers on a single market exhibit various product usage rates with a wide range of specific values. In general, the product usage rate can be considered a continuous random variable (at the time a product is sold to the customer, it is not possible to identify in advance what the product usage rate will be).
3 Statistical Evaluation of Customers' Behavior Research
When introducing a product on a new market (an area, a country), it is essential to know the behavior of the customers on this market. For that reason it is necessary to know the distribution of the random variable – the usage rate of the studied product on the given market. A series of procedures can be used to determine the appropriate distribution.
In the following, a procedure based on processing the results of a survey of customers' behavior with the appropriate type of product on the given market is used. A sufficient number of existing or potential users of the appropriate type of product are asked for the usage rate with which they use the product (existing users), or with which they would use the product if they purchased it (potential users). The selection of the users surveyed should be done so that the sample represents the whole target group of users the product is designed for. The information collected is statistically processed, and the distribution of the studied random variable is approximated by a known continuous probability distribution and described by an appropriate function, i.e. the distribution function F(x) or the probability density function f(x). The mutual relationship of these functions is expressed by the well-known expression [1, 5]:
F(x) = \int_{-\infty}^{x} f(z) \, dz    (5)
In the integral in Equation (5), the value x = -∞ is given as the lower limit. This notation is correct and results from the general definition of the distribution function and of the probability density function. However, due to the character of the analyzed random variable, it is evident that this variable can assume only non-negative values. Therefore, only probability distributions for which P(X < 0) → 0 can be regarded as a suitable approximation of the initial statistical information. In this case, the lower limit x = -∞ in the integrals of the following relations can be replaced by x = 0 without introducing unacceptable inaccuracies into the calculations. Using Equation (5) it is also possible to determine the probability of how the warranty will be terminated. In accordance with the condition given in Equation (4), the warranty will terminate when the guaranteed calendar time of use is exceeded if the product is used with a lower usage rate than the warranty usage rate x0. The probability that the warranty of the product will be terminated in this way can be defined with the use of the relation [1, 5]:

P(0 ≤ X < x_0) = \int_0^{x_0} f(x) \, dx    (6)
By analogy, it is possible to determine the probability that the warranty of the product will be ended by exceeding the guaranteed operating time, i.e. by fulfilling the condition of Equation (3), when the product is used with a higher usage rate than the warranty usage rate x0. The corresponding probability can be determined using the relation [1, 5]:

P(X > x_0) = \int_{x_0}^{\infty} f(x) \, dx    (7)
An example of a possible probability density function and a graphical representation of the probabilities expressed by Equations (6) and (7) are shown in Fig. 2.
Fig. 2 Probability density function of random variable x
4 Determination of Parameters of Two-Dimensional Warranty

In the following, it is assumed that the maximum accepted level of warranty costs per product, Cmax, has been established. It is also assumed that there is previous experience with the product on a different market, so that the value of the unit warranty costs c is known. These unit costs represent the mean costs expended to settle the claims of a product under warranty, related to a unit of operating time. Unit warranty costs defined this way can be used for products whose time between failures follows the exponential distribution. This simplifying assumption is based especially on the following facts:
• the exponential distribution is characteristic of many modern and very reliable products;
• thanks to the widely applied sophisticated quality control systems in industrial processes, early failures of modern products with quality-related defects appear very rarely (the first part of the bathtub curve, so-called infant mortality);
• the period of "infant mortality" is usually significantly shorter than the usual length of a warranty period and its influence on product reliability can usually be neglected;
• during a warranty period, wear-out failures, typical for the final phase of the product's life cycle, are very rare.

Because of the limited extent of this article, practical possibilities and methods for determining the unit warranty costs are not addressed here; they are dealt with in other literature [2, 3, 4]. Knowing the unit warranty costs, the mean warranty costs for one product can be calculated from the following relation:

C = c u_W    (8)
where c = unit warranty costs and uW = mean operating time realized during the warranty. Knowing the accepted level of warranty costs per product, the relation can be written as:

C_max = c u_W    (9)
In the relation above, the values Cmax and c are known. The mean operating time of the product under warranty, uW, is an unknown value influenced by the warranty parameters t0, u0 and by the behavior of the customers, and it is determined by the probability density function f(x). From the general properties of the probability density it follows that the mean value of the random variable can be obtained using the relation [1]:

\bar{x} = \int_0^{\infty} x f(x) \, dx    (10)
Based on the definition of the random variable expressed in Equation (1), the value \bar{x} is the product's mean usage rate. With this value, it is then possible to calculate the mean operating time \bar{u}(t) of the product during any calendar time of use t:

\bar{u}(t) = t \bar{x} = t \int_0^{\infty} x f(x) \, dx    (11)
However, to calculate the mean operating time of the product under warranty, Equation (11) cannot be used, since a calculation of the mean operating time for the calendar time of use t0 would also include the part of the operating time which, according to Equation (7), would with a certain probability be realized only after the end of the warranty because the guaranteed operating time would be exceeded. Therefore, the calculation of the mean operating time of the product under warranty has to be divided into two parts. For values of the random variable X < x0, the standard calculation of the mean value can be used, but for values of the random variable X > x0, the constant value x = x0 must be used, because it ensures that the calculation takes into account only the operating time realized within the warranty. The mean operating time of the product under warranty can then be calculated using the following relationship [7, 8]:

u_W = t_0 \left( \int_0^{x_0} x f(x) \, dx + \int_{x_0}^{\infty} x_0 f(x) \, dx \right) = t_0 \left( \int_0^{x_0} x f(x) \, dx + x_0 \int_{x_0}^{\infty} f(x) \, dx \right)    (12)
Equation (9) can be modified to its final form by substituting from Equation (12):

C_{max} = c \, t_0 \left( \int_0^{x_0} x f(x) \, dx + x_0 \int_{x_0}^{\infty} f(x) \, dx \right)    (13)
In this equation the values Cmax, c and f(x) are given; the values of the warranty parameters t0 and u0 (or x0) are those to be specified. Equation (13) can be rearranged as follows:

t_0 = \frac{C_{max}}{c \left( \int_0^{x_0} x f(x) \, dx + x_0 \int_{x_0}^{\infty} f(x) \, dx \right)}    (14)
After modifying Equation (2) and substituting Equation (14), we get the following relation:

u_0 = x_0 \frac{C_{max}}{c \left( \int_0^{x_0} x f(x) \, dx + x_0 \int_{x_0}^{\infty} f(x) \, dx \right)}    (15)
The quantities t0 and u0 determine the values of the warranty parameters at the maximum accepted level of warranty costs. The value t0 can be calculated from Equation (14) and the value u0 from Equation (15) for an adequate number of selected values x0 from the interval (0; ∞) (in practice it is suitable to choose realistic minimum and maximum values of x0). The pairs of values obtained this way can be plotted in a graph (see Fig. 3) and interpolated with a curve. The curve in Fig. 3 divides the space into two parts – the field of acceptable values of the warranty parameters t0, u0 and the field of unacceptable values. The points situated on the curve determine the combinations of the warranty parameters t0 and u0 for which the warranty costs will be at the maximum acceptable level.
Fig. 3 Relationship of warranty parameters for determined level of warranty costs
5 Example of Practical Usage of the Proposed Procedure

The possibilities of practical use of the proposed method are demonstrated on an example of the determination of the parameters of a two-dimensional warranty for a lower-medium-class passenger vehicle manufactured in the Czech Republic.
5.1 Evaluation of Customers' Behavior Research

For the needs of solving the problem, the behavior of customers (vehicle owners) was studied to find the vehicle usage rate of individual customers. In total, almost 600 vehicle owners were surveyed. The survey encompassed vehicle owners whose vehicles had been put into operation no more than 7 years earlier. Every respondent was asked for the date his/her vehicle was put into operation and the overall number of kilometers covered up to the survey. From the information gathered, the usage rate of every vehicle was calculated as the average number of kilometers covered per year of use. The collected and adjusted data were statistically processed using the STATISTICA software [7, 8]. The histogram of the usage rate (kilometers covered per year of use) is shown in Fig. 4. During further processing, it was found using distribution fitting that the random variable (number of kilometers covered per year of use) can be well described by a log-normal distribution with parameters μ = 10.037 and σ² = 0.241 (many references confirm that the log-normal distribution is often an adequate choice for modeling mileage accumulation data [6]). A chart of the probability density function of this distribution is shown in Fig. 5.

[Figure 4: histogram of frequency versus usage rate x (1000·km/year).]

Fig. 4 Histogram of usage rate
[Figure 5: probability density function f(x) plotted against usage rate x (km/year).]

Fig. 5 Probability density function
5.2 Determination of Warranty Parameters at a Limited Level of Warranty Costs

For the determination of the warranty parameters presented here, the authors had no detailed data on the warranty costs of the specific type of vehicle available. Therefore, in the further calculations, expert-predicted unit warranty costs were applied:

c = 0.0002 €/km    (16)
The maximum level of warranty costs per vehicle was also chosen by expert estimate:

C_max = 14 €    (17)
Knowing both of these values as well as the results of the customer survey, the values t0 and u0 for appropriately selected values of x0 can be calculated using Equations (14) and (15). Because practical calculations with the probability density function are slightly complicated, all calculations were performed using the MathCad software. Fig. 6 shows a graph of the calculation results; the way the graph is constructed is described in detail with Fig. 3. The curve interpolating the individual points in Fig. 6 determines the combinations of the warranty parameters t0 and u0 at which the warranty costs will be at the maximum accepted level. The area above the curve is the area of unacceptable values of the warranty parameters; the area below the curve is the area of acceptable values.
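For readers without MathCad, the calculation can be reproduced with a short script. The sketch below uses the log-normal parameters, unit warranty costs and maximum warranty costs quoted above; the grid of x0 values is our own choice, and small numerical differences from Fig. 6 are to be expected.

```python
# Sketch reproducing the calculation of Section 5.2 with SciPy instead of MathCad.
# mu, sigma^2, c and Cmax are the values quoted in the paper; the x0 grid is ours.
import numpy as np
from scipy.stats import lognorm
from scipy.integrate import quad

mu, sigma2 = 10.037, 0.241          # fitted log-normal parameters (km/year)
c, Cmax = 0.0002, 14.0              # unit warranty costs (EUR/km), max costs (EUR)
dist = lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))
f = dist.pdf

for x0 in [15000, 20000, 25000, 30000, 35000]:      # candidate warranty usage rates
    partial_mean = quad(lambda x: x * f(x), 0, x0)[0]    # int_0^x0 x f(x) dx
    tail_prob = 1.0 - dist.cdf(x0)                       # int_x0^inf f(x) dx
    t0 = Cmax / (c * (partial_mean + x0 * tail_prob))    # Eq. (14), in years
    u0 = x0 * t0                                         # Eq. (15), in km
    print(f"x0 = {x0:6d} km/year  ->  t0 = {t0:4.2f} years, u0 = {u0:8.0f} km")
```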
[Figure 6: guaranteed operating time u0 (1000·km) plotted against guaranteed calendar time of use t0 (years).]
Fig. 6 Relationship of warranty parameters for determined level of warranty costs
6 Conclusion

Knowing the behavior of the customers and the level of unit warranty costs, the supplier can, using the suggested procedure, quite easily specify the parameters of a two-dimensional warranty for the maximum warranty costs he is able to invest. This method can be used especially when determining the scope of warranties when placing already established products on new markets where different customer behavior might be expected.
References
[1] Birolini, A.: Reliability Engineering – Theory and Practice. Springer, Berlin (2004)
[2] Blischke, W.R., Murthy, D.N.P.: Warranty Cost Analysis. Marcel Dekker, New York (1994)
[3] Blischke, W.R., Murthy, D.N.P.: Product Warranty Handbook. Marcel Dekker, New York (1996)
[4] Ebeling, C.E.: An Introduction to Reliability and Maintainability Engineering. McGraw-Hill, New York (1997)
[5] Hoang, P.: Handbook of Reliability Engineering. Springer, Berlin (2003)
[6] Krivtsov, V., Frankstein, M.: Nonparametric Estimation of Marginal Failure Distributions from Dually Censored Automotive Data. In: Proc. Ann. Reliability & Maintainability Symp. IEEE, Piscataway (2004)
[7] Vintr, Z., Vintr, M.: Estimate of Warranty Costs Based on Research of the Customer's Behavior. In: Proc. Ann. Reliability & Maintainability Symp. IEEE, Piscataway (2007)
[8] Vintr, Z., Vintr, M.: Influence of Customer Behaviour on Passenger Car Warranty Cost. In: Risk, Reliability and Societal Safety: Proceedings of the European Safety and Reliability Conference ESREL. Taylor & Francis Group, London (2007)
Open Interoperable Autonomous Computer-Based Systems, Systems-of-Systems and Proof-Based System Engineering

Gérard Le Lann and Paul Simon*
Gérard Le Lann, Institut National de Recherche en Informatique et en Automatique, Ministry of Research and Ministry of Industry, France; e-mail: [email protected]
Paul Simon, Délégation Générale pour l'Armement, Ministry of Defense, France; e-mail: [email protected]

Abstract. Numerous roadblocks can be encountered when managing projects directed at deploying complex computer-based systems, or systems of systems (SoS), bound to operate autonomously. Existing system engineering (SE) methods and supporting tools are not applicable as they stand for mastering the complexity involved with modern (current, future) applications and/or operations, in the civilian domain as well as in the defense domain. We report on the outcomes of a study sponsored by French DGA, directed at investigating issues raised with autonomous systems, such as robots or drones, as well as with systems of such systems, such as fleets or swarms of terrestrial, or underwater, or aerial, autonomous systems. This study should be continued and expanded under a European programme. Slashing the acquisition costs of autonomous systems has been the primary motivation for the launching of this study, hence the focus on openness and interoperability. It was also decided to test the applicability of formal/scientific proof-based SE (PBSE) methods for managing the lifecycle of such systems. One goal pursued during this study was to explore the following double conjecture: (1) Is it the case that greater reliance on, and exploitation of, exact sciences should help circumvent the weaknesses intrinsic to current SE methods? (2) If the case, how to "hide" the introduction of exact sciences within the SE processes followed by engineers working on a project? The fact that autonomy and interoperability were two major keywords in that study matched ideally with the goal of exploring PBSE methods, since proofs of
stipulated properties (future operational behaviours) are of utmost importance with systems and SoS meant to operate autonomously, in cooperation with others, be they the result of SE work planned ahead of time, or be they ad hoc SoS, set up in limited time in operational theatres. The role of the manufacturers of robots and drones that participated in the study was threefold. Firstly, they were responsible for bringing in real-world scenarios of SoS in three domains (aerial, terrestrial, underwater). Secondly, they had to participate in the deployment of the PBSE methods for these SoS. Thirdly, they had to draw conclusions from their direct exposure to PBSE. French DGA was also directly involved, in order to gain a better understanding of what PBSE may offer to a prescribing authority. The study has produced convincing cases in favour of PBSE, from both a "theoretical" viewpoint and a "practical" viewpoint. On the "theoretical" side, it was demonstrated that semi-formal PBSE methods are inevitable, given that current formal PBSE methods suffer from limitations, especially regarding (1) requirements capture phases, (2) identification of generic problems and solutions, (3) automated reuse of existing design solutions and proofs during system design & validation phases. A rather encouraging lesson has been learned: when combined together, formal and semi-formal PBSE methods can encompass an entire project lifecycle, maintaining a continuous "proof chain" all the way through. On the "practical" side, besides meeting the contractual goals, such as showing how to encapsulate scientific results in order to make them "one click away" for project engineers, the study led to the inception of a novel lifecycle model, rooted in PBSE principles, while being fully compatible with popular SE lifecycle models, such as, e.g., ISO/IEC 15288. Hence, regarding standards, the study reached beyond the intended goals. Rather than delivering proposals for technical standards only, the PBSE-centric lifecycle model turned out to be a quite attractive basis for a methodological standard. This resulted in the OISAU methodological standard for open interoperable autonomous systems and SoS.
1 The OISAU Study in a Nutshell A study named OISAU, sponsored by French DGA, was conducted in 2008–2009 by a consortium of three French companies. OISAU stands for “Ouverture et Interopérabilité des Systèmes AUtonomes”. Issues that arise with autonomous systems and with autonomous systems of (autonomous) systems (SoS) were investigated, for three types of environments/theatres, namely terrestrial, aerial, and underwater. Fleets or swarms were assumed to comprise heterogeneous robots and/or drones, of various origins (manufacturers, countries). In the sequel, we use the term system to refer to systems or to systems-of-systems, indifferently. The following four major objectives were set for this study: • To carry out the work involved with Requirements Capture – the first phase in lifecycles – by resorting to formal and/or scientific methods,
• To gain a better understanding of “how much” autonomy is achievable with the current state of the art (technologies, sciences), • To identify standards (for autonomous systems) needed for openness (“ouverture”) and interoperability (“interopérabilité”), • To set the stage for a subsequent main study, on the same topics, at a European level. The initial motivation that led DGA to launch the OISAU study lies in the ever-increasing acquisition costs of autonomous systems, as well as in the costs of projects or programmes directed at specifying and developing such systems. The focus on openness and interoperability derives from this initial motivation. The fact that autonomy and interoperability were two major keywords in that study matched ideally with the goal of exploring formal/scientific methods, since proofs of stipulated properties (future operational behaviours of operational systems) are of utmost importance with systems meant to operate autonomously, in cooperation with others, be they the result of designs planned and validated by engineers working in offices, or be they set up in limited time by authorized designated people located in operational theatres (ad hoc SoS). Two methods were deployed in the course of the OISAU study, the B method (Abrial 1996) and the TRDF method (Le Lann 1998), (Le Lann 1999). In this paper, we report on the work carried out with the TRDF method. Final OISAU reports are available from DGA (DGA/OISAU 2009). A number of findings, including unexpected ones, resulted from the OISAU study. Also, it turned out that the work conducted went beyond the originally targeted topics: • Results encompass all major phases in a lifecycle, not just the Requirements Capture phase, • Recommendations for standards are not restricted to technical standards, as originally anticipated. In fact, a major achievement of the OISAU study is the definition of a methodological standard. This methodological standard is an instantiation of the basic principles that underlie Proof-Based System Engineering (PBSE), a scientific extension of traditional system engineering (SE) practice. The implicit challenge posed by DGA with the required use/exploration of formal and/or scientific methods could be summarized as follows: “Show us how to tap relevant results established in appropriate scientific disciplines, and how to “import” them into our projects and programmes conducted with our contractors/developers … without knowing it”. This challenge has been met with OISAU. As can be guessed, the motivation behind that challenge is a very pragmatic one: how to cure or circumvent the weaknesses of current SE methods and processes. Due to space limitations, detailed information regarding PBSE (such as, e.g., fundamental principles, associated SE processes and techniques) cannot be repeated here. We refer the interested reader to the existing literature – see References.
2 Weaknesses in Current SE Practice Remember the “faster, better, cheaper” motto popularized by NASA and US DoD in the 90’s? Why is it that these goals remain unattained? In this paper, as was the case in the OISAU study, we restrict our analysis to system engineering (SE) issues as they arise with computer-based systems (informatics, information systems) – denoted CBS.
2.1 Requirements Capture Basically, a Requirements Capture (RC) phase in SE lifecycles directed at CBS consists in translating some initial documents describing application/operational requirements – denoted OR – into a specification of system-centric requirements – denoted E. OR is written in some natural language, i.e. prone to ambiguities, misinterpretations, contradictions, and so on. Ideally, E is a demonstrably faithful translation of OR, stated in some language such that E is non-ambiguous, self-consistent, complete, and so on. The “purist” view mandates formal languages. The “pragmatic” view mandates natural languages. Neither of these “extremist” views is satisfactory. Current RC processes lack rigor, a well known and well documented fact. Major weaknesses are as follows: • RC documentation contains hundreds of requirements, mixing application level concerns and solution-centric considerations, functional and non functional requirements, contradictions, TBDs (“mute” requirements that turn out to be overlooked or forgotten in subsequent lifecycle phases). • Functional analyses are based upon particular scenarios (or “use cases”), with no means provided for inferring the complete set of possible scenarios that may/will be encountered by a system while in use/operation. • Non functional analyses do not rest upon verifiable calculi. • Previous designs and/or systems and/or sub-systems are hardly reusable. Justifying whether a given design or system or sub-system elaborated for some past project may be reused as such, or not, for a new project, usually is a daunting task. • Integration of COTS products into new designs/systems faces the same difficulties, due to the fact that the technical documentation made available with COTS products is almost always inadequate. These weaknesses are serious roadblocks for engineers working on a project, in offices, targeting the delivery of an operational system months or years later. These weaknesses are even more severe obstacles for those authorized agents in charge of setting up a system-of-systems (SoS), upon short notice, within a few days, in an operational theatre. Among the systems that are available “on the spot”, which are interoperable? How can one “demonstrate” in a few dozen hours that an ad hoc SoS, made out of supposedly interoperable systems, is correct for a stipulated mission?
Lack of rigor has well known detrimental consequences: operational failures, costs and/or delays out of control, cancellations of projects. There is evidence that investing more time and higher budgets into RC phases results in big payoffs – see Figure 1, excerpted from (Bowen and Hinchey 2006), which shows that moving the share x of project budget allocated to Requirements Capture from x = 2% to x = 5% results in a 50% reduction of the overrun costs y (overrun costs divided by 2).
Fig. 1 An example of quantified savings achieved with small increases of budgets allocated to Requirements Capture (by courtesy of the IEEE Computer Society)
Moreover, given that E is almost never a specification (of system-centric requirements) in the strict sense, no validation, in the strict sense, can be conducted in subsequent lifecycle phases. Stated differently, no correctness proofs can be established in the course of an SDV phase (see below), when a system (solution) specification is being constructed. As explained below, this is another reason why the “faster, better, cheaper” goals remain elusive.
2.2 System Design and Validation Basically, a System Design & Validation (SDV) phase in SE lifecycles directed at CBS consists in delivering a system (solution) specification – denoted SYS – which is shown to be “valid” in reference to E. Too often, “validation” boils down to simulations, (rapid) prototyping, testing, and so on, which is totally antagonistic to the “faster, better, cheaper” goals. Clearly, no development activities, which take time and money, should precede validation work. The only sensible approach is reasonably obvious: proof obligations must be met first, showing that specification SYS satisfies/implies specification E. One can then proceed with development, i.e. implementation of (validated) SYS, without risking being forced to cancel ongoing implementation work because “there is something wrong with the original specification”. With SYS at hand, formal or informal methods can be
resorted to in order to conduct and verify hardware and/or software implementations of SYS modules. Hence, in a lifecycle, PBSE-driven work necessarily precedes activities aimed at developments and verifications, be they conducted with formal or informal methods. Once done for a pair {E, SYS}, proofs need not be redone whenever system problem E is encountered anew and SYS is an acceptable solution vis-à-vis feasibility conditions & dimensioning. If that is not the case (despite being correct, SYS might be “too slow” or entail too high an overhead), another solution SYS’ must be looked for in the open literature, or devised and specified, and proofs built. Existing solutions and their companion proofs can be used right away whenever needed. It is easy to see that reusability of designs and their companion proofs is an extraordinarily efficient feature vis-à-vis reaching the “faster, better, cheaper” goals. For reasons that could be valid in the past, it has long been believed that it is not possible to meet correctness proof obligations when considering specifications of CBS. Over the past 40 years or so, a huge body of results has been established by engineers and scientists in the area of CBS which contradicts this mistaken belief, especially with regard to trans-functional requirements (see §3.2). Times are changing: SE for CBS cannot be less rigorous than the engineering disciplines in daily use in other areas (electrical engineering, civil engineering, and so on). The term “proof” being assigned various meanings, it is important to understand that “proofs” should not be seen as implying formal techniques. Proofs can be formal (e.g., theorem proving; note that “model checking” or “testing” is not proving) or semi-formal (“conventional” mathematics). Up to now, the domains where formal proofs have been applied successfully are rather restricted. For the vast majority of real world problems, existing formal proofs cannot be considered (see §2.4). Consequently, most often, there is no choice: semi-formal proofs shall be considered/reused. For the sake of illustration, imagine that one of the problems stated in specification E is a well known problem in graph theory – e.g., shortest path. Why should engineers be forced to waste their time reinventing existing proven solutions, e.g. the Dijkstra algorithm? And there is nothing wrong with the fact that Dijkstra’s proofs are not formal.
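To make the reuse argument concrete, here is a minimal, hypothetical sketch (not taken from the OISAU reports): rather than re-deriving a shortest-path algorithm and its proof, a project engineer calls a published, proven solution. The weighted graph and node names are invented for illustration, and the networkx library is used merely as one example of packaged reuse.

```python
# Hedged sketch: reusing a proven generic solution instead of reinventing it.
# The graph below is hypothetical; Dijkstra's algorithm and its correctness
# proof come from the open literature and need not be redone by the project.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("depot", "relay1", 4), ("depot", "relay2", 2),
    ("relay1", "target", 5), ("relay2", "target", 8),
    ("relay1", "relay2", 1),
])

path = nx.dijkstra_path(G, "depot", "target")          # proven solution, reused as-is
cost = nx.dijkstra_path_length(G, "depot", "target")
print(path, cost)   # ['depot', 'relay2', 'relay1', 'target'] 8
```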
2.3 Feasibility and Dimensioning Any real world system must meet functional and trans-functional requirements that involve physics, as is the case, e.g., with fault-tolerance (resiliency) and real time. Since any real world system is endowed with finite resources (computing power, memory/storage space, I/O bandwidth, and so on), it is necessary to compute the minimal “amount” needed for each resource in order to meet such functional and trans-functional requirements. For example, one must be able to “bind together” the following pairs of variables: • Loads (densities of event arrivals that trigger processes) and process termination deadlines – they both increase or they both decrease according to “laws” which must be expressed,
• Density of failure occurrences and number of algorithmic rounds for reaching termination – they both increase or they both decrease according to “laws” which must be expressed, • Worst-case sojourn times in a waiting queue and scheduling algorithm – formulae giving smallest upper bounds of sojourn times, which are attained with an optimal scheduler, must be expressed, • Smallest acceptable size of waiting queues and worst-case sojourn times – they are linked by formulae which must be expressed, • Worst-case execution times in the absence of failures and worst-case execution times in the presence of failures (for various failure models and for given densities of failure occurrences) – they are linked by formulae which must be expressed. Such “bindings” are correctly established by resorting to analytical techniques. For the first bullet, analytical techniques commonly used in scheduling theory are an obvious choice (a minimal sketch is given at the end of this subsection). For the last bullet, fixed point calculations are appropriate – such calculations are resorted to in some of the reports produced for the OISAU scenarios (see §6.1), for deriving worst-case end-to-end delays in the presence of failures. Unfortunately, analytical techniques, notably those that permit predicting worst-case system behaviours in the presence of worst-case adversary scenarios, are seldom used under current SE practice. Simulations and stochastic analyses are useful. However, they do not help in predicting worst cases. As a result, assertions made about the feasibility of stipulated requirements and/or the correctness of the dimensioning of a system’s resources are not well substantiated. Therefore, systems (including their human environments) that are deployed are put at risk. Mistaken system dimensioning is known to be one of the major causes of operational failures. Meeting proof obligations in the course of an SDV phase has a very valuable by-product, commonly called feasibility conditions (FCs). FCs are those analytical “bindings” which are missing under current SE practice, and which are made mandatory under a PBSE approach (see §3.1).
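As an illustration of the kind of analytical “binding” meant here, the sketch below applies classical fixed-point response-time analysis from scheduling theory to a hypothetical preemptive fixed-priority task set. It is a toy example under stated assumptions (independent periodic tasks, deadline equal to period), not one of the calculations from the OISAU reports.

```python
# Hedged sketch: fixed-point computation of worst-case response times,
# binding loads (C_j, T_j of interfering tasks) to termination deadlines.
# The task set is hypothetical; deadlines are assumed equal to periods.
import math

def worst_case_response_time(c, higher_priority, deadline):
    """Smallest fixed point of R = c + sum(ceil(R/T_j) * C_j); None if infeasible."""
    r = c
    while r <= deadline:
        r_next = c + sum(math.ceil(r / t_j) * c_j for c_j, t_j in higher_priority)
        if r_next == r:
            return r             # fixed point reached: worst-case response time
        r = r_next
    return None                  # the iteration exceeded the deadline

tasks = [(1, 4), (2, 6), (3, 12)]   # (worst-case execution time, period), highest priority first
for i, (c, t) in enumerate(tasks):
    r = worst_case_response_time(c, tasks[:i], deadline=t)
    print(f"task {i}: worst-case response time = {r}, deadline = {t}")
```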
2.4 Conclusions It is becoming irrefutably obvious that “more science in SE practice” is the privileged approach for meeting the “faster, better, cheaper” challenge. It is even more obvious that “more science in SE practice” is a realistic target with trans-functional requirements. Real world stakeholders must take notice. When the early tools supporting the OISAU methodological standard emerge – see §4 – tapping the established relevant technical and scientific state of the art will be performed fully transparently in the course of projects/programmes undertaken by system designers/integrators who will have acquainted themselves with PBSE. For others, the risks of becoming non-competitive might be high. The scientific community must also take notice, since it bears the responsibility of working out validated solutions for real world problems, which are compositions of “unitary” problems, those traditionally explored by scientists. Examples of composite real world
problems would be combinations of such “unitary” problems as real-time, distribution, fault-tolerance, security, data consistency, safety. Published solutions for composite real world problems are rather exceptional. There is an urgent need for the scientific community to start addressing problems “in the large”, under assumptions (computational models, failure semantics, laws of failure occurrence, event arrival models, timeliness constraints, and so on) that match reality – rather than assumptions that make it easier to build correctness proofs.
3 Lessons Learned with OISAU Before being accepted and “sealed” in the OISAU reports, the recommendations related to PBSE and OISAU standards that emerged from the OISAU work were tested on scenarios that were extensions (increased autonomy) of real world scenarios. These scenarios were provided by the companies that build drones and/or robots, members of the OISAU consortium, and approved by representatives of governmental bodies. The lessons reported below have been learned via these tests.
3.1 Migration from SE to PBSE Is an Evolutionary Process Lesson 1: Migration from current SE methods toward PBSE methods is an evolutionary process. One requirement in the OISAU study was “Proposals must be compatible with existing SE methods (gradual evolution)”. How this requirement has been met is shown in Figure 2, where one sees three PBSE extensions to the well known ISO/IEC 15288 standard (ISO/IEC). The purpose of these PBSE extensions is to eliminate the weaknesses reported in §2. Some real world prescriber (“client/user”) wants to be delivered a CBS that satisfies his/her application/operational requirements, under his/her environmental and technological assumptions. Let us assume that such a request/call-for-tender has not been processed previously (i.e. one must go through all lifecycle phases). As will be seen further (§4), this is not always the case, fortunately, and there are ways of knowing whether or not the unrolling of all phases is necessary.
RC (Requirements Capture) phase
Input: Initial documents describing application/operational requirements and assumptions, in natural language (OR).
Output: Specification E = [{m.E}, {p.E}]. Models of the system-centric environment and technology (assumptions) are specified in {m.E}. Required system-centric properties (functional and non-functional requirements) are specified in {p.E}.
E must be verifiably derived from OR. Questionnaires serve this purpose. The questionnaire germane to the TRDF PBSE method encompasses three domains (distribution, real-time, resiliency). A digest is given in (Le Lann 2007). The class of failure models found in this questionnaire may serve as an illustration. Failure models/semantics have been formalized by the scientific community. Moreover,
since this class is associated with a partial order, stakeholders can explore the lattice and pick a particular model for a specific element (environment, system), knowing what is implied by any particular choice in terms of complexity (i.e. the degree of “aggressiveness” exhibited by an adversary) and coverage (Powell 1992). E is the specification of a system problem, stated in restricted natural language (computer science vocabulary, the vocabularies of other scientific disciplines). It is therefore possible to check whether E is complete, non-ambiguous, and self-consistent. What is implied by problem specification E is as follows: a system solution SYS is sought such that (1) properties ensured with SYS are “stronger” than {p.E}, (2) assumptions that underlie SYS are “weaker” than {m.E}.
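As a hedged illustration of the failure-model exploration just described, the sketch below encodes one chain through such an ordering (crash, omission, timing, Byzantine). The real structure in (Powell 1992) is a richer lattice; the chain is used here only to show the complexity/coverage trade-off stakeholders face when picking a model.

```python
# Hedged sketch: a simplified, single-chain view of a failure-model ordering.
# Choosing a "stronger" adversary model increases analytical complexity but
# also increases assumption coverage; the chain below is illustrative only.
ADVERSARY_ORDER = ["crash", "omission", "timing", "byzantine"]   # weakest to strongest

def covers(assumed_model, actual_model):
    """True if a solution validated under `assumed_model` also covers `actual_model`."""
    return ADVERSARY_ORDER.index(assumed_model) >= ADVERSARY_ORDER.index(actual_model)

print(covers("timing", "crash"))      # True: timing-failure assumptions subsume crashes
print(covers("timing", "byzantine"))  # False: Byzantine behaviour is not covered
```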
[Figure 2 overlays the three PBSE phases – RC, SDV, and FD – as extensions on the process groups of the ISO/IEC 15288 lifecycle model (enterprise, agreement, project, and technical processes).]
Fig. 2 PBSE “augments” traditional SE
SDV (System Design & Validation) phase
Input: Problem specification E = [{m.E}, {p.E}].
Output: Complete solution S(E). S(E) includes the following 4 elements:
- Specification of the system solution SYS (specification to be implemented),
- Proofs, or links/hyperlinks/pointers to existing proofs (SYS satisfies E),
- Feasibility conditions of the couple {E, SYS}, denoted FCs,
- Specification of a software program which “implements” FCs – denoted Φ(E, SYS).
Usually, FCs are a set of analytical constraints linking problem variables (appearing in E) together with solution variables (appearing in SYS). When an SDV phase is completed, results are materialized in the form of a Technical Leaflet – denoted TL[E, S(E)] – which comprises specification E and S(E).
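A minimal sketch of how a Technical Leaflet could be represented in a tool, mirroring the four elements of S(E) listed above; the field names and sample values are hypothetical, not the formats actually defined in the OISAU reports.

```python
# Hedged sketch: one possible in-memory shape for TL[E, S(E)].
from dataclasses import dataclass, field
from typing import List

@dataclass
class TechnicalLeaflet:
    problem_E: str                                    # problem specification E = [{m.E}, {p.E}]
    solution_SYS: str                                 # specification of the system solution SYS
    proof_links: List[str] = field(default_factory=list)        # proofs, or pointers to proofs
    feasibility_conditions: List[str] = field(default_factory=list)  # analytical constraints (FCs)
    fc_program_spec: str = ""                         # specification Phi(E, SYS) implementing FCs

# Hypothetical entry.
tl = TechnicalLeaflet(
    problem_E="E_distributed_tracking",
    solution_SYS="SYS_consensus_based_tracker",
    proof_links=["pointer to published proof"],
    feasibility_conditions=["worst-case response time <= deadline"],
)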
FD (Feasibility & Dimensioning) phase
This phase can be invoked as often as desired, once specification Φ(E, SYS) has been implemented. The resulting software program is referred to as a feasibility-and-dimensioning oracle, denoted FD-Oracle(E, SYS). Invoking an FD phase consists in exercising FD-Oracle(E, SYS), so as to assign values to variables in FCs, and to check whether or not FCs are violated. If at least one of the constraints in FCs is violated, FD-Oracle(E, SYS) returns a “no”, and the violated constraint(s) is/are exhibited. When there are no FC violations, FD-Oracle(E, SYS) returns a “yes”, as well as a correct dimensioning of system variables (appearing in SYS), i.e. values computed by the Oracle so as to match values freely assigned to problem variables. For example, one picks desired values for termination deadlines and densities of failure occurrences (problem variables). FD-Oracle(E, SYS) returns the smallest acceptable activation period of the scheduler, as well as the smallest acceptable degree of redundancy.
Input: A potential valuation of problem variables.
Output: Either “no” or “yes”. If “yes”, a correct valuation of solution variables is returned.
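The sketch below shows the shape of such an oracle on two deliberately toy feasibility conditions (a crash-failure redundancy bound and a scheduler-period bound); these formulas are hypothetical stand-ins, not the FCs derived in the OISAU scenarios.

```python
# Hedged sketch of an FD-Oracle: given freely chosen problem variables, it
# either reports violated constraints or returns a dimensioning of solution
# variables. The two constraints below are toy examples, not real OISAU FCs.
def fd_oracle(deadline_ms, failures_to_tolerate):
    if deadline_ms <= 0 or failures_to_tolerate < 0:
        return "no", ["inconsistent valuation of problem variables"]
    dimensioning = {
        "degree_of_redundancy": failures_to_tolerate + 1,   # toy FC for a crash-failure model
        "scheduler_period_ms": deadline_ms / 3,             # toy timeliness FC
    }
    return "yes", dimensioning

print(fd_oracle(deadline_ms=90, failures_to_tolerate=2))
# ('yes', {'degree_of_redundancy': 3, 'scheduler_period_ms': 30.0})
```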
3.2 “Functional versus Non Functional” Is Too Crude a Dichotomy Lesson 2: The concept of trans-functional requirements is missing. The terminology itself is sometimes fuzzy. There is no controversy regarding what is meant by “functional requirements”. Conversely, there is no widespread agreement on how to label such requirements as, e.g., “real-time” or “fault tolerance”. Depending on the sources, these requirements may be categorized as “functional” or “non functional”. In the course of OISAU, we have come to the conclusion that it is convenient to discriminate between requirements that are to be met on line (by a system in use) and requirements that are to be met off line (while a system is not being operated). In the former category, referred to as “trans-functional” requirements, one finds “real-time”, “fault tolerance”, and so on, as well as those requirements related to global on line properties, such as, e.g., distributed data consistency, distributed synchronization, global time, and so on. In the latter category, referred to as “non functional” requirements, one finds “maintainability”, “plug-and-play”, “dismantling”, and so on. This clear separation between “functional” requirements and “trans-functional” ones was found to be very convenient since it matches almost perfectly the distinction between, respectively, problems that have no pre-existing solutions and/or companion proofs, and problems – often referred to as generic problems – that have pre-existing (and published) solutions and companion proofs.
3.3 Existing Solutions and Companion Proofs Can Be Tapped Lesson 3: Proof obligations can be met … without having to do the proofs. Contrary to mandatory practice in formal software engineering, “doing the proofs” does not stand on a project’s critical path with PBSE. This is due to the fact that SE is mostly concerned with trans-functional problems. A vast majority of such problems are generic: they are taught at universities, and published solutions along with correctness proofs are available in the open literature. Mutual exclusion in distributed systems in the presence of failures would be a good example. Therefore, it would be absurd – and counterproductive – to ask engineers working on a project, most often under time pressure, to divert time and energy in order to “reinvent” existing solutions and proofs. Having said this, the question that arises is “How do engineers know (that they should not be “re-inventing”)?” Tools, in particular tools that store Technical Leaflets, are the answer. Conclusions of the OISAU study have confirmed intuition. It has been demonstrated that semi-formal PBSE methods are inevitable, given that current formal PBSE methods suffer from limitations, noticeable in (1) requirements capture phases, (2) identification of generic problems and solutions, (3) automated reuse of existing design solutions and proofs (during system design & validation phases). A rather encouraging lesson has been learned: when combined, formal and semi-formal PBSE methods can encompass an entire project lifecycle, maintaining a continuous “proof chain” all the way through.
3.4 Nothing Specific with COTS Products Lesson 4: Under a PBSE approach, issues raised with COTS products are just … ordinary SE issues. How can one cope with COTS products in rational ways? Technical Leaflets (TLs) are the answer. Granted, except on very rare occasions, COTS products do not have companion TLs. Building a TL a posteriori for some COTS product X entails some reverse engineering work, necessarily conducted in cooperation with the company that manufactures X. Companies that build COTS products for mass markets may refuse to cooperate. In the defense sector, the rules of the game are – or can be – slightly different, given that the systems under consideration are life-, mission-, or environment-critical. Consequently, inclusion of a COTS product X into such systems can/should be decided only after it has been shown that X meets a given problem specification E*. Three cases may occur: • The manufacturer of X cooperates. Reverse engineering work leads to finding out E(X) – the specification of the problem “solved” by X. RC questionnaires are to be used. Reverse engineering also leads to finding out SYS(X) – the specification of the system solution implemented by X, along with its proofs and FCs, which yields complete solution S(E(X)). TL[E(X), S(E(X))] is created and stored in matrix E/S(E) – see §4.2 – possibly subject to confidentiality and/or access restrictions.
It is then quite straightforward to check whether E(X) matches (X can be used) or does not match (X should not be used) specification E* arrived at in the course of an SDV phase. • The manufacturer of X does not cooperate. Either:
o Product X is banned, or
o Product X is declared usable, provided that the manufacturer guarantees that X is a correct product vis-à-vis specification E* – the manufacturer is invited to sign a document attesting that he/she is knowledgeable of E*, as well as of the responsibilities endorsed in case the cause of an operational failure could be traced back to product X.
3.5 PBSE Practice Can Be Supported by Tools, in Conformance with a Methodological Standard Lesson 5: Knowledge which has been paid for once does not evaporate, and its reuse can be automated. Sound principles are fine. However, in real projects, engineers accomplish their work by resorting to specific SE processes, assisted by tools. The bridging of the “gap” between sound principles and PBSE-driven processes is by no means original. It is the very same type of bridging found in other engineering disciplines, namely: • Knowledge (RC questionnaires, problem specifications, solution specifications, TLs) is stored in knowledge bases, databases, and repositories – for details, see §4. This is how the outputs of the RC, SDV, and FD phases are made accessible and reusable at will. • Search engines for “navigating” within, and enriching, the body of accumulated knowledge are developed. • Software programs that implement FD-Oracles are developed. All the above is to be found in PBSE tools. Portable formats are defined for TLs, as well as convenient tool interfaces. It is now possible to describe the OISAU methodological standard.
4 The OISAU Methodological Standard When defining the OISAU methodological standard, special attention has been paid to certification issues, namely, how to simplify the task of those experts who will be in charge of checking whether a given system or sub-system has been designed in conformance to the OISAU standard.
[Figure 3 shows the OISAU methodological flow as a decision diagram: incoming operational requirements OR enter at MR-1 (has OR been processed previously? has it been translated into a system problem specification E?); if not, the specification E is built (MR-2, Certification 1); an Existence Test then checks whether E contains known problems, and an Availability Test whether those problems have been processed previously; if not, a complete solution S(E) – which includes a validated specification of a system solution SYS – is built (MR-3, Certification 2); finally, TL[E, S(E)] is built and new entries for OR are created in the linking matrices (MR-4). When a TL[E, S(E)] that matches OR is already available through the linking matrices, Certifications 1 and 2 have been previously granted and the flow exits immediately.]
Fig. 3 The OISAU methodological standard
One striking feature of existing SE methods and lifecycle models is the absence of a test at the “entry” of two major phases: • RC: Has this document (OR) been processed previously (i.e., translated into E)? • SDV: Has this problem specification E been processed previously (i.e., do we have a solution & proofs recorded and ready-to-use)? In other words, no means are provided to prevent engineers from redoing what has been done in the past. This is another weakness dealt with in the OISAU standard, and eliminated with MR-1.
4.1 Methodological Requirements Methodological Requirements MR are shown on Figure 3, as well as OISAU Certifications 1 and 2. Every product (system element, system, SoS) that has gone through both OISAU Certification levels successfully can be fielded, with the guarantee that it will always cope successfully with any “adversary” capable of deploying every possible scenario “encapsulated” within its associated FCs. MR-1 Check whether operational requirements OR have been worked out previously. This check rests on matrix OR/E. If the case, check whether OR have been worked out fully. If the case, a matching specification E exists already, and a Technical Leaflet TL[E, S(E)] is available in matrix E/S(E). If not the case, check whether previous work has stopped after E was built.
MR-2 If no specification E exists, one must be built and stored in matrix OR/E. Certification 1 is granted under this condition. Therefore, Certification 1 “tells” that an RC phase is completed. When a specification E is examined, one checks whether a solution S(E) has been constructed in the past (see §3.1). If E contains known problems and each of them has already been processed, then a Technical Leaflet TL[E, S(E)] is available in matrix E/S(E).
MR-3 Specification E has never been fully processed before. Hence, no solution S(E) exists. One must be built. Certification 2 is granted under this condition. Therefore, Certification 2 “tells” that an SDV phase is completed.
MR-4 Check whether the pair {E, solution S(E)} has been recorded as a Technical Leaflet TL[E, S(E)]. If not, build Technical Leaflet TL[E, S(E)] and store it in matrix E/S(E).
4.2 Matrices
Matrix OR/E. Every set of operational requirements OR that has been processed is assigned a line/entry in this matrix. This matrix has 3 columns: • Column 1: Name of set OR (name given by client/user) • Column 2: Name of the matching composite problem E (name given by those in charge of the OISAU standards) • Column 3: Names of the unitary problems that appear in problem E (names given by those in charge of the OISAU standards). Matrix OR/E is utilized for conducting checks associated with MR-1 and MR-2.
Matrix E/S(E). Every (unitary, composite) problem specification E which has been processed is assigned a line/entry in this matrix. This matrix has 4 columns: • Column 1: Name of problem E (name given by client/user) • Column 2: Free variables in specification E • Column 3: Name of the solution S(E) and a reference pointing at Technical Leaflet TL[E, S(E)] • Column 4: Free variables in specification SYS. Free variables in E and in SYS are those variables that are assigned values when an FD-Oracle is run. Matrix E/S(E) is utilized for conducting checks associated with MR-3 and MR-4.
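As a hedged sketch of how these linking matrices and the entry tests of MR-1 and MR-3 could be mechanized in a tool, the code below uses plain dictionaries; every key, entry, and identifier is invented for illustration and does not come from the OISAU repositories.

```python
# Hedged sketch: the two linking matrices as dictionaries, plus the entry
# tests behind MR-1 (OR already processed?) and MR-3/MR-4 (solution exists?).
matrix_OR_E = {   # Matrix OR/E: client-named OR -> composite problem and unitary problems
    "OR_convoy_escort": {"problem": "E_convoy_escort",
                         "unitary_problems": ["DC-FT3", "IC-RT1"]},
}
matrix_E_SE = {   # Matrix E/S(E): problem -> free variables, solution, Technical Leaflet
    "E_convoy_escort": {"free_problem_vars": ["deadline", "failure_density"],
                        "solution": "SYS_consensus_tracker",
                        "technical_leaflet": "TL[E_convoy_escort, S(E_convoy_escort)]",
                        "free_solution_vars": ["degree_of_redundancy"]},
}

def previously_processed(or_name):
    """MR-1 entry test: has this set of operational requirements been worked out before?"""
    return matrix_OR_E.get(or_name)

def solution_available(problem_name):
    """MR-3/MR-4 entry test: is a complete solution S(E) recorded for this problem?"""
    return matrix_E_SE.get(problem_name)

entry = previously_processed("OR_convoy_escort")
if entry and solution_available(entry["problem"]):
    print("Reuse the existing Technical Leaflet: no new RC or SDV work is needed.")
```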
Technical Leaflets are stored in the repositories separately from the matrices. Every item appearing on a line of matrix E/S(E) is a candidate for technical standardization (see §6.2).
5 The European Dimension As was mentioned previously, the OISAU study should be continued in the form of a European project (a Programme d’Etudes Amont, in DGA terminology). It may well be that we are getting close to entering an interesting era in the history of technology, namely one in which we know how to come to grips, successfully and affordably, with the complexity of CBS, especially critical autonomous CBS. Indeed, there is a noticeable convergence between visions recently expressed by the formal software community on the one hand, and the proof-based system community on the other hand. That convergence is especially striking with RC issues. Given that exploring the applicability of formal/scientific methods to RC issues was one of the initially stated major objectives of the OISAU study, it is worth elaborating on this observed convergence. In 2007, one of the most respected scientists in formal software methods wrote the following (Hoare 2007): But the second weakness remains. It lies at the critical point of the whole endeavour: the very first capture of the properties of the real world, of its inhabitants, and of the expectations they have of a new or improved computer system. If we get those wrong, none of our software will be dependable, and none of the verification tools will even detect the fact. (An initial proposal) is written in natural language. It describes the real world. It concentrates on the environment in which a new or improved computer system will operate. It lists the external environmental constraints on the system, including constraints of material and technology and costs and timescales…. There are no better words for acknowledging the existence of a huge territory that has remained unexplored by the formal software community, which is now invited to address the issue of how to bridge the huge gap between fancy formalisms and simplifying assumptions/axiomatics on the one hand, and the “ugly” reality (asynchrony, failures, non predictable conflicts/contention, non cooperative environments, timeliness requirements in the presence of overloads, and so on) on the other hand. An initial proposal is never written in temporal logic – to pick one example. Will it always be possible to translate the “ugly” reality, described in natural language, into temporal logic specifications? There are good reasons to believe that the answer is no. A large number of real world requirements do not boil down to mathematical logic, but, rather, to combinatorial optimization, scheduling/game theory, analytical calculus, queuing theory, stochastic analyses, to name a few. And that is where PBSE-driven methods meet formal software methods. Over the last 30 years, the “system” community has devised many “models” for representing and abstracting reality as faithfully as possible, be it environmental reality or technological reality.
It is possible to choose among a dozen computational models if one is interested in assumptions regarding system/network delays, among some 20 models if one is interested in failure semantics, and so on. One fundamental virtue of such models is their dual nature: they are formally defined, but they have been given names in natural languages. For example, when one reads “sporadic arrivals” or “Byzantine failure” in a specification, there is a unique possible interpretation for each of these terms, although one reads words that belong to the English dictionary. Therefore, it is easy to see why the “system” culture – which underlies PBSE – is an ideal vehicle for getting started with the translation of (1) operational requirements OR expressed in some natural language into (2) specifications E containing terms that have formal or formalized definitions despite being readable by human beings. This first step in an RC phase is a fundamental one, since it is the only step where a client/user/prescriber on the one hand and a main contractor/system integrator on the other hand can reach an agreement about the fact that problem specification E is indeed a correct translation of initial documentation OR. There are other “contact surfaces” between formal software methods and science-driven system methods. The convergence and complementarity between both cultures and areas of expertise have been very concisely stated in a recent publication written by another widely respected scientist in formal software methods (Rushby 2007): The world at large cares little for verified software; what it cares about are trustworthy and cost-effective systems that do their jobs well. In his 2007 paper, T. Hoare advocates for the launching of a European initiative called “Verified Software: Theory, Tools, Experiments”. Such an initiative is welcome, but unnecessarily restricted. An umbrella that would bring together all the parties interested in tackling the complexity of CBS, especially critical autonomous CBS, successfully and affordably, would be a European initiative called “Validated System and Verified Software: Theory, Tools, Experiments”. Assuming its blessing by DGA and some European Defense agencies, a big project in continuation of the OISAU study would be a first building block for such an initiative.
6 Technical Issues and Standards Recall that one of the OISAU objectives was to gain a better understanding of the limits of the autonomy that can be “delegated” to systems operating in aggressive environments. Consequently, the work in OISAU was conducted considering the most extreme case of autonomy, which is no reliance on human agents. Of course, that does not imply that “final” irrevocable on line decisions are made by the systems, rather than by human agents. The end result of our “full autonomy” approach is that DGA and/or operational users can decide how to make use of the OISAU outcomes, striking any desired or convenient balance between CBS-made decisions and human-based decisions, freely, possibly differently for different releases of a given system or SoS.
6.1 Scenarios Worked Out Due to space limitations, the 9 scenarios which were worked out during the OISAU study can only be mentioned. These scenarios are Urban violence, Protection of an air landing base (aerial drones and smart dust), Urban warfare, Joint aerial/terrestrial operations, Underwater mine warfare (2 reports), Joint aerial (VTOL)/sea landing operations, and Joint aerial (tactical drones)/terrestrial operations (2 reports). These scenarios are extensions of problems recently tackled by the two companies in the OISAU consortium that build drones and robots. The role of these manufacturers was threefold: besides being in charge of providing real world scenarios of SoS in three domains (aerial, terrestrial, underwater), they had to participate in the deployment of the PBSE methods for these SoS, and they had to draw conclusions from their direct exposure to PBSE. French DGA was also directly involved, in order to gain a better understanding of what PBSE may offer to a prescribing authority. In the reports written for these scenarios, one finds detailed descriptions of how the RC phase was conducted in each case. These reports are available from DGA.
6.2 Generic Problems and Solutions, Standards and Interoperability Generic system/SoS problems and solutions By definition (openness, interoperability, resiliency requirements), the systems and SoS considered in the OISAU study are distributed systems. According to well known results established since the late 70’s, essential characteristics of a distributed system are: • No central locus of control, • It is impossible to know a system’s instantaneous global state. Since the late 70’s, a huge number of results have been established by various scientific communities, ranging from impossibility results and optimality results to specifications of solutions (architectures, protocols, algorithms) and companion proofs for generic problems, algorithmic complexity, physical performance, and so on. Recall that generic problems are precisely those arising with trans-functional requirements. Explained in somewhat naive terms, the crux of the problems with distribution and autonomy lies in the fact that elements (respectively, systems) “brought together” in something called a system (respectively, an SoS) will behave according to their own local “rules”, which inevitably results in incorrect global behaviours, or even anarchy. Protocols and algorithms designed for distributed systems act as a “glue”, maintaining the desired cohesion among elements, yielding systems (respectively, SoS) whose global behaviours match stipulated properties {p.E}. To put it simply, there are two major categories, as follows:
• Protocols, which ensure communication services through networks (static, mobile, subject to partitioning, etc.),
• Algorithms, which ensure global coordination services, split into two subcategories:
o Information coordination (consistent tactical views, etc.),
o Decision coordination (agreement in the presence of failures, etc.).
We provide below a non-exhaustive list of trans-functional generic problems, whose solutions could be the subject of the initial OISAU technical standardisation work. Quite clearly, problems that have satisfactory standardized solutions (e.g. adaptive routing in mobile networks) have not been considered.
Notations. Since all the problems considered (hence their solutions) are distributed, prefix D is not shown. Therefore, when a problem appears with prefix CC or FT or RT, one should read “composite problem” D/CC or D/FT or D/RT. CC stands for Concurrency Control, FT stands for Fault-Tolerance, RT stands for Real Time. The list could be expanded with S, standing for Security, and so on. Prefix IC stands for Information Coordination. Prefix DC stands for Decision Coordination.
Information coordination:
• IC-FT1: Preservation of connectivity in mobile networks
• IC-FT2: Ordered message broadcasting
• IC-FT3: Terminating reliable broadcast
• IC-FT4: Stable memory
• IC-FT/CC1: Past global state (past consistent snapshot)
• IC-FT/CC2: Consistency of persistent updatable shared data
• IC-FT/CC3: Mutual consistency of multi-copied data
• IC-RT1: Delivery of a message in finite bounded time
• IC-RT2: Task termination within strict deadlines, in the absence of overloads
• IC-RT3: Task terminations that maximise a time-value function, in the presence of overloads
Decision coordination:
• DC-FT1: Mutual exclusion
• DC-FT2: Leader election
• DC-FT3: Exact agreement (Consensus)
• DC-FT4: Polarized Consensus
• DC-FT5: Group membership
• DC-FT6: Non blocking atomic commit
• DC-FT7: Approximate agreement
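A hedged sketch of how this nomenclature could be held in a small machine-readable catalog feeding the Technical Leaflets; the only literature pointer taken from the text is the Clegg and Marzullo group-membership reference cited just below, the other pointers are placeholders.

```python
# Hedged sketch: the generic-problem nomenclature as a small catalog; entries
# and pointers are illustrative (the DC-FT5 reference is the one cited in the text).
generic_problems = {
    "DC-FT2": {"name": "Leader election", "literature": "see (Lynch 1996)"},
    "DC-FT5": {"name": "Group membership",
               "literature": "Clegg & Marzullo, 18th IEEE RTSS, 1997"},
    "IC-RT1": {"name": "Delivery of a message in finite bounded time",
               "literature": "see (Lynch 1996)"},
}

def describe(problem_id):
    entry = generic_problems.get(problem_id)
    if entry is None:
        return f"{problem_id}: not yet catalogued"
    return f"{problem_id}: {entry['name']} ({entry['literature']})"

print(describe("DC-FT5"))
```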
Since the problems listed above are generic, their specifications as well as the specifications of their solutions and associated proofs can be found in the open literature – e.g. see (Lynch 1996). In the OISAU reports, one can find relevant hyperlinks for these problems, such as, for example: M. Clegg, K. Marzullo, “A low-cost processor group membership protocol for a hard real-time distributed system”, Proceedings of the 18th IEEE Real-Time Systems Symposium, vol. 2(5), Dec. 1997, pp. 90–98. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=641272&isnumber=13913
Standards and interoperability
As explained in §4, these generic problems, solutions, and associated proofs are constituting elements of the Technical Leaflets stored in matrix E/S(E). Consequently, the data made available to standardization bodies, whenever it is felt desirable to turn an existing solution into an international standard, is fully “sanitized”, in the sense
that only validated solutions will/can be submitted. An international nomenclature of standards can be maintained, such that it is straightforward to check whether any two products (sub-systems, systems) are interoperable, just by looking at their references in the nomenclature. Looking to the future and assuming that such a nomenclature exists (and systems bound to become elements of systems-of-systems meet OISAU-like standards), one sees that those difficult questions related to interoperability, as currently faced with the setting up of ad hoc SoS in operational theatres, vanish. Authorized agents in charge just have to check the references of systems that are available “on the spot” to know which are interoperable. Doing this does not take dozens of hours. Issues other than interoperability need to be addressed with SoS (Gorod et al. 2008). Nevertheless, one of the benefits of the OISAU study stems from having shed some light on how interoperability issues can be tackled in rigorous ways, from scientific and operational perspectives.
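As a hedged sketch of the “check the references” argument, the code below decides interoperability of two fielded systems by verifying that both implement every standard required by the joint mission; system names and reference identifiers are invented for illustration.

```python
# Hedged sketch: interoperability check via nomenclature references alone.
# Identifiers are illustrative; a real nomenclature would be maintained by
# a standardization body as suggested above.
def interoperable(refs_a, refs_b, required_refs):
    """Both systems must implement every standard required for the joint mission."""
    return required_refs <= refs_a and required_refs <= refs_b

aerial_drone = {"OISAU/DC-FT3", "OISAU/IC-RT1", "OISAU/IC-FT2"}
ground_robot = {"OISAU/DC-FT3", "OISAU/IC-RT1"}
mission_needs = {"OISAU/DC-FT3", "OISAU/IC-RT1"}
print(interoperable(aerial_drone, ground_robot, mission_needs))   # True
```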
7 Conclusions To the best of our knowledge, the OISAU study is the first contractual study explicitly aimed at investigating formal/scientific approaches in the context of SE for the computer-based systems at the core of open and interoperable autonomous systems and SoS, with an initial focus on the Requirements Capture phase, which is known to be the “weakest” of all lifecycle phases under current SE practice. Reliance upon PBSE principles has been shown to be well founded. OISAU, a “paper study” that ended mid-2009, has set the stage for a European project that would be focused on the same topics and issues, with the more ambitious goals of prototyping and experimenting with autonomous systems and SoS designed in conformance with the OISAU methodological standard. As a matter of fact, the outcomes of OISAU are already being applied in the framework of a recently launched Programme d’Etudes Amont (DGA) directed at real world mobile radio based terrestrial operations involving sensors, computerized agents, vehicles, and “deciders”. One fundamental requirement in this PEA is to minimize – or eliminate, if at all possible – human interventions in the global “command-and-control loop”, theatre-wide. Hence, autonomy is at the core of this PEA. The PBSE/OISAU method is being applied in the work package “in charge of” issues related to distributed real-time tracking and coordination (consensus, leader election, etc.), encompassing the RC phase, the SDV phase, and the FD phase. Implementation of the system solution as well as experimentation and demonstration are to be completed in 2010. Views similar to those presented here can be found in recent publications originating from various authors – see (Denning and Riehle 2009) for an example. It may well be that we are getting very close to the inflection point in the “CBS-centric SE history”, beyond which SE can only be PBSE.
References
Abrial, J.-R.: The B-book: Assigning Programs to Meanings, 779 p. Cambridge University Press, New York (1996)
Bowen, J., Hinchey, M.: Ten Commandments of Formal Methods... Ten Years Later. IEEE Computer 39(1), 40–48 (2006)
Denning, P., Riehle, R.: The Profession of IT – Is Software Engineering Engineering? Communications of the ACM 52(3), 24–26 (2009)
DGA/OISAU: OISAU-070-DJE-STB, Dossier de Justification des Exigences de la STB OISAU, 87 p. (Juillet 2009); OISAU-021A-STB, Spécification Technique de Besoins, 58 p. (Septembre 2009); OISAU-021B-Annexe STB, Annexe de la STB OISAU, 41 p. (Septembre 2009); OISAU-021C, Terminologie, 31 p. (Septembre 2009); 9 Rapports « Scenarii OISAU » (rapports d’application à des systèmes de systèmes autonomes opérant en milieux air-sol, terrestre, sous-marins)
Gorod, A., Sauser, B., Boardman, J.: System-of-Systems Engineering Management: A Review of Modern History and a Path Forward. IEEE Systems Journal 2(4), 484–499 (2008)
Hoare, T.: Science and Engineering: A Collusion of Cultures. In: 37th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2007), pp. 2–9 (2007)
ISO/IEC: ISO/IEC 15288: Systems and software engineering – System life cycle processes, http://www.iso.org/iso/home.htm
Le Lann, G.: Proof-Based System Engineering and Embedded Systems. In: Rozenberg, G. (ed.) EEF School 1996. LNCS, vol. 1494, pp. 208–248. Springer, Heidelberg (1998)
Le Lann, G.: Models, Proofs and the Engineering of Computer-Based Systems: A Reality Check. In: Proceedings of the 9th Annual Intl. INCOSE Symposium on Systems Engineering: Sharing the Future, Brighton, UK, June 1999, vol. 4, pp. 495–502 (1999) (Best Paper Award)
Le Lann, G.: Ingénierie système prouvable pour les systèmes temps réel critiques, papier invité, Ecole d’Eté Temps Réel, Nantes, septembre 2007, Hermes, 15 p. (2007)
Lynch, N.: Distributed Algorithms, 870 p. Morgan Kaufmann, San Francisco (1996)
Powell, D.: Failure Mode Assumptions and Assumption Coverage. In: Proceedings of the 22nd IEEE International Symposium on Fault-Tolerant Computing, June 1992, pp. 386–395 (1992)
Rushby, J.: What Use is Verified Software? In: 12th IEEE International Conference on the Engineering of Complex Computer Systems (ICECCS), June 2007, pp. 270–276 (2007)
Managing the Complexity of Environmental Assessments of Complex Industrial Systems with a Lean 6 Sigma Approach* François Cluzel, Bernard Yannou, Daniel Afonso, Yann Leroy, Dominique Millet, and Dominique Pareau
Abstract. The integration of environmental concerns into the product design process has highlighted a new problem that arises when confronted with complex systems. Indeed, environmental assessment methodologies like Life Cycle Assessment (LCA) become, in this case, particularly heavy to implement. Considering aluminium electrolysis substations as a complex industrial system, we propose a new eco-design methodology based on a Lean Six Sigma approach. Including the environmental parameter as the fourth dimension of the Quality, Costs, Time triangle, this methodology has the advantage of covering and systematizing the entire eco-design process. It addresses most of the limits raised in our study and allows managing part of the complexity that appears, in particular, during the goal and scope definition and inventory phases of LCA. An application to aluminium electrolysis substations is mentioned.
Keywords: eco-design, Life Cycle Assessment (LCA), Lean Six Sigma, complex industrial system, electrical substation.
François Cluzel
AREVA T&D, Power Electronics Massy, Massy, France
François Cluzel · Bernard Yannou · Yann Leroy
Ecole Centrale Paris, Laboratoire Génie Industriel, Chatenay-Malabry, France
Daniel Afonso
CUBIK Partners, Paris, France
Dominique Millet
SUPMECA, Laboratoire d’Ingénierie des Systèmes Mécaniques et Matériaux, Toulon, France
Dominique Pareau
Ecole Centrale Paris, Laboratoire de Génie des Procédés et Matériaux, Chatenay-Malabry, France
1 Introduction Eco-design has become a major concern for many large companies in the last decade. It has first interested B to C firms for their consumer goods, but B to B firms now feel concerned too. Even if this growing awareness is not independent from the recent environmental regulations (for example the WEEE [1] and RoHS [2] European directives for the electrical and electronic sector), many companies attempt to go further and to propose more eco-friendly products. In some industrial fields, the product size and complexity make environmental studies delicate. This is particularly true for the high voltage systems provided by AREVA T&D. It is then extremely important to have environmental tools that are able to handle such systems. Life Cycle Assessment is probably the most powerful tool in this field. However, it presents some limits that are hard to overcome when dealing with complex industrial systems. At the same time, another approach has appeared since the late 90’s, based on the Lean Six Sigma theory. We call this new trend Lean & Green (a term used by the US Environmental Protection Agency since those years [3]). The environmental dimension is taken into account at the same level as Quality, Cost and Time. We propose in this paper to make the link between Lean Six Sigma and Life Cycle Assessment, and more globally eco-design. Thus we propose a meta-methodology based on a DMAIC approach (Define, Measure, Analyze, Improve, and Control) that covers the entire eco-design process. This methodology also ensures the continuity of the project. It is particularly adapted to complex industrial systems. We first describe the problem from statements stemming from an industrial case study. The second part is devoted to the existing eco-design process, with a focus on Life Cycle Assessment. This introduces a study of the limits encountered for the eco-design of complex industrial systems. The fifth part presents the Lean Six Sigma concepts and tools on which the meta-methodology presented in part 6 is based. We conclude with some perspectives.
2 How to Eco-Design Complex Industrial Systems? We describe in this first part AREVA’s aluminium electrolysis substations before stating the eco-design related issues that appear. These issues will introduce the methodology proposed later in this paper.
2.1 Aluminium Electrolysis Substations AREVA T&D PEM (Power Electronics Massy) designs, assembles and sells worldwide substations for the electrolysis of aluminium. These are electrical stations that convert energy from the high voltage network into energy that can be used for aluminium electrolysis, which is a particularly polluting and
energy-consuming activity. An electrolysis substation is made of thousands of tons of power electronics components and transformers, for a cost of several tens of millions of Euros. An electrolysis substation is made of several groups (often 4, as in Fig. 1) that are composed of a regulating transformer, a rectifier transformer and a rectifier. The groups are connected on one side to the high voltage network through an electrical substation, and on the other side to a busbar that is directly connected to the electrolysis potline. All the groups are supervised by control elements that are connected to the electrolysis pots to regulate the process. The power consumed by a recent primary aluminium plant is comparable to the power delivered by a nuclear plant unit (about 1 GW).
Fig. 1 Example of an AREVA T&D aluminium electrolysis substation: ALUAR (Argentina)
In this context, AREVA T&D PEM wishes to minimize the environmental impacts of its products in response to the company’s environmental policy. It also represents a way to differentiate itself from its competitors. Starting from the current substation design, we first want to: • Evaluate the environmental impacts through the product life cycle. We ideally want to know the substation’s intrinsic impact, but also the proportion of the whole aluminium plant’s impacts that is due to the substation. • Identify design parameters/impacting factors whose variation would permit minimizing the environmental impact while preserving the other design aspects. • Conduct the environmental improvement of the substations. • Ensure that the results are capitalized and reusable in the future.
2.2 The Aluminium Electrolysis Substation: A Complex Industrial System We consider the substations as complex industrial systems because: • The number of subsystems and components is considerable. Some of these subsystems could themselves be considered as complex industrial systems (like transformers or rectifiers). • The lifetime of the substation is very long, up to 35 or 40 years. Many uncertainties appear for the use and end-of-life phases. No end-of-life scenario is clearly known. • The substation is only a part of the aluminium plant. Their processes are closely connected and interdependent. It is then easy to understand that the complexity of the considered system makes the study delicate. The question is now: how to eco-design such a complex system? How to apprehend the complexity through the entire life cycle?
3 LCA-Based Eco-design This part describes the eco-design process based on Life Cycle Assessment (LCA), which is a common approach in many large companies.
3.1 Eco-design Process Standard ISO/TR 14062 [4], about the integration of environmental aspects into product design and development, proposes guidelines to introduce eco-design into the design process. It considers four aspects: • Strategic considerations: the company has to define its own environmental policy, which will directly influence the competitors, customers, suppliers, investors, and more globally all the stakeholders. This policy should promote in particular an early integration of eco-design in the design process. • Management considerations: the commitment of the top management is essential to support the integration of eco-design. Suitable resources and proactive and multidisciplinary approaches are necessary to reach significant results. • Product considerations: the integration of environmental considerations must occur upstream in the design process. All life cycle phases have to be considered to identify the most relevant impacts on the environment. The main objectives are the saving of resources and energy, the promotion of recycling, and more globally the prevention of pollution and waste. • Product design and development process: it is important to consider environmental aspects through the various stages of the product design and development process. ISO/TR 14062 [4] describes the possible actions related to each stage: planning, conceptual design, detailed design, testing prototype, production/market launch and product review.
3.2 Life Cycle Assessment
One of the most widely used tools in eco-design is Life Cycle Assessment (LCA). According to ISO 14040, LCA is an evaluation tool that "addresses the environmental aspects and potential environmental impacts […] throughout a product's life cycle from raw materials acquisition through production, use, end-of-life treatment, recycling and final disposal (i.e. cradle-to-grave)" [5]. LCA can be integrated into the product design and development process at the early stages, but the assessment has to be based on existing products [6]. Despite this limitation, it is commonly considered a powerful tool. As it is supported by international standards (ISO 14040 [5] and ISO 14044 [7]), LCA is also useful for environmental communication. LCA comprises four phases:
• Goal and scope definition: its purpose is to detail the objectives of the study and its field of application, in particular the system boundaries and the functional unit ("quantified performance of a product system for use as a reference unit" [5]).
• Life cycle inventory analysis: the system is broken down into elementary flows that identify the system inputs and outputs. This assessment of materials and energies is called the Life Cycle Inventory (LCI).
• Life cycle impact assessment: this third phase evaluates the potential environmental impacts using the inventory results. These impacts are processed with specific environmental impact categories and category indicators.
• Life cycle interpretation: the data from the previous stages are combined and analyzed to deliver consistent results according to the goal and scope. The limitations and recommendations are clarified too.
A minimal sketch of the impact assessment arithmetic is given after this list.
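As an illustration of how a Life Cycle Impact Assessment aggregates inventory results, the following Python sketch multiplies elementary flows by characterization factors for one impact category. The flow names, factor values and category are invented for illustration only; a real study would rely on an LCI database and a standard impact assessment method.

```python
# Minimal LCIA sketch: impact = sum(flow_quantity * characterization_factor)
# All numbers below are placeholders, not real inventory or factor data.

inventory = {            # elementary flows of the studied system (kg)
    "CO2 to air": 1200.0,
    "CH4 to air": 3.5,
    "SF6 to air": 0.01,
}

gwp_factors = {          # hypothetical characterization factors (kg CO2-eq / kg)
    "CO2 to air": 1.0,
    "CH4 to air": 25.0,
    "SF6 to air": 22800.0,
}

def impact(inventory, factors):
    """Aggregate an inventory into a single category indicator."""
    return sum(qty * factors.get(flow, 0.0) for flow, qty in inventory.items())

print(f"Global warming indicator: {impact(inventory, gwp_factors):.1f} kg CO2-eq")
```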
4 Limits of the Current Eco-design Approach
Having described the LCA-based eco-design process, we now study the limits of this process and of LCA for complex systems like ours.
4.1 Technical LCA Limits
The current limits of eco-design, and of LCA in particular, are a recurrent discussion topic. Reap [8, 9] gave a list of LCA problems by phase; we consider some of them in this part. Boundary selection is hard to manage for complex industrial systems because the high number of subsystems and the interactions with surrounding systems make the boundaries fuzzy. For the same reasons, it is hard to allocate "the environmental burdens of a multi-functional process amongst its functions or products" [8]. In particular, the distinction between the intrinsic environmental impacts of a product sub-system and the impacts of the whole system specifically due to the considered sub-system is not clearly made.
Another problem concerns the granularity of the inventory data to choose, and more globally data availability and quality. This problem is also addressed by Leroy [10]. The last problems raised by Reap that interest us deal with the spatial and temporal dimensions: how to consider local data and local environmental impacts? Which information is necessary to include these elements in the study? Moreover, how to know the temporal evolution of the site? We clearly need to manage the uncertainties about the spatial and temporal dimensions to obtain significant results. These technical problems are well known by LCA practitioners. We do not pretend to solve them, but we look for a methodology that will help us consider them systematically.
4.2 Overall LCA and Eco-design Limits
Beyond these technical limits, other problems in managing the eco-design process should be considered in our study. The first one is that LCA is an evaluation tool and not an improvement tool. It is therefore only the first stage of an eco-design process (see Fig. 2 [11]). Fig. 2 is also interesting because it shows that LCA is able to feed environmental improvement tools, but it needs to be based on an existing product. It is not adapted to the design of a new product [6].
Fig. 2 Categorization of holistic eco-design tools according to type of feedback and time of application (from [11])
Furthermore, ISO/TR 14062 [4] specifies the need for a multi-disciplinary team all along the eco-design process, but it does not specify how to build this team. The eco-design process is globally defined, but no standardized or systematic deliverables and milestones exist. Finally, there is no clear way to include in the study the customer requirements that will orient the decisions all along the process.
4.3 Methodology Requirements
According to the previous parts, we need to define a methodology:
• that is able to systematically consider the technical LCA limits concerning complex industrial systems,
• that can be applied at different system and subsystem levels,
• that considers a reference product to improve,
• that complies with the ISO standards about LCA,
• that covers both the environmental evaluation and improvement phases,
• that offers a rigorous framework with precise milestones and deliverables,
• and that is able to take into account customer requirements.
5 About Lean Six Sigma
Because Lean Six Sigma seems to have more formalized high-level problem-setting and problem-solving procedures, we consider this approach as a rigorous framework that can support the eco-design process. In this fifth part, the main concepts of Lean Six Sigma are explained to introduce the new methodology.
5.1 Continuous Improvement and Lean Six Sigma
Lean Six Sigma is a continuous improvement approach. This kind of approach gives competitive advantages and creates value for the stakeholders. Historically, increasing the performance of one dimension of the Quality, Cost, Time triangle meant decreasing the performance of the two other dimensions. In the continuous improvement paradigm (including Lean Six Sigma), all dimensions increase together, as shown in Fig. 3. Lean Six Sigma combines Lean Manufacturing (eliminating waste) and Six Sigma (increasing quality by reducing variation). We focus in the next paragraph on the DMAIC approach (Define, Measure, Analyze, Improve, Control), one of the main Lean Six Sigma methodologies.
Fig. 3 Quality, Cost and Time evolution in a continuous improvement approach
5.2 DMAIC Approach
Contrary to the PDCA (Plan, Do, Check, Act) approach, which increases performance through successive iterations, the DMAIC approach provides a step improvement in performance within a single structured project. It is based on a rigorous methodology adapted to complex problems for which no solution is known, and it increases performance in a structured and systematic way. A DMAIC project is supported by a multi-disciplinary team and a project leader, who is an expert in the field. It lasts from 4 to 6 months and is formalized by precise deliverables. The DMAIC project is structured in 5 stages (see Fig. 4).
Fig. 4 DMAIC approach
5.2.1 Define
Description: this first step is the starting point of the project and formalizes the problem through a project charter.
Main deliverables: project charter, voice of the customer, team definition.
The team mission is described in the project charter, a structured document in six parts. As for the goal and scope definition in LCA, a badly defined project charter often leads to project failure. Fig. 5 illustrates this deliverable. The filling order (numbered below) is not the same as the presentation order, which is adapted to communication.
Fig. 5 Project charter
1. The Five Ws (and one H) formalism is first used to describe the problem or the opportunity: Who, What, Where, When, Why, How.
2. The objectives are quantified by key indicators that cover all aspects of the problem.
3. The project perimeter and the team scope are identified so that only the necessary and sufficient elements are included.
4. The business impact formalization answers the following question: why perform this project? The material and immaterial expected benefits and the necessary effort are listed.
5. The team is selected in two stages: identification of the necessary skills and selection of the corresponding team members.
6. The project milestones are planned to follow the project progress.
5.2.2 Measure
Description: this phase identifies the reference base of the problem and collects the data needed to find the fundamental causes.
Main deliverables: definition and identification of the key factors, process flow diagrams, and measurement system analysis.
5.2.3 Analyze
Description: the fundamental causes of the problem are identified, that is, the 20% of causes that produce 80% of the effects (a minimal illustration of this Pareto principle is sketched at the end of this section).
Main deliverables: identification of the potential causes, estimation of their effects on the consequences, validation of the fundamental causes and prioritization.
5.2.4 Improve
Description: this phase defines, deploys and validates the solutions that address the fundamental causes.
Main deliverables: identification of innovative solutions, validation of the impact of the solutions, and realization of a pilot project.
5.2.5 Control
Description: this last step ensures that the benefits persist and that the solutions are standardized throughout the company.
Main deliverables: poka-yoke, procedures, training, standardization, duplication…
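As a minimal illustration of the 80/20 screening used in the Analyze phase, the following Python sketch ranks hypothetical causes by their contribution and keeps the smallest set covering 80% of the total effect. The cause names and contribution values are invented for illustration.

```python
# Minimal Pareto (80/20) screening sketch; contribution values are placeholders.
contributions = {
    "transformer losses": 45.0,
    "rectifier losses": 25.0,
    "SF6 leakage": 12.0,
    "copper busbars": 8.0,
    "cooling system": 6.0,
    "control electronics": 4.0,
}

def pareto_causes(contributions, coverage=0.8):
    """Return the causes that together explain `coverage` of the total effect."""
    total = sum(contributions.values())
    selected, cumulated = [], 0.0
    for cause, value in sorted(contributions.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(cause)
        cumulated += value
        if cumulated >= coverage * total:
            break
    return selected

print(pareto_causes(contributions))  # ['transformer losses', 'rectifier losses', 'SF6 leakage']
```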
5.3 Lean & Green
Lean & Green is an interesting approach that appeared a few years ago. We define it as a mix of Lean Six Sigma and environmental considerations aiming to minimize the environmental impact of a product, service or process. Several companies and organizations propose Lean & Green approaches. The US Environmental Protection Agency (EPA) has used this term since 2000 in a document called The Lean and Green Supply Chain [3]. The EPA has gone further since then and now proposes a structured and well-detailed approach called Lean Manufacturing and the Environment [12]. Several interesting toolkits are available:
• the Lean and Environment Toolkit [13], which is oriented towards the identification of environmental wastes in a supply chain,
• the Lean and Energy Toolkit [14], whose aim is to identify energy losses in an industrial process to improve performance.
Furthermore, IBM has for several years offered a consulting service called Green Sigma. "This is a new solution offering, which merges IBM's deep expertise in Lean Six Sigma with other robust green initiatives, resources and intellectual capital across the company" [15]. The Green Sigma project is divided into five stages: define key performance indicators, establish metering, deploy carbon dashboard, optimize processes and control performance. These two Lean & Green approaches have advantages (use of the rigorous Lean Six Sigma framework to optimize complex systems), but we consider that they remain site-oriented and are hardly applicable to products (we consider the whole product life cycle). They offer powerful tools to assess the environmental quality of supply chains and organizations and, consequently, they are more oriented towards environmental management systems (see ISO 14001 [16]). LCA, on the other hand, is a well-known and mature methodology developed over several decades, but it reaches its limits for complex systems. We are convinced that Lean Six Sigma is able to help us environmentally assess complex industrial systems. That is why we propose in the next part a Lean & Green approach for the environmental assessment and improvement of complex products.
6 Proposition of a Meta-methodology
The need for a rigorous framework for the eco-design process thus appears when complex systems are considered. We propose in this part to use the DMAIC approach previously described.
6.1 General Concept
We consider not only the three dimensions (Quality, Cost, Time) commonly used in a Lean Six Sigma approach, but also a fourth dimension, as shown in Fig. 6: Environment.
Fig. 6 Integration of the environmental dimension in the QCT triangle
As we will see in the next paragraph, one of the main advantages of the new methodology is to cover the eco-design process from the beginning (requirements definition) to the end (validation of the environmental improvement). It includes at the same time environmental evaluation (here, Life Cycle Assessment) and environmental improvement. That is why we call it a meta-methodology. This new methodology is based on DMAIC and clearly formalizes and systematizes all the stages of the eco-design process, particularly the first two LCA phases, which appear to be the most delicate phases for complex systems. The DMAIC approach also allows using other Lean Six Sigma tools to improve the overall performance all along the process (for example Six Sigma statistical tools). A question that quickly arises is the following: why consider a DMAIC approach instead of a DMADV (Define, Measure, Analyze, Design, Verify) approach, which is oriented towards the design of new processes (Design for Lean Six Sigma)? LCA needs to work on an existing product, because it needs a lot of precise data that are not available during the first phases of a new product design process [6]. So even if we consider a product development process, we work from an existing product. That is why we do not consider the DMADV approach.
6.2 A DMAIC Approach for Eco-design
A classical DMAIC process is applied to clearly identified processes going from a supplier to a customer. In our situation, we consider that the studied process is the life cycle of the product, or a part of this life cycle. The associated suppliers and customers are all the stakeholders of the product.
6.2.1 Define
The first phase of the DMAIC project is clearly suited to the Goal and Scope definition of LCA. Points 1, 2 and 3 of the project charter are easily able to integrate the ISO requirements about LCA. This document is detailed in Table 1. Moreover, the Define phase offers tools such as the Voice of the Customer that connect the eco-design study to real and tangible requirements. It is really important to consider here not only the final customer, but all the stakeholders. The team definition is another element of the Define phase that is not clearly identified in a classical eco-design project and that directly focuses the right resources on the project.
6.2.2 Measure
The second phase, Measure, includes in the new methodology the second and third LCA phases: Life Cycle Inventory and Life Cycle Impact Assessment. These two stages actually provide the data needed to find the fundamental causes of the problem. The flow diagram, which is a key element of the inventory, can be drawn up thanks to Lean Six Sigma tools such as VSM (Value Stream Mapping) or SIPOC (Supplier, Input, Process, Output, and Customer); a minimal SIPOC sketch is given after Table 1.
Table 1. The new project charter in line with the ISO standards dedicated to LCA
1. Problem/opportunity statement: for example, AREVA T&D PEM (Who?) wishes to optimize the environmental impact of its aluminium electrolysis substations (What?) during the design process (When?). These substations are sold worldwide to primary aluminium plants (Where?) to convert energy from high-voltage networks into energy that is usable for aluminium electrolysis. The study will permit to minimize the environmental impact through the product life cycle while still considering the technical and economical criteria (How?). It is a way for AREVA T&D PEM to answer AREVA's environmental policy and to differentiate itself from the competitors (Why?).
2. Key metrics: the objectives are described according to ISO 14040 [5]: intended application, reasons for carrying out the study, intended audience, and whether the results are intended to be used in public comparative assertions. The key indicators are the environmental indicators chosen for the study according to the objectives and the intended audience. Other indicators can be considered, such as technical or economical ones, or even social ones in a sustainable development perspective.
3. Project scope: the information required by ISO 14040 to define the scope of the study is [5]: the studied product system, the functions of the product system, the functional unit, the system boundary, the allocation procedures, the selected impact categories and impact assessment methodology, the data requirements, the assumptions, the limitations, the initial data quality requirements, the type of critical review (if any), and the type and format of the report. These elements have to be detailed enough to meet the DMAIC requirements.
4. Business impact: for example, the expected benefits could be: Environment: decrease of the environmental impact over the whole life cycle; Cost: decrease of the Life Cycle Cost (LCC); Quality: increase of the quality of the components; Time: extension of the product lifetime. The material and immaterial expected benefits are listed, as well as the efforts needed to reach these benefits.
5. Team selection: the members of the eco-design team are selected.
6. Project plan: the project milestones are defined.
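To illustrate how a SIPOC view can describe the studied process (here the product life cycle), the following Python sketch lists, for each life-cycle phase, hypothetical suppliers, inputs, outputs and customers. The entries are invented placeholders, not data from the AREVA study.

```python
# Minimal SIPOC sketch for a product life cycle; all entries are illustrative placeholders.
sipoc = [
    {"process": "Manufacturing", "suppliers": ["component suppliers"],
     "inputs": ["copper", "steel", "electronics"], "outputs": ["substation groups"],
     "customers": ["aluminium plant"]},
    {"process": "Use", "suppliers": ["grid operator"],
     "inputs": ["high-voltage electricity"], "outputs": ["rectified DC power", "losses"],
     "customers": ["electrolysis potline"]},
    {"process": "End of life", "suppliers": ["aluminium plant"],
     "inputs": ["decommissioned equipment"], "outputs": ["recycled metals", "waste"],
     "customers": ["recyclers"]},
]

for step in sipoc:
    print(f"{step['process']}: {', '.join(step['inputs'])} -> {', '.join(step['outputs'])}")
```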
6.2.3 Analyze
Thanks to the LCI and the LCIA, the fundamental causes are identified during the Life Cycle Interpretation, for example by performing sensitivity and uncertainty analyses (a minimal sensitivity sketch is given below). This is the last LCA phase, and it corresponds to the Analyze phase of the DMAIC approach.
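As an illustration of a one-at-a-time sensitivity analysis on inventory data, the following Python sketch perturbs each flow by +10% and reports the resulting change in the category indicator. The flows and factors are the same kind of invented placeholders as before.

```python
# One-at-a-time sensitivity sketch on a simple linear impact model (placeholder data).
inventory = {"CO2 to air": 1200.0, "CH4 to air": 3.5, "SF6 to air": 0.01}
factors = {"CO2 to air": 1.0, "CH4 to air": 25.0, "SF6 to air": 22800.0}

def indicator(inv):
    return sum(q * factors[f] for f, q in inv.items())

base = indicator(inventory)
for flow in inventory:
    perturbed = dict(inventory)
    perturbed[flow] *= 1.10                      # +10% on this flow only
    delta = 100.0 * (indicator(perturbed) - base) / base
    print(f"{flow}: +10% flow -> {delta:+.1f}% on the indicator")
```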
The environmental evaluation of the product is then complete. The main environmental impacts have been identified and some leads for improving the product have emerged; the environmental improvement phase begins.
6.2.4 Improve
Thanks to the LCA results and to environmental improvement tools, technological solutions addressing the fundamental causes are identified. Different environmental tools exist, such as standards, lists (guidelines, checklists, material lists), guides or software. Many large companies have defined their own rules and procedures to improve the environmental impact of their products, such as material lists. Luttropp also proposes the Ten Golden Rules in EcoDesign [17], which are generic rules to eco-design a product and can be adapted to more specific fields. The technical solutions can be validated by performing a comparative LCA against the technologies used in the original product.
6.2.5 Control
In the Control phase, the new product is validated by aggregating all the data in a comparative LCA between the old and the new design. The concerned actors then need to be trained. Finally, the environmental benefits are communicated internally (and possibly externally) to ensure the spreading of good practices.
6.3 Meta-methodology Deployment on Aluminium Electrolysis Substations
We now want to deploy this meta-methodology on AREVA's aluminium electrolysis substations. This project will last several months and will follow the different steps discussed in this paper. First, the DMAIC approach for eco-design will make it possible to clearly set the problem and define the objectives and the working group in accordance with the ISO standards. This accordance is important in order to communicate significant results at the end of the study and to be differentiated from the competitors. The next step will be the environmental evaluation of the substations, starting from an existing reference product, to identify the most impacting factors. The third step will be the environmental improvement using the key factors above, and the comparison between the reference and the new product. Finally, the results will be communicated to the stakeholders and possibly to a wider audience. They will also be stored for future projects. It is important to notice that this DMAIC process could be adapted to different levels: a global implementation as described above will allow identifying the key factors in the different life cycle phases and among the numerous subsystems.
7 Conclusions and Perspectives
We have proposed in this paper a clear enrichment of the present standardized ISO 1404X LCA process for complex systems through the deployment of a Lean Six Sigma approach. The proposed DMAIC project makes it possible to handle the system complexity thanks to a rigorous framework covering both the environmental evaluation (LCA) and the environmental improvement of the product. Fig. 7 summarizes the different approaches leading to the new DMAIC project for eco-design. We have compared the classical eco-design process to a DMAIC project stemming from Lean Six Sigma; the advantages of the two methodologies have been combined in the new one. The main contributions of Lean Six Sigma to our approach are:
• the coverage of the entire eco-design process,
• the clear formalization of the problem, in particular thanks to the project charter,
• the rigorous framing of the project thanks to precise milestones,
• the clear definition of the team and of its role all along the project according to these milestones,
• the contribution of other Lean Six Sigma tools all along the project.
We now need to validate our approach through an application on AREVA's aluminium electrolysis substations.
Fig. 7 The different approaches considered in the paper: (1) a classical eco-design process (environmental evaluation followed by environmental improvement); (2) a classical DMAIC project (Define, Measure, Analyze, Improve, Control); (3) the new DMAIC project for eco-design, which maps Define to the goal and scope definition, Measure to the LCI and LCIA, Analyze to the interpretation, Improve to the environmental improvement and Control to the environmental control
Even if the proposed methodology allows managing the eco-design project for complex systems, some stages remain hard to perform. Some perspectives appear to simplify them; they could also be applied to our case study but have to be discussed first:
• The results of the PhD thesis of Yann Leroy [10] could have a great impact on our own work. Leroy has designed a methodology to make the results of LCA
more reliable by working on the quality of the inventory data. One of his results is the possibility to identify and locate the data that most influence the quality index. It is then possible to optimize the data collection and the allocated resources.
• It could also be interesting to adopt an approach like Analytical Target Cascading (ATC) [18]. ATC optimizes the global system through the optimization of the subsystems and the aggregation of these results by simulation. It is based on a hierarchical decomposition of the system and the definition of design targets at each level (from the system down to the components); a toy sketch of this target-cascading idea is given below. This approach will be studied in more detail in the coming months.
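The following Python sketch is only a toy illustration of the target-cascading idea under strong simplifying assumptions (linear subsystem responses, quadratic local costs, a fixed penalty weight); it is not the ATC formulation of [18].

```python
# Toy target cascading: a system target T is split between two subsystems,
# each of which trades a local cost against meeting its cascaded target.
# Linear responses r_i = a_i * x_i and quadratic costs (x_i - x0_i)^2 are assumed.

a = [2.0, 1.0]        # hypothetical subsystem response coefficients
x0 = [3.0, 4.0]       # hypothetical "preferred" local designs
T = 12.0              # system-level target for the summed response
w = 5.0               # penalty weight on target deviation

targets = [T / 2, T / 2]          # initial split of the system target
for _ in range(20):
    # Local problems: min (x - x0)^2 + w*(a*x - t)^2  ->  closed-form minimizer
    x = [(x0[i] + w * a[i] * targets[i]) / (1 + w * a[i] ** 2) for i in range(2)]
    r = [a[i] * x[i] for i in range(2)]
    # System level: rescale the cascaded targets so that they sum to T again
    targets = [T * r[i] / sum(r) for i in range(2)]

print([round(v, 3) for v in x], round(sum(r), 3))  # local designs and achieved total
```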
References [1] European Union, Directive 2002/96/EC of the 27 January 2003 on waste electrical and electronic equipment (WEEE) (2003) [2] European Union, Directive 2002/95/EC of the 27 January 2003 on the restriction of the use of certain hazardous substances in electrical and electronic equipment (2003) [3] US Environmental Protection Agency, The Lean and Green Supply Chain: a practical guide for materials managers and supply chain managers to reduce costs and improve environmental performance (2000) [4] International Organization for Standardization, ISO/TR 14062:2002 - Environmental management - Integrating environmental aspects into product design and development (2002) [5] International Organization for Standardization, ISO 14040:2006 - Environmental management - Life cycle assessment - Principles and framework (2006) [6] Millet, D., Bistagnino, L., Lanzavecchia, C., Camous, R., Poldma, T.: Does the potential of the use of LCA match the design team needs? Journal of Cleaner Production 15, 335–346 (2007) [7] International Organization for Standardization, ISO 14044:2006 - Environmental management - Life cycle assessment - Requirements and guidelines (2006) [8] Reap, J., Roman, F., Duncan, S., Bras, B.: A survey of unresolved problems in life cycle assessment - Part 1: goal and scope and inventory analysis. The International Journal of Life Cycle Assessment 13, 290–300 (2008) [9] Reap, J., Roman, F., Duncan, S., Bras, B.: A survey of unresolved problems in life cycle assessment - Part 2: impact assessment and interpretation. The International Journal of Life Cycle Assessment 13, 374–388 (2008) [10] Leroy, Y.: Development of a methodology to reliable environmental decision from Life Cycle Assessment based on analysis and management of inventory data uncertainty, PhD Thesis, Ecole Nationale Supérieure d’Arts et Métiers, Chambéry, France (2009) [11] Dewulf, W.: A pro-active approach to ecodesign: framework and tools, PhD Thesis, Katholieke Universiteit Leuven (2003) [12] US Environmental Protection Agency, Lean Manufacturing and the Environment (2009), http://www.epa.gov/lean/leanenvironment.htm [13] US Environmental Protection Agency, The Lean and Environment Toolkit (2007) [14] US Environmental Protection Agency, The Lean and Energy Toolkit (2007)
[15] IBM, Green Sigma - How to optimise your carbon management through Green Sigma (2009) [16] International Organization for Standardization, ISO 14001:2004 - Environmental management systems - Requirements with guidance for use (2004) [17] Luttropp, C., Lagerstedt, J.: EcoDesign and The Ten Golden Rules: generic advice for merging environmental aspects into product development. Journal of Cleaner Production 14, 1396–1408 (2006) [18] Kim, H.M., Michelena, N.F., Papalambros, P.Y., Jiang, T.: Target Cascading in Optimal System Design. Journal of Mechanical Design 125, 474–480 (2003)
Multidisciplinary Simulation of Mechatronic Components in Severe Environments* Jérémy Lefèvre, Sébastien Charles, Magali Bosch, Benoît Eynard, and Manuel Henner
Abstract. Improving the competitiveness of a product often means enhancing its quality and functionalities without increasing its price. Moreover, sustainable development issues, especially in transport, lead to a reduction of product weight achieved by an increasing integration of components. This strong need for integration is at the origin of mechatronics. Integrating multiple components and ensuring that they are compatible and work in synergy implies many multidisciplinary simulations based on high-performance computers. The variety of simulation software, data formats and methodologies adds numerous issues which need to be solved. This paper introduces some solutions to develop interoperability in mechatronics simulation. Keywords: Mechatronics, Multidisciplinary Simulation, Code Coupling, Product Lifecycle Management (PLM), Simulation Lifecycle Management (SLM).
Jérémy Lefèvre · Magali Bosch · Benoît Eynard
Université de Technologie de Compiègne, Centre Pierre Guillaumat, BP 60319, rue du Docteur Schweitzer, 60203 Compiègne Cedex, France
e-mail: [email protected]
Sébastien Charles
Université de Versailles Saint-Quentin, IUT de Mantes en Yvelines, 7 rue Jean Hoët, 78200 Mantes-en-Yvelines, France
e-mail: [email protected]
Manuel Henner
Valeo Systèmes Thermiques, Branche Thermique Moteur, 8 rue Louis Lormand, La Verrière, 78320 Le Mesnil Saint Denis, France
e-mail: [email protected]
1 Introduction
The word "mechatronics" was first used in Japan in 1969 [1] to describe the integration of four disciplines in a product design context: mechanics, electronics, computing and control/command (Figure 1). The word has been in use ever since, although early systems were not fully mechatronic, and the integration level and the fields of application are constantly evolving. This definition is
not universally recognized; it differs somewhat with the industrial and scientific context. Indeed, in some cases mechatronics integrates other disciplines such as optics and automation.
Fig. 1 40 years later, the mechatronics approach is the same [Rensselaer Polytechnic Institute, Troy, New York, USA, 2007]
Many systems and products are sometimes regarded as mechatronic although the integration level of the different disciplines is not always balanced, and is sometimes even disproportionate [2]. One must keep in mind that mechatronics is an approach in perpetual evolution: mechatronics used to be characterized by a classical design partitioned into many fields, but it has now evolved into a merged integration of components and fields (Figure 2).
Fig. 2 Increased integration
Within this context, this paper states how our study will be conducted and what the research directions are. The aspects covered are related to the design and simulation of mechatronic products and systems. This paper will also show the
options for linking different simulation software packages, notably through the use of code coupling. The management of these data and results will also be addressed with a suitable tool. All these tools are compatible with neutral formats, which will also be discussed.
2 Mov'eo: EXPAMTION Project
In recent years, mechatronics has become a major research and development challenge for many companies seeking to maximize the attractiveness and performance of their products in fields such as automotive, aerospace, medical devices and robotics. More specifically, multidisciplinary High Performance Computing (HPC) is a new scientific aspect because of the many new opportunities and fields it enables. In this context, many ambitious projects have emerged, such as EXPAMTION, a competitiveness project of the MOV'EO and SYSTEM@TIC clusters, launched in 2008, which aims to improve the performance of automotive suppliers, large or small.
2.1 Involved Partners
The partner list includes several SMEs that are Computer-Aided Engineering (CAE) software editors, such as CadLM, SIMPOE, Altair and Intes. They gradually bring their experience as the project progresses. Valeo provides six test cases to validate the coupled calculations on the unified platform:
• Mechanics
• Rheology
• Aeroacoustics
• Thermomechanics
• Vibroacoustics
• Application (O2M demonstrator)
Bull supports data access from the outside, taking into account security aspects, access rights and quotas. Universities support the industrial partners in all work packages and are responsible for training and for raising the awareness of students and SMEs about this kind of infrastructure.
2.2 Issues
The goal will be achieved through the implementation of a collaborative CAE platform combining all areas of mechatronics, thereby reducing product development time. Indeed, the implementation of computing resources, the secured management of rights for each actor, the unification of licensing and the coupling of the different CAE codes will achieve this goal.
2.3 Our Contribution to the Project
The project has highlighted a problem that will complement the six test cases. This has given rise to a PhD thesis developed in collaboration with Valeo Systèmes Thermiques, a partner of the project. The proposed approach will be tested through a test case which couples thermal, electrical and mechanical simulations. This test case is based on the control components of the motor fan in the car's front end. The challenge is to limit the number of design loops, to improve the management and quality of data exchange between the different software packages, and to innovate in the scientific simulation of the behavior of electronic components in severe environments. Currently, simulations of highly stressed electronic components are based on very simplified models that lead to poorly reliable results. The project aims to develop physical models and a software environment that can greatly improve the accuracy of simulation in a comprehensive mechatronics approach.
2.4 The Problem of FEM Code Coupling
The lack of interoperability of CAE software adds development constraints and slows down results. HPC vendors do not focus on file exchange compatibility, which makes it difficult to implement the many exchanges required by a multidisciplinary approach. In this context it is important to provide software solutions, particularly in terms of neutral file formats and data management, including through SLM [3].
3 Proposed Solutions
Our approach to enhance the multidisciplinary simulation of the behavior of mechatronic components in severe environments is based on the Modelica multidisciplinary modeling language, on the MpCCI (Mesh-based parallel Code Coupling Interface) coupling code, on the STandard for the Exchange of Product model data (STEP) Application Protocol (AP) 210 neutral file format and on the Eurostep Share-A-Space PLM environment. The MpCCI environment can respond to this case [4] by coupling different finite element codes in a PLM approach. The MpCCI server can be installed on many operating systems such as Windows Vista/XP or Linux. On the other hand, it only supports the following codes: Permas, Abaqus, Ansys, Fluent, FLUX3D, Icepak, MSC.Marc, StarCD and RadTherm. However, the creation of specific adapters is apparently not excluded. Eurostep proposes Share-A-Space, an environment for managing data in a PLM approach through the use of the STEP protocol and more specifically the Product Life-Cycle Support data format (PLCS - AP239). Share-A-Space is a multidisciplinary environment that was used in particular as part of a project in aeronautics, VIVACE (Value Improvement through a Virtual Aeronautical Collaborative Enterprise), where it proved very satisfactory. The use of the STEP AP239 neutral file format could complement MpCCI by ensuring durable and easy data exchange between the different simulation software packages.
3.1 Modelica
Modelica is a non-proprietary, object-oriented, equation-based language designed to conveniently model complex physical systems containing mechanical, electrical, electronic, hydraulic, thermal, control, electric power or process-oriented subcomponents [5, 6]. It is an object-oriented physical systems modeling language (OOML) [7] that has meanwhile replaced Dymola [8] as the most widely used OOML. Whereas the Dymola language had been marketed by Dynasim as a proprietary code, Modelica has been designed by a standards committee and is non-proprietary. Anyone may develop a modeling compiler and an underlying simulation engine based on the Modelica language specification, and indeed there already exist several implementations of Modelica, for instance in the next version of the Dassault Systèmes CAD software CATIA V6. In the context of our study, Modelica is used to model and simulate the behavior of electronic components subjected to high temperatures and vibration [9].
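To give an idea of the kind of coupled multi-domain equations that Modelica expresses declaratively, the sketch below integrates, in Python, a lumped electro-thermal model of a power component (Joule heating feeding a first-order thermal network). The component values and the explicit-Euler integration are illustrative assumptions, not part of the Modelica standard or of the project models.

```python
# Lumped electro-thermal sketch: P = R(T) * I^2 heats a thermal capacitance C_th
# coupled to ambient through a thermal resistance R_th. Placeholder values.
R0, alpha = 0.010, 0.004      # resistance at 25 degC and its temperature coefficient
I = 50.0                      # load current (A)
R_th, C_th = 0.8, 120.0       # thermal resistance (K/W) and capacitance (J/K)
T_amb, T = 25.0, 25.0         # ambient and initial component temperature (degC)

dt = 0.1
for step in range(int(600 / dt)):          # simulate 10 minutes
    R = R0 * (1 + alpha * (T - 25.0))      # electrical domain: resistance rises with T
    P = R * I**2                           # Joule losses (W)
    dTdt = (P - (T - T_amb) / R_th) / C_th # thermal domain: energy balance
    T += dt * dTdt                         # explicit Euler step

print(f"Component temperature after 10 min ~ {T:.1f} degC")
```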
3.2 MpCCI
In the framework of Work Package 5, "Optimization and porting of codes", this aspect can be extended to the coupling of different computational codes. The MpCCI code (http://www.mpcci.de/) allows the coupling of codes from many editors (Figure 3) and responds to the problem of real-time simulation software interoperability. However, MpCCI requires the implementation of software/physics gateways and is not yet suited to all multidisciplinary simulations or all editors.
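The sketch below shows, in Python, the general shape of a partitioned co-simulation loop in which two solvers exchange interface quantities at each coupling step. It is a generic illustration of code coupling, not the MpCCI API; the toy "solvers" and the exchanged quantities are invented.

```python
# Generic partitioned coupling loop (illustrative, not the MpCCI interface).
# A thermal solver receives a flux and returns a temperature; a second solver
# responds with an updated flux. Both are toy closed-form updates.

def thermal_solver(flux, dt, state):
    state["T"] += dt * (10.0 + flux) / 50.0      # toy energy balance
    return state["T"]

def structural_solver(T):
    return -0.5 * (T - 25.0)                     # toy flux depending on temperature

state = {"T": 25.0}
flux = 0.0
dt = 0.5
for step in range(100):                          # coupling iterations in time
    T = thermal_solver(flux, dt, state)          # code A advances with latest flux
    flux = structural_solver(T)                  # code B responds with updated flux

print(f"Coupled temperature after {100 * dt:.0f} s: {state['T']:.2f}")
```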
Fig. 3 Coupling architecture of MpCCI with simulation codes by using Share-A-Space with STEP
Fig. 4 MpCCI modular architecture
3.3 STEP
The creation, management, improvement and exchange of CAD/FE models over the product lifecycle has become one of the major stakes for many companies, especially in the mechatronics engineering field [10]. Indeed, in a context of collaborative engineering and extended enterprise, which requires a large range of heterogeneous software, the major stakes are reducing transfer times and improving the quality of data exchange by using neutral file formats [11, 12]. The STEP standard is recognized as one of the best solutions for these kinds of exchange issues. STEP is an international standard of the International Organization for Standardization (ISO), referenced as ISO 10303. It aims to define a non-ambiguous, computer-interpretable representation of the data related to the product throughout its lifecycle [13]. STEP allows the implementation of consistent information systems across multiple applications and platforms. The standard also proposes various means for the storage, exchange and archiving of product data in a strategy of long-term re-use. The STEP Application Protocols (APs) specify data models (EXPRESS schemas) applicable to an industrial field of activity and a product lifecycle phase [14]. The APs are formal representations of a particular data field which use and add semantics to the data entities defined in the STEP Integrated Resources (IR). These IR are shared models whose role is to ensure interoperability between the various APs [15]. To summarize, the APs are used as bases to implement the STEP import and export modules in software [16].
The latest developments of the STEP Application Protocols (AP), such as AP210, provide new prospects for data exchange in electronic design. In April 1999, the International Organization for Standardization (ISO) voted to accept the STEP Application Protocol for Electronic Assembly Interconnect and Packaging Design (AP210) as an International Standard [17]. This Application Protocol provides the groundwork for significant advances in product data reuse and cycle time reduction by defining a standardized, computer-interpretable method for representing and communicating the design of electronic assemblies (their interconnection and packaging), modules, Printed Wiring Assemblies (PWAs) and Printed Wiring Boards (PWBs).
Fig. 5 AP210: Electronic Assembly, Interconnect and Packaging Design
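Purely as a conceptual illustration of what a computer-interpretable assembly description contains, the Python sketch below models a tiny printed wiring assembly as components and interconnects. The class names and fields are invented for illustration; they are not the AP210 EXPRESS entities.

```python
# Conceptual (non-normative) sketch of an assembly/interconnect data model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    ref_des: str          # reference designator, e.g. "U1"
    part_number: str
    pins: List[str]

@dataclass
class Net:
    name: str
    connections: List[str]   # "ref_des.pin" endpoints

@dataclass
class PrintedWiringAssembly:
    name: str
    components: List[Component] = field(default_factory=list)
    nets: List[Net] = field(default_factory=list)

pwa = PrintedWiringAssembly(
    name="fan_controller",
    components=[Component("U1", "MCU-123", ["1", "2"]), Component("R1", "RES-10k", ["1", "2"])],
    nets=[Net("VCC", ["U1.1", "R1.1"]), Net("SIG", ["U1.2", "R1.2"])],
)
print(f"{pwa.name}: {len(pwa.components)} components, {len(pwa.nets)} nets")
```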
STEP AP210 is the basis of the new neutral format we are defining in the EXPAMTION project. Our neutral file format will enhance this protocol by adding new capabilities, such as simulation entities, to perform mechatronics analyses (physical laws, data coupling, software interoperability, etc.). To summarize, the STEP neutral format is suitable for mechatronic applications. In fact, the various existing APs can model and simulate a complex mechatronic system:
• Lifecycle management: AP239
• System engineering tools: AP233
• Mechanics: AP203 and AP214
• Electronics: AP210 and AP211
• Integrated resources for mechatronic design
Fig. 6 STEP: suitable for mechatronics design
3.4 The PLM Approach
The Share-A-space Product Suite is built to enable product data integration in heterogeneous organizations, processes and IT-system environments using state of
the art technologies and standards, such as Web Services and ISO 10303-239 (STEP AP239), PLCS (http://www.eurostep.com/global/solutions/share-a-space-product-suite.aspx). The Share-A-space Product Suite consists of the Share-A-space Server Solution, the Share-A-space Business Adaptors and the Share-A-space Development Tools. The Share-A-space Server is the premier product lifecycle solution for the engineering supply chain. It enables business partners to share specific product information without exposing the whole information set and risking loss of IPR. The Share-A-space Product Suite complements the Share-A-space Server Solution with toolboxes, software development kits (SDKs) and ready-made clients and interfaces. They enable adaptation to the variety of systems and formats that build up the intellectual capital in the engineering supply chain. The toolboxes and SDKs provide means which significantly reduce the time taken to develop and deploy high quality PLCS implementations such as PLCS data importers, exporters and specialized integrations. The Share-A-space Business Adaptors are the bridges needed to integrate a database with other systems, letting users implement the chosen solutions quickly and easily. These ready-made clients and interfaces provide extensions to existing systems like SAP R/3, SmarTeam and RequisitePro [18]. PLCS (AP239, which is based on AP214, AP233 and other APs) manages the whole life cycle of products. The CAD geometry is only a small part which is integrated into the product structure. PLCS allows many powerful links between different APs of STEP ISO 10303. The strength of PLCS is that it is generic; it is
a standard that can be specialized for many fields through standard DEXs (Data EXchange specifications). The DEXs are reviewed, approved and validated by an OASIS committee. A DEX defines the methodology for a data exchange, and the corresponding BIR (Business Information Requirement) has to be defined in UML. To define a DEX, one must know the other existing DEX templates to avoid recreating existing or similar DEXs. STEP AP239 PLCS uses the Web Ontology Language (OWL), based on the semantic web, for exchanging product information; it relies on reference data libraries to extend the standard and to develop DEXs within OASIS. Our problem leads us to define a DEX for mechatronics; it would be the first in the field. PLCS ties everything together neatly and will be scalable and sustainable. It is a stable model; however, the modeling must be done properly and not rushed, and it must remain in the hands of an expert. Share-A-Space will manage the PLCS data: it is a repository of structured data and is used for long-term archiving. Share-A-Space perfectly matches the requirements of the EXPAMTION project. Indeed, mechatronics design implies numerous exchanges of multidisciplinary data between various experts using heterogeneous simulation software. Share-A-Space is required to control the quality and consistency of these exchanges in a PLM approach.
4 Conclusion
In this paper, we have introduced several solutions to improve interoperability in the development of mechatronic products and systems: Modelica for modeling complex systems, MpCCI for code coupling, STEP for neutral data exchange and Share-A-Space for managing the data in a PLM approach. Our goal is to combine these solutions and complete them to implement an environment that masters the development of mechatronic products. Through this environment, we will improve the global quality of mechatronic products and reduce their development time.
Acknowledgement
The writing of this paper was supported by the whole team of the EXPAMTION project, which brings many automotive industry partners and universities together.
References 1. Mori, T.: Company Yaskawa, Japan (1969) 2. Choley, J.Y.: Conférence Formation Mécatronique Innovaxiom. Supméca, Paris (2009) 3. Lalor, P.: Simulation Lifecycle Management. NAFEMS – BENCHmark Journal (2007)
4. Wolf, K.: MpCCI – A general coupling library for multidisciplinary simulation, Fraunhofer Institute. Sankt Augustin, Germany (2001) 5. Casella, F., Franke, R., Olsson, H., Otter, M., Sielemann, M.: Modelica Language specification. Version 3.1 (2009) 6. Fritzson, P.: Principles of Object-oriented modeling and simulation with Modelica 2.1. IEEE, Los Alamitos (2006) 7. Elmqvist, H., Mattsson, S.E., Otter, M.: Modelica – a language for physical system modeling, visualization and interaction. In: Proc. IEEE Intl. Symp. Computer Aided Control System Design, Kohala Coast, HI, pp. 630–639 (1999) 8. Brück, D., Elmqvist, H., Olsson, H., Mattsson, S.E.: Dymola for multi-engineering modeling and simulation. In: Proc. 2nd Intl. Modelica Conf., Oberpfaffenhofen, Germany, pp. 55.1–55.8 (2002) 9. Cellier, F., Clauß, C., Urquía, A.: Electronic circuit modeling and simulation in Modelica. EUROSIM, Ljubljana, Slovenia (2007) 10. Charles, S.: Gestion intégrée de données CAO et EF – Contribution à la liaison entre conception mécanique et calcul des structures. UTT, Troyes (2005) 11. Scheder, H.: Product data integration – Needs and requirements from industry. In: Proceedings of European Product Data Technology Days 1995 – Using STEP in Industry. Munchen, Germany (1995) 12. Spooner, S., Hardwick, M.: Using views for product data exchange. IEEE – Computer Graphics and Applications 17 (1997) 13. Fowler, J.: STEP for data management Exchange and Sharing. Technology appraisals (1995) 14. Hardwick, M., Morris, K.C., Spooner, D.L., Rando, T., Denno, P.: Lessons learned developing protocols for the industrial virtual enterprise. Computer-Aided Design 32 (2000) 15. Zhang, Y., Zhang, C., Wang, B.: Interoperation of STEP application protocols for product data management. Concurrent Engineering Research and Applications 6 (1998) 16. Yeh, S.C., You, C.F.: Implementation of STEP-based product data exchange and sharing. Concurrent Engineering Research an Applications 8 (2000) 17. Thurman, T., Smith, G.: Overview & Tutorial of STEP AP 210. Standard for Electronic Assembly Interconnect and Packaging Design, PDES, Inc. (1999) 18. CIMdata: Eurostep’s Share-A-Space, Product Lifecycle Collaboration trough Information Integration (October 2007)
Involving AUTOSAR Rules for Mechatronic System Design* Pascal Gouriet
Abstract. This paper describes a new approach for automotive model-based design: it explores how AUTOSAR concepts map onto a common ESC model-based design. AUTOSAR (AUTomotive Open System ARchitecture) is an open and standardized automotive software architecture, jointly developed by automotive manufacturers, suppliers and tool developers. Formally signed off in July 2003, AUTOSAR proposes rules and tools to build any automotive software architecture more easily and quickly. Consequently, it should help engineers build robust and reliable mechatronic systems, through provided rules such as a standardized data dictionary. It also becomes possible to split a global control loop design across several hardware architectures. AUTOSAR does not introduce new ideas, but a common language between model-based designers and software engineers. This approach is illustrated through a common Electronic Stability Control (ESC) model-based development. Keywords: AUTOSAR, Simulink®, Model-based Design, Authoring Tool.
1 Introduction
The automotive market is one that offers great opportunities for mechatronic systems. The first anti-lock braking system (ABS) was introduced in 1978. Since then, the control of vehicle dynamic behavior has kept evolving, to gain in performance, integration, mass and reliability. At the same time, software has become more and more complex: a new ESC generation needs roughly 2 MBytes of memory, whereas the first ESC system, born in 1995, took less than 64 kBytes. To manage this complexity, rules and tools are necessary for design, integration and validation. Ultimately, standardization is required.
Pascal Gouriet
PSA Peugeot Citroën, 18 rue des Fauvelles, 92256 La Garenne Colombes, France
2 Contexts
2.1 About Concept and Components for ESC System
The Electronic Stability Control (ESC) is a closed-loop control system which prevents lateral instability of the vehicle. While ABS prevents wheel lockup when
braking, ESC prevents understeering or oversteering of the vehicle, so that vehicle handling is in line with the driver's requests and the prevailing road conditions. From the point of view of control logic, the vehicle is a complex, nonlinear dynamic system with several degrees of freedom (DOFs). To control these DOFs (independent motion directions), especially those that essentially influence the system's stability and the quality of its dynamic behavior, a common ESC model-based design consists in building a strategy based on the control of the side slip angle and the yaw rate, as shown in Figure 1.
Fig. 1 Vehicle Dynamic Model-based Strategy
Ultimately, a generic ESC model-based design can be based on three main subsystems:
• Observer: it estimates the values of vehicle-motion variables, such as the vehicle side slip angle.
• Higher-level supervisor: it coordinates the orders, based on the side slip angle and yaw rate values, according to the requested and actual dynamic behaviors of the vehicle.
• Slip controller: it drives the actuators influencing the tractive and braking forces, following the requested orders and the physical limits.
As shown in Figure 2, a common realistic ESC system is a hydraulic system with a brake-pressure sensor, a pump, valves and an attached Electronic Control Unit (ECU). Wheel-speed sensors, a steering wheel angle sensor, a yaw-rate sensor and a lateral-acceleration sensor are required for suitable operation. A minimal sketch of the yaw-rate reference and supervisor logic is given below.
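As a minimal, textbook-style sketch (not PSA's actual strategy), the Python code below computes a steady-state yaw-rate reference from a linear single-track model and applies a simple proportional yaw-moment correction when the measured yaw rate deviates from it. The vehicle parameters, dead band and gain are illustrative assumptions.

```python
# Illustrative single-track (bicycle model) parameters, not from a real vehicle.
L = 2.7        # wheelbase (m)
K_us = 0.0020  # understeer gradient (s^2/m), assumed
K_p = 4000.0   # proportional yaw-moment gain (N*m per rad/s of error), assumed

def yaw_rate_reference(speed, steer_angle):
    """Steady-state yaw-rate target of the linear single-track model (rad/s)."""
    return speed * steer_angle / (L * (1.0 + K_us * speed**2))

def supervisor(speed, steer_angle, yaw_rate_meas, dead_band=0.02):
    """Return a corrective yaw moment request (N*m); zero inside the dead band."""
    error = yaw_rate_reference(speed, steer_angle) - yaw_rate_meas
    if abs(error) < dead_band:
        return 0.0
    return K_p * error

# Example: 20 m/s, 0.05 rad road-wheel angle, vehicle yawing more than requested.
print(f"{supervisor(20.0, 0.05, 0.25):+.0f} N*m")
```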
Fig. 2 Components for a common ESC System
2.2 About Simulation and Validation Tools
Selecting the best control strategy is a challenge which requires knowledge of the vehicle's dynamic behavior and of the constraints of a realistic system. In the end, simulation tools are necessary to assist designers in developing such a model-based concept. To evaluate a concept, the common approach used by PSA consists in linking the model with a global dynamic vehicle model called SimulinkCarTM. Running on Matlab/Simulink® and parameterized by a configuration file (mass, inertia, tire model, …), this legacy tool developed by PSA handles 24 degrees of freedom. To evaluate the concept in the car, the model is embedded thanks to code generation tools like TargetLink® (Figure 3), together with associated physical models such as networks.
Fig. 3 Straight from the Simulink® model to the ECU with TargetLink® (www.dspace.com)
After simulation and validation, requirements are issued. The C-code can be generated either by hand or by a code generator. Deviations can be observed between the original concept and the final control loop in series production. They can be due to situations not seen during the development phases: real-life motion situations
of the vehicle, physical limits, constraints of the realistic system, etc. Another disadvantage is the fact that the software design is tied to the hardware: a new hardware platform leads to developing and validating new software. Rationalization is therefore required.
3 AUTOSAR Concepts
3.1 Project Objectives
As said, AUTOSAR (AUTomotive Open System ARchitecture) is a worldwide development partnership of car manufacturers, suppliers and other companies from the electronics, semiconductor and software industries. Since 2003 they have been working on the development and introduction of an open, standardized software architecture for the automotive industry. By simplifying the exchange and update options for software and hardware, the AUTOSAR approach forms the basis for reliably controlling the growing complexity of the electrical and electronic systems in motor vehicles. AUTOSAR also improves cost efficiency without compromising quality. The "core partners" of AUTOSAR are the BMW Group, Bosch, Continental, Daimler, Ford, Opel, PSA Peugeot Citroën, Toyota and Volkswagen. In addition to these companies, approximately 50 "premium members" play an important role in the success of the partnership. Companies which join the AUTOSAR development partnership can use the specifications free of charge.
3.2 Main Working Topics
As shown in Figure 4, AUTOSAR works along three axes and delivers a standardized infrastructure (architecture), a methodology and application interfaces for efficient software sharing:
Fig. 4 Main working topics for AUTOSAR
• Architecture: a software architecture including a complete basic software stack for ECUs – the so-called AUTOSAR Basic Software – as an integration platform for hardware-independent software applications.
• Methodology: exchange formats (templates) to enable a seamless configuration process of the basic software stack and the integration of application software in ECUs.
• Application Interfaces: specification of the interfaces of typical automotive applications from all domains, in terms of syntax and semantics, which should serve as a standard for application software modules.
3.3 Technical Overview
AUTOSAR provides a common software infrastructure for automotive systems of all vehicle domains, based on standardized interfaces for the different layers shown in Figure 5.
Fig. 5 AUTOSAR Software Architecture
The SW-Cs encapsulate an application which runs on the AUTOSAR infrastructure. The AUTOSAR approach is based on a layered software architecture:
• Application software components: this layer contains the functional applications.
• Runtime Environment (RTE): it implements all communication mechanisms and the essential interfaces to the basic software, defined beforehand by a Virtual Functional Bus (VFB).
• Basic Software: it provides the infrastructural functionality on an ECU.
By specifying interfaces and their communication mechanisms, the applications are decoupled from the underlying hardware and basic software; the sketch below illustrates this decoupling.
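The following Python sketch is only a conceptual illustration of how an RTE-like layer decouples application components from the communication mechanism; the class and method names are invented and do not correspond to the AUTOSAR C API.

```python
# Conceptual illustration of RTE-style decoupling (not the AUTOSAR API).

class Rte:
    """Toy runtime layer: routes named signals between components."""
    def __init__(self):
        self._signals = {}
    def write(self, name, value):
        self._signals[name] = value
    def read(self, name, default=0.0):
        return self._signals.get(name, default)

class YawRateSensorComponent:
    def run(self, rte):                     # periodic runnable
        rte.write("YawRateBase", 0.21)      # placeholder measurement (rad/s)

class EscComponent:
    def run(self, rte):                     # periodic runnable
        yaw_rate = rte.read("YawRateBase")  # the component never sees the bus/ECU
        rte.write("EscSts", "ACTIVE" if abs(yaw_rate) > 0.2 else "STANDBY")

rte = Rte()
sensor, esc = YawRateSensorComponent(), EscComponent()
for _ in range(3):          # a few scheduling cycles
    sensor.run(rte)
    esc.run(rte)
print(rte.read("EscSts"))   # -> ACTIVE
```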
3.4 AUTOSAR Authoring Tool
The development of SW-Cs requires interaction with AUTOSAR Authoring Tools (AAT), used to develop a vehicle's architecture and ECU topology, as well as RTE generation environments, as shown in Figure 6. The communication layer in the basic software is encapsulated and not visible at the application layer. An XML
formal description, a standard document for engineering tools, is generated. It encloses the description of the application software components, the Electronic Control Unit hardware and the system topology. It also includes the configuration of basic software modules such as the operating system, infrastructure and networks. With an AUTOSAR authoring tool, the allocation becomes compatible with various architectures.
Fig. 6 AUTOSAR Authoring Tool
These authoring tools must help designers build a robust and reliable mechatronic system by using such standardized definitions.
3.5 AUTOSAR Software Component
An application in AUTOSAR consists of interconnected "AUTOSAR Software Components", so-called SW-Cs. Each SW-C implementation is independent of the infrastructure.
Figure 7 shows an application consisting of three SW-Cs interconnected by several connectors. Each SW-C encapsulates part of the functionality of the application. AUTOSAR does not prescribe how large the SW-Cs are: depending on the requirements of the application domain, a SW-C might be a small, reusable piece of functionality (such as a filter) or a larger block encapsulating an entire automotive function. However, the SW-C is a so-called "Atomic Software Component": it cannot be distributed over several AUTOSAR ECUs. Consequently, each instance of a SW-C present in a vehicle is assigned to one ECU.
Fig. 7 AUTOSAR Application example
A model-based design following AUTOSAR rules is a software architecture based on several standardized atomic software components. The links between components are so-called ports. Each port contains data elements or events, and connected ports must be consistent. In release R3.0, AUTOSAR defines all interfaces in a so-called Integrated Master Table. Of course, a designer can import data elements and software modules from a common library. For the interfaces, as well as for other aspects needed for the integration of the SW-Cs, AUTOSAR provides a standard description format template. It includes:
- general characteristics (name, etc.),
- communication properties (ports, interfaces, …),
- inner structure (sub-systems, connections, …),
- required hardware resources (processing time, scheduling, memory size, …).
A minimal illustration of such a description is sketched below. By using a graphical representation tool compliant with AUTOSAR modeling, each designer can model AUTOSAR systems. Figure 8 shows an example of how the Simulink® tool is mapped to AUTOSAR rules.
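Purely as an illustration of the fields listed above, the Python sketch below gathers a software component description in a small data structure. The field and class names are invented; the real description is an AUTOSAR XML document produced by the authoring tools.

```python
# Illustrative (non-normative) software component description container.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Port:
    name: str
    direction: str            # "require" or "provide"
    data_elements: List[str]

@dataclass
class SoftwareComponentDescription:
    name: str                                            # general characteristics
    ports: List[Port] = field(default_factory=list)      # communication properties
    subsystems: List[str] = field(default_factory=list)  # inner structure
    resources: Dict[str, float] = field(default_factory=dict)  # hardware needs

esc_swc = SoftwareComponentDescription(
    name="ESC",
    ports=[Port("YawRateBase", "require", ["YawRateBase"]),
           Port("EscSts", "provide", ["EscSts"])],
    subsystems=["Observer", "Supervisor", "SlipController"],
    resources={"period_ms": 5.0, "rom_kb": 2048.0},
)
print(f"{esc_swc.name}: {len(esc_swc.ports)} ports, period {esc_swc.resources['period_ms']} ms")
```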
Fig. 8 Simulink® Style Guide for AUTOSAR (source: www.theMathWorks.com)
3.6 Benefits for Model-Based Design
The AUTOSAR concept does not introduce new ideas; it is a skeleton-like model built and shared by software engineers. Since the SW-Cs have well-defined descriptions, described and standardized within the AUTOSAR rules, their integration becomes easier. To be successful, both software engineers and model-based designers must fulfill a common Software Component Description.
4 Model-Based Design with AUTOSAR
4.1 Atomic Software for ESC System
As said, to ease the re-use of software components across several products, AUTOSAR standardizes the application interfaces agreed among the partners. This was achieved for a common ESC system and published in release R3.0.
Atomic software: AUTOSAR defines the whole ComponentType 'ESC' as one atomic software component, as shown in Figure 9.
Interfaces: signals are either core (mandatory), conditional (specific variant ports) or optional. The integer and physical ranges and the naming convention for the existing signals are specified, as shown in Figure 10 for the yaw rate base signal. Currently, timing features and safety level considerations have to be specified by the designers. As core signals, a standardized ESC model-based design must provide the system and actuator status, lateral acceleration and yaw rate, longitudinal acceleration from wheel speed, brake pedal pressed value, total powertrain torques at wheels requested by ESC, longitudinal vehicle speed, wheel angular velocity and wheel traveled distance. Valve and electric motor pump interfaces are not standardized.
[Figure 9 diagram: the ESP/ESC SW-Component and its interfaces (I1 to I7) to the base sensor signals, the Vehicle Longitudinal Controller (VLC), an external second yaw-rate controller SW-Component, the system-level brake actuator interface, and the information and command signals exchanged with other functions and domains.]
Fig. 9 ESC SW Component
Data Type Name: YawRateBase
Description: Yaw rate measured along the vehicle z-axis (i.e. compensated for orientation); coordinate system according to ISO 8855
Data Type: S16
Integer Range: -32768..+32767
Physical Range: -2.8595..+2.8594
Physical Offset: 0
Unit: rad/sec
…
Remarks: This data element can also be used to instantiate a redundant sensor interface. The range might have to be extended for future applications (passive safety).
Fig. 10 Standardization of the application interfaces
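To show what such a standardized signal definition buys the designer, the following Python sketch converts the S16 raw value of YawRateBase into a physical yaw rate. The linear scaling is inferred from the integer and physical ranges quoted in Figure 10; the exact normative resolution may differ in the last digits, so treat the numbers as illustrative.

```python
RAW_MIN, RAW_MAX = -32768, 32767        # S16 integer range from the interface table
PHYS_MIN, PHYS_MAX = -2.8595, 2.8594    # physical range in rad/sec
OFFSET = 0.0                            # physical offset

# Resolution inferred from the positive endpoint of the table (about 8.73e-5 rad/sec per LSB).
RESOLUTION = PHYS_MAX / RAW_MAX

def yaw_rate_from_raw(raw: int) -> float:
    """Convert a raw S16 YawRateBase sample to rad/sec (vehicle z-axis, ISO 8855)."""
    if not RAW_MIN <= raw <= RAW_MAX:
        raise ValueError("raw value outside the S16 range")
    return OFFSET + raw * RESOLUTION

print(yaw_rate_from_raw(32767))    # +2.8594 rad/sec
print(yaw_rate_from_raw(-32768))   # about -2.8595 rad/sec
```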
As core signals, the inputs available for the ESC system are Acceleration Pedal Ratio, Lateral Acceleration Base and Yaw Rate Base provided by sensor components, plus some vehicle statuses such as Combustion Engine Status, Energy Management, Gear Activation, Operation Mode and Parking Brake Active. The ESC is also informed about
the actual and requested total tractive torque provided by the powertrain. The ESC also knows how the vehicle is steered, through the estimated Road Wheel Angle and the Steering Wheel Angle and Speed values.
4.2 AUTOSAR Rules for Model-Based Design
Each atomic software component includes internal behaviors, the so-called runnables. These non-standardized routines include the services to be achieved. Well-known features must be provided by the software engineers, such as the needed libraries and resources and the timing requirements (period, reaction time). Another feature introduced by AUTOSAR is schedulability, i.e. the sequences of execution of all runnable entities, including initialization tasks. AUTOSAR introduces additional topics for model-based designers:
• variants, e.g. software configuration;
• modes, e.g. dependencies on a modes matrix where runnable entities and ports are enabled or disabled depending on the modes coming from the state-manager level.
The model-based design is delivered in XML format, including the behavior description, functions and interfaces. Services for diagnosis and lifetime situations are well established.
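The following Python sketch illustrates the idea of runnable entities with their triggering events, periods and mode dependencies. All names (Esc_Init, Esc_Step5ms, the mode labels) and the scheduling policy are invented for the example and are not AUTOSAR-defined identifiers.

```python
# Conceptual sketch: runnable entities with their triggers, periods and mode dependencies.
RUNNABLES = [
    {"name": "Esc_Init",      "trigger": "switch-event (initialization)", "period_ms": None},
    {"name": "Esc_Step5ms",   "trigger": "timing event", "period_ms": 5,
     "disabled_in_modes": {"LIMP_HOME"}},
    {"name": "Esc_Diag100ms", "trigger": "timing event", "period_ms": 100},
]

def schedule(runnables, current_mode):
    """Order of execution: initialization runnables first, then periodic runnables
    (shortest period first), skipping those disabled in the current mode."""
    init = [r for r in runnables if r["period_ms"] is None]
    periodic = sorted(
        (r for r in runnables
         if r["period_ms"] is not None
         and current_mode not in r.get("disabled_in_modes", set())),
        key=lambda r: r["period_ms"])
    return [r["name"] for r in init + periodic]

print(schedule(RUNNABLES, "NORMAL"))     # ['Esc_Init', 'Esc_Step5ms', 'Esc_Diag100ms']
print(schedule(RUNNABLES, "LIMP_HOME"))  # ['Esc_Init', 'Esc_Diag100ms']
```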
[Figure 11 diagram: the atomic ESC software component connected, through its AUTOSAR interfaces, to a sensor software component, a second atomic software component, an actuator software component and the VLC inputs/outputs, co-simulated in SIMULINK with a simplified vehicle dynamics model (Pacejka 92 tyres).]
Fig. 11 ESC Overview within AUTOSAR concept
An additional advantage for software engineers using the AUTOSAR approach is that some shared controls can be performed on each AUTOSAR SW-C, before its integration into the complete system:
• Memory mapping: check that all modules use the same data definitions, without redundancies.
• Calibration data: check the common and shared declarations.
• Initialization task: each runnable includes a periodic task as usual (TIME-EVENT of e.g. 5 ms or 100 ms) and an initialization task (SWITCH-EVENT).
These controls are possible with an AUTOSAR authoring tool and occur before the integration of the module. Thus, as summarized in Figure 11, model-based designers and software engineers have to work on these items together to make the integration of a functional module easier.
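Conceptually, such pre-integration controls boil down to consistency checks over the component descriptions. The sketch below illustrates only the first check (consistent data definitions across modules); the input layout and names are assumptions, and this is not the behavior of any particular AUTOSAR authoring tool.

```python
def check_data_definitions(modules):
    """Flag data elements declared with conflicting definitions (e.g. different data types)
    in different modules; a rough stand-in for a memory-mapping consistency check."""
    seen, findings = {}, []
    for module, definitions in modules.items():
        for element, data_type in definitions.items():
            if element in seen and seen[element][1] != data_type:
                findings.append(f"{element}: {seen[element][0]} uses {seen[element][1]}, "
                                f"{module} uses {data_type}")
            else:
                seen[element] = (module, data_type)
    return findings

# Illustrative input: two SW-C modules declaring the same data element differently.
modules = {
    "SensorSWC": {"YawRateBase": "S16"},
    "AtomicESC": {"YawRateBase": "S32"},   # deliberate mismatch for the example
}
for finding in check_data_definitions(modules):
    print("inconsistent definition ->", finding)
```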
4.3 Chassis Domain Overview
Figure 12 shows the composition of the chassis domain. The task of the chassis work package is focused on the data description of the functional chassis domain within the AUTOSAR Application Layer. SW-Compositions are described, including Adaptive Cruise Control (ACC) attached to the Vehicle Longitudinal Control (VLC), ESC (Electronic Stability Control), Electronic Parking Brake (EPB) and its high-level logic command named Stand Still Manager (SSM), Roll Stability Control (RSC), Steering and High-Level Steering components, and Suspension.
Fig. 12 AUTOSAR Chassis Overview
5 Conclusion
Standardization means a shared platform for model-based development, offering an atomic software decomposition, libraries and interfaces. AUTOSAR helps designers to build robust and reliable mechatronic systems by using standardized rules. In the end, engineers must be able to solve their challenges by using existing, standardized and validated modules where no innovation occurs, and tool suppliers need to support them as much as possible. AUTOSAR is not as frightening as it sometimes seems; however, a guideline for model-based designers to develop functions must be established.
Acknowledgement
I acknowledge the contribution of my colleagues to this paper, particularly Messrs E. Gravier and A. Gilberg, and also all the work provided by the AUTOSAR consortium, particularly work package WP10.3.
References
[1] http://www.autosar.org, official website of the AUTOSAR partnership
[2] AUTOSAR partnership: Achievements and exploitation of AUTOSAR development, Convergence CTEA (2006)
[3] Niggemann, O., Beine, M.: SystemDesk and TargetLink – AUTOSAR-compliant development at system and function level, dSPACE GmbH (October 2007)
[4] AUTOSAR partnership: Applying ASCET to AUTOSAR, Document ID 227
[5] AUTOSAR partnership: Applying Simulink to AUTOSAR, Document ID 185
[6] BOSCH: Automotive Handbook, 5th edn. (2000)
[7] Sandmann, G., Thompson, R. (The MathWorks, Inc.): Development of AUTOSAR Software Components within Model-Based Design, SAE 2008-01-0383
[8] AUTOSAR partnership: AUTOSAR on the road, Convergence (2008)
[9] Fürst, S. (BMW): AUTOSAR – An open standardized software architecture for the automotive industry, October 23 (2008)
[10] Schuette, H., Waeltermann, P. (dSPACE GmbH): Hardware-in-the-Loop Testing of Vehicle Dynamics Controllers – A Technical Survey, SAE 2005-01-1660
Glossary
AUTOSAR: AUTomotive Open System ARchitecture
BSW: Basic Software
ESC: Electronic Stability Control
ECU: Electronic Control Unit
E/E: Electric/Electronics
HW: Hardware
OEM: Original Equipment Manufacturer
RTE: Runtime Environment
SW-C: Software Component
XML: eXtensible Markup Language
Enterprise Methodology: An Approach to Multisystems*
Dominique Vauquier
Abstract. Praxeme is an open method backed by many companies and organizations. Its scope is the “enterprise”, understood in its most generic and widest meaning. The methodology is a tool – sometimes a weapon – that helps us to cope with the complexity of the objects we want to create or to transform. The Enterprise System Topology provides us with a methodological framework that identifies and links together the aspects of the enterprise. Based on this framework, the convergence theory describes the way of transforming a collection of companies into a cohesive and efficient single system. The range of topics includes business knowledge, organization and IT.
Keywords: methodology, enterprise, transformation, complexity, convergence, merge, system, modeling.
The initiative for an open method involves public organizations (French Army, Direction Générale de la Modernisation de l’Etat…) as well as private companies (AXA Group, SMABTP, SAGEM…) and consulting firms. It has resulted in the Praxeme method, whose scope embraces all the aspects of the enterprise, from strategy to deployment. After a brief presentation of its principles, this paper will focus on its contribution to the transformation of the enterprise. More specifically, it will address the question of how to master the complexity of the extended enterprise and how to organize the convergence of its constituents. The limited length of this paper and lecture makes it necessary to leave a couple of issues aside. As a result, it may sound a bit of a utopian approach. Indeed, we assert the claim: if we really are to transform the enterprise – as more and more decision-makers are proclaiming – we must be able to think it anew. That is not to say that our approach is not realistic. The methodology is precisely there for making good will and ambition effective. The main obstacle on the road consists in the preconceived ideas that the crowd of naysayers and ideal-killers endlessly reiterate. Answering this criticism is easy – technically speaking – but it would require more space and time for delving into the details of the method. We will only highlight the core message as far as complexity is concerned. Firstly, we will expose the methodological framework which provides the theoretical basis of the method. Then, we will apply it to the challenge of convergence inside large organizations.
Dominique Vauquier, Praxeme Institute, Group IT Strategy & Enterprise Architecture team at AXA Group. Cell: +33 (0)6 77 62 31 75, e-mail: [email protected], www.praxeme.org
1 The Enterprise System Topology
1.1 Notion of Enterprise System
We call “Enterprise System” the enterprise that perceives itself as a system1. Using such a phrase expresses a strong tenet: in the face of complexity, we adopt a specific sort of rationality, made up of scientific assessment, engineering, system theory… In the workplace, this is not such a natural posture. The Enterprise System is the entire enterprise, a complex object. It must not be confused with the notion of an IT system at the scale of the enterprise (namely a group or a federation). Beyond the IT system, it includes the numerous and various constituents of the enterprise, some of which are material – like buildings, equipment, people – while others are resolutely abstract – like values, goals, knowledge… A great deal of the complexity stems from the diversity of these constituents as well as from the fact that they are hugely intertwined. To give an example, a person as a worker is an obvious constituent of the enterprise; this person assumes a role in the organization depending on his/her skills and behaves in accordance with his/her personal values. For every task undertaken, there is a potential for conflict between these individual values and the asserted or real values of the group. As a result, solutions – processes, software… – may or may not work depending on the level of harmony that has been established between the value system of the group and that of its members. All elements cited in this example are part of the Enterprise System. Remain oblivious to these elements and our action will soon be hindered. Recognizing this reality is common wisdom. Taking it into account in our thoughts and actions is less common and, on the contrary, requires thorough attention and constant endeavor. That is the meaning and content of the phrase “Enterprise System”.
1 Cf. the Enterprise Transformation Manifesto, http://www.enterprisetransformationmanifesto.org
1.2 Methodological Framework
If enterprises are deemed complex systems, how should we address them? Practical questions follow: What is to be represented? How should we deal with the amount of information to be collected and the decisions to be made? These questions call for a methodological framework. At the heart of the software engineering tradition, the “Separation of concerns” principle sets the stage. Over the decades, there have been a few proposed frameworks, varying from a systematic approach (Zachman’s framework with its 30 cells) to simpler and more popular forms (e.g., TOGAF with only four types of architecture). A methodological framework always conveys strong assumptions and expresses an
in-depth mindset. These assumptions and mindsets determine the way we see things and the way we act. Therefore, it is of paramount importance to unveil these largely unconscious ideas. As a methodological framework, the Praxeme method proposes the Enterprise System Topology2. It stems from the necessity of capturing all knowledge related to the enterprise, in an actionable manner. “Topology” as a term is to be understood according to its strict etymology: the discourse about the places – answering the question: where should we put every bit of information and decision in the enterprise transformation process? The Enterprise System Topology identifies and links the aspects of the enterprise together (see figure 1).
Fig. 1 The Enterprise System Topology
2 At first glance, the term “topology” can be understood in its basic meaning, that given by the etymology: the discourse (logos) on the location (topos). Using this view, the topology of the enterprise system explains how to position elements of information and decision, which appear all the way along the enterprise transformation chain. However, topology also deals with the relations between elements. Although no mathematical approach of the Enterprise System Topology has been attempted yet, there is a striking parallelism between this empirical approach and the mathematical theory. Indeed, the notion of neighborhood obviously applies to the elements of models. It is possible to define a topology for each aspect of the Enterprise System. In reverse, each aspect requires a specific topology with a dedicated notion of neighborhood that takes into account the meaning of the relations between elements. For instance, the valid relations between logical constituents clearly differ from the relations used to express the business knowledge in the semantic model. How far this difference goes is of importance from both a methodological and practical perspective. Obviously, the UML notion of a package is tantamount to the mathematical notion of a subset and it makes sense to ask whether a package is “opened” or “closed”, depending on the topological rules that constrain the design. It is a way of assessing the quality of an architecture.
The comprehensive framework identifies and articulates nine aspects. We can formally model each of these aspects, in order to master information and decisionmaking regarding the enterprise. The “political” aspect is better named “teleonomic”; it gathers scoping elements (elements of knowledge and management): objectives, requirements, vocabularies, rules… These elements are then linked to model elements dispatched in the other aspects, depending on their nature3. This article emphasizes some points related to the framework and the paradigm shift it embodies. Compared to other frameworks, the characteristics of the Enterprise System Topology include:
• insistence on the semantic aspect, which is necessary for establishing a proper representation of the core business knowledge, ahead of the processes;
• place of the logical aspect as an intermediary between business and IT;
• inclusion of the information or data point of view, all the way along the chain of aspects;
• emphasis on relations between the modeling elements (cf. metamodel).
Praxeme recognizes its debt to Zachman’s framework, which has inspired the Enterprise System Topology. The latter proposes a simpler – and so, more actionable – order than the former. As regards TOGAF and the frameworks of this generation, we believe, on the one hand, that four or so planes are not enough for organizing the material we have to cope with. On the other hand, as Praxeme focuses on modeling techniques, it is orthogonal to the processes recommended by these repositories of practices. As a result, it is easy to combine the processes and practices with the modeling techniques4.
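As a rough illustration of the topological question "where should we put every bit of information?", the following sketch assigns example elements to aspects. Only aspects explicitly named in this paper appear, although the full topology counts nine, and the element lists are assumptions chosen for the example, not the method's definition.

```python
# Illustrative placement of elements into aspects of the Enterprise System Topology.
# Only the aspects named in this paper are shown; the element lists are examples, not the method.
TOPOLOGY = {
    "teleonomic (political)": ["objectives", "requirements", "vocabularies", "rules"],
    "semantic":               ["business objects", "object life cycles"],
    "pragmatic":              ["actors and roles", "processes", "use cases"],
    "logical":                ["logical services", "logical architecture (e.g. SOA style)"],
    "technical":              ["technical architecture", "software components"],
}

def place(element):
    """Answer the topological question: in which aspect does this element belong?"""
    for aspect, elements in TOPOLOGY.items():
        if element in elements:
            return aspect
    return "unclassified"

print(place("processes"))         # pragmatic
print(place("business objects"))  # semantic
```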
1.3 How to Describe the “Business” Reality
We use the term “business” as opposed to “IT”, meaning the part of the business reality without its software equipment.
3 To follow on from the previous note, it is possible to consider the enterprise itself as a topological space. As a methodological framework, the Enterprise System Topology summarizes relations that exist between elements of various aspects, enabling us to outline a multi-aspect topology. In so doing, we provide the methodology with a mathematical basis for the sake of analysis and assessment, as far as derivation rules and traceability are concerned. These rules, which automatically link elements from one aspect to another, can be seen as applications – most often injections – and some could reveal themselves as homeomorphisms. An example is the user interface, genuinely derived from a real semantic model by the method. Quality evaluation also benefits from this mathematical tool, since the absence of such formal homeomorphisms indicates divergence and lack of alignment.
4 An in-depth comparison of the methodological frameworks would cast much light on our practices and backgrounds. Such a hygienic exercise pertains to methodology, strictly speaking (i.e., an application of epistemology). We definitely need this kind of endeavor if we are to fix our dysfunctional behaviors. This analysis would require more space and is out of the scope of this paper.
Fig. 2 The right description of the business encompasses the core business knowledge as well as the processes and organization
As far as business is concerned, we generally use process representations, capability models, use-case models or any other expression describing the business activity. This spontaneous approach to business reality ranks among the functionalist approaches. It entails a difficulty: we are considering the enterprise in its organizational aspect. Yet, what we see in this aspect are actors and roles, activities and habits, processes and procedures, use-cases... All of these convey organizational choices. Therefore, representations of this aspect can hardly be shared and generalized. When the purpose is convergence, simplification, agility... we need a more generic representation. We need to isolate the core business knowledge, using abstraction and expelling variability. Above this “pragmatic” (organizational) aspect we must recognize a more abstract one, made of business objects, regardless of organizational habits and, of course, independent of technical choices. We call this the “semantic” aspect. The semantic model is not only a sort of conceptual data model; it intends to express the business knowledge. We can use here an object-oriented approach, which provides us with all the tools we need:
• class diagrams to structure the concepts,
• state machines to capture the transformations and object life cycles,
• etc.
An object-oriented approach has software connotations, but it is built upon philosophical works. That explains its ability to efficiently structure representations. It can really empower the formal expression of business knowledge.
1.4 How to Design the IT System
When equipped with the two business representations – semantic and pragmatic – we can search for a better structure for the software solution. If we conceive this structure directly in terms of technology and technical choices, we will get a representation which will be subject to technical change. Also, there will be a risk of entering into excruciating details. Such a representation will make it impossible to drive the IS transformation in the long term. For all these reasons, our framework introduces an intermediate aspect, between business and IT: the “logical” aspect. It is where the structural decisions regarding the software system will be made. For instance, SOA (service-oriented architecture) as a style for designing an IT system pertains to the logical aspect. The logical aspect is linked with the previous aspects. The methodology states the derivation rules which help discover the logical services.
Fig. 3 The optimal structure of the IT system takes heed of both models of the business reality
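To give a flavor of what a derivation rule can look like, the sketch below naively derives logical service names from the classes of a semantic model, in an SOA style. The business objects and the fixed set of operations are assumptions for illustration; the method's actual derivation rules are richer than this.

```python
# Naive illustration of a derivation rule: each class of the semantic model (a business
# object) yields a small set of logical services in an SOA-style logical model.
SEMANTIC_MODEL = ["Contract", "Claim", "Party"]   # assumed business objects, for illustration

def derive_logical_services(semantic_classes,
                            operations=("create", "read", "update", "terminate")):
    return {cls: [f"{cls}Service.{op}" for op in operations] for cls in semantic_classes}

for cls, services in derive_logical_services(SEMANTIC_MODEL).items():
    print(cls, "->", services)
```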
1.5 Impact of This Approach on a Single System
By applying this approach, we deeply change the structure of the system. Indeed, the logical architect receives a list of object domains from the semantic model. Object domains are an alternative way to structure a model, as opposed to functional domains. For more details, see the “Guide to the logical aspect”.
Fig. 4 The impact of this approach on the architecture of the IT system
2 The Convergence Approach
How can we master the complexity of the extended enterprise? That is the purpose of the convergence approach, based upon the Enterprise System Topology. This approach provides transformation programs with a strong and willful vision. It helps to prioritize the investments and to drive the transformation, keeping the focus on the essential and avoiding common pitfalls. The convergence goal arises each time a group wants to better integrate its components, either in the case of a merger or in the search for savings. The resulting process includes these steps (which are more architecture principles than project phases):
1. Separate the concerns
2. Share the core business knowledge
3. Factorize the practices
A constant endeavor to isolate the variation points affects every step.
Step #1: Separate the Concerns
The starting point is represented by legacy systems where the aspects have not been separated. The systems have been developed one application after the other, so the result is only to be expected. In such systems, it is very difficult to isolate the business rules, to adapt the software to a new organization and to avoid redundancy. The first thing to do is to separate the aspects. The IT system does not necessarily change at this stage, but at least we obtain a better representation of it and we can compare it to other systems. Figure 5 symbolizes three systems, first as a mess of various concerns and then at the stage where the concerns have been disentangled, in accordance with the Enterprise System Topology.
Fig. 5 Adopt the “separation of concerns” principle
Step #2: Share the Core Business Knowledge
This comparison reveals that the core business knowledge is much the same from one company to another. With good will and appropriate modeling techniques, it is possible to establish a common semantic model. MDM (master data management) and BRMS (business rules management systems) are solutions that facilitate the agreement on a common semantic model, by providing means for capturing part of the variations. Therefore, the companies can refer to the same model and adapt it to their context. For example, national rules that constrain products or practices will be expressed in a BRMS rather than exposed in the common model. In fact, BRMS and MDM are technical mechanisms that are to be considered when it comes to IT architecture. We evoke these mechanisms at this stage of semantic modeling just to draw attention to the fact that the model may be parameterized and contain meta-data (a minimal sketch of this idea follows Figure 6). This remark calls for a specific modeling procedure. In the picture below (Figure 6), every part of the pie chart represents a different company. At this stage, the building of the IT systems still differs from one company to another, but it refers to a common model as far as core business knowledge is concerned. In addition, there is no attempt at this stage to establish a common representation of business activities.
Fig. 6 A common semantic model shared by the entities of the federation
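Here is a minimal sketch of the idea that a shared semantic description can be instantiated per company by applying externalized rules, in the spirit of what an MDM or a BRMS would hold. The insurance product, the countries and the rule values are invented for the example.

```python
# A shared ("semantic") description of a product, with the varying national rules
# externalized as data, in the spirit of the variation points held by an MDM/BRMS.
COMMON_MODEL = {
    "MotorInsurance": {"coverages": ["liability", "collision"], "needs_currency": True},
}

NATIONAL_RULES = {   # variation points kept out of the common model
    "FR": {"liability_mandatory": True, "max_deductible": 1500},
    "DE": {"liability_mandatory": True, "max_deductible": 2500},
}

def product_for_country(product, country):
    """Instantiate the shared model for one company/country by attaching its local rules."""
    spec = dict(COMMON_MODEL[product])   # the shared part stays identical for everyone
    spec["rules"] = NATIONAL_RULES[country]
    return spec

print(product_for_country("MotorInsurance", "FR"))
print(product_for_country("MotorInsurance", "DE"))
```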
Step #3: Factorize the Practices
The assumption that characterizes step 3 states that it is possible to give a generic description of the processes and activities, provided that:
• the appropriate modeling techniques are applied, especially by referring to the semantic model and the lifecycles of the business objects;
• the modeler thinks of various possible usages and prepares the system's adaptability by means of parameters.
Regarding the pragmatic aspect, the solutions provided in the field of business process management are particularly useful. These solutions will be implemented later on, on the technical architecture and in the software. Knowing that, the modeler is free to produce a more generic design5.
5 To go back to our discussion on mathematical formalization, we can now suggest treating several companies as “adjunction spaces”. Interoperability and convergence can be approached through sets of components – in the latter case – or sets of flows – in the former one. Both goals – interoperability and convergence – have to be specified in terms of the aspects that are targeted. For instance, interoperability can be sought at the technical level only (without convergence of the content) or at the software level (implying identity inside the antecedent aspects). The corresponding transformations can be represented in terms of applications and their combinations (adjunction, disjoint union, product…). Intuitively, this is a potential starting point for establishing a rigorous approach to enterprise systems, including federations of systems and their evolutions. Firstly, such an approach would benefit from findings in the field of architecture and complexity measurement (cf. Y. Caseau and D. Krob; M. Aiguier). Secondly, it would expand to multi-systems. As it includes the multi-aspect dimension, a consequence of the “separation of concerns” principle, it does not limit itself to a mere evaluation tool but also conveys strong recommendations for transforming the systems.
Fig. 7 A single description of processes with their parameters
Operating Rule: “Isolate the Variation Points”
The approach can be taken further. Each time, for every aspect, the idea is to seek the possibility of sharing a common description and expelling the causes of variation into an ad hoc device. The technical choices are not the only ones that can be shared, nor the most critical, nor necessarily the priorities. One has to ask where the most value lies in convergence. For instance, a logical model with a specification of services in an SOA approach is easier to consider as a reference than its translation into software. Indeed, the software takes into account various technical architectures…
Fig. 8 Types of variation points depending on the aspects (MDM = Master Data Management; BRMS = Business Rules Management System; BPM = Business Process Management)
3 Conclusion
The convergence approach summarizes a kind of “utopian” approach, since it reverses the approach generally adopted. It is typical of a top-down approach, which we observe more and more rarely. Indeed, we are facing a paradox: on the one hand, decision-makers are calling for innovation and transformation more and more often; on the other hand, the practices of design and architecture have dramatically regressed and are receding in front of the so-called pragmatic approach. The complexity of the matters is put forward as an alibi for avoiding tough decisions. As a result, it has become an urgent matter to propose concrete guidelines and to reform our practices. In the face of complexity, Praxeme is an attempt to reactivate the methodological tradition and to provide proper guidelines. This method and the multisystems approach have been applied to several programs in various sectors: UAV6 control systems (SAGEM), insurance (Azur-GMF, SMABTP, AXA Group)… Sustainable IT Architecture illustrates the methodology as applied to the overhauling of an entire information system. The initiative for an open method has already made guides and training support available. We are aware that the current corpus lacks many topics and procedures. Let us end this article with a call for contributions to the initiative, to build the method our enterprises definitely need.
6 Unmanned Air Vehicle.
Bibliography
Caseau, Y., Krob, D., Peyronnet, S.: Complexité des systèmes d’information: une famille de mesures de la complexité scalaire d’un schéma d’architecture. Génie logiciel (2007)
Aiguier, M., Le Gall, P., Mabrouki, M.: A formal denotation of complex systems: how to use algebraic refinement to deal with complexity of systems. Technical Reports, http://www.epigenomique.genopole.fr
Strategy & Leadership in turbulent times. McKinsey Quarterly (2010)
Aiguier, M., Le Gall, P., Mabrouki, M.: Complex software systems: Formalization and Applications. International Journal on Advances in Software 2(1), 47–62 (2009)
Bonnet, P., Detavernier, J.-M., Vauquier, D.: Sustainable IT Architecture. Wiley, Chichester (2009)
Vauquier, D.: Praxeme methodological guides, http://www.praxeme.org
http://www.enterprisetransformationmanifesto.org (In the face of complexity, this manifesto articulates core principles and offers an escape from confusion, gloom and doom. It aims to reinforce our ability to act.)
Curley, M.: Managing Information Technology for Business Value. Intel Press (2007)
Depecker, L. (ed.): Terminologie et sciences de l’information. Le savoir des mots (2006)
Rozanski, N., Woods, E.: Software Systems Architecture. Addison-Wesley, Reading (2005)
Longépé, C.: Urbanisation du Système d’information. Dunod (2001)
Hofmeister, C., Nord, R., Soni, D.: Applied Software Architecture. Addison-Wesley, Reading (1999)
Meinadier, J.P.: Ingénierie et intégration des systèmes. Hermès (1998)
Strassmann, P.A.: The Politics of Information Management. The Information Economics Press (1995)
Vauquier, D.: Développement Orienté Objet. Eyrolles (1993)
Tardieu, H., Rochfeld, A., Coletti, R.: La méthode Merise. Editions de l’Organisation (1989)
Apostel, L.: Syntaxe, Sémantique et Pragmatique. In: Piaget, J. (ed.) Logique et Connaissance Scientifique. Coll. La Pléiade, Gallimard (1967)
Piaget, J.: Épistémologie des mathématiques. In: Piaget, J. (ed.) Logique et Connaissance Scientifique. Coll. La Pléiade, Gallimard (1967)