Requirements Engineering for Sociotechnical Systems
José Luis Maté, Universidad Politécnica de Madrid (UPM), Spain
Andrés Silva, Universidad Politécnica de Madrid (UPM), Spain
Information Science Publishing Hershey • London • Melbourne • Singapore
Acquisition Editor: Mehdi Khosrow-Pour
Senior Managing Editor: Jan Travers
Managing Editor: Amanda Appicello
Development Editor: Michele Rossi
Copy Editor: Toni Fitzgerald
Typesetter: Sara Reed
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.
Published in the United States of America by Information Science Publishing (an imprint of Idea Group Inc.)
701 E. Chocolate Avenue, Suite 200, Hershey PA 17033
Tel: 717-533-8845  Fax: 717-533-8661
E-mail: [email protected]  Web site: http://www.idea-group.com

and in the United Kingdom by Information Science Publishing (an imprint of Idea Group Inc.)
3 Henrietta Street, Covent Garden, London WC2E 8LU
Tel: 44 20 7240 0856  Fax: 44 20 7379 3313
Web site: http://www.eurospan.co.uk

Copyright © 2005 by Idea Group Inc. All rights reserved. No part of this book may be reproduced in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.

Library of Congress Cataloging-in-Publication Data
Requirements engineering for sociotechnical systems / Jose Luis Mate and Andres Silva, editors.
p. cm.
Includes bibliographical references and index.
ISBN 1-59140-506-8 (h/c) -- ISBN 1-59140-507-6 (s/c) -- ISBN 1-59140-508-4 (ebook)
1. Software engineering. I. Mate, Jose Luis. II. Silva, Andres.
QA76.758.R455 2004
005.1--dc22
2004022149

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book is new, previously-unpublished material. The views expressed in this book are those of the authors, but not necessarily of the publisher.
Requirements Engineering for Sociotechnical Systems Table of Contents
Foreword ..... vii
   Bashar Nuseibeh, The Open University
Preface ..... viii
   José Luis Maté and Andrés Silva, Universidad Politécnica de Madrid (UPM), Spain

Section I: Basics

Chapter I. Requirements Engineering: Dealing with the Complexity of Sociotechnical Systems Development ..... 1
   Päivi Parviainen and Maarit Tihinen, VTT Technical Research Centre of Finland, VTT Electronics, Finland; Marco Lormans, Delft University of Technology, The Netherlands; Rini van Solingen, LogicaCMG Technical Software Engineering (RTSE1), The Netherlands
Chapter II. Challenges in Requirements Engineering for Embedded Systems ..... 21
   Eman Nasr, Cairo University, Egypt
Chapter III. Requirements Elicitation for Complex Systems: Theory and Practice ..... 37
   Chad Coulin and Didar Zowghi, University of Technology Sydney, Australia
Chapter IV. Conceptual Modeling in Requirements Engineering: Weaknesses and Alternatives ..... 53
   Javier Andrade Garda, Juan Ares Casal, Rafael García Vázquez, and Santiago Rodríguez Yáñez, University of A Coruña, Spain
Chapter V. Combining Requirements Engineering and Agents ..... 68
   Ricardo Imbert and Angélica de Antonio, Universidad Politécnica de Madrid, Spain
Chapter VI. Maturing Requirements Engineering Process Maturity Models ..... 84
   Pete Sawyer, Lancaster University, UK
Chapter VII. Requirements Prioritisation for Incremental and Iterative Development ..... 100
   D. Greer, Queens University Belfast, UK
Chapter VIII. A Quality Model for Requirements Management Tools ..... 119
   Juan Pablo Carvallo, Xavier Franch, and Carme Quer, Universitat Politècnica de Catalunya (UPC), Spain

Section II: Challenges

Chapter IX. Composing Systems of Systems: Requirements for the Integration of Autonomous Computer Systems ..... 139
   Panayiotis Periorellis, University of Newcastle upon Tyne, UK
Chapter X. Requirements Engineering for Technical Products: Integrating Specification, Validation and Change Management ..... 153
   Barbara Paech, University of Heidelberg, Germany; Christian Denger, Daniel Kerkow, and Antje von Knethen, Fraunhofer Institute for Experimental Software Engineering, Germany
Chapter XI. Requirements Engineering for Courseware Development ..... 170
   Ines Grützner, Fraunhofer Institute for Experimental Software Engineering, Germany; Barbara Paech, University of Heidelberg, Germany
Chapter XII. Collaborative Requirements Definition Processes in Open Source Software Development ..... 189
   Stefan Dietze, Fraunhofer Institute for Software and Systems Engineering (ISST), Germany
Chapter XIII. Requirements Engineering for Value Webs ..... 209
   Jaap Gordijn, Free University Amsterdam, The Netherlands
Chapter XIV. Requirements Engineering in Cooperative Systems ..... 226
   J.L. Garrido, M. Gea, and M.L. Rodríguez, University of Granada, Spain

Section III: Approaches

Chapter XV. RESCUE: An Integrated Method for Specifying Requirements for Complex Sociotechnical Systems ..... 245
   Sara Jones and Neil Maiden, City University, UK
Chapter XVI. Using Scenarios and Drama Improvisation for Identifying and Analysing Requirements for Mobile Electronic Patient Records ..... 266
   Inger Dybdahl Sørby, Line Melby, and Gry Seland, Norwegian University of Science and Technology, Norway
Chapter XVII. Elicitation and Documentation of Non-Functional Requirements for Sociotechnical Systems ..... 284
   Daniel Kerkow, Jörg Dörr, Thomas Olsson, and Tom Koenig, Fraunhofer Institute for Experimental Software Engineering, Germany; Barbara Paech, University of Heidelberg, Germany
Chapter XVIII. Capture of Software Requirements and Rationale through Collaborative Software Development ..... 303
   Raymond McCall, University of Colorado, USA; Ivan Mistrik, Fraunhofer Institut für Integrierte Publikations- und Informationssysteme, Germany
Chapter XIX. Problem Frames for Sociotechnical Systems ..... 318
   Jon G. Hall and Lucia Rapanotti, The Open University, UK
Chapter XX. Communication Analysis as Perspective and Method for Requirements Engineering ..... 340
   Stefan Cronholm and Göran Goldkuhl, Linköping University, Sweden

About the Editors ..... 359
About the Authors ..... 360
Index ..... 370
Foreword
Requirements engineering (RE) lies at the heart of systems development, bridging the gap between stakeholder goals and constraints, and their realization in systems that inevitably combine technology and human processes, embedded in a changing organizational or social context. RE is therefore multi-disciplinary in both its outlook and its deployment of techniques for elicitation, specification, analysis, and management of requirements. This is an important book because it recognizes the multi-disciplinary dimensions of RE and because its contributions seek to strengthen the bridging role of RE. It is all too easy for technologists and engineers to ignore the social context in which their technology will be deployed, and all too easy for humans and organizations to be unaware of the opportunities and difficulties that technology brings. The contributions in this book, written by a diverse group of researchers and scholars, have been thoughtfully organized and edited to appeal to different audiences: the student learning about the area, the researcher seeking challenges to investigate, and the practitioner looking for practical techniques to apply. A single book cannot be everything to everybody, but this edited volume is a tremendously valuable introduction to the many facets of requirements engineering for sociotechnical systems. Bashar Nuseibeh The Open University, May 2004
Preface
The Concept of Sociotechnical Systems: History and Definition

The term “Sociotechnical System” comes from the field of organizational development. The goal of this field is to improve the performance and effectiveness of human organizations. The term was introduced by Emery and Trist (1960), organizational development researchers at the Tavistock Institute of Human Relations in London (www.tavinstitute.org). By coining the term “sociotechnical system” they were challenging the conventional position at the time, which was based on technological determinism. Technological determinism postulated that:

•	Technology is autonomous.
•	The individual and social conditions of human work must follow the technical structures (Ropohl, 1999).
•	Training is a process that adapts humans to machines, so humans can be quickly replaced (if need be).
The organization of labor known as Taylorism can be seen as a natural consequence of technological determinism, and Henry Ford’s synchronized mass production methods are its most prominent example. This is the way of thinking that Charlie Chaplin’s film Modern Times and many similar dystopias narrated in 20th century literature criticize. By contrast, Emery and Trist thought that there is, or there should be, an interdependent and reciprocal relationship between humans and technology. Hence, from the point of view of work design, both the social and the technological aspects of work need to be in harmony to increase effectiveness and “humanize” the workplace. This would be achieved mainly by user participation in the design of the systems and devices that users are to operate at the workplace. From this discussion, it can be seen that the term “sociotechnical” comes from the analysis of labor relations in the industrial era. This new view of the interdependency of the technical and the social also emerged in high-tech industries. For instance, after an in-depth analysis of the development process of a defense-related aircraft, Law and
Callon (1988) found that engineers “are not just people who sit in drawing offices and design machines;” they are also “social activists who design societies or social institutions to fit those machines.” The introduction of computers at the workplace soon led to new views and extensions of this work. Research into labor relations and work design became more and more concerned with the use of computing systems (Scacchi, 2004). An outstanding contribution came from the so-called “Scandinavian School.” This school advocates that, at design time, apart from user participation, there is also a need to address the politics of labor and democratize the workplace (Scacchi, 2004). This position had a heavy bearing on software and systems engineering, so much so that Friedman and Kahn (1994) later affirmed, in a purely computing context such as the “Communications of the ACM”, that “computer technologies are a medium for intensely social activity” and “system design –though technical as an enterprise — involves social activity and shapes the social infrastructure of the larger society.” It is also important to note that, at the same time, the field of computer ethics was developing in response to the risks inherent to computing systems, and the ACM “Code of Ethics” was published in 1992. The term “sociotechnical” is widely embraced by people interested in computer ethics, and it is from this field that we have borrowed, slightly modified, what we believe to be the most complete definition: A sociotechnical system is a complex inter-relationship of people and technology, including hardware, software, data, physical surroundings, people, procedures, laws and regulations (www.computingcases.com, 2004). Soon the software engineering community realized that systems are dynamic, as the organizational and social context in which they are embedded is also dynamic (Truex, 1999). Projects became more and more socially self-conscious, or, in other words, more aware that their objectives are to alter both the technical and the social (Blum, 1996). Accordingly the term “sociotechnical” started to be adopted in software engineering and systems engineering. Two main points can be made as to the use of the term: (i) the term normally applies to the product, not the process, because the process is tacitly recognized as sociotechnical by the software engineering community; (ii) the term is normally used in an attempt to emphasize the socially self-conscious feature, as defined above, and underline opposition to technological determinism.
Sociotechnical Systems and Requirements Engineering

In no other field is the need to attach just as much importance to the system as to the people so clear as in software and systems engineering. This is because, due to its inherent flexibility (at least theoretically), software can be configured by the designers/developers to fit any particular situation and to achieve almost any purpose. In practice, however, this flexibility comes at a price, because the number of different software + hardware + people configurations is so high that it is extremely difficult to find out exactly which is the best one at a particular point in time to satisfy the stakeholders’
goals. This has been less of a problem in “traditional” engineering, like mechanical or civil engineering, at least until now. But nowadays, in the so-called Information Society, traditional engineering is not free from this problem. Software is now an essential part of products and services offered by industries traditionally unrelated to software, like the automotive industry, photography, telephony, medicine, and so forth (for instance, as Paech, Denger, Kerkow, and von Knethen say in this book, a modern car contains more executable code than the first Apollo that flew to the moon). At the same time, software is an essential instrument for the designers of these new products and services. But a software system is of no help unless it is built according to its requirements. Requirements engineering (RE) provides the methods, tools, and techniques to build the roadmaps that designers and developers of complex software/people systems should follow, as it is the discipline concerned with the real-world goals for, functions of, and constraints on those systems (Zave, 1997). It is the discipline that most helps to achieve success in system development and, in particular, in sociotechnical system development. The RE discipline plays an important role in raising the socially self-conscious factors related to complex system development. Additionally, success in RE essentially depends on it being founded on a sociotechnical position. The goal of this book, written by practitioners and researchers, is to promote the sociotechnical aspects that permeate RE. The book adds to existing literature revealing that the RE field (both in research and in practice) is immensely mature as regards accepting and dealing with the multidisciplinary issues required to properly build sociotechnical systems, even though there is still a lot of ground to be covered. In this book, we present 20 contributions from different authors, divided into three sections: (I) Basics, (II) Challenges, and (III) Approaches.
Section I: Basics

Section I presents eight chapters that introduce important topics in the RE area, always from a sociotechnical perspective. These chapters are, however, not confined to a mere description of the topics. Instead, the authors criticize some of the existing approaches and move into new territory. In Chapter I Parviainen, Tihinen, Lormans, and van Solingen introduce RE for sociotechnical systems. The authors describe the terminology and the process in detail. Nasr, in Chapter II, introduces the topic of RE for embedded systems, in which software is just a part of a complex system. An important topic closely related to the sociotechnical side of RE is that of elicitation. In Chapter III Coulin and Zowghi review the topic and propose some future directions. The problems related to, and the alternatives to, conceptual modelling in RE are the topic of Chapter IV by Andrade Garda, Ares Casal, García Vázquez, and Rodríguez Yáñez. In Chapter V de Antonio and Imbert clarify the use of the term “agent” in RE and in agent-oriented software, and conclude that the different uses of “agent” are not as unrelated as they may appear. Sawyer, in Chapter VI, reviews the topic of software process improvement from a sociotechnical perspective
and considers lessons learned from industrial pilot studies. Chapter VII, by Greer, discusses the topic of requirements prioritization for incremental and iterative projects and proposes a method for optimally assigning requirements to product releases. The topic of requirements management tools is considered by Carvallo, Franch, and Quer in Chapter VIII, in which a method is presented for building quality models for requirements management tools.
Section II: Challenges

The six chapters included in Section II introduce some important and difficult topics that requirements engineers and system developers have to deal with to build genuine sociotechnical systems. In Chapter IX Periorellis explains the problems and opportunities related to the composition of existing systems in order to build new systems, even transcending organizational boundaries. The complexity of modern technical products that incorporate software is the subject of Chapter X, with a focus on the automotive industry. In this chapter Paech, Denger, Kerkow, and von Knethen present the QUASAR process, which addresses some of the challenges posed by those systems. Grützner and Paech, in Chapter XI, introduce the challenges and possible solutions for courseware development, clearly a kind of system with a strong sociotechnical component. Open Source software development offers a new playground for RE, based on a collaborative, feedback-based process. Chapter XII, by Dietze, presents some insight into this process. The multidisciplinary task of creating value webs, and a methodology for their development, is the topic of Chapter XIII by Gordijn. Technology is opening many possibilities for workgroups. In Chapter XIV Garrido, Gea, and Rodríguez review the topic of RE for cooperative systems and propose a methodology based on behavior and task models.
Section III: Approaches

Finally, Section III proposes some methods and techniques that can help practitioners to solve the complex problems involved in sociotechnical system development. In Chapter XV Jones and Maiden present RESCUE, a method for requirements specification that has been applied to complex sociotechnical systems like air traffic control. Hospital information systems have a clear sociotechnical nature. Chapter XVI, by Sørby, Melby, and Seland, proposes observational studies and drama improvisation as a means to identify and analyze requirements for those systems. An approach to elicit the sometimes subjective and elusive non-functional requirements is described in Chapter XVII, by Kerkow, Dörr, Paech, Olsson, and Koenig. McCall and Mistrik, in Chapter XVIII, propose to use a lightweight natural language processing approach for discovering requirements from transcripts of participatory design sessions. In Chapter XIX Hall and Rapanotti bring one of the most innovative topics in RE, namely, problem frames, closer to the topic of sociotechnical systems. Finally, in Chapter XX, Cronholm and
Goldkuhl present a method based on the perception that the main purpose of information systems is to support communications between different actors.
References

Blum, B. (1996). Beyond programming: To a new era of design. Oxford University Press.
Computing Cases (2004). Site devoted to computer ethics. Retrieved from www.computingcases.org.
Emery, F. E., & Trist, E. L. (1960). Sociotechnical systems. In C. W. Churchman & M. Verhulst (Eds.), Management sciences: Models and techniques (Vol. 2, pp. 83-97). Pergamon Press.
Friedman, B., & Kahn, P. H. (1994). Educating computer scientists: Linking the social and the technical. Communications of the ACM, 37(1), 64-70.
Law, J., & Callon, M. (1988). Engineering and sociology in a military aircraft project: A network analysis of technological change. Social Problems, 35, 284-297.
Ropohl, G. (1999). Philosophy of sociotechnical systems. Society for Philosophy and Technology, 4(3). Retrieved from http://scholar.lib.vt.edu/ejournals/SPT/v4n3/.
Scacchi, W. (2004). Sociotechnical design. In W. S. Bainbridge (Ed.), The encyclopedia of human-computer interaction. Berkshire Publishing Group.
Truex, D. P., Baskerville, R., & Klein, H. (1999). Growing systems in emergent organizations. Communications of the ACM, 42(8), 117-123.
Zave, P. (1997). Classification of research efforts in requirements engineering. ACM Computing Surveys, 29(4), 315-321.
Acknowledgments
The editors would like to acknowledge the help of all our colleagues and friends at the Universidad Politécnica de Madrid. In particular we are very grateful to Juan Pazos and Salomé García for their help. Thanks to Rachel Elliot for her help with the technical translation. Thanks to all the authors for their interest in the book; they also acted as referees, providing constructive reviews of other chapters. Their effort led to a truly collaborative work. Finally, we would like to thank our families and friends. We are also very grateful to Luis Muñoz and Leopoldo Cuadra for their support during the process.

J.L. Maté and A. Silva
Section I: Basics
Chapter I
Requirements Engineering: Dealing with the Complexity of Sociotechnical Systems Development

Päivi Parviainen, VTT Technical Research Centre of Finland, VTT Electronics, Finland
Maarit Tihinen, VTT Technical Research Centre of Finland, VTT Electronics, Finland
Marco Lormans, Delft University of Technology, The Netherlands
Rini van Solingen, LogicaCMG Technical Software Engineering (RTSE1), The Netherlands
Abstract

This chapter introduces requirements engineering for sociotechnical systems. Requirements engineering for sociotechnical systems is a complex process that considers product demands from a vast number of viewpoints, roles, responsibilities, and objectives. This chapter explains the requirements engineering terminology and describes the requirements engineering process in detail, with examples of available methods for the main process activities. The main activities described include system requirements development, requirements allocation and flow-down, software requirements development, and continuous activities, including requirements documentation, requirements validation and verification, and requirements management. As requirements engineering is the process with the largest impact on the end product, it is recommended to invest more effort in both industrial application as well as research to increase understanding and deployment of the concepts presented in this chapter.
Introduction

The concept of sociotechnical systems was established to stress the reciprocal interrelationship between humans and machines and to foster the program of shaping both the technical and the social conditions of work (Ropohl, 1999). A sociotechnical system can be regarded as a theoretical construct for describing and explaining technology generally. This chapter helps to describe the multidisciplinary role of requirements engineering as well as the concept of workflow and patterns for social interaction within the sociotechnical systems research area.
Requirements engineering is generally accepted as the most critical and complex process within the development of sociotechnical systems (Juristo, Moreno, & Silva, 2002; Komi-Sirviö & Tihinen, 2003; Siddiqi, 1996). The main reason is that the requirements engineering process has the most dominant impact on the capabilities of the resulting product. Furthermore, requirements engineering is the process in which the most diverse set of product demands from the most diverse set of stakeholders is being considered. These two reasons make requirements engineering complex as well as critical.
This chapter first introduces background information related to requirements engineering, including the terminology used and the requirements engineering process in general. Next, a detailed description of the requirements engineering process, including the main phases and activities within these phases, is presented. Each phase will be discussed in detail, with examples of useful methods and techniques.
Background

A requirement is a condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed documents (IEEE Std 610.12, 1990). A well-formed requirement is a statement of system functionality (a capability) that must be met or possessed by a system to satisfy a customer’s need or to achieve a customer’s objective, and that is qualified by measurable conditions and bounded by constraints (IEEE Std 1233, 1998).
Requirements are commonly classified as (IEEE Std 830, 1998):

•	Functional: A requirement that specifies an action that a system must be able to perform, without considering physical constraints; a requirement that specifies input/output behavior of a system.
•	Non-functional: A requirement that specifies system properties, such as environmental and implementation constraints, performance, platform dependencies, maintainability, extensibility, and reliability. Non-functional requirements are often classified into the following categories:
	•	Performance requirements: A requirement that specifies performance characteristics that a system or system component must possess, for example, max. CPU usage, max. memory footprint.
	•	External interface requirements: A requirement that specifies hardware, software, or database elements with which a system or system component must interface or that sets forth constraints on formats, timing, or other factors caused by such an interface.
	•	Design constraints: A requirement that affects or constrains the design of a system or system component, for example, language requirements, physical hardware requirements, software development standards, and software quality assurance standards.
	•	Quality attributes: A requirement that specifies the degree to which a system possesses attributes that affect quality, for example, correctness, reliability, maintainability, portability.
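To make the classification above concrete, the following is a minimal sketch of how a single well-formed requirement and its classification might be recorded; the class names, categories, and the two sample requirements are assumptions made for this illustration and are not taken from the IEEE standards cited above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Kind(Enum):
    FUNCTIONAL = "functional"
    NON_FUNCTIONAL = "non-functional"

class NfrCategory(Enum):
    PERFORMANCE = "performance"
    EXTERNAL_INTERFACE = "external interface"
    DESIGN_CONSTRAINT = "design constraint"
    QUALITY_ATTRIBUTE = "quality attribute"

@dataclass
class Requirement:
    req_id: str                             # unique identifier (see "Requirements Identification" later)
    statement: str                          # the capability or property the system must possess
    kind: Kind                              # functional or non-functional
    category: Optional[NfrCategory] = None  # only meaningful for non-functional requirements
    fit_criterion: Optional[str] = None     # measurable condition that makes the requirement testable

# Two illustrative (invented) requirements:
reqs = [
    Requirement("SYS-001", "The system shall log every operator command.", Kind.FUNCTIONAL),
    Requirement("SYS-002", "The system shall respond to operator commands promptly.",
                Kind.NON_FUNCTIONAL, NfrCategory.PERFORMANCE,
                fit_criterion="95% of commands acknowledged within 200 ms"),
]

for r in reqs:
    label = r.kind.value if r.category is None else f"{r.kind.value}/{r.category.value}"
    print(f"{r.req_id} [{label}]: {r.statement}")
```

Keeping the fit criterion next to the statement is one simple way of honouring the demand above that a well-formed requirement be qualified by measurable conditions.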
Requirements engineering contains a set of activities for discovering, analyzing, documenting, validating, and maintaining a set of requirements for a system (Sommerville & Sawyer, 1997). Requirements engineering is divided into two main groups of activities: requirements development and requirements management. Requirements development includes activities related to discovering, analyzing, documenting, and validating requirements, whereas requirements management includes activities related to maintenance, namely identification, traceability, and change management of requirements.
Requirements validation consists of activities that try to confirm that the behaviour of a developed system meets its user needs. Requirements verification consists of those activities that try to confirm that the product of a system development process meets its technical specifications (Stevens, Brook, Jackson, & Arnold, 1998). Verification and validation include:

•	Defining the verification and validation requirements, that is, principles on how the system will be tested.
•	Planning the verification and validation.
•	Capturing the verification and validation criteria (during requirements definition).
•	Planning of test methods and tools.
•	Planning and conducting reviews.
•	Implementing and performing the tests and managing the results.
•	Maintaining traceability.
•	Auditing.
In sociotechnical systems software is understood as a part of the final product. System requirements are captured to identify the functioning of the system, from which software requirements are derived. Deciding which functionality is implemented where, and by which means (software, hardware, mechanics, and so forth) is merely a technical decision process in which feasibility, dependability, and economics play a role. A well-structured and technically sound requirements engineering process is, therefore, of utmost importance.
Requirements Engineering Process

Figure 1 describes a requirements engineering process where the main processes of system and software requirements engineering are depicted. Requirements engineering activities cover the entire system and software development lifecycle. On the other hand, the requirements engineering process is iterative and will go into more detail in each iteration. In addition, the figure indicates how requirements management (RM) is understood as a part of the requirements engineering process. The process is based on Kotonya and Sommerville (1998), Sailor (1990), and Thayer and Royce (1990). The process describes requirements engineering for sociotechnical systems, where software requirements engineering is a part of the process.

[Figure 1. System and software requirements engineering (Parviainen, Hulkko, Kääriäinen, Takalo, & Tihinen, 2003). The figure depicts requirement sources (business requirements, customer requirements, user requirements, standards, constraints, in-house inventions) feeding system requirements development (gathering, high-level and detailed analysis, allocation, flow-down, leading to a system requirements specification per IEEE Std 1233-1998); the detailed system requirements split into software, hardware, and mechanics requirements, with software requirements specifications (IEEE Std 830-1998) feeding other software development phases; and the continuous requirements management activities: RM planning, requirements documentation, validation and verification, identification, traceability, and change control.]

Traditionally, requirements engineering is performed at the beginning of the system development lifecycle (Royce, 1970). However, in large and complex systems development, developing an accurate set of requirements that would remain stable throughout the months or years of development has been realized to be impossible in practice (Dorfman, 1990). Therefore, requirements engineering is an incremental and iterative process, performed in parallel with other system development activities such as design. The main high-level activities included in the requirements engineering process are:

1)	System requirements development, including requirements gathering/elicitation from various sources (Figure 1 shows the different sources for requirements), requirements analysis, negotiation, prioritisation and agreement of raw requirements, and system requirements documentation and validation.
2)	Requirements allocation and flow-down, including allocating the captured requirements to system components and defining, documenting, and validating detailed system requirements.
3)	Software requirements development, including analyzing, modeling and validating both the functional and quality aspects of a software system, and defining, documenting, and validating the contents of software subsystems.
4)	Continuous activities, including requirements documentation, requirements validation and verification, and requirements management.

Each of these high-level activities will be further detailed in the following sections.
System Requirements Development

The main purpose of the system requirements development phase is to examine and gather desired objectives for the system from different viewpoints (for example, customer, users, system’s operating environment, trade, and marketing). These objectives are identified as a set of functional and non-functional requirements of the system. Figure 2 shows the context for developing system requirements specification (SyRS).

[Figure 2. Context for developing SyRS (IEEE Std 1233, 1998). The figure shows the customer (exchanging raw requirements and customer feedback through a customer representation), the environment (contributing constraints/influences), and the technical community (exchanging technical feedback through a technical representation) connected to the develop-systems-requirements collection activity.]

1. Requirements Gathering/Elicitation from Various Sources

Requirements gathering starts with identifying the stakeholders of the system and collecting (that is, eliciting) raw requirements. Raw requirements are requirements that
have not been analyzed and have not yet been written down in a well-formed requirement notation. Business requirements, customer requirements, user requirements, constraints, in-house ideas and standards are the different viewpoints to cover. Typically, specifying system requirements starts with observing and interviewing people (Ambler, 1998). This is not a straightforward task, because users may not have an overview of the feasibilities and opportunities for automated support. Furthermore, user requirements are often misunderstood because the requirements collector misinterprets the users’ words. In addition to gathering requirements from users, standards and constraints (for example, legacy systems) also play an important role in systems development.
2. Requirements Analysis and Documentation

After the raw requirements from stakeholders are gathered, they need to be analyzed within the context of business requirements (management perspective) such as cost-effectiveness, organizational, and political requirements. Also, the requirements relations, that is, dependencies, conflicts, overlaps, omissions, and inconsistencies, need to be examined and documented. Finally, the environment of the system, such as external systems and technical constraints, needs to be examined and explicated. The gathering of requirements often reveals a large set of raw requirements that, due to cost and time constraints, cannot entirely be implemented in the system. Also, the identified raw requirements may be conflicting. Therefore, negotiation, agreement, communication, and prioritisation of the raw requirements are also an important part of the requirements analysis process. The analyzed requirements need to be documented to enable communication with stakeholders and future maintenance of the requirements and the system. Requirements documentation also includes describing the relations between requirements. During requirements analysis it gives added value to record the rationale behind the decisions made, to ease future change management and decision making.
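As one possible illustration of how requirement relations and rationale could be captured during analysis, the sketch below keeps a small relation table between requirement identifiers and lists the conflicts that still need negotiation; the relation kinds, identifiers, and rationale entries are invented for this example.

```python
from collections import defaultdict

# Relations recorded during analysis, keyed by (source id, target id) -> relation kind.
# The relation kinds mirror the text above: depends-on, conflicts-with, overlaps-with.
relations = {
    ("REQ-010", "REQ-004"): "depends-on",
    ("REQ-012", "REQ-007"): "conflicts-with",
    ("REQ-015", "REQ-010"): "overlaps-with",
}

# Rationale recorded per requirement, to ease later change management (sample entry).
rationale = {
    "REQ-012": "Deferred to release 2 after negotiation; conflicts with the agreed cost constraint.",
}

by_kind = defaultdict(list)
for (src, dst), kind in relations.items():
    by_kind[kind].append((src, dst))

# List the conflicts that still need negotiation, together with any recorded rationale.
for src, dst in by_kind["conflicts-with"]:
    note = rationale.get(src, "no rationale recorded yet")
    print(f"Conflict to resolve: {src} <-> {dst} ({note})")
```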
Table 1. System requirements development methods

Gathering requirements:
•	Ethnography (Suchman, 1983); Protocol Analysis (Ericsson & Simon, 1993) – Observing methods use techniques that may help to understand the thoughts and needs of the users, even when they cannot describe these needs or they do not exactly know what they want.
•	Focus groups (Maguire, 1998); JAD (Joint Application Development) (Ambler, 1998) – Meeting techniques cover separate techniques for meetings and workshops for gathering and developing requirements from different stakeholders.
•	Volere (Robertson & Robertson, 1999) – Provides a generic process for gathering requirements, ways to elicit them from users, as well as a process for verifying them.

Requirements analysis:
•	QFD (Quality Function Deployment) (Revelle, Moran, Cox, & Moran, 1998) – Identifies customer needs, expectations, and requirements, and links them into the company’s products.
•	SCRAM (Scenario-based Requirements Engineering) (Sutcliffe, 1998) – Develops requirements (whole RE) using scenarios. The scenarios are created to represent paths of possible behavior through use cases, and these are then investigated to develop requirements.
•	SSADM (Structured System Analysis and Design Methodology) (Ashworth & Goodland, 1990) – Can be used in the analysis and design stages of systems development. It specifies in advance the modules, stages, and tasks that have to be carried out, the deliverables to be produced, and the techniques used to produce those deliverables.

Negotiation and prioritisation:
•	CORE (Controlled Requirements Expression) (Mullery, 1979); WinWin approach (Bose, 1995) – The purpose of viewpoint-oriented methods is to produce or analyze requirements from multiple viewpoints. They can be used while resolving conflicts or documenting system and software requirements.

System requirements documentation:
•	IEEE Std 1233-1998 – Standards define the contents of a SyRS.
•	VDM (Vienna Development Model) (Björner & Jones, 1978); Specification language Z (Sheppard, 1995) – In formal methods, requirements are written in a statement language or in a formal, mathematical way.
•	HPM (Hatley-Pirbhai Methodology) (Hatley & Pirbhai, 1987) – Gives support for documenting and managing system requirements.
•	VORD (Viewpoint-Oriented Requirements Definition) (Kotonya & Sommerville, 1996) – Helps to identify and prioritize requirements and can also be utilized when documenting system and software requirements.
3. System Requirements Validation and Verification

In system requirements development, validation and verification activities include validating the system requirements against raw requirements and verifying the correctness of system requirements documentation. Common techniques for validating requirements are requirements reviews with the stakeholders and prototyping.
Table 1 contains examples of requirements engineering methods and techniques used during the system requirements development phase. The detailed descriptions of the methods have been published in Parviainen et al. (2003). Several methods for gathering, eliciting, and analyzing requirements from users and other stakeholders can be used. Table 1 includes several observing methods (for example,
ethnography), meeting techniques (for example, focus groups) and analyzing techniques (for example, QFD) that can be used to gather requirements and avoid misunderstandings of users’ needs. The methods help in identifying the needs of individuals and converting them into requirements of a desired product. At the same time, social actions and workflows, safety-critical aspects, or technical constraints have to be taken into consideration. The results of the system requirements development phase are captured as top-level system requirements that are used as input for the allocation and flow-down phase.
Allocation and Flow-Down

The requirements allocation and flow-down process’ purpose is to make sure that all system requirements are fulfilled by a subsystem or by a set of subsystems collaborating together. Top-level system requirements need to be organized hierarchically, helping to view and manage information at different levels of abstraction. The requirements are decomposed down to the level at which the requirement can be designed and tested; thus, allocation and flow-down may be done for several hierarchy levels. The level of detail increases as the work proceeds down in the hierarchy. That is, system-level requirements are general in nature, while requirements at low levels in the hierarchy are very specific (Leffingwell & Widrig, 2000; Stevens et al., 1998).
The top-level system requirements defined in the system requirements development phase (see previous subsection) are the main input for the requirements allocation and flow-down phase. In practice, system requirements development and allocation and flow-down are iterating; as the system level requirements are being developed, the elements that should be defined in the hierarchy should also be considered. By the time a draft of the system requirements is complete, a tentative definition of at least one and possibly two levels of system hierarchy should be available (Dorfman, 1990).
1. Requirements Allocation

Allocation is architectural work carried out in order to design the structure of the system and to issue the top-level system requirements to subsystems. Architectural models provide the context for defining how applications and subsystems interact with one another to meet the requirements of the system. The goal of architectural modeling, also commonly referred to as high-level modeling or global modeling, is to define a robust framework within which applications and component subsystems may be developed (Ambler, 1998).
Each system level requirement is allocated to one or more elements at the next level (that is, it is determined which elements will participate in meeting the requirement). Allocation also includes allocating the non-functional requirements to system elements. Each system element will need an apportionment of the non-functional requirements (for example, performance requirement). However, not all requirements are allocable; non-allocable requirements are items such as environments, operational life, and design standards, which apply unchanged across all elements of the system or its segments. The
allocation process is iterative; in performing the allocation, needs to change the system requirements (additions, deletions, and corrections) and/or the definitions of the elements can be found (Dorfman, 1990; Nelsen, 1990; Pressman, 1992; Sailor, 1990). The overall process of the evaluation of alternative system configurations (allocations) includes:

•	Definition of alternative approaches.
•	Evaluation of alternatives.
•	Selection of evaluation criteria: performance, effectiveness, life-cycle cost factors.
•	Application of analytical techniques (for example, models).
•	Data generation.
•	Evaluation of results.
•	Sensitivity analysis.
•	Definition of risk and uncertainty.
•	Selection of the configuration (Blanchard & Fabrycky, 1981; Pressman, 1992).
Once the functionality and the non-functional requirements of the system have been allocated, the system engineer can create a model that represents the interrelationship between system elements and sets a foundation for later requirements analysis and design steps. The decomposition is done right when:

•	Distribution and partitioning of functionality are optimized to achieve the overall functionality of the system with minimal costs and maximum flexibility.
•	Each subsystem can be defined, designed, and built by a small or at least modest-sized team.
•	Each subsystem can be manufactured within the physical constraints and technologies of the available manufacturing processes.
•	Each subsystem can be reliably tested as a subsystem, subject to the availability of suitable fixtures and harnesses that simulate the interfaces to the other system.
•	Appropriate regard is given to the physical domain – the size, weight, location, and distribution of the subsystems – that has been optimized in the overall system context (Leffingwell & Widrig, 2000).
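The allocation bookkeeping described in this subsection can be pictured with a small sketch such as the one below, which records which subsystems each system requirement is allocated to, apportions a performance budget across subsystems, and flags requirements that are not yet allocated; the subsystem names, identifiers, and budget figures are illustrative assumptions only.

```python
# Top-level system requirements and their allocation to subsystems (illustrative data).
system_requirements = ["SYS-001", "SYS-002", "SYS-003", "SYS-004"]
allocation = {
    "SYS-001": ["UI-subsystem"],
    "SYS-002": ["UI-subsystem", "control-subsystem"],  # fulfilled by subsystems collaborating
    "SYS-003": ["control-subsystem"],
}

# Apportionment of a non-functional (performance) requirement: an end-to-end
# 200 ms response budget split across the participating subsystems.
latency_budget_ms = {"UI-subsystem": 50, "control-subsystem": 120, "communication": 30}
assert sum(latency_budget_ms.values()) <= 200, "apportioned budgets exceed the system-level budget"

# Every (allocable) system requirement must be covered by at least one subsystem.
unallocated = [r for r in system_requirements if not allocation.get(r)]
if unallocated:
    print("Not yet allocated:", ", ".join(unallocated))  # prints SYS-004 for this sample data
```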
2. Requirements Flow-Down

Flow-down consists of writing requirements for the lower-level elements in response to the allocation. When a system requirement is allocated to a subsystem, the subsystem must have at least one requirement that responds to the allocation. Usually more than one requirement will be written. The lower-level requirement(s) may closely resemble the higher-level one or may be very different if the system engineers recognize a capability that the lower-level element must have to meet the higher-level requirements. In the latter case, the lower-level requirements are often referred to as “derived” (Dorfman, 1990).
Derived requirements are requirements that must be imposed on the subsystem(s). These requirements are derived from the system decomposition process. As such, alternative decompositions would have created alternative derived requirements. Typically there are two subclasses of derived requirements:

•	Subsystem requirements that must be imposed on the subsystems themselves but do not necessarily provide a direct benefit to the end user.
•	Interface requirements that arise when the subsystems need to communicate with one another to accomplish an overall result. They will need to share data or power or a useful computing algorithm (Leffingwell & Widrig, 2000).
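A minimal sketch of the flow-down step, under the assumption that lower-level requirements keep an upward trace link to the system requirement they respond to, might look as follows; the identifiers, requirement texts, and the simple coverage check are invented for illustration and are not part of the methods cited in this chapter.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LowerLevelRequirement:
    req_id: str
    subsystem: str
    text: str
    parent_id: Optional[str]  # upward trace to the system requirement it responds to
    derived: bool = False     # subsystem/interface requirements with no direct end-user benefit

lower_level = [
    LowerLevelRequirement("UI-003", "UI-subsystem",
                          "Display command acknowledgement within 50 ms.", "SYS-002"),
    LowerLevelRequirement("IF-001", "control-subsystem",
                          "Publish command status on the internal bus.", "SYS-002", derived=True),
]

# Check that the flow-down responded to the allocation: at least one lower-level
# requirement per (system requirement, subsystem) pair the requirement was allocated to.
allocation = {"SYS-002": ["UI-subsystem", "control-subsystem"]}
for sys_req, subsystems in allocation.items():
    for sub in subsystems:
        covered = any(r.parent_id == sys_req and r.subsystem == sub for r in lower_level)
        print(f"{sys_req} -> {sub}: {'covered' if covered else 'MISSING lower-level requirement'}")
```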
In the allocation and flow-down phase, requirements identification and traceability have to be ensured both to higher-level requirements as well as between requirements on the same level. Also, the rationale behind design decisions should be recorded in order to ensure that there is enough information for verification and validation of the next phases’ work products and change management. The flowing down of the top-level system requirements through the lower levels of the hierarchy until the hardware and software component levels are reached in theory produces a system in which all elements are completely balanced, or “optimized.” In the real world, complete balance is seldom achieved due to fiscal, schedule, and technological constraints (Sailor, 1990; Nelsen, 1990). Table 2 includes a few examples of methods available for allocation and flow-down.

Table 2. Allocation and flow-down methods

Allocation:
•	SRA (System Requirements Allocation Methodology) (Hadel & Lakey, 1995) – A customer-oriented systems engineering approach for allocating top-level quantitative system requirements. It aims at creating optimized design alternatives, which correspond to the customer requirements, using measurable parameters.
•	ATAM (Architecture Trade-off Analysis Method) (Kazman, Klein, Barbacci, Longstaff, Lipson, & Carriere, 1998) – Helps in performing trade-off analysis and managing non-functional requirements during allocation.
•	HPM (Hatley-Pirbhai Methodology) (Hatley & Pirbhai, 1987) – Verifies requirements allocation and manages changes during the allocation phase.
•	QADA (Matinlassi & Niemelä, 2002); FAST (Weiss & Lai, 1999) – Methods for architecture design and analysis. See more from Dobrica & Niemelä, 2002.
Flow-down:
•	ATAM (Architecture Trade-off Analysis Method) (Kazman et al., 1998); HPM (Hatley & Pirbhai, 1987) – Facilitates communication between stakeholders for gaining a rationale of requirements flow-down.

The results of allocation and flow-down are detailed system-level requirements and the “architectural design” or “top-level design” of the system. Again, needs to change the system requirements (additions, deletions, and corrections) and/or the definitions of the system elements may be found. These are then fed back to the system requirements development process. Allocation and flow-down starts as a multi-disciplinary activity; that is, subsystems may contain hardware, software, and mechanics. Initially they are considered as one subsystem; in later iterations the different disciplines are considered separately. Software requirements development will be described in detail in the next section.
Software Requirements Development

The software requirements development process is the activity determining which functionality of the system will be performed by software. Documenting this functionality together with the non-functional requirements in a software requirements specification is part of this phase. Through the system mechanism of flow-down, allocation, and derivation, a software requirements specification will be established for each software subsystem, software configuration item, or component (Thayer & Royce, 1990).
1. Software Requirements Analysis

Software requirements analysis is a software engineering task that bridges the gap between system-level software allocation and software design. Requirements analysis enables the specification of software functions and performance, an indication of the software interfaces with other system elements, and the establishment of design constraints that the software must meet. Requirements analysis also refines the software allocation and builds models of the process, data, and behavioral domains that will be treated by software. Prioritizing the software requirements is also part of software requirements analysis. To support requirements analysis, the software system may be modelled, covering both functional and quality aspects.
2. Software Requirements Documentation

In order to be able to communicate software requirements and to make changes to them, they need to be documented in a software requirements specification (SRS). An SRS contains a complete description of the external behavior of the software system. It is possible to complete the entire requirements analysis before starting to write the SRS. However it is more likely that as the analysis process yields aspects of the problem that are well understood, the corresponding section of the SRS is written.
3. Software Requirements Validation and Verification

Software requirements need to be validated against system-level requirements, and the SRS needs to be verified. Verification of the SRS includes, for example, correctness, consistency, unambiguousness, and understandability (IEEE Std 830, 1998).
A requirements traceability mechanism to generate an audit trail between the software requirements and final tested code should be established. Traceability should be maintained to system-level requirements, between software requirements, and to later phases, for example, architectural work products.
Table 3. Software requirements development methods

Software requirements analysis:
•	OMT (Object Modeling Technique) (Bourdeau & Cheng, 1995); Shlaer-Mellor Object-Oriented Analysis Method (Shlaer & Mellor, 1988); UML (Unified Modeling Language) (Booch, Jacobson, & Rumbaugh, 1998) – Object-oriented methods model systems in an object-oriented way or support object-oriented development in the analysis and design phases.
•	SADT (Structured Analysis and Design Technique) (Schoman & Ross, 1977); SASS (Structured Analysis and System Specification) (Davis, 1990) – Structured methods analyze systems from a process and data perspective by structuring a project into small, well-defined activities and specifying the sequence and interaction of these activities. Typically diagrammatic and other modeling techniques are used during analysis work.
•	B-methods (Schneider, 2001); Petri Nets (Girault & Valk, 2002; Petri, 1962) – Formal methods are often used for safety-critical systems.
•	XP (Extreme Programming) (Beck, 1999) – Agile methods are not specially focused on RE, but they have an integral point of view where RE is one of the activities of the whole cycle. See more from Abrahamsson et al., 2002.
•	CARE (COTS-Aware Requirements Engineering) (Chung, Cooper, & Huynh, 2001); OTSO (Off-the-Shelf Option) (Kontio, 1995) – Specific methods for RE when using COTS (commercial off-the-shelf) software. COTS is a ready-made software product that is supplied by a vendor and has specific functionality as part of a system.
•	Planguage (Gilb, 2003) – Consists of a software systems engineering language for communicating systems engineering and management specifications, and a set of methods providing advice on best practices.

Requirements documentation:
•	IEEE Std 830-1998 – IEEE defines the contents of an SRS. The standard doesn’t describe sequential steps to be followed but defines the characteristics of a good SRS and provides a structure template for the SRS. This template can be used in documenting the requirements and also as a checklist in other phases of the requirements engineering process.

Requirements validation:
•	Volere (Robertson & Robertson, 1999) – Provides a process for gathering/eliciting and validating both system and software requirements.
•	Storyboard Prototyping (Andriole, 1989) – Sequences of computer-generated displays, called storyboards, are used to simulate the functions of the formally implemented system beforehand. This supports the communication of system functions to the user and makes the trade-offs between non-functional and functional requirements visible, traceable and analyzable.
•	Several other methods, such as object-oriented methods, also provide some support for validation and verification.
The outcome of the software requirements development phase is a formal document including a baseline of the agreed software requirements. According to SPICE (1998), as a result of successful implementation of the process:

•	The requirements allocated to software components of the system and their interfaces will be defined to match the customer’s stated needs.
•	Analyzed, correct, and testable software requirements will be developed.
•	The impact of software requirements on the operating environment will be understood.
•	A software release strategy will be developed that defines the priority for implementing software requirements.
•	The software requirements will be approved and updated as needed.
•	Consistency will be established between software requirements and software designs.
•	The software requirements will be communicated to affected parties.
Table 3 gives examples of methods or techniques available for software requirements development.
Continuous Activities

Documentation, validation, and verification of the continuous activities are included in the main process phase where the activity is first mentioned. Only the requirements management viewpoints (identification, traceability, and change management) are discussed in this section. Requirements management controls and tracks changes to agreed requirements, relationships between requirements, and dependencies between the requirements documents and other documents produced during the systems and software engineering process (Kotonya & Sommerville, 1998). Requirements management is a continuous, crosscutting process that begins with requirements management planning and continues through the activities of identification, traceability, and change control during and after the requirements development process phases. Requirements management also continues after development, during maintenance, because the requirements continue to change (Kotonya & Sommerville, 1998; Lauesen, 2002). Each of the requirements management activities is introduced in the following subsections.
1. Requirements Identification

Requirements identification practices focus on the assignment of a unique identifier to each requirement (Sommerville & Sawyer, 1997). These unique identifiers are used to refer to requirements during product development and management. Requirements identification support can be divided into three classes (Berlack, 1992; Sommerville & Sawyer, 1997); a small illustrative sketch follows the list:
1. Basic numbering systems
   • Significant numbering
   • Non-significant numbering
2. Identification schemes
   • Tagging
   • Structure-based identification
   • Symbolic identification
3. Techniques to support and automate the management of items
   • Dynamic renumbering
   • Database record identification
   • Baselining requirements
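To make the idea of unique, structure-based identifiers concrete, the following is a minimal sketch in Python. It is not tied to any particular requirements management tool; the prefix scheme (e.g., "SRS-FUNC"), the class names, and the zero-padded sequence numbers are all illustrative assumptions, combining a structure-based prefix (tag) with non-significant sequential numbering.

```python
from dataclasses import dataclass, field
from itertools import count

@dataclass
class Requirement:
    """A single requirement with a unique, immutable identifier."""
    req_id: str          # e.g. "SRS-FUNC-0001": structure-based prefix + sequence number
    text: str
    attributes: dict = field(default_factory=dict)

class RequirementStore:
    """Assigns non-significant sequential numbers within a structure-based prefix."""
    def __init__(self):
        self._counters = {}   # prefix -> iterator of sequence numbers
        self._items = {}      # req_id -> Requirement

    def add(self, prefix: str, text: str, **attributes) -> Requirement:
        seq = self._counters.setdefault(prefix, count(1))
        req_id = f"{prefix}-{next(seq):04d}"
        req = Requirement(req_id, text, attributes)
        self._items[req_id] = req
        return req

    def get(self, req_id: str) -> Requirement:
        return self._items[req_id]

store = RequirementStore()
r1 = store.add("SRS-FUNC", "The system shall log every sensor fault.", priority="high")
r2 = store.add("SRS-PERF", "Fault logging shall complete within 50 ms.")
print(r1.req_id, r2.req_id)   # SRS-FUNC-0001 SRS-PERF-0001
```

Once identifiers like these exist, they become the anchors for the traceability and change management activities discussed next.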
2. Requirements Traceability

Requirements traceability refers to the ability to describe and follow the life of a requirement and its relations with other development artefacts in both the forward and backward directions (Gotel, 1995). This is especially important for trade-off analysis, impact analysis, and verification and validation activities. If traceability is not present, it is very difficult to identify what the effects of proposed changes are and whether accepted changes are indeed taken care of.
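A minimal sketch of how trace links support impact analysis follows; the artefact identifiers and the graph structure are illustrative assumptions, not a specific tool's data model. Forward links answer "what is affected if this changes?"; backward links answer "where did this come from?".

```python
from collections import defaultdict

class TraceGraph:
    """Directed trace links between artefacts (requirements, design items, tests)."""
    def __init__(self):
        self._forward = defaultdict(set)    # source -> derived/implementing artefacts
        self._backward = defaultdict(set)   # target -> originating artefacts

    def link(self, source: str, target: str) -> None:
        self._forward[source].add(target)
        self._backward[target].add(source)

    def impact_of_change(self, artefact: str) -> set:
        """Everything reachable downstream from a changed artefact (forward traceability)."""
        seen, stack = set(), [artefact]
        while stack:
            for nxt in self._forward[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    def origins(self, artefact: str) -> set:
        """Direct upstream sources of an artefact (backward traceability)."""
        return set(self._backward[artefact])

trace = TraceGraph()
trace.link("SYS-12", "SRS-FUNC-0001")        # system requirement -> software requirement
trace.link("SRS-FUNC-0001", "DES-LOGGER")    # software requirement -> design element
trace.link("DES-LOGGER", "TC-034")           # design element -> test case
print(trace.impact_of_change("SYS-12"))      # {'SRS-FUNC-0001', 'DES-LOGGER', 'TC-034'}
```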
3. Requirements Change Management

Requirements change management refers to the ability to manage changes to requirements throughout the development lifecycle. Change management, in general, includes identifying, analyzing, deciding whether to implement, implementing, and validating change requests. Change management is sometimes said to be the most complex requirements engineering process (Hull, Jackson, & Dick, 2002). A change can have a large impact on the system, and estimating this impact is very hard. Requirements traceability helps make this impact explicit by using downward and upward traceability. For every change, the costs and redevelopment work have to be considered before approving the change. Change management has a strong relationship with baselining: after requirements baselining, changes to the requirements need to be incorporated by using change control procedures (Hooks & Farry, 2001). Examples of requirements management methods, techniques, and approaches are listed in Table 4.
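The change control procedure described above can be sketched as a small state machine over change requests. The states, transitions, and identifiers below are assumptions chosen for illustration; they are not taken from any of the change management models cited in Table 4.

```python
# Allowed transitions in an assumed change-control workflow:
# submitted -> analyzed -> approved/rejected -> implemented -> verified.
TRANSITIONS = {
    "submitted":   {"analyzed"},
    "analyzed":    {"approved", "rejected"},
    "approved":    {"implemented"},
    "implemented": {"verified"},
    "rejected":    set(),
    "verified":    set(),
}

class ChangeRequest:
    def __init__(self, cr_id: str, affected_requirements: list):
        self.cr_id = cr_id
        self.affected_requirements = affected_requirements
        self.state = "submitted"
        self.history = [("submitted", "created")]   # audit trail of state changes

    def advance(self, new_state: str, rationale: str) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append((new_state, rationale))

cr = ChangeRequest("CR-101", ["SRS-FUNC-0001"])
cr.advance("analyzed", "impact limited to logging subsystem")
cr.advance("approved", "cost estimate accepted by change control board")
```

Recording a rationale at every transition gives the audit trail that baselining and later maintenance depend on.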
Table 4. Requirements management methods

Activity: Requirements traceability
• Example methods: Cross references, traceability matrices, automated traceability links (Sommerville & Sawyer, 1997); IBIS derivatives, document-centred models, database-guided models (Pinheiro, 2000); RADIX (Yu, 1994); QFD (West, 1991); languages, for example, ALBERT (Dubois, Du Bois, & Petit, 1994) or RML (Requirements Modeling Language) (Greenspan, Mylopoulos, & Borgida, 1994)
  Description: Techniques can be used for presenting and managing requirements as separate entities and for describing and maintaining links between them, for example, during allocation, implementation, or verification. Methods present traces and provide information to capture design rationale, for example, by providing automated support for discussion and negotiation of design issues. Traceability can also be supported by using languages or notations.

Activity: Change management
• Example methods: Olsen's ChM model (Olsen, 1993); V-like model (Harjani & Queille, 1992); Ince's ChM model (Ince, 1994)
  Description: Change management process models and approaches.

Requirements management tools have been developed because of the problems of managing unstable requirements and the large amount of data collected during the requirements engineering process. A large set of tools (both commercial and non-commercial) is available; for examples, see Parviainen et al. (2003). Requirements management tools collect the system requirements in a database or repository and provide a range of facilities for accessing information about the requirements (Kotonya & Sommerville, 1998). According to Lang and Duggan (2001), software requirements management tools must be able to:
• Maintain unique identifiable descriptions of all requirements.
• Classify requirements into logical user-defined groups.
• Specify requirements with textual, graphical, and model-based descriptions.
• Define traceable associations between requirements.
• Verify the assignments of user requirements to technical design specifications.
• Maintain an audit trail of changes, archive baseline versions, and engage a mechanism to authenticate and approve change requests.
• Support secure, concurrent co-operative work between members of a multidisciplinary development team.
• Support standard systems modeling techniques and notations.
• Maintain a comprehensive data dictionary of all project components and requirements in a shared repository.
• Generate predefined and ad hoc reports.
• Generate documents that comply with standard industrial templates.
• Connect seamlessly with other tools and systems.
Conclusion

Requirements engineering for sociotechnical systems is a complex process that considers product demands from a vast number of viewpoints, roles, responsibilities, and objectives. In this chapter we have explained the activities of requirements engineering and their relations to the available methods. A large set of methods is available, each with its specific strengths and weaknesses. The methods' feasibility and applicability do, however, vary between phases and activities. Method descriptions also often lack information about a method's suitability for different environments and problem situations, which makes the selection of an applicable method, or combination of methods, for a particular real-life situation complicated. Requirements engineering deserves stronger attention from practice, as the possibilities of available methods are largely overlooked by industrial practice (Graaf, Lormans, & Toetenel, 2003). As requirements engineering is the process with the largest impact on the end product, it is recommended to invest more effort in industrial application as well as research, to increase understanding and deployment of the concepts presented in this chapter. This chapter has listed only a few example methods; for a more comprehensive listing of methods see Parviainen et al. (2003).
References

Abrahamsson, P., Salo, O., Ronkainen, J. & Warsta, J. (2002). Agile software development methods: Review and analysis. Espoo: Technical Research Centre of Finland, VTT Publications. Ambler, S. W. (1998). Process patterns: Building large-scale systems using object technology. Cambridge University Press. Andriole, S. (1989). Storyboard prototyping for systems design: A new approach to user requirements analysis. QED Pub Co. Ashworth, C., & Goodland, M. (1990). SSADM: A practical approach. McGraw-Hill. Beck, K. (1999). Extreme programming explained: Embrace change. Reading, MA: Addison-Wesley. Berlack, H. (1992). Software configuration management. John Wiley & Sons.
Björner, D., & Jones, C.B. (Eds.). (1978). The Vienna development method: The metalanguage: volume 61 of lecture notes in computer science. Springer-Verlag. Blanchard, B.S., & Fabrycky, W.J. (1981). Systems engineering and analysis. PrenticeHall. Booch, G., Jacobson, I., & Rumbaugh, J. (1998). The unified modeling language user guide. Addison-Wesley. Bose, P. (1995). A model for decision maintenance in the winwin collaboration framework. Proceedings of the 10th Conference on Knowledge-Based Software Engineering, 105-113. Bourdeau, R.H., & Cheng, B.H.C. (1995). A formal semantics for object model diagrams. IEEE Transactions on Software Engineering, 21(10), 799–821. Chung, L., Cooper, K., & Huynh, D.T. (2001). COTS-aware requirements engineering Technique. Proceedings of the 2001 Workshop on Embedded Software Technology (WEST’01). Davis, A. M. (1990). Software requirements: Analysis and specification. Prentice Hall. Dobrica, L., & Niemelä, E. (2002). A survey on software architecture analysis methods. IEEE Transactions on Software Engineering, 28(7), 638-653. Dorfman, M. (1990). System and software requirements engineering. In R.H. Thayer & M. Dorfman (Eds.) IEEE system and software requirements engineering, IEEE software computer society press tutorial. Los Alamos, CA: IEEE Software Society Press. Dubois, E., Du Bois, P., & Petit, M. (1994). ALBERT: An agent-oriented language for building and eliciting requirements for real-time systems. Vol. IV: Information systems: Collaboration technology organizational systems and technology. Proceedings of the Twenty-Seventh Hawaii International Conference on System Sciences, 713 -722. Ericsson, K.A., & Simon, H. A. (1993). Protocol analysis - revised edition. MIT Press. Gilb, T. (2003). Competitive Engineering. Addison-Wesley. Girault, C., & Valk, R. (2002). Petri nets for system engineering: A guide to modeling, verification and applications. Springer-Verlag. Gotel, O. (1995). Contribution structures for requirements traceability. PhD thesis, Imperial College of Science, Technology and Medicine, University of London. Graaf , B.S., Lormans, M., & Toetenel, W.J. (2003). Embedded software engineering: state of the practice [Special issue]. IEEE Software magazine, 20(6), 61-69. Greenspan, S., Mylopoulos, J., & Borgida, A. (1994). On formal requirements modelling languages: RML revisited. Proceedings of ICSE-16, 16th International Conference on Software Engineering, 135-147. Hadel, J.J., & Lakey, P.B. (1995). A customer-oriented approach for optimising reliabilityallocation within a set of weapon-system requirements. Proceedings of the Annual Symposium on Reliability and Maintainability, 96-101.
Harjani, D.R., & Queille, J.P. (1992). A process model for the maintenance of large space systems software. Proceedings of Conference on Software Maintenance, Los Alamitos: IEEE Computer Society, 127-136. Hatley, D.J., & Pirbhai, I.A. (1987). Strategies for real-time system specification. Dorset House. Hooks, I., & Farry, K. (2001). Customer-centred products. Amacom. Hull, M.E.C., Jackson, K., & Dick, A.J.J. (2002). Requirements Engineering. Berlin: Springer-Verlag. HUSAT Research Institute. (1998). User centred-requirements handbook (Version 3.2). [Handbook]. Loughborough, Leicestershire, UK: Maguire. Ince, D. (1994). Introduction to software quality assurance and its implementation. McGraw-Hill. Institute of Electrical and Electronics Engineering, Inc. (1990). IEEE Standard Glossary of Software Engineering Terminology (IEEE Std 610.12). Institute of Electrical and Electronics Engineering, Inc. (1998). IEEE Guide for Developing System Requirements Specifications (IEEE Std 1233). Institute of Electrical and Electronics Engineering, Inc. (1998). IEEE Recommended Practice for Software Requirements Specifications (IEEE Std 830). International Organisation for Standardisation. (Ed.). (1998). Information technology software process assessment part 2: A reference model for processes and process capability. (SPICE: ISO/IEC TR 15504-2). Geneva, Switzerland: ISO. Juristo, N., Moreno, A.M., & Silva, A. (2002). Is the European industry moving toward solving requirements engineering problems? IEEE Software, 19(6), 70-77. Kazman, R., Klein, M., Barbacci, M., Longstaff, T., Lipson, H., & Carriere, J. (1998). The architecture tradeoff method. Proceedings of the fourth IEEE International Conference on Engineering of Complex Computer Systems, 68-78. Komi-Sirviö, S., & Tihinen, M. (2003, July 1-3). Great challenges and opportunities of distributed software development - an industrial survey. Proceedings of Fifteenth International Conference on Software Engineering and Knowledge Engineering, SEKE2003, San Francisco. Kontio, J. (1995). OTSO: a systematic process for reusable software component selection, version 1.0. The Hughes Information Technology Corporation and the EOS Program. Kotonya, G., & Sommerville, I. (1996). Requirements engineering with viewpoints. Software Engineering Journal, 11(1), 5-11. Kotonya, G., & Sommerville, I. (1998). Requirements engineering: process and techniques. John Wiley & Sons. Lang, M., & Duggan, J. (2001). A tool to support collaborative software requirements management. Requirements Engineering, 6(3), 161–172. Lauesen, S. (2002). Software requirements: styles and techniques. Addison-Wesley. Leffingwell, D., & Widrig, D. (2000). Managing software requirements - a unified approach. Addison-Wesley.
Matinlassi, M. & Niemelä, E. (2002). Quality-driven architecture design method. International Conference of Software and Systems Engineering and their Applications (ICSSEA 2002), Paris, France. Mullery, G.P. (1979). CORE – A method for controlled requirement specification. Proceedings of IEEE Fourth International Conference on Software Engineering. Nelsen, E. D. (1990). System engineering and requirement allocation. In R.H. Thayer & M. Dorfman, IEEE system and software requirements engineering, IEEE software computer society press tutorial. Los Alamos, CA: IEEE Software Society Press. Olsen, N. (1993). The software rush hour. IEEE Software Magazine, 29-37. Parviainen, P., Hulkko, H., Kääriäinen, J., Takalo, J., & Tihinen, M. (2003). Requirements Engineering. Inventory of Technologies. Espoo: VTT Publications. Petri, C.A. (1962). Kommunikation mit Automaten. Bonn Institut für Instrumentelle Mathematik. Schriften des IIM Nr. 2. Pinheiro, F. (2000). Formal and informal aspects of requirements tracing. Proceedings of III Workshop on Requirements Engineering, Rio de Janeiro, Brazil. Pressman, R. S. (1992). Software engineering, a practitioner’s approach, third edition. McGraw-Hill Inc. Revelle, J.B., Moran, J.B., Cox, C.A., & Moran, J.M. (1998). The QFD handbook. John Wiley & Sons. Robertson, S., & Robertson, J. (1999). Mastering the requirements process. AddisonWesley. Ropohl, G. (1999). Philosophy of sociotechnical systems. Society for Philosophy and Technology 4(3). Retrieved May 5, 2004, from http://scholar.lib.vt.edu/ejournals/ SPT/v4_n3pdf/ROPOHL.PDF. Royce, W. W. (1970). Managing the development of large software systems. Proceedings of IEEE Wescon, August 1970. Reprinted in Proceedings of 9th Int’l Conference Software Engineering 1987, 328-338, Los Alamitos: CA. Sailor, J. D. (1990). System engineering: an introduction. In R.H. Thayer & M. Dorfman, IEEE system and software requirements engineering, IEEE software computer society press tutorial. IEEE Software Society Press. Schneider, S. (2001). The B-method: an introduction. Palgrave. Schoman, K., & Ross, D.T. (1977). Structured analysis for requirements definition. IEEE Transactions on Software Engineering 6–15. Sheppard, D. (1995). An introduction to formal specification with Z and VDM. McGrawHill. Shlaer, S., & Mellor, S. (1988). Object-oriented system analysis: modeling the world in data (Yourdon Press computing series). Prentice-Hall. Siddiqi, J. (1996). Requirement engineering: the emerging wisdom. IEEE Software, 13(2), 15-19. Sommerville, I., & Sawyer, P. (1997). Requirements engineering: A good practise guide. John Wiley & Sons.
Stevens, R., Brook, P., Jackson, K., & Arnold, S. (1998). Systems engineering - Coping with complexity. London: Prentice Hall. Suchman, L. (1983). Office procedures as practical action. ACM Transactions on Office Information Systems, 320-328. Sutcliffe, A. (1998). Scenario-based requirement analysis. Requirements Engineering Journal, 3(1), 48-65. Thayer, R. H., & Royce, W. W. (1990). Software systems engineering. In R.H. Thayer & M. Dorfman, M. (Eds.), IEEE system and software requirements engineering, IEEE software computer society press tutorial. Los Alamos, CA: IEEE Software Society Press. Weiss, D., & Lai, C. (1999). Software product-line engineering: A family-based software development process. Reading, MA: Addison-Wesley. West, M. (1991, May 1-7). Quality function deployment in software development. Proceedings of IEEE Colloquium on Tools and Techniques for Maintaining Traceability During Design. Yu, W. D. (1994). Verifying software requirements: a requirement tracing methodology and its software tool-RADIX. IEEE Journal on Selected Areas in Communications, 12(2), 234 -240.
Endnote

1. This chapter describes the requirements engineering process based on work done in the MOOSE (Software engineering methodologies for embedded systems) project (ITEA 01002). The authors would like to thank the partners in the MOOSE project (http://www.mooseproject.org/), as well as the support from ITEA and the national public authorities.
Chapter II
Challenges in Requirements Engineering for Embedded Systems

Eman Nasr, Cairo University, Egypt
Abstract

In this chapter we are particularly interested in requirements engineering of software where the software is part of a complex engineered system; that is, embedded software. Embedded software is the software that runs on a computer system that is integral to a larger system whose primary purpose is not computational. Embedded software systems are of rising significance in industry, and they are found in a wide range of industries in the modern world, including the medical, nuclear, chemical, rail network, aerospace, and automotive industries. The RE of this category of software is challenging because of its special properties, which add to its complexity and make it more expensive and error-prone as compared with other software categories, for example, business applications. In this chapter we identify the special properties of embedded software systems to help in better understanding of the domain, discuss the special RE challenges that these properties introduce, and survey the main current RE approaches for the domain.
Introduction

Modern computer-based systems are becoming increasingly complex ensembles of hardware and software, thus adding more challenges to the software requirements engineering process. Requirements engineering (RE) is usually known as the branch of software engineering that deals with the early phase of software development. Although there is no single definition for RE, because the field of research is still maturing, a well-accepted definition is (Zave, 1995): "Requirements engineering is the branch of software engineering concerned with the real-world goals for functions of, and constraints on, software systems." RE deals with activities that attempt to understand the exact needs for a software-intensive system and to translate such needs into unambiguous requirements that will be used in the development and testing of the software system. RE is considered a combination of mainly three interacting activities: eliciting requirements related to a problem domain, specifying the requirements in an unambiguous way, and ensuring the validity of such requirements (Loucopoulos & Karakostas, 1995). It is one of the most important activities in software development because errors made in requirements become increasingly costly to repair later in development and extremely costly to repair after delivery (Brooks, 1987; Heitmeyer, 1997; Hofmann, 1993; Wieringa, 1996).

The product of RE is a requirements specification, which forms a foundation for the whole subsequent/concurrent development. Among the important properties of a good requirements specification are completeness, lack of ambiguity, good structure, and ease of understanding by all of the stakeholders involved in the software system. A good requirements specification should seek to bridge the communication gap between domain experts and software experts. It is widely accepted that a good RE method is crucial for any successful large-scale software system development. It is also widely recognised that the most serious embedded software failures can be traced back to defective specification of requirements (Knight, 2002; Leveson, 1995; Lutz, 1993).

In this chapter we are particularly interested in RE of software where the software is part of a complex engineered system, that is, embedded software. Embedded software is the software that runs on a computer system that is integral to a larger system whose primary purpose is not computational (Lutz, 1993). An embedded software system usually provides at least partial control over the hardware system in which it is embedded. An embedded software system is usually highly reactive, as it responds to various sensor inputs, interrupts, or alarm conditions from its environment. Embedded software systems are of rising significance in industry, and they are found in a wide range of industries in the modern world, including the medical, nuclear, chemical, rail network, aerospace, and automotive industries. Embedded software systems have proliferated almost everywhere over the past few years, from household appliances, like toasters and washing machines, to cars to aircraft and spacecraft. Of course the software embedded in these products varies in complexity as widely as the style of the products.
The RE of this category of software is challenging because of its special properties. Among the major properties that characterise this special category of software are: the embedded software's context, the consideration of the software requirements at a later stage in the whole system development life cycle, the broad range of stakeholders, and the periodicity of most of the software's functions. These special properties of an embedded software system add to its complexity and make it more expensive and error-prone as compared to other software categories, for example, business applications. In this chapter we identify the special properties of embedded software systems to help in better understanding of the domain, discuss the special RE challenges that these properties introduce, and survey the main current RE approaches for the domain.
Special Properties of Embedded Software Systems

This section defines and discusses some of the fundamental characteristics of embedded software systems. Each of the special characteristics is defined and discussed in one of the following subsections.
Context of Embedded Software Systems

By definition, an embedded software system is part of a broader engineered hardware system and usually provides at least partial control over the hardware system in which it is embedded (Heimdahl & Leveson, 1996). Embedded software is usually deeply embedded in engineered systems, which range from simple appliances, for example, toasters, to highly complex machines, for example, spacecraft. Because of the context of embedded software systems, they are usually tightly coupled to their physical environment through sensors and actuators. This type of software is often reactive: it must react or respond to environmental conditions as reflected in the inputs arriving at the software. An embedded software system interfaces most of the time with hardware components, or other external hardware and software systems (in complex engineered systems), rather than with human operators. For example, in modern aircraft that incorporate embedded software control systems, the so-called fly-by-wire (FBW) systems, the embedded software is responsible for controlling subsystems developed by a range of other engineering disciplines. The embedded software receives its inputs from the aircraft's sensors and other control elements (for example, pedals) and produces electrical output signals to the actuators that move the control surfaces of the aircraft. Such a context adds to the complexity of embedded software systems and makes them difficult to comprehend.
The Embedded Software Requirements Phase Comes at a Later Stage in the System Development Life Cycle

Because of the context of an embedded software system, the development activities are usually part of a bigger hybrid (hardware and software) project that is underway. First the requirements of the system as a whole are considered, followed by design for the system, after which the embedded software requirements are considered. Figure 1 (Davis, 1990) gives a simplified diagram of the initial sequence of development phases for a hybrid system and illustrates how the requirements phase for an embedded software system comes at a later stage of the development life cycle. During the system design activity most decisions regarding the software vs. hardware breakdown are settled. Only when the system design is complete, and all major external interfaces of major components are defined, can the software requirements be elaborated (Davis, 1990). This is the current popular conventional wisdom, but since software is expected to gain more ground in the future, considering software requirements from the very beginning will be crucial. The current conventional wisdom usually results in poor software requirements, as they are usually written by hardware specialists without involving the software engineers; in addition, this contributes to a lack of mutual understanding and appreciation of each other's points of view. Although specifying the components of an engineered system is essential because different suppliers provide them, experience in automotive development (Weber & Weisbrod, 2003) has shown that separating software from hardware makes things harder, and it was reported that a single component must be specified as two interactive "problems" — a hardware problem and a software problem.

Figure 1. Conventional wisdom about a hybrid (software and hardware) system project's initial sequence of development phases: system requirements, then system design, then hardware requirements and software requirements in parallel, followed by hardware design and software design, and so on.
Broad Range of Stakeholders

The stakeholders of a system are the people and organisations that have a stake in the operation of the system (Chonoles & Quatrani, 1996). Because of the context of embedded software systems, the stakeholders cover a very wide range of people and organisations. Examples of stakeholders include hardware designers, software analysts, domain specialists, customers (usually companies or governments in the case of large engineered systems), subcontractors, regulatory organisations, and standards groups. The stakeholders do not necessarily include end users, as in the information software systems domain, because in engineering industries, such as aerospace, customer organisations tend to concentrate on engineering aspects of the requirements and neglect the user's view (Lubars, Potts, & Richter, 1993). Large complex engineered systems are customer specific rather than user specific; for example, the requirements of an aerospace system are dictated by the customer (organisation or government agency) and do not directly represent the interests of a pilot. One of the major problems with this broad range of stakeholders for large complex embedded systems is getting agreement between them.
Periodicity of Most of the Embedded Software Functions

In engineered projects, most of the functions (also referred to as tasks, jobs, or processes) performed by embedded software systems are periodic or cyclic. They execute regularly and repetitively, sometimes at a high frequency as in aircraft applications, for example, monitoring the speed, altitude, and attitude of an aircraft every 100 ms. Sensor information is used by periodic functions that move the control surfaces of the aircraft (for example, rudder and engine thrusts) in order to maintain stability and other desired characteristics (Krishna & Shin, 1997). Periodic software functions could be considered autonomous, that is, activated regularly by the system. This does not preclude the existence of periodic embedded software functions that are initiated depending on the state of the controlled process, on the operating environment, or on a command from the operator's input panel (Krishna & Shin, 1997). Therefore it is not sufficient to specify only the functional requirements of large complex embedded systems; reflection on the initiator of each function and on timing requirements is also essential.
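To make the notion of a periodic function and its timing requirement concrete, the following is a minimal Python sketch of a fixed-rate monitoring loop that flags missed deadlines. Flight software is of course not written in general-purpose Python, and the 100 ms period, the function names, and the sensor fields are illustrative assumptions rather than requirements from any real aircraft specification.

```python
import time

PERIOD_S = 0.100    # illustrative 100 ms monitoring period

def read_sensors():
    # placeholder for sensor acquisition (speed, altitude, attitude)
    return {"speed": 0.0, "altitude": 0.0, "attitude": 0.0}

def update_control_surfaces(sample):
    pass    # placeholder for actuator commands

def periodic_monitor(cycles: int) -> None:
    """Run a fixed-rate loop and flag any cycle that overruns its period (a missed deadline)."""
    next_release = time.monotonic()
    for _ in range(cycles):
        sample = read_sensors()
        update_control_surfaces(sample)
        next_release += PERIOD_S
        slack = next_release - time.monotonic()
        if slack < 0:
            print(f"deadline missed by {-slack * 1000:.1f} ms")
        else:
            time.sleep(slack)

periodic_monitor(cycles=10)
```

The point of the sketch is that the period and the deadline are requirements in their own right: the requirements specification has to state who or what initiates the function and how often it must complete.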
Criticality of Embedded Software

Large embedded software systems that provide control activities, for example, controlling an aircraft engine, have a critical need to meet time deadlines under all foreseeable circumstances. These systems are often referred to as safety-critical real-time or hard real-time systems, because the consequences of functions not being executed on time are catastrophic, for example, killing people or causing major damage to the environment. Safety-critical systems are found in a wide range of industries, including the nuclear, chemical, and automotive industries. Embedded control software is highly reactive to its environment: it has to react to various sensor inputs, interrupts, or alarm conditions from the environment within severe imposed time limits. For systems operating under severe timing constraints, a thorough understanding of what goes on at what time is required (Hruschka, 1992). Thus there are significant safety issues and temporal requirements that must be considered.
Use of Redundancy

Because of the safety-critical nature of large embedded software systems, redundancy is used as a technique for providing reliability (or fault tolerance). Redundancy provides for multiple implementations of a function, for example, by the use of more than one processing channel (Hruschka, 1992). The use of redundancy is the result of a design decision that is based on the assumption that a given set of faults with the same system effect will not occur simultaneously in two or more independent elements (Society of Automotive Engineers, 1994). For example, the Boeing 777 primary flight control system uses three separate channels for redundancy; each channel is implemented with three separate lanes, each of which uses different processors and different compilers (Knight, 2002). Understanding this property adds to the understanding of the domain, as it is usually reflected in the embedded software requirements.
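A common way redundancy surfaces in the requirements is as a voting rule over the redundant channels. The following is a minimal sketch of two-out-of-three majority voting in Python; it is only illustrative of the idea and is not the actual Boeing 777 logic, and the tolerance value and fallback behaviour are assumptions.

```python
def majority_vote(readings, tolerance=0.5):
    """Return a value agreed by at least two of three redundant channels.

    `readings` holds the three channel outputs; `tolerance` is the maximum
    difference for two channels to be considered in agreement. Both are
    illustrative parameters, not values from any certified system.
    """
    assert len(readings) == 3
    for i in range(3):
        for j in range(i + 1, 3):
            if abs(readings[i] - readings[j]) <= tolerance:
                return (readings[i] + readings[j]) / 2   # agreed pair wins
    raise RuntimeError("no two channels agree; fall back to a safe state")

print(majority_vote([101.2, 101.4, 250.0]))   # faulty third channel is outvoted
```

Requirements for such systems therefore have to say not only what the function computes but also how disagreement between channels is detected and handled.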
Lots of Sources for the Embedded Software Requirements Phase

There are many input sources for the software RE phase of embedded systems. Figure 2 gives a simple diagram showing the five basic sources of embedded software requirements: system requirements specification document(s), current system design document(s), previous designs, software safety requirements specification document(s), and stakeholders. The initial software needs for embedded software systems are usually distributed across (and within) lots of documents. The section above observed that the software requirements phase comes at a later stage in the development life cycle of the system, after both the identification of the system requirements and the system design activity, hence the inputs from the system requirements specification and current design documents in Figure 2. Previous designs also provide input to the embedded software requirements phase, as shown in Figure 2. This is because engineered systems, in most cases, are not developed from scratch. They are the result of upgrades and enhancements over many years. For example, in the automotive domain (Weber & Weisbrod, 2003) systems are typically built in increments; a new car series inherits most functionality from existing ones, with various adaptations, extensions, or innovations.
Figure 2. Inputs to the RE phase for large complex embedded software systems: previous designs, system requirements specification, current design documents, software safety requirements specification, and stakeholders all feed the embedded software requirements phase.
Earlier we highlighted safety as a major issue for safety-critical systems. Although it is usually classified as a non-functional requirement that is considered at the system requirements phase, many standards demand that safety requirements be separately identified. Therefore, hazard analysis for a system takes place first, before the software requirements phase, and produces software safety specifications, which have to be taken into account during specification of the embedded software system. Thus the input of the software safety requirements specification to the embedded software RE phase is shown in Figure 2. The section above gave a general definition for stakeholders that includes domain specialists. Although most of the requirements of the stakeholders are reflected in the four types of documents mentioned above, domain specialists always continue to be a very important source of requirements for the embedded software because of the continuous need to elaborate issues related to the complex system hardware design. Therefore stakeholders are also shown as an input source for embedded software requirements in Figure 2. Having many diverse input sources for the initial needs of the embedded software makes the elicitation of the requirements difficult. One of the problems is that the initial needs are distributed across (and within) the four types of documents mentioned above, and the more complex the system is, the more distributed the initial software needs are. This results in challenges such as finding all of the requirements, reconciling them to ensure consistency, and ensuring completeness.
RE Challenges for the Domain of Embedded Software Systems

As could be seen from the previous section, the intrinsic properties of embedded software systems introduce challenges and problems that exceed those of other software categories. This, in general, makes embedded software development more expensive and error prone, and in particular makes RE more difficult. Some of the extra RE challenges resulting from the properties of the domain are:
• Complexity. This is the most obvious challenge. It mainly emerges from the complexity of the context of embedded software systems discussed earlier. Complexity is a direct barrier to understanding, and therefore is a major concern. Sommerville and Sawyer (1997) state that problems of requirements increase exponentially with the size of the system, as a requirements document could be organised into several volumes with thousands of individual requirements.
• Requirements redundancy. Redundancy of information in the requirements specifications may lead to inconsistencies and double work. In large projects, such as automotive system development, engineers create several lengthy specifications and related documents under tight deadlines, which typically contain redundant information (Weber & Weisbrod, 2003).
• Specifiability. There are many eventualities that embedded software systems need to consider. In addition, because of the complexity of large systems, there is usually significant difficulty in eliciting requirements for embedded software systems and producing appropriate, adequate, and effective specifications (McDermid, 1993). The difficulty of eliciting requirements also emerges from having many input sources for the software requirements phase, discussed earlier.
• Continuously changing requirements. Requirements of embedded systems undergo constant maintenance and enhancement during the development lifetime, and therefore they must be extensible. One of the possible reasons for the constant maintenance of the requirements of embedded software systems is gaining better understanding, which is a problem caused by, for example, having a broad range of stakeholders, discussed earlier. A lot of assumptions are usually made early in the development of a project, which are then resolved as the system matures (Weber & Weisbrod, 2003).
Coming up with solutions to the RE challenges and problems of the embedded software systems domain could be quite challenging. Two basic properties should always be available in any RE method for handling embedded software systems: simplicity and structure. Simplicity is the only real way to ensure understandability of the systems being produced, understanding the system, its environment, its workings, etc. (McDermid, 1993). Furthermore, for large embedded software systems, the requirements specifications need to be structured at several layers of abstraction to help in better understanding and allow for requirements reuse (Weber & Weisbrod, 2003).
Current RE Approaches for Embedded Software Systems

This section highlights the main approaches to RE for embedded software systems available in the literature. Macaulay (1996) identifies nine different groups in the community with a principal interest in the problem of requirements. The nine groups are (Macaulay, 1996):
1. Marketing. The marketing group is interested in the relationship between requirements and the success of a product in the marketplace.
2. Psychology and Sociology. The psychology and sociology group is interested in the relationship between requirements and the needs of people as intelligent and social beings.
3. Structured Analysis (SA). The SA group is interested in the relationship between requirements and the software process, starting from a process and data perspective.
4. Object-Oriented Analysis (OOA). The OOA group is interested in the relationship between requirements and the software development process, starting from a real-world object perspective.
5. Participative Design. The participative design group is interested in requirements as part of the process of empowering users by actively involving them in the design of systems that affect their own work.
6. Human Factors and Human-Computer Interaction. The human factors and human-computer interaction group is interested in the acceptability of systems to people, the usability of systems, and the relationship between requirements and evaluation of the system in use.
7. Soft Systems. The soft systems group is interested in the relationship between requirements and how people work as part of an organisational system.
8. Quality. The quality group is interested in the relationship between requirements and the quality of a product, in relation to process improvements that lead to customer satisfaction.
9. Formal Computer Science. The formal computer science group is interested in the relationship between requirements and software engineering's need for precision.
Each of the nine groups advocates the use of specific techniques, but only three of the groups are interested in the relation between requirements and the software process and, hence, are of relevance to RE of embedded software systems. They are the SA, OOA, and Formal Computer Science groups. The rest of this section briefly discusses the three relevant approaches to RE of embedded software systems; it is not meant to be exhaustive in covering all of the RE methods for embedded software systems available in the literature. The RE methods for embedded software systems available in the literature can be broadly categorised into SA methods, OO methods, and formal methods.
Structured Analysis Methods for Embedded Systems

SA was originally developed in the 1970s, following on from structured programming and structured design. The central principle of this movement is that software systems should be modular, that is, partitioned into components with close cohesion and loose coupling (Wieringa, 1998). SA approaches evolved from work on management information systems (MIS) and are concerned with the relationship between requirements and the software process, starting from a process and data perspective.

A typical SA starts with a context diagram, which shows the software system and all the external entities, or terminators, that interact with it. The requirements are then defined by decomposing the software system into a number of processes (also known as functions or transformations) and defining the inputs and outputs of each process. Processes exchange data either with other processes, with data stores, or with sources/destinations (terminators) outside the boundary of the software system. The notation used for modeling is the data flow diagram (DFD), which provides a graphic representation of the movement of data through the system and the transformations on the data. The processes are in turn broken down into further DFDs, describing the processing each node does to transform the incoming data items into the outgoing data items, until elementary processes are reached. When an elementary process is reached, it is defined in a MiniSpec. Data dictionaries support the DFDs by providing a repository for the definitions and descriptions of each data item on the DFDs (a small data-model sketch of these constructs appears at the end of this subsection).

For some time, developers of large embedded software systems used SA as the best available method to capture their requirements, but they shared a feeling that it was not adequate, as it did not take into account the specific problems of this category of software system (Hruschka, 1992), like concurrency, for example. Between 1982 and 1987, Ward and Mellor (Ward & Mellor, 1985) and Hatley and Pirbhai (Hatley & Pirbhai, 1987) adapted and extended the ideas of SA methods to cope with the specific complexities arising from large complex embedded systems.

Ward and Mellor were the first to publish their techniques for the analysis of real-time, embedded software systems, in Structured Development for Real-Time Systems (Ward & Mellor, 1985). They extended SA to the needs of embedded software control systems mainly by adding new control constructs, analogous to the data constructs, to capture control behaviour (Svoboda, 1997). Their process model is visualised by means of a transformation schema that contains the traditional DFD constructs in addition to control constructs. On the DFD, the interactions of both the processes that transform data and those that react to events in the environment and exercise control over other processes can be shown. Data processes explode into lower-level transformation schemas, which can also contain both data and control constructs; the explosion of control processes, however, contributes only to the control model of the system under development.

Hatley and Pirbhai published their technique in Strategies for Real-Time System Specification (Hatley & Pirbhai, 1987) in 1987. Like Ward and Mellor's approach, the real-time extensions developed by Hatley and Pirbhai take care of the control aspects. They introduced the same control constructs as Ward and Mellor, but used them differently. Hatley and Pirbhai used the control constructs to create control flow diagrams (CFDs) parallel to the DFDs, with control nodes having the same names as the data process nodes. Similar to the MiniSpec, which they instead call a process specification (PSPEC), they introduced control specifications (CSPECs), which capture any interaction between the DFDs and CFDs. Although the above real-time SA methods are in widespread use for large embedded software systems, especially in the avionics industry, there are a number of common criticisms. These are (Faulk, 1995; Hall, 1997; Wieringa, 1998):
• SA is not an RE method. Apart from the context diagram, SA does not address requirements; it is more of a system design method. This might be attributed to the fact that SA methods are concerned with the whole of the system development lifecycle and not only requirements. This tends to make the division between requirements analysis and design difficult.
• Insufficient process guidance. There are no explicit process steps. Not enough guidance is given on which part of the problem to model as a control process, what to model as a data transformation process, and what to model as data. Practitioners also find it difficult to know when to stop process decomposition and the addition of detail. Particularly in the hands of less experienced practitioners, data flow models continue to incorporate a variety of detail that more properly belongs to design or implementation, and the diagrams themselves become unmanageably complex.
• Separation between data and control is often confusing. SA separates data flows and data processes from event flows and control processes. This distinction between data and control is often detrimental to the clarity of the model.
• Inappropriateness of separation of memory from processing. The bulk of the memory of the software in SA resides in data stores. This notion of separating data from processing is implementation dependent; therefore it is inappropriate if we need to abstract from implementation technology in the RE phase to offer more flexibility.
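To make the SA vocabulary described earlier (context diagram, processes, data flows, data stores, terminators, MiniSpecs, data dictionary) more concrete, the following is a minimal data-model sketch in Python. It is not the schema of any particular CASE tool; the class names, fields, and the fly-by-wire flavoured example elements are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                        # "process", "data_store", or "terminator"
    children: list = field(default_factory=list)   # lower-level DFD for a decomposed process
    minispec: str | None = None      # only elementary processes carry a MiniSpec

@dataclass
class DataFlow:
    label: str                       # data item; its definition belongs in the data dictionary
    source: Node
    target: Node

# Context diagram: one process and the external terminators it interacts with
pilot = Node("Pilot controls", "terminator")
fcs = Node("Flight control software", "process")
actuators = Node("Control surface actuators", "terminator")
flows = [DataFlow("pedal position", pilot, fcs),
         DataFlow("actuator command", fcs, actuators)]

# First-level decomposition of the context process into elementary processes
fcs.children = [Node("Read sensors", "process", minispec="Sample all inputs every cycle"),
                Node("Compute commands", "process", minispec="Apply the control law")]

# Data dictionary: one entry per data item appearing on the diagrams
data_dictionary = {f.label: "definition goes here" for f in flows}
```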
Object-Oriented Methods for Embedded Systems

OOA is a relatively new approach to requirements analysis; it became the focus of interest in the 1990s. Like SA, OOA has its roots in programming (starting with Simula 67 and then Smalltalk 80) and design. OO programming, which is also modular, was seen as a promising approach to the industrialisation of the software development process. As OO programming languages have become more popular, this has justified switching to OO design and analysis in order to avoid having to switch from one paradigm to the other within the development process for a single system. Bailin (1989), for example, created an OO requirements specification method to be used during the requirements analysis phase to avoid the laborious process of recasting dataflow diagrams.

SA and OOA are considered to be fundamentally incompatible. OOA differs from SA in the primary perspective adopted in the models developed, taking objects rather than processes as the fundamental modeling unit. In SA, functions are grouped together if they are constituent steps in the execution of a higher-level function; the constituent steps may operate on entirely different data. In OOA, functions are grouped together if they operate on the same data. According to Davis (1993), the primary motivation for object-orientation is that, as a software system evolves, its functions tend to change but its objects remain unchanged. Thus a system built using OO techniques may be inherently more maintainable than one built using more conventional functional approaches (Macaulay, 1996). The literature contains many commercial success stories about the use of OO methods (Graham, 1995; Jacobson, 1996). A number of SA methodologists, including Ward (Selic, Gullekson, & Ward, 1994; Ward, 1989) and Mellor (Shlaer & Mellor, 1988), who are among the inventors of real-time SA methods, have made the paradigm shift to object-orientation.

Like SA, the way most OOA methods handle requirements could not be considered an RE process. For example, most OOA methods do not address developing a good requirements specification; they focus on problem analysis and produce a static object model. It was not until the introduction of use cases by Jacobson (Jacobson, Christerson, Jonsson, & Oevergaard, 1992) that OO methodologists started to consider the behavioural aspects early in requirements modeling, which resulted in the incorporation of use cases, in one form or another, as a front end to the different OOA methods. The use case notations are now part of the OO standard Unified Modeling Language (UML) (Rational Software Corporation, 1999). Although the use case technique is gaining in popularity for handling requirements within OO methods, its use for handling the requirements of embedded software systems is still immature. Several limitations have been identified for the use case notations and their pragmatics in the literature. Research is ongoing to provide viable solutions that minimise practical confusion by, for example, enhancing the definitions of the use case technique's constructs and introducing necessary new concepts, especially for the domain of embedded software systems (Nasr et al., 2002).
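The grouping difference between SA and OOA described above can be illustrated with a small contrast; the sensor example and every name in it are invented for illustration and are not drawn from the chapter or from any particular method.

```python
# Functional (SA-style) grouping: the constituent steps of a higher-level function
# live together, even though they operate on different data passed between them.
def acquire_reading(raw_voltage):
    return raw_voltage * 0.1            # illustrative conversion factor

def log_reading(log, value):
    log.append(value)

def check_alarm(value, limit):
    return value > limit

# Object-oriented grouping: the operations that touch the same data are bundled
# with that data, so a change to how readings are stored stays local to the class.
class Sensor:
    def __init__(self, limit):
        self.limit = limit
        self.log = []

    def acquire(self, raw_voltage):
        value = raw_voltage * 0.1
        self.log.append(value)
        return value

    def in_alarm(self):
        return bool(self.log) and self.log[-1] > self.limit

s = Sensor(limit=10.0)
s.acquire(120.0)
print(s.in_alarm())   # True
```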
Formal Methods for Embedded Systems

Formal methods are often proposed for RE of embedded software systems in an attempt to produce precise, complete, unambiguous, and consistent requirements specifications (Macaulay, 1996). Most of the RE formal methods found in the literature are actually specification languages rather than complete RE processes, for example Software Cost Reduction (SCR) (Heitmeyer, Kirby, & Labaw, 1998; Heninger, 1980), Albert II (Du-Bois, Dubois, & Zeippen, 1997), and the Requirements State Machine Language (RSML) (Leveson, Heimdahl, Hilldreth, & Reese, 1994). Although producing formal requirements specifications is thought to force the stakeholders, especially the requirements engineer, to think more carefully about the nature of the system being defined and how exactly it will operate, it cannot guarantee complete requirements; there will still exist some tacit (unstated) requirements. The use of formal specification techniques is steadily increasing because they are usually complemented by a wide variety of analysis tools for model checking, verification, and animation. Despite that, there are several challenges for the use of current formal methods for requirements, which include (Gray & Thayer, 1991; Hall, 1997; Hsia, Davis, & Kung, 1993; Krishna & Shin, 1997):
• Communication. Not all stakeholders find it easy to follow or understand formal notations. That is why in some formal requirements specification methods, for example, Albert II (Du-Bois et al., 1997) and Hard Real-Time Hierarchical Object-Oriented Requirements Analysis (HRT-HOORA) (Piveropoulos, 2000), the formal notations specifying a requirement have to be accompanied by the natural language requirement to achieve better communication with the stakeholders and quicker requirements validation. This adds to the volume of the requirements specification documents, especially for large complex systems.
• Expressiveness. Most if not all of the current formal notations are incapable of expressing all aspects of the domain, for example, NFRs such as reliability, safety, performance, and human factors.
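To give a flavour of the tabular, state-based style used by languages such as SCR and RSML mentioned above, the following sketch encodes a small mode-transition requirement and checks it for unspecified cases. The notation is plain Python, not actual SCR or RSML syntax, and the modes, events, and the "unchanged mode" default are invented assumptions for illustration only.

```python
# Mode-transition requirement: (current mode, event) -> next mode
TRANSITIONS = {
    ("standby", "engine_start"):   "monitoring",
    ("monitoring", "overspeed"):   "alarm",
    ("monitoring", "engine_stop"): "standby",
    ("alarm", "operator_reset"):   "monitoring",
}

MODES = {"standby", "monitoring", "alarm"}
EVENTS = {"engine_start", "engine_stop", "overspeed", "operator_reset"}

def undefined_cases():
    """List (mode, event) pairs the requirement leaves unspecified, a common source of incompleteness."""
    return [(m, e) for m in MODES for e in EVENTS if (m, e) not in TRANSITIONS]

def run(mode, events):
    for e in events:
        mode = TRANSITIONS.get((mode, e), mode)   # unspecified pairs assumed to leave the mode unchanged
    return mode

print(len(undefined_cases()), "unspecified mode/event pairs")
print(run("standby", ["engine_start", "overspeed", "operator_reset"]))   # -> "monitoring"
```

Simple mechanical checks like the one above (is every mode/event pair covered?) are the kind of analysis that the tool support around formal specification languages automates.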
Conclusion

Developing large, complex embedded software systems requires a deep understanding of their essential properties. In this chapter we attempted to promote a deeper understanding of this special category of software systems by presenting some of their special characteristics. We then discussed the main challenges that these special properties pose for RE of this type of software system. Finally, we highlighted the main current RE approaches for the domain and some of their current weaknesses. We believe that the many issues raised in this chapter are crucial for understanding RE for the domain of embedded software systems, which is important for improving RE of embedded software systems in the future.
References

Bailin, S.C. (1989). An object-oriented requirements specification method. Communications of the ACM, 32(5), 608-623. Brooks, F. P. (1987). No silver bullet: Essence and accidents of software engineering. IEEE Computer, 20(4), 10-19. Chonoles, M.J., & Quatrani, T. (1996). Succeeding with the Booch and OMT methods: A practical approach. Addison-Wesley. Davis, A. (1993). Software requirements: Objects, functions and states. NJ: Prentice-Hall. Davis, A.M. (1990). Software requirements: Analysis and specification. Prentice-Hall.
Du-Bois, P., Dubois, E., & Zeippen, J.M. (1997). On the use of a formal RE language. Proceedings of the Third IEEE International Symposium on Requirements Engineering, January, 128-137. Faulk, S.R. (1995). Software requirements: A tutorial. (MR 7775). Washington, DC: Naval Research Laboratory. Graham, I. (1995). Migrating to object technology. Addison-Wesley. Gray, E.M., & Thayer, R.H. (1991). Requirements. In C. Anderson & M. Dorfman (Eds.), Aerospace software engineering: A collection of concepts (pp. 89-122). The American Institute of Aeronautics and Astronautics. Hall, A. (1997). What's the use of requirements engineering? Proceedings of The Third IEEE International Symposium on Requirements Engineering, January, 2-3. Hatley, D.J., & Pirbhai, I. A. (1987). Strategies for real-time system specification. New York: Dorset House. Heimdahl, M.P.E., & Leveson, N. G. (1996). Completeness and consistency in hierarchical state-based requirements. IEEE Transactions on Software Engineering, 22(6), 363-75. Heitmeyer, C. (1997). Welcome from the general chair. Proceedings of The Third IEEE International Symposium on Requirements Engineering, January, vii-ix. Heitmeyer, C., Kirby, J., & Labaw, B. (1998). Applying the SCR requirements method to a weapons control panel: An experience report. Proceedings of Formal Methods in Software Practice. Heninger, K.L. (1980). Specifying software requirements for complex systems: New techniques and their application. IEEE Transactions on Software Engineering, 6(1), 2-12. Hofmann, H.F. (1993). Requirements engineering: A survey of methods and tools. (Tech. Rep. No. 93.05). Institut fur Informatik der Universitat Zurich. Hruschka, P. (1992). Requirements engineering for real-time and embedded systems. In M. Schiebe & S. Pferrer (Eds.), Real-Time Systems: Engineering and Applications. Kluwer Academic Publishers. Hsia, P., Davis, A., & Kung, D. (1993). Status report: Requirements engineering. IEEE Software, 10(6), 75-79. Jacobson, I. (1996, May). Succeeding with objects: A large commercial success story based on objects. Object Magazine, 8. Jacobson, I., Christerson, M., Jonsson, P., & Oevergaard, G. (1992). Object oriented software engineering: A use case driven approach. Addison-Wesley. Knight, J.C. (2002). Safety-critical systems: challenges and directions, invited minitutorial. Proceedings of ICSE'2002: 24th International Conference on Software Engineering, 547-550. Krishna, C.M., & Shin, K.G. (1997). Real-time systems. McGraw-Hill. Leveson, N.G. (1995). Safeware: System safety and computers. Addison-Wesley. Leveson, N.G., Heimdahl, M. P. E., Hilldreth, H., & Reese, J.D. (1994). Requirements specification for process-control systems. IEEE Transactions on Software Engineering, 20(9). Loucopoulos, P., & Karakostas, V. (1995). System requirements engineering. McGraw Hill. Lubars, M., Potts, C., & Richter, C. (1993). A review of the state of the practice in requirements modeling. Proceedings of The First IEEE International Symposium on Requirements Engineering, January, 2-14. Lutz, R.R. (1993). Analyzing software requirements errors in safety-critical, embedded systems. Proceedings of The First IEEE International Symposium on Requirements Engineering, January, 126-133. Macaulay, L.A. (1996). Requirements engineering. Springer Verlag. McDermid, J. (1993). Issues in the development of safety-critical systems. In F.J. Redmill & T. Anderson (Eds.), Safety-critical systems: Current issues, techniques and standards. Chapman & Hall. Nasr, E. et al. (2002). Eliciting and specifying requirements with use cases for embedded systems. Proceedings of the Seventh IEEE International Workshop on Object-oriented Real-time Dependable Systems (WORDS). Piveropoulos, M. (2000). Requirements engineering for hard real-time systems. D.Phil, University of York. Rational Software Corporation. (1999, March). Unified modeling language specification. (vol. 1.3). Rational Software Corporation. Available: http://www.rational.com/uml. Selic, B., Gullekson, G., & Ward, P.T. (1994). Real-time object-oriented modeling. John Wiley & Sons. Shlaer, S., & Mellor, S. J. (1988). Object-oriented systems analysis: Modeling the world in data. New Jersey: Prentice-Hall. Society of Automotive Engineers. (1994). Certification considerations for highly-integrated or complex aircraft systems. (ARP 4754). Society of Automotive Engineers. Sommerville, I., & Sawyer, P. (1997). Requirements engineering: A good practice guide. John Wiley & Sons. Svoboda, C.P. (1997). Structured analysis. In R.H. Thayer & M. Dorfman (Eds.), Software requirements engineering (pp. 303-22). IEEE Computer Society. Ward, P.T. (1989). How to integrate object-orientation with structured analysis and design. IEEE Software, 6, 74-82. Ward, P.T., & Mellor, S.J. (1985). Structured development for real-time systems Vol. 1: Introduction & tools. Prentice-Hall. Weber, M., & Weisbrod, J. (2003). Requirements engineering in automotive development: experiences and challenges. IEEE Software, (January/February), 16-24. Wieringa, R.J. (1996). Requirements engineering: Frameworks for understanding. Springer Verlag. Wieringa, R.J. (1998). Advanced structured and object-oriented requirements specification methods. Proceedings of Third IEEE International Conference on Requirements Engineering - Tutorial Notes. Zave, P. (1995). Classification of research efforts in requirements engineering. Proceedings of The Second IEEE International Symposium on Requirements Engineering, March, 214-6.
Chapter III
Requirements Elicitation for Complex Systems: Theory and Practice
Chad Coulin, University of Technology Sydney, Australia
Didar Zowghi, University of Technology Sydney, Australia
Abstract This chapter examines requirements elicitation for complex systems from a theoretical and practical perspective. System stakeholders, requirements sources, and the quality of requirements are presented with respect to the process, including an investigation into the roles of requirements engineers during elicitation. The main focus of the chapter is a review of existing requirements elicitation techniques and a survey of current trends and challenges. It is concluded with some views on the future direction of requirements elicitation in terms of research, practice and education. It is the intention of the authors that readers of this chapter will be sufficiently informed on the concepts, techniques, trends, and challenges of requirements elicitation to then apply this knowledge to system development projects in both industrial and academic environments.
Introduction If elicitation is considered the initial phase in requirements engineering, then it can also be regarded as the first stage of system development. It is at this point in the process where the needs of the users and the goals for the system are determined. Despite its obvious importance to the development of systems, requirements elicitation has only received significant attention in research and practice over the past decade or so. Although seen as a fundamental part of the system development process, requirements elicitation is often considered a major problem area in projects for computer-based systems. Eliciting requirements for complex systems is a difficult and expensive process, and consequently a key issue in software and systems engineering. Because requirements elicitation is essentially a social activity, the issues and challenges associated with it cannot be addressed by technical solutions alone. It is for these reasons that a structured and rigorous approach must be employed for this activity. In practice requirements elicitation is a multifaceted, incremental, and iterative process that relies heavily on the capabilities of requirements engineers, and the commitment and cooperation of stakeholders. The type of system to be developed and its intended purpose will have a significant effect on the way in which this task is conducted. The specific techniques used to elicit requirements during a project will often depend on a number of additional factors, including time, cost, and the availability of resources. In this chapter we will examine requirements elicitation for complex systems from a theoretical and practical perspective. It is intended that, through a review of existing theory and an assessment of current practice, readers will be sufficiently informed of the techniques, approaches, trends, and challenges in requirements elicitation to be able to apply this knowledge to system development projects in industrial or academic environments.
Background The elicitation of requirements can be broadly defined as the acquisition of goals, constraints, and features for a proposed system by means of investigation and analysis. Furthermore it is generally understood that requirements are elicited rather than captured or collected (Goguen, 1996). This implies both a discovery and development element to the process. Requirements can be elicited from a variety of sources using a range of different techniques and approaches. Invariably the system should be defined in terms of the operations it must perform, referred to as functional requirements, and the non-functional aspects of the system, such as performance and maintainability. In all projects it is important that during this process both the problem and solution domains are thoroughly examined (Jackson, 1995). By this it is meant that the goals for the system must be investigated as well as the options available to satisfy them.
Although in this chapter we are primarily concerned with the development of sociotechnical systems, in practice the role of the requirements engineer during elicitation, also known as the analyst, can be performed in a number of different project settings (Hickey & Davis, 2003), including:
• The development of customised systems for specific customers
• The development of commercial off-the-shelf (COTS) products
• The evaluation and selection of alternative systems
• The implementation of large and complex systems
Target System Stakeholders The elicitation of requirements is based around the process of describing a future or target system. All parties involved in, or affected directly or indirectly by, the development and implementation of the target system are known as stakeholders. Typically stakeholders include groups and individuals internal and external to the organization. The needs of these stakeholders will be different, as will be the way in which they express them. It is critical for successful requirements elicitation that all the target system stakeholders are involved in the process from an early stage. The customer, and more specifically the project sponsor, is usually the most apparent stakeholder of the system. In some cases, however, the actual users of the system may be the most important. Managers of departments containing users must also be considered stakeholders, even if they are not directly users of the system themselves. Other parties whose sphere of interest may extend to some part of the target system operations, such as those responsible for work process standards, customers, and partners, should also be regarded as stakeholders if affected. An often-forgotten group of stakeholders in system development projects is the developers themselves, including designers, programmers, testers, and implementation consultants.
Sources of Requirements With any system development project there are a number of possible sources for requirements. Stakeholders represent the most obvious source of requirements for the target system. Subject matter experts are used to supply detailed information about the problem and solution domains. Active systems and processes represent another source for eliciting requirements, particularly when the project involves replacing an existing legacy system. Documentation on the current systems, processes, organization, and environment can provide a detailed foundation of requirements and supporting rationale.
Quality of Requirements The success of a system is heavily dependent on the quality of the requirements used for its development. This can be expressed in terms of the correctness, completeness, consistency, and clarity of the elicited requirements (Davis, 1994). Other commonly used quality attributes for requirements include their relevance to the scope of the project, the extent to which they are feasible given the constraints of the project, and the ability to trace their source and rationale. It is also important that requirements are stated in such a way that they can be tested or measured to determine their quality and whether they have been fulfilled (Lauesen, 2002).
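To make these attributes concrete, the short Python sketch below (our own illustration; the field names and the example requirement are invented and not taken from Davis or Lauesen) records a single elicited requirement together with its source, rationale, and fit criterion, and applies a few mechanical quality checks. Such checks can only flag obvious omissions; judging correctness and completeness still requires review by the stakeholders.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """One elicited requirement with the quality attributes discussed above."""
    identifier: str
    statement: str            # what the system must do, or a constraint upon it
    source: str               # stakeholder or document the requirement came from
    rationale: str            # why the requirement exists
    fit_criterion: str = ""   # how fulfilment will be measured or tested
    in_scope: bool = True     # relevance to the agreed project scope
    feasible: bool = True     # judged achievable within the project constraints

    def quality_issues(self):
        """Flag obvious, mechanically checkable omissions only."""
        issues = []
        if not self.fit_criterion:
            issues.append("no fit criterion: cannot be tested or measured")
        if not self.source:
            issues.append("untraceable: no recorded source")
        for vague in ("fast", "user-friendly", "easy"):
            if vague in self.statement.lower():
                issues.append(f"possibly vague wording: '{vague}'")
        return issues

req = Requirement(
    identifier="R-017",
    statement="The system shall confirm a submitted order within 5 seconds.",
    source="Interview with the order-desk supervisor",
    rationale="Operators must confirm availability while the customer is on the phone.",
    fit_criterion="95% of confirmations returned within 5 seconds in acceptance testing.",
)
print(req.quality_issues())   # [] -> nothing flagged by the mechanical checks
```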
The Requirements Elicitation Process It is important to remember at this point that the process of system development, and therefore requirements elicitation, does not occur in a vacuum. It is strongly related to the context in which it is conducted and specific characteristics of the project, organization, and environment (Christel & Kang, 1992). In practice the budget and schedule of the project have a significant effect on the process and the way in which it is performed. The structure and maturity of the organization will determine how requirements are elicited, as will the way in which the target system will interact with users and other systems. The level of volatility within a project must also be considered, as this will affect directly the quality of requirements and the elicitation process itself. Typically the process begins with an informal and incomplete high-level mission statement for the project. This may be represented as a set of fundamental goals, functions, and constraints for the target system, or as an explanation of the problems to be solved. In order to develop this description, stakeholders and other sources of requirements are identified and used for elicitation. These preliminary results form the basis of further investigation and refinement of requirements. Over the years a number of process models have been proposed for requirements elicitation (Constantine & Lockwood, 1999; Kotonya & Sommerville, 1998; Sommerville & Sawyer, 1997). For the most part these models provide only a generic roadmap of the process with sufficient flexibility to accommodate the basic contextual differences of individual projects. The inability of these models to provide definitive guidelines is a result of the wide range of activities that may be performed during requirements elicitation, and the sequence of those activities depending on specific project circumstances. The variety of issues that may be faced, and the number of techniques available to use, only adds to this complexity. In most cases the process of requirements elicitation is performed incrementally over multiple sessions, iteratively to increasing levels of detail, and at least partially in parallel with other system development activities such as modeling and analysis. In reality its conclusion is often determined as a result of time and cost constraints rather than achieving the required level of quality for the requirements. Typically the result of this process is a detailed set of requirements in natural language text and simple diagrammatic representations describing the sources, priorities, and rationales.
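As a purely schematic illustration of this incremental, iterative, and budget-bounded character, the sketch below outlines the control flow in Python. The function names and the stub behaviour are invented for the example and do not correspond to any published process model; the point is only that elicitation revisits its sources over multiple sessions and stops when the time and cost budget is spent rather than when the requirements are "finished".

```python
def run_elicitation(mission_statement, identify_sources, elicit_from, refine, sessions_budget):
    """Illustrative outline: start from a rough mission statement, identify
    stakeholders and other sources, then elicit and refine incrementally until
    the session budget (time/cost) is exhausted."""
    requirements = []                                   # natural-language statements plus notes
    sources = identify_sources(mission_statement)
    for session in range(sessions_budget):
        source = sources[session % len(sources)]        # revisit sources over multiple sessions
        requirements += elicit_from(source, requirements)
        requirements = refine(requirements)             # remove duplicates, add detail, re-prioritise
    return requirements

# Toy usage with stub behaviour, purely to show the control flow:
reqs = run_elicitation(
    mission_statement="Replace the paper-based order process",
    identify_sources=lambda mission: ["sponsor", "order clerks", "legacy system documentation"],
    elicit_from=lambda src, existing: [f"need noted from {src} #{len(existing) + 1}"],
    refine=lambda rs: sorted(set(rs)),
    sessions_budget=4,
)
print(reqs)
```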
Requirements Elicitation Techniques The following section details some of the more widely used requirements elicitation techniques for the development of sociotechnical systems. This selection is by no means complete; however, it is representative of the range of available techniques described in the literature and performed in practice. It is generally accepted that no one technique is suitable for all projects. The choice of techniques to be employed is dependent on the specific context of the project and is often a critical factor in the success of the elicitation process (Nuseibeh & Easterbrook, 2000).
Questionnaires Questionnaires are mainly used during the early stages of requirements elicitation. For them to be effective, the terms, concepts, and boundaries of the domain must be well established and understood by the participants and questionnaire designer. Questions must be focused to avoid gathering large amounts of redundant and irrelevant information. They provide an efficient way to collect information from multiple stakeholders quickly but are limited in the depth of knowledge they are able to elicit. Questionnaires lack the opportunity to delve further into a topic or expand on new ideas. In the same way, they provide no mechanism for the participants to request clarification or correct misunderstandings. Generally questionnaires are considered more useful as informal checklists to ensure fundamental elements are addressed early on and to establish the foundation for subsequent elicitation activities.
Interviews Interviews are probably the most traditional and commonly used technique for requirements elicitation. Because interviews are essentially human-based social activities, they are inherently informal and their effectiveness depends greatly on the interaction between the participants. There are fundamentally two types of interviews: unstructured and structured. Unstructured interviews are conversational in nature, with the interviewer enforcing only limited control over the direction of discussions. Because they do not follow a predetermined agenda or list of questions, there is the risk that some topics may be completely neglected. It is also a common problem with unstructured interviews to focus in too much detail on some areas and not enough in others (Maiden & Rugg, 1996). This type of interview is best applied when there is a limited understanding of the domain or as a precursor to more focused and detailed structured interviews. Structured interviews are conducted using a predetermined set of questions to gather specific information. The success of structured interviews depends on knowing the right questions to ask, when they should be asked, and who should answer them. Templates that provide guidance on structured interviews for requirements elicitation, such as Volere (Robertson & Robertson, 1999), can be used to support this technique. Although
structured interviews tend to limit the investigation of new ideas, they are generally considered to be rigorous and effective. Like questionnaires, interviews provide an efficient way to collect large amounts of data quickly. The results, however, can vary significantly with the skill of the interviewer, and so can the usefulness of the information gathered (Goguen & Linde, 1993).
Group Work Group work is a well-established and often-used technique in requirements elicitation. The most common forms of this technique include brainstorming and Joint Application Development (JAD) as described below. Brainstorming involves participants from different stakeholder groups engaging in informal discussion to rapidly generate as many ideas as possible without focusing on any one in particular. It is important when conducting this type of group work to avoid exploring or critiquing ideas in great detail. It is not usually the intended purpose of brainstorming sessions to resolve major issues or make key decisions. This technique is often used to develop the preliminary mission statement for the project and target system. One of the advantages in using brainstorming is that it promotes freethinking and expression, and allows the discovery of new and innovative solutions to existing problems. JAD (Wood & Silver, 1995) involves all the available stakeholders investigating through general discussion both the problems to be solved and the available solutions to those problems. With all parties represented, decisions can be made rapidly and issues resolved quickly. A major difference between JAD and brainstorming is that typically the main goals of the system have already been established before the stakeholders participate in JAD sessions. Group work is often performed using support materials such as documents, diagrams, and prototypes to promote discussion and feedback. This technique encourages stakeholders to resolve conflicts and develop solutions themselves, rather than relying on the analyst to drive the process. Group work sessions can be difficult to organize due to the large number of different stakeholders who may be involved in the project. Managing these sessions effectively requires both expertise and experience to ensure that individual personalities do not dominate the discussions. A key factor in the success of group work is the makeup of participants. Stakeholders must feel comfortable and confident in speaking openly and honestly. It is for this reason that group work is less effective in highly political situations.
Card Sorting Card sorting requires the stakeholders to sort a series of cards containing the names of domain entities into groups according to their own understanding. Furthermore the stakeholder is required to explain the rationale for the way in which the cards are sorted.
It is important for effective card sorting that all entities are included in the process. This is possible only if the domain is sufficiently understood by both the analyst and the participants. If the domain is not well established, then group work can be used to identify system entities. This technique is typically used more for the categorization and clarification of requirements. Class Responsibility Collaboration (CRC) cards (Beck & Cunningham, 1989) are a derivative of card sorting that is also used to determine program classes in software code. In this technique cards are used to assign responsibilities to users and components of the system. Because entities represent such a high level of system abstraction, the information obtained from this technique is limited in its detail.
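CRC cards are normally physical index cards; the following minimal sketch (ours, with an invented "Order" example) simply shows the three pieces of information one card records.

```python
from dataclasses import dataclass, field

@dataclass
class CRCCard:
    """Class-Responsibility-Collaboration card: one candidate class per card."""
    class_name: str
    responsibilities: list = field(default_factory=list)  # what the class knows or does
    collaborators: list = field(default_factory=list)     # other classes it relies on

order_card = CRCCard(
    class_name="Order",
    responsibilities=["know its line items", "compute its total price"],
    collaborators=["Customer", "Product"],
)
print(order_card)
```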
Laddering When using laddering (Hinkle, 1965) stakeholders are asked a series of short prompting questions, known as probes, and required to arrange the resultant information into an organized hierarchical structure. For this technique to be effective, stakeholders must be able to express their understanding of the domain and arrange that knowledge in a logical way. Like card sorting, laddering is mainly used as a way to clarify requirements and categorize domain entities.
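The hypothetical fragment below illustrates the kind of hierarchy a laddering session might produce; the probing questions' answers and the payment-processing example are invented for illustration only.

```python
# Hypothetical result of a laddering session: prompting questions such as
# "Can you give an example of ...?" push downwards and "Why is that important?"
# pushes upwards, and the answers are arranged into a hierarchy such as this.
ladder = {
    "Process customer payments": {
        "Accept card payments": {"Authorise card": {}, "Handle declined card": {}},
        "Accept invoicing": {"Generate invoice": {}, "Record payment received": {}},
    }
}

def print_ladder(node, depth=0):
    """Print the elicited hierarchy with indentation showing the levels."""
    for name, children in node.items():
        print("  " * depth + name)
        print_ladder(children, depth + 1)

print_ladder(ladder)
```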
Repertory Grids Repertory grids (Kelly, 1955) involve asking stakeholders to develop attributes and assign values to a set of domain entities. As a result the system is modeled in the form of a matrix by categorizing the elements of the system, detailing the instances of those categories, and assigning variables with corresponding values to each one. The aim is to identify and represent the similarities and differences between the different domain entities. These represent a level of abstraction unfamiliar to most users. As a result this technique is typically used when eliciting requirements from domain experts. Although more detailed than card sorting, and to a lesser degree laddering, repertory grids are somewhat limited in their ability to express specific characteristics of complex requirements.
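A hypothetical grid is sketched below: the ordering-channel entities, the constructs, and the 1-5 ratings are invented, but the matrix shape is the point, with one column per domain entity and one row per construct supplied by the expert.

```python
# Hypothetical repertory grid: domain entities (columns) rated against
# constructs/attributes (rows) on a 1-5 scale, as a domain expert might supply.
entities = ["Phone order", "Web order", "In-store order"]
constructs = {
    "Needs staff involvement": [5, 1, 4],
    "Payment taken up front":  [3, 5, 5],
    "Easy to amend later":     [4, 2, 1],
}

# Print the grid as a simple table to expose similarities and differences.
print(f"{'Construct':<28}" + "".join(f"{e:>16}" for e in entities))
for construct, ratings in constructs.items():
    print(f"{construct:<28}" + "".join(f"{r:>16}" for r in ratings))
```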
Task Analysis, Scenarios and Use Cases Task analysis, scenarios, and use cases are alike in that they all describe to some degree the series of actions and events the system and users perform during operation; however, they are different in their focus and usage. Unlike scenarios and use cases, task analysis employs a top-down approach where high-level tasks are decomposed into subtasks until all actions and events are detailed. In particular the diagrammatic and tabular representations of use cases (Cockburn, 2001) make them easy to understand and flexible enough to accommodate context-specific information. Use cases can be reused later in the development process to determine components and classes during system design and when creating test cases. These techniques are especially effective in projects where
there is a high level of uncertainty or when the analyst is not an expert in that particular domain. A disadvantage of using task analysis, scenarios, and use cases is the amount of effort and detail required to define the steps and sequences for all possible process combinations.
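For illustration, the sketch below records one invented use case as a simple structure whose fields loosely echo the tabular style mentioned above (name, primary actor, main success scenario, extensions); it is not a reproduction of any published template.

```python
# A minimal, illustrative use-case record for a hypothetical ordering system.
use_case = {
    "name": "Place order",
    "primary_actor": "Customer",
    "preconditions": ["Customer is logged in"],
    "main_success_scenario": [
        "Customer selects products",
        "System shows price and delivery date",
        "Customer confirms the order",
        "System records the order and sends a confirmation",
    ],
    "extensions": {
        "3a. Product out of stock": ["System offers an alternative delivery date"],
    },
}

# Print the main flow as numbered steps, the form in which it is usually reviewed.
for step_number, step in enumerate(use_case["main_success_scenario"], start=1):
    print(step_number, step)
```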
Protocol Analysis Protocol analysis is where participants perform an activity or task while talking it through aloud, describing the actions being conducted and the thought process behind them. This technique can provide the analyst with specific information on the actual processes the target system must support (McGraw & Harbison-Briggs, 1989). In most cases, however, talking through an operation is not the normal way of performing the task and as a result may not necessarily represent the true process completely or correctly. Likewise minor steps performed frequently and repetitively are often taken for granted by the users and may not be explained and subsequently recorded as part of the process.
Ethnography Ethnography involves the analyst overtly or covertly participating in the normal activities of the users over an extended period of time whilst collecting information on the operations being performed. Observation is one of the more widely used ethnographic techniques. As the name suggests, the analyst observes the actual execution of existing processes by the users without direct interference. This technique is often used in conjunction with others such as interviews and task analysis. As a general rule ethnographic techniques are very expensive to perform and require significant skill and effort on the part of the analyst to interpret and understand the actions being performed. The effectiveness of observation and other ethnographic techniques can vary as users have a tendency to adjust the way they perform tasks when knowingly being watched. Despite this, these techniques are especially useful when addressing contextual factors such as usability and when investigating collaborative work settings where the understanding of interactions between different users with the system is paramount. In practice ethnography is particularly effective when the need for a new system is a result of existing problems with processes and procedures.
Prototyping Providing stakeholders with prototypes of the system to support the investigation of possible solutions is an effective way to gather detailed information and relevant feedback. Prototypes are typically developed using preliminary requirements or existing examples of similar systems. This technique is particularly useful when developing human-computer interfaces or when the stakeholders are unfamiliar with the available solutions. There are a number of different methods for prototyping systems with varying levels of effort required. In many cases they are expensive to produce in terms of time
and cost. An advantage of using prototypes is that they encourage stakeholders, and more specifically the users, to play an active role in developing the requirements. Prototypes are extremely helpful when developing new systems for entirely new applications.
Roles of the Requirements Engineer In this section the various roles that may be required of analysts performing requirements elicitation for complex systems are examined. It is important to note that analysts may not necessarily carry out all these roles within all projects. The responsibilities are dependent on the project and the context in which it is conducted.
Manager A fundamental part of requirements engineering is related to project management. Analysts must manage the process of requirements elicitation and communicate it effectively to the system stakeholders. This activity involves more than the obvious decision-making and prioritization tasks. Analysts are often required to initiate meetings with stakeholders, produce status reports, and remind stakeholders of their responsibilities. In many cases the analyst is the primary contact for questions from stakeholders relating to the project, the process, and the target system.
Analyst A large part of elicitation involves analyzing not just the processes that the target system must support but also the requirements themselves. Analysts must translate and interpret the needs of stakeholders in order to make them understandable to the other stakeholders. Requirements are then organized in relation to each other and given meaning with respect to the target system. Often analysts are required to use a certain amount of introspection when eliciting requirements, especially when stakeholders are not able to express their needs clearly or are unfamiliar with the available solutions.
Facilitator When eliciting requirements by conducting interviews or group work sessions, the analyst is not only required to ask questions and record the answers but must guide and assist the participants in addressing the relevant issues in order to obtain correct and complete information. The analyst is also responsible for ensuring that participants feel comfortable and confident with the process and are given sufficient opportunity to contribute. It is important for the analyst to encourage stakeholders to express their
needs in terms of requirements that can be validated and verified and understood by other stakeholders.
Mediator During elicitation, conflicts between requirements and stakeholders are inevitable. In many cases the prioritization of requirements from different stakeholder groups is a source of much debate and dispute. When these situations occur the analyst is often responsible for finding a suitable resolution through negotiation and compromise. It is important that the analyst is sensitive to all the political and organizational aspects of the project when mediating discussions related to the target system.
Developer Analysts are often required to assume the various roles of the developer community during requirements elicitation. This includes system architects, designers, programmers, testers, quality assurance personnel, implementation consultants, and system maintenance administrators. This is mainly because these stakeholders are not yet involved in the project at the requirements elicitation stage. Despite this, decisions made during this phase of the project will significantly affect these stakeholders and the subsequent phases of system development.
Documenter More often than not the analyst is responsible for the output of the elicitation process. Typically this takes the form of a requirements document or detailed system model. This role is particularly important, as it represents the results of the elicitation process and forms the foundation for the subsequent project phases. Evaluation of the elicitation process and the work performed by the analyst is based on these resultant artifacts, which in some cases may form the basis of contractual agreements.
Validator All the requirements elicited must be validated and verified against each other, then compared with previously established goals for the system. By this it is meant that the requirements describe the desired features of the system appropriately and that those requirements will provide the necessary functions in order to fulfill the specified objectives of the target system. This process typically involves all the identified stakeholder groups and results in further elicitation activities.
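A small sketch of this cross-checking is given below; the goal and requirement identifiers and their mapping are invented. Requirements that trace to no goal, and goals covered by no requirement, are exactly the gaps that trigger the further elicitation activities mentioned above.

```python
# Illustrative cross-check between elicited requirements and the previously
# established goals they are meant to satisfy.
goals = {"G1": "Reduce order processing time", "G2": "Support remote sales staff"}
requirements = {
    "R1": {"text": "Confirm orders within 5 seconds", "satisfies": ["G1"]},
    "R2": {"text": "Provide a mobile order-entry client", "satisfies": ["G2"]},
    "R3": {"text": "Print daily sales summaries", "satisfies": []},
}

untraced = [r for r, data in requirements.items() if not data["satisfies"]]
covered = {g for data in requirements.values() for g in data["satisfies"]}
uncovered = [g for g in goals if g not in covered]

print("Requirements with no stated goal:", untraced)     # candidates for further elicitation
print("Goals with no supporting requirement:", uncovered)
```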
Challenges of Requirements Elicitation The process of eliciting requirements is critical to system development but often overlooked or only partially addressed. This may be a result of it being one of the most difficult and least understood activities. Below are some of the more commonly experienced issues, challenges, and pitfalls of requirements elicitation:
• The initial scope of the project is not sufficiently defined and as such is open to interpretations and assumptions.
• Stakeholders do not know what their real needs are and are therefore limited in their ability to support the investigation of the solution domain.
• Stakeholders are not able to adequately communicate their needs. This does not necessarily mean that the stakeholders do not know what they want.
• Stakeholders do not understand or appreciate the needs of other stakeholders. Users may only be concerned with those factors that affect them directly.
• Stakeholders understand the problem domain but are unfamiliar with the available solutions and the way in which their needs could be met.
• Stakeholders often assume and therefore overlook those things that are trivial in their daily lives. These may not be apparent to the analyst and other stakeholders.
• The analyst is unfamiliar with the problem or solution domains and does not understand the needs of the users and the processes to be addressed.
• The analysts and stakeholders do not share a common understanding of the concepts and terms in the domain.
• Conflicts between stakeholders and requirements are common. For example, the needs of the users may not be consistent with the goals of the project sponsors.
• Stakeholders exhibit varying levels of cooperation and commitment to the project. Partial participation compromises the quality of requirements. This may be a result of resistance to the change that a new system may introduce.
• Because the process of elicitation is very informal by nature, requirements may be incorrect, incomplete, inconsistent, and not clear to all stakeholders.
• Requirements generated by stakeholders can be vague, lacking specifics, and not represented in such a way that they can be measured or tested.
• Analysts are not equipped with sufficient expertise and experience to perform effective requirements elicitation. Novice analysts may overlook a source of requirements or fail to identify and involve all the necessary stakeholders.
• Only very limited guidelines and tool support exist for the process of requirements elicitation.
• Requirements and the context in which they are elicited are inherently volatile. As the project develops and stakeholders become more familiar with the problem and solution domains, the goals and needs of the target system are susceptible to change. In this way the volatility is actually a result of the elicitation process itself.
Additional social and technical issues may be found when developing extremely large and complex systems where the number of stakeholders and the volume of requirements can become unmanageable.
Trends for Requirements Elicitation In reality the number of different factors that must be taken into consideration when performing requirements elicitation prohibits a definitive technique or method for all projects. The experience and expertise of the analyst, time and cost constraints, volatility of the scope, and the context in which the project is conducted all have significant influence on the way in which the process is performed. Despite this obvious complexity, some approaches have proved effective in overcoming many of the issues often associated with requirements elicitation.
Viewpoints Viewpoint approaches aim to model the domain from different perspectives in order to develop a complete and correlated description of the target system. For example a system can be described in terms of its operation, implementation and interfaces. In the same way solutions can be modeled from the standpoints of different users or from the position of related systems. These types of approaches are particularly effective for projects where the system entities have detailed and complicated relationships with each other. One common criticism of viewpoint approaches is that they do not enable non-functional requirements to be represented easily and are expensive to use in terms of the effort required. Most viewpoint approaches (Nuseibeh, Finkelstein, & Kramer, 1996; Sommerville, Sawyer, & Viller, 1998) provide a flexible multi-perspective model for systems, using different viewpoints to elicit and arrange requirements from a number of sources. Using these approaches, analysts and stakeholders are able to organize the process and derive detailed requirements for a complete system from multiple project-specific viewpoints.
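The fragment below is a generic sketch of the underlying idea, not of any particular published viewpoint method: requirements are kept per viewpoint and then scanned across viewpoints for statements that touch the same topic and therefore need to be reconciled. The viewpoints and requirement statements are invented.

```python
# Generic sketch: requirements collected and kept separately per viewpoint,
# then compared across viewpoints (content invented for illustration).
viewpoints = {
    "Operator": ["Show queue of pending orders", "Allow manual override of prices"],
    "Warehouse system": ["Receive picking lists electronically"],
    "Security auditor": ["Log every price override with the operator identity"],
}

# A simple cross-viewpoint pass: flag statements that mention the same topic,
# since these are the points where the viewpoints must be reconciled.
topic = "override"
related = {vp: [r for r in reqs if topic in r.lower()] for vp, reqs in viewpoints.items()}
print({vp: reqs for vp, reqs in related.items() if reqs})
```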
Goal Based Goal-based approaches for requirements elicitation have become increasingly popular in both research and practice environments. The fundamental premise of these approaches is that high-level goals that represent objectives for the target system are decomposed into sub goals and then further refined in such a way that individual requirements are developed. The result of this process is significantly more complicated and complete than the traditional method of representing system goals using tree structure diagrams. These approaches are able to represent detailed relationships between system entities and requirements. One of the risks when using goal-based
approaches and goal decomposition in general is that errors in the high-level goals of the system made early on in the process can have a major and detrimental follow-on effect. In recent times significant effort has been devoted to developing these types of approaches for requirements elicitation (Dardenne, van Lamsweerde & Fickas, 1993; Yu, 1997). The use of goals in conjunction with scenarios to elicit requirements has also attracted considerable attention (Haumer, Pohl, & Weidenhaupt, 1998; Potts, Takahashi, & Anton, 1994). In practice, these approaches have been particularly useful in situations where only the high-level needs for the system are well known and there exists a general lack of understanding about the specific details of the problems to be solved and their possible solutions.
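The toy decomposition below (invented content, and deliberately not in KAOS or i* notation) shows the basic mechanics: a high-level goal is refined into sub-goals whose leaves become candidate requirements. It also makes the risk noted above visible, since a mistaken root goal would propagate to every leaf.

```python
# Generic goal decomposition: a high-level goal is refined into sub-goals until
# the leaves can be stated as individual requirements (marked "REQ:" here).
goal_tree = {
    "goal": "Orders are fulfilled within 24 hours",
    "subgoals": [
        {"goal": "Orders reach the warehouse quickly",
         "subgoals": [{"goal": "REQ: transmit confirmed orders to the warehouse within 1 minute",
                       "subgoals": []}]},
        {"goal": "Stock levels are known",
         "subgoals": [{"goal": "REQ: update stock counts on every despatch",
                       "subgoals": []}]},
    ],
}

def leaf_requirements(node):
    """Collect the leaves of the decomposition, which become candidate requirements."""
    if not node["subgoals"]:
        return [node["goal"]]
    return [leaf for child in node["subgoals"] for leaf in leaf_requirements(child)]

print(leaf_requirements(goal_tree))
```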
Domain Based Domain knowledge in the form of guidelines and examples plays an important part in the process of requirements elicitation. Approaches based on this type of information are often used in conjunction with other elicitation techniques. Analysts use previous experience in similar domains as a template for group work and interviews. Analogies and abstractions of existing problem domains can be used as baselines to acquire detailed information, identify and model possible solution systems, and assist in creating a common understanding between the analyst and stakeholders. These approaches also provide the opportunity to reuse specifications and validate new requirements against other domain instances (Sutcliffe & Maiden, 1998).
Combinational Approaches It is widely accepted that by using a combination of complementary elicitation techniques many of the issues commonly associated with this process can be avoided or at least minimized. In practice the selection of techniques used during a project is more often determined by the experience and expertise of the analyst rather than their appropriateness to the specific situation. Consideration should be given to the types of stakeholders involved in the process, the information that needs to be elicited, and the stage of elicitation efforts in the project. Along these lines attempts have been made to develop and validate a set of tentative relationships between the characteristics of a project and the methods to be used as a guideline for selecting technique combinations (Hickey & Davis, 2003; Maiden & Rugg, 1996). One approach to combining techniques suggests that the process can begin with an ethnographic study to discover fundamental aspects of existing patterns and behavior, followed by structured interviews to gain deeper insight into the needs of the stakeholders and the priorities of requirements (Goguen & Linde, 1993). Furthermore it is proposed that the more expensive requirements elicitation techniques are used to examine in greater detail those needs deemed important.
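The rule set below is purely illustrative: it is our own toy example of mapping project characteristics to candidate techniques, and it is not the validated selection guidance developed in the works cited above.

```python
# Illustrative only: toy rules relating project characteristics to candidate
# elicitation techniques (the rules themselves are invented for this sketch).
def suggest_techniques(domain_understood, stakeholders_dispersed, early_stage):
    suggestions = []
    if early_stage and not domain_understood:
        suggestions += ["unstructured interviews", "ethnographic observation"]
    if domain_understood:
        suggestions += ["structured interviews", "card sorting"]
    if stakeholders_dispersed:
        suggestions += ["questionnaires"]
    else:
        suggestions += ["group work (e.g., JAD)"]
    return suggestions

print(suggest_techniques(domain_understood=False, stakeholders_dispersed=True, early_stage=True))
```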
Future Directions
Research So far relatively little research has been devoted to the process of requirements elicitation for complex systems. More work is needed to establish guidelines for the application of techniques and provide tool support for the various tasks that make up the process. Comprehensive empirical research, detailed case studies, and in-depth experience reports are required to supply the necessary historical information in order to produce these outcomes. Additional efforts should also be directed toward developing graphical approaches and integrating the traditional aspects of requirements elicitation with the methods used during validation and verification.
Practice As a result of shorter schedules and tighter budgets, analysts in industry are being asked to do substantially more with significantly less. The increasing complexity of systems today, and the speed at which new technologies are introduced to the market, only compound this situation. Organizations are placing greater demands on these new systems than at any other time. Consequently analysts are in need of practical techniques that are easy to use and provide them with quality requirements in less time at a reduced cost. Agile methods related to system specification have recently received considerable attention and approval in response to this demand. Furthermore approaches that allow requirements to be reused in multiple projects and for other phases in the development process are attractive for the same reasons. As stakeholders become familiar with complex systems through their everyday lives, their ability to express requirements for new systems will improve, and the expectations on those systems will continue to increase.
Education Experience is often the major difference between novice and expert requirements engineers. Given this, it is appropriate that the education of future analysts should place a stronger emphasis on gaining practical experience in real-world situations. Role-playing activities and industrial placement can provide the required learning environment for analysts to prepare for industry and develop the social skills necessary to conduct requirements elicitation for complex systems.
Conclusion It is obvious that requirements elicitation is a complex process. This is made certain by the numerous activities that make up the process and the variety of ways those activities can be performed. The tasks required and the issues faced are dependent on a wide range of potentially changing contextual factors specific to the situation. As a result the conditions under which requirements elicitation is performed are never exactly the same. Effective elicitation of requirements is largely dependent on the expertise of the analyst, the utilization of techniques, and the support of the stakeholders. Analysts are required to apply a wide range of skills, knowledge, and experience throughout the process. The selection and execution of techniques needs to be complementary and customized to the specific project. Stakeholders must work together with the analyst to address the social and technical aspects of the system. To this day requirements elicitation represents one of the most poorly executed activities and major challenges in system development. Consequently it is important for the success of future projects that new and innovative ways of conducting this process, and supporting the participants, continue to be investigated and examined.
References
Beck, K., & Cunningham, W. (1989). A laboratory for teaching object-oriented thinking. Proceedings of OOPSLA ’89, New Orleans, LA.
Christel, M.G., & Kang, K.C. (1992). Issues in requirements elicitation. Pittsburgh, PA: Carnegie Mellon University.
Cockburn, A. (2001). Writing effective use cases. Reading, MA: Addison-Wesley.
Constantine, L., & Lockwood, L.A.D. (1999). Software for use: A practical guide to the models and methods of usage-centered design. Reading, MA: Addison-Wesley.
Dardenne, A., van Lamsweerde, A., & Fickas, S. (1993). Goal-directed requirements acquisition. Science of Computer Programming, 20.
Davis, A.M. (1994). Software requirements: Analysis and specification. NJ: Prentice-Hall.
Goguen, J.A. (1996). Formality and informality in requirements engineering. Proceedings of the IEEE International Conference on Requirements Engineering, Colorado Springs, CO.
Goguen, J.A., & Linde, C. (1993). Techniques for requirements elicitation. Proceedings of the IEEE International Symposium on Requirements Engineering, San Diego, CA.
Haumer, P., Pohl, K., & Weidenhaupt, K. (1998). Requirements elicitation and validation with real world scenes. IEEE Transactions on Software Engineering, 24(12).
Hickey, A.M., & Davis, A.M. (2003). Elicitation technique selection: How do experts do it? Proceedings of the IEEE International Requirements Engineering Conference, Monterey Bay, CA.
Hinkle, D. (1965). The change of personal constructs from the viewpoint of a theory of implications. (Doctoral dissertation, Ohio State University, 1965).
Jackson, M. (1995). Software requirements and specifications: A lexicon of practice, principles and prejudices. Great Britain: Addison-Wesley.
Jirotka, M., & Goguen, J. (Eds.) (1994). Requirements engineering: Social and technical issues. London: Academic Press.
Kelly, G. (1955). The psychology of personal constructs. New York: Norton.
Kotonya, G., & Sommerville, I. (1998). Requirements engineering: Processes and techniques. Great Britain: John Wiley & Sons.
Lauesen, S. (2002). Software requirements: Styles and techniques. Great Britain: Addison-Wesley.
Maiden, N.A.M., & Rugg, G. (1996). ACRE: Selecting methods for requirements acquisition. Software Engineering Journal, 11(3).
McGraw, K.L., & Harbison-Briggs, K. (1989). Knowledge acquisition: Principles and guidelines. New Jersey: Prentice-Hall.
Nuseibeh, B., & Easterbrook, S. (2000). Requirements engineering: A roadmap. Proceedings of the Conference on The Future of Software Engineering, Limerick, Ireland.
Nuseibeh, B., Finkelstein, A., & Kramer, J. (1996). Method engineering for multiperspective software development. Information and Software Technology Journal, 38(4).
Potts, C., Takahashi, K., & Anton, A.I. (1994). Inquiry-based requirements analysis. IEEE Software, 11(2).
Robertson, S., & Robertson, J. (1999). Mastering the requirements process. Great Britain: Addison-Wesley.
Sommerville, I., & Sawyer, P. (1997). Requirements engineering: A good practice guide. Great Britain: John Wiley & Sons.
Sommerville, I., Sawyer, P., & Viller, S. (1998). Viewpoints for requirements elicitation: A practical approach. Proceedings of the IEEE International Conference on Requirements Engineering, Colorado Springs, CO.
Sutcliffe, A., & Maiden, N. (1998). The domain theory for requirements engineering. IEEE Transactions on Software Engineering, 24(3).
Wood, J., & Silver, D. (1995). Joint application development. New York: John Wiley & Sons.
Yu, E.S.K. (1997). Towards modeling and reasoning support for early-phase requirements engineering. Proceedings of the IEEE International Symposium on Requirements Engineering, Washington, D.C.
Chapter IV
Conceptual Modeling in Requirements Engineering: Weaknesses and Alternatives
Javier Andrade Garda, University of A Coruña, Spain
Juan Ares Casal, University of A Coruña, Spain
Rafael García Vázquez, University of A Coruña, Spain
Santiago Rodríguez Yáñez, University of A Coruña, Spain
Abstract This chapter focuses on software engineering conceptual modeling, its current weaknesses, and the alternatives to overcome them. It is clear that software quality has its genesis in the conceptual model and depends on how well this model matches the problem in question. However, this chapter presents a representative study of the analysis approaches that highlights that (i) they have traditionally focused on implementation and have paid little or no attention to the problem domain and (ii) they have omitted the various stakeholders (viewpoints) generally involved in any problem. The proposed alternatives are based on those aspects that are related to a generic conceptualisation, independent of the implementation paradigms.
Introduction The purpose of requirements engineering (RE) is to reach a thorough understanding of the needs of individuals involved in the problem to be solved. Because of this goal and its impact on subsequent phases, RE is an extremely relevant step in the software process. RE covers the requirements analysis activity, with conceptual modeling of the individuals’ problem as one of its most remarkable tasks. In this activity, the problem to be solved is understood through conceptual models. This chapter focuses on the conceptual modeling task within software engineering (SE), exposes its present weaknesses, and proposes a series of possible alternatives. The next section presents the basic and general aspects of conceptualisation regardless of SE. The following section tackles the relevance and orientation of conceptualisation in SE, and the two sections after that consider SE conceptualisation techniques and methods. The sections on alternatives and weaknesses list the weaknesses that result from this study and propose ways to avoid them based on the points of the section on understanding and conceptualising a problem. Finally the last section presents the most relevant conclusions.
Understanding and Conceptualising a Problem Humans usually start solving non-trivial problems by gaining an understanding of these problems. This involves two basic activities:
1. Acquisition. All the possible information related to the problem is gathered from available sources. In general, initial acquisition is neither complete nor correct, because it is extremely difficult to gather all the information at once, and the gathered information is subject to inconsistencies. These inconveniences are gradually overcome by acquiring more information and by refining the information that is already available.
2. Conceptualisation. The gathered information is organised or modelled to form a meaningful whole: the conceptual model of the problem. If we define a concept as a mental structure that derives from the acquired information and is able to clarify or even solve a problem when applied to it, conceptualisation can be defined as the use of concepts and relationships to deal with and solve problems. Accordingly conceptual models are abstractions of the universe of discourse of the problem, as well as possible models of possible conceptual solutions to the problem.
Owing to the above-mentioned problem of acquisition, the timing of these activities is neither sequential nor clear; numerous overlaps and feedbacks occur that constitute an inherent evolutionary process, as befits any modeling process.
Conceptual Modeling in Requirements Engineering
55
Any possible conceptualisation of a problem should consider (Gómez, Moreno, Pazos, & Sierra, 2000), as strictly as possible, the four rules proposed by Descartes (1969) in his Cartesian method: the rule of evidence, the rule of analysis, the rule of synthesis and the rule of proof. The conceptualisation process thus ruled entails two basic activities, analysis (rule of analysis) and synthesis (rule of synthesis), which should be preceded by a cautioning activity (rule of evidence) and followed by a confirmative activity (rule of proof). It should be noted here that a model does not necessarily have to be a copy of the system, element, or phenomenon it models. That is, we are not looking for a relationship of isomorphism between both elements. The most productive models are those that have a relationship of homomorphism (Ares & Pazos, 1998), which means that there is no injective mapping between the real system and the model. Although this unquestionably simplifies the understanding of the real system, it also has a price: a solution in the model does not necessarily represent a solution in the real system. In the conceptualisation process it is therefore essential to validate models before they are applied (rule of proof). Formally, any conceptualisation can be defined as a triplet of the form (Concepts, Relationships, Functions) (Ares & Pazos, 1998). This definition includes, respectively, the concepts presumed or hypothesised to exist in the world, the relationships (in the formal sense) between concepts, and the functions (also in the formal sense) defined on the concepts. Concepts are the primary elements. Since Plato (1997), the nature of concepts has stood out as one of the most complicated and oldest questions of philosophy. Some hypotheses, however, have been established about concepts. Although they do not define concepts, these hypotheses can be used to quite effectively delimit what they are and how they can be detected and used. These are the five hypotheses of conceptualisation (Díez & Moulines, 1997): •
HC1. Abstract entities. Concepts are in principle identifiable abstract entities, to which human beings, as epistemic subjects, have access. These elements provide knowledge and guidance about the real world.
• HC2. Contraposition of a system of concepts with the real world. Real objects can be identified and recognised thanks, among other things, to the available concepts. Several (real) objects are subsumed within one and the same (abstract) concept.
• HC3. Connection between a system of concepts and a system of language. The relationship of expression establishes a connection between concepts and words (expressions in general), and these (physical entities) can be used to identify concepts (abstract entities).
• HC4. Expression of concepts by non-syncategorematic terms. Practically all non-syncategorematic terms introduced by an expert in a domain express a concept.
• HC5. Need for set theory. For many purposes, the actual concepts should be substituted by the extensions of these concepts (sets of subsumed objects) to which set theory principles and operations can be applied.
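As a concrete, if deliberately toy, rendering of the triple (Concepts, Relationships, Functions) defined above, the sketch below uses an invented library domain; relationships are shown as sets of tuples over concepts and functions as mappings defined on the concepts.

```python
# A literal, toy rendering of the triple (Concepts, Relationships, Functions);
# the library-domain content is invented purely for the example.
concepts = {"Book", "Reader", "Loan"}

# Relationships in the formal sense: named sets of tuples over the concepts.
relationships = {
    "borrows": {("Reader", "Book")},
    "records": {("Loan", "Book"), ("Loan", "Reader")},
}

# Functions in the formal sense: mappings defined on the concepts.
functions = {
    "loan_period_days": {"Book": 21},
}

conceptualisation = (concepts, relationships, functions)
print(conceptualisation)
```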
Problem Conceptualisation in SE

The software crisis was “officially” certified in 1967, when the NATO Science Committee set up a study group in computer science whose mission was to evaluate the entire field. As a consequence of this initiative, a congress was held in Rome in 1969. The main point on its agenda was a detailed analysis of the technical problems that plagued software development, setting aside management aspects. The congress stressed the need to define software specifications rigorously, improve quality, raise flexibility, and adapt the education of the practitioners who built software.

In order to achieve these aims, work started on the different elements involved in software development, generally known as the three P’s: Product, Process, and Person (Reifer, 1993). Initially, emphasis was placed on the Product, and methodologies, formal specifications, and metrics were developed. Later the interest switched to the Process, and the ISO 9000-3, SPICE, and CMM process models were created. As the “good process implies good product” equation was much criticised and the hopes placed on process improvement were not realised, attention turned to the Person element: “anything that improves the person aspect will redound to better software development practice” (Ares & Pazos, 1998). However, little or no emphasis was placed on what appears to be the heart of good software development: the Problem — the fourth P — its understanding and conceptualisation (Gómez et al., 2000; Jackson, 2001b). This was because it was assumed that a good practitioner should be capable of understanding and conceptualising any problem and that a good process would already have considered this aspect.

So the first and primary issue in the software process is still to conceptualise the problem raised and then to develop suitable software for solving it. Indeed, the most critical factor in software systems development is to properly understand and represent the requirements that the system under development has to satisfy (Jackson, Embley, & Woodfield, 1995). In this process, the information gathered in the problem domain cannot be directly translated into the computer domain; it needs to be transformed (Blum, 1996):

1. Construction of a conceptual model in the problem domain. This model serves to gain an understanding of the problem and represents the problem-oriented moment in the software process.
2. Construction of the formal model in the computer domain, on the basis of the above-mentioned conceptual model. This model is computer-readable and would merely have to be introduced into the computer (implementation) to output the software system. It represents the computer-oriented moment of the software process.
Therefore software system quality has its genesis in the conceptual model and depends on how well this matches the problem in question.
Conceptual modeling has come to take an important place in the software process over the last few decades, because its underlying philosophy is the most advanced for dealing with problems (Ares & Pazos, 1998) and undertaking software development (Blum, 1996; Jackson & Rinard, 2000). The reason is that it is considered one of the main approaches for the description of reality (problem), possibly as powerful as natural language itself (Chen, Thalheim, & Wong, 1999).
Conceptual Modeling Techniques

This section relates how SE conceptualisation techniques have progressively tried to avoid the obstacles that hindered an appropriate conceptualisation of the problem. It also indicates the remaining obstacles. These aspects are key points for the following sections. The representation capacity of a conceptualisation technique is defined by the set of concept types within the problem domain that the technique can represent. According to this capacity, we can distinguish three types of techniques:

• Solution-sensitive techniques. Their capacity is very restricted: they represent a few concepts concerning the problem domain, with a clear software-development orientation. Examples are Data Flow Diagrams (DFD) and Object Diagrams.
• Solution-sensitive techniques with enhanced capacity. In search of a wider capacity, detached from development concepts, they aim at modeling concepts such as strategies and goals. Examples are TELOS, Knowledge Acquisition in autOmated Specification (KAOS), and Enterprise Modeling (EM).
• Problem-sensitive techniques. Their capacity is oriented toward problems rather than toward the development of software solutions. Problem Frames is the pioneer technique of this type.
Solution-Sensitive Techniques

According to the orientation of their representation capacity, and following the idea suggested by Davis (1993), these techniques can be classified as follows:

• State-oriented. They describe configurations, changes caused in configurations, and the subsequent actions. The treated concepts are states, events, and actions.
• Function-oriented. They describe data transformations through the process concept, which receives input data and generates output data.
• Data-oriented. They describe data and their relations. Even though Davis considers them to belong to the next group, we believe that the resulting classification would not be sufficiently strict.
• Object-oriented. These techniques describe objects (Object Diagrams) or object classes (Class Diagrams) and their relationships.
This classification illustrates the first problem: each technique can show only a partial vision of the problem, since its capacity only covers certain aspects (concepts). This entails the development of several conceptual models through various techniques in order to have a complete description of the problem. Besides, these techniques (together with the resulting models) are biased by computational aspects from the implementation domain, which is the opposite of the desired orientation. In this regard, criticisms refer mainly to two aspects:

1. Computational-solution sensitivity. As opposed to the problem-sensitivity principle (Jackson, 1995), these techniques are oriented toward achieving a computational solution to the problem, not toward facilitating its understanding. This is because the concepts dealt with include, or are closely related to, implementation concepts (Bonfatti & Monari, 1994; Høydalsvik & Sindre, 1993; Jackson, 1995).
2. Link to particular development approaches. Due to the aforementioned obstacle, a conceptual model restricts the possible implementation options to those in line with the technique used to produce it (Davis, 1993; Henderson-Sellers & Edwards, 1990; Jalote, 1991). The skater’s principle, also called the raw material principle (Paradela, 2001), states that everything must be built with well-chosen raw materials. However, once a given technique has been used, it is extremely difficult to change the implementation paradigm without carrying out a new conceptualisation; that is, the raw material used for conceptualising is not appropriate.
As a consequence of the previous points, the understanding achieved is partial and overly oriented toward the computational solution. This means that decisions about the development of a software solution are made while the problem to be solved is still in the comprehension phase, which has implications for the quality of the final software product (Davis, Jordan, & Nakajima, 1997).
Solution-Sensitive Techniques with Enhanced Capacity

These techniques aim at achieving more representation capacity and less dependency on the implementation domain through some of the following approaches:
1. Capacity enlargement. This is the most widely used approach; it was applied, for example, in the incorporation of control processes and control flows into DFD. When this approach is applied, the new concepts to be introduced must be defined together with their semantics and notation.
2. Explicit definition of a metamodel. This approach explicitly contemplates a metalevel, a domain level in which concepts are used according to the metalevel’s guidelines for a particular problem (that is, the conceptual model), and an instantiation level, which is the lowest abstraction level, used for exemplifying situations (Boman, Bubenko, Johannesson, & Wangler, 1997). Examples are EM and KAOS, which incorporate concepts related to static and dynamic aspects into their metamodels (a minimal sketch of the three levels follows this list).
3. Definition of a non-rigid capacity. The aforementioned techniques provide a specific capacity that is standardised and not apt for all domains; that is, a “rigid” one. Their weakness lies in the fact that they cannot provide thorough coverage of every problem/domain, so it is sometimes necessary to model a problem/domain with the eventual loss of relevant aspects in the assimilation process. To solve this problem, this third strategy allows the definition of “customised” techniques, in which each modeling process “creates” a new technique adapted to the corresponding domain. An example is TELOS, whose main concept is the class (bearing the same connotation as in object orientation).
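To make the metamodel idea of approach 2 more tangible, the following minimal Python sketch separates the three levels just described; the concept and instance names are purely illustrative assumptions and do not reproduce the actual EM or KAOS metamodels.

```python
# Sketch of the three levels described in approach 2. The concept names are
# illustrative only; they do not reproduce the EM or KAOS metamodels.

# Metalevel: the kinds of concepts a technique allows.
METAMODEL = {"Goal", "Actor", "Resource"}

# Domain level: a conceptual model of a particular problem, built from
# metalevel concepts.
conceptual_model = [
    ("Actor", "Librarian"),
    ("Goal", "Keep catalogue up to date"),
    ("Resource", "Catalogue"),
]

# Instantiation level: concrete situations exemplifying the domain model.
instances = [("Librarian", "Ms. Perez"), ("Catalogue", "Main-branch catalogue")]

# A minimal well-formedness check: every domain element uses a metalevel concept.
assert all(kind in METAMODEL for kind, _ in conceptual_model)
print("Conceptual model conforms to the metamodel.")
```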
Despite the progress made by these approaches, the focus on the technological solution has not yet been fully removed:

1. Enlarging the capacity does not distance the technique from implementation. Looking at the previous case of control processes in DFD, we still observe an obvious closeness to software development.
2. The explicit definition of a metamodel does not exclude concepts that are close to software development. In fact, the KAOS metamodel considers concepts such as actions, events, and objects, which are obviously related to software development.
3. The definition of a non-rigid capacity does not guarantee remoteness from software development. In fact, any conceptualisation technique can be defined by means of this type of mechanism, so its usage can still be hindered by the aforementioned obstacles.
Problem-Sensitive Techniques

The Problem Frames technique (Jackson, 2001a) is the first technique that clearly focuses on tackling the problem and allowing reuse at a conceptual level. Even though it shares the philosophy underlying design patterns, it focuses on the problem instead of on technical solutions. The technique is based on (i) the identification of the simplest subproblems within the global problem, (ii) the identification of the frame — the pattern — that best fits each subproblem, and (iii) their subsequent integration.
Each frame is independent from any kind of software development paradigm and adjusted to a particular type of problem. For each type there exists a simple and systematic resolution method: the solution task (first frame component). This method can be easily applied because it is expressed in terms of the principal parts (second frame components). Once the modellers have identified a problem frame, they know which elements must be conceptualised — principal parts and solution task — but not which process to follow. Although this modeling process is more restricted and guided, it is still necessary to fill out the “template” expressed by the frame and, consequently, to have a guiding methodological framework. Nevertheless, this proposal does not constitute a conceptualisation framework, because it does not explicitly define a process; it merely entails the three aforementioned activities. In fact, the proposal does not indicate how to conceptualise a subproblem that does not fit any frame, nor does it detail which techniques or activities must be tackled in order to establish a new frame. Just as design patterns did not remove the need for design, Problem Frames does not eliminate the need for a conceptualisation of the problem and, therefore, for a methodological framework for problem-sensitive conceptual modeling. Such a framework should consider two well-known obstacles: the simplification of integration, thanks to a uniform specification of conceptualisations (Jackson & Jackson, 1996), and the consideration of the possible discrepancies that derive from dealing with each subproblem separately (Jackson, 2001b), given that each subproblem stands for a viewpoint of the global problem.
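As a rough illustration of the three activities just described, the following Python sketch represents a tiny frame catalogue and the fitting of hypothetical subproblems to frames; the frame names loosely echo frames discussed by Jackson, but the contents are simplified assumptions rather than a faithful rendering of the Problem Frames notation.

```python
# Sketch of the three Problem Frames activities: (i) split the global problem
# into simple subproblems, (ii) fit each one to a frame, (iii) integrate the
# results. Subproblems and frame details are hypothetical.

FRAME_CATALOGUE = {
    "information display": {"principal_parts": ["real world", "display"],
                            "solution_task": "keep the display in step with the world"},
    "workpieces":          {"principal_parts": ["user", "workpieces"],
                            "solution_task": "let the user create and edit workpieces"},
}

# (i) and (ii): subproblems of a hypothetical global problem, each fitted to a frame.
subproblems = {
    "show warehouse stock levels": "information display",
    "edit purchase orders": "workpieces",
}

# (iii) integration: collect, per subproblem, what must be conceptualised.
integrated_view = {sp: FRAME_CATALOGUE[frame] for sp, frame in subproblems.items()}
for sp, frame_info in integrated_view.items():
    print(sp, "->", frame_info["solution_task"])
```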
Conceptual Modeling Methods

Even though, throughout their evolution, conceptualisation techniques have tried to focus on the problem as such, one specific technique cannot be held responsible for the conceptualisation of a problem. This is because, apart from the above problems, a technique does not indicate how to conceptualise. There should be an element that, without considering any implementation aspects, tells the modeller what steps must be taken in the conceptualisation process, what the output is, and what technique(s) is (are) to be applied. It is in this context that the conceptualisation — or analysis — phases of development methodologies/methods/processes have put forward certain methods for the elaboration of conceptual models.

As mentioned before, these conceptual modeling methods usually handle several techniques to express various perspectives of the same problem. This complicates the integration of all the information and the tackling of the process. As a result, the methods end up aiming exclusively at the coordination of the applied techniques instead of focusing on their real objective. This section shows that conceptual modeling methods in SE are not oriented toward the conceptualisation of the problem within its domain, for the following reasons:
1. Historically, they have adopted the concepts that arose from programming, fostering their usage at the problem level. This is the main reason why both conceptualisation methods and techniques are so close to software development.
2. They are closely linked to the solution-sensitive techniques that they deal with and on which they are based. As a result, they inherit the already-mentioned orientation to software development, and their steps, rather than guiding the conceptualisation process, focus on the coordination of the techniques used.
What follows is a brief study of a representative sample of the analysis methods. The purpose is not to carry out an exhaustive study of each method but to check the previous points.
Structured Methods

The most relevant structured methods are widely known approaches such as the Structured Systems Analysis and Design Method (SSADM), Méthode d’Etude et de Réalisation Informatique pour les Systèmes d’Entreprise (MERISE), MÉTRICA, and Information Engineering. For our purpose the study of one specific method is sufficiently representative, since they all share the same philosophy and do not present any significant differences. We shall therefore focus on MÉTRICA (Spanish Ministry of Public Administrations [MAP], 2001), an approach that allows structured and object-oriented developments.

In its Information Systems Analysis (ASI) process, MÉTRICA deals with data and process models (such as normalised logical data models, process models, and process/geographical location matrixes), specifying all the interfaces between system and user. Once the models are completed, a consistency analysis based on verifications and validations considers all the elements obtained. MÉTRICA is a good example of the aforementioned obstacles: it exhibits closeness to the development paradigms, as can be seen in the products obtained, and its activities (MAP, 2001) are totally oriented toward the coordination/integration of the applied techniques. Moreover, before tackling the analysis (that is, before understanding the problem), the modellers must choose between the structured approach and the object-oriented approach. At that moment, and as opposed to a conceptualisation orientation, this decision is based only on the way the software is going to be developed.
Object-Oriented Methods

The study of object-oriented methods is well characterised by that of the Unified Process (Jacobson, Booch, & Rumbaugh, 1999), since it is the result of a mixture of the most relevant and remarkable methods in this field. In the Unified Process, the analysis workflow considers the identified use cases and reformulates them as objects in order to achieve a better understanding of the requirements and prepare for the system’s design and implementation (Jacobson et al., 1999). Hence concepts like attributes, methods, control classes, state diagrams, and so forth are handled in the analysis model. This approach is also close to the implementation domain. In fact, the analysis model is presented as a detailed specification for requirements understanding at the implementation level. Its authors (Jacobson et al., 1999) have said that the level of detail contemplated in this model does not usually concern the client. However, if it had the orientation of a truly conceptual model, that content would be precisely the most important aspect for the client.
Agent-Oriented Methods

Because of the similarities between the two paradigms (Bond & Gasser, 1988), it is very usual to expand object-oriented approaches to make them consider the aspects that belong to the agent domain. MESSAGE (European Institute for Research and Strategic Studies in Telecommunications [EURESCOM], 2001) is the agent-oriented analysis and design methodology that we consider representative, for the following reasons:

• It is the newest such methodology.
• It is based on the Unified Process, which entails an updated vision.
• It achieves remarkable goals through the integration of various fields (e.g., Agent UML).
On the one hand, the five views defined by MESSAGE for the analysis model deal with aspects such as agents, aggregations, states, events, and trigger conditions. On the other hand, the defined analysis process consists of repeated enhancements that result in different levels, which consider the previous views. Once again we can observe how conceptualisation depends on computational concepts (agents). We must also remember that the integration of the views within the MESSAGE analysis process has not been studied in depth (EURESCOM, 2001) and therefore remains an incompletely defined process.
Current Weaknesses in SE Conceptual Modeling

The preceding sections allow the extraction of the common problems shared by current conceptual modeling methods:
1. The methods focus on the technological solution instead of on the problem domain: computational model vs. conceptual model.
2. Rather than leading the conceptualisation process in a clear way, the current approaches are limited to the coordination and integration of the (implementation-oriented) techniques they consider.
3. With regard to the verification and validation of the elaborated model, these methods establish a consistency analysis (not always explicitly) that only considers development concepts. There is no actual evaluation of the model’s ability to tackle the problem, because the model is expressed in technological terms that are very difficult or even impossible to understand for the person who faces the problem and evaluates the model.
4. More often than not, it is a group of people rather than a single individual who faces the problem (Ares & Pazos, 1998). This means that the conceptualisation process must take into account various individuals, each of whom may have different viewpoints on the matter. These different ways of perceiving the problem can lead to discrepancies that should be managed in order to build a single conceptual model of the problem in accordance with all the viewpoints involved. However, the finding of the previous study is that generally no guidelines are given to deal with these legitimate situations. That is to say, from the very outset, current conceptual modeling strives to build a single model in which different perceptions and, therefore, discrepancies have no place. Few works in viewpoint-based requirements engineering consider these aspects when conceptualising a problem. The most relevant are Viewpoint Resolution (Leite & Freeman, 1991) and Reconciliation (Spanoudakis & Finkelstein, 1997). The former deals with conceptualisation in a very shallow manner, since it allows only very basic conceptual models through the Viewpoint Language (a rule-based language) without fully managing discrepancies: it neither establishes a strict classification of discrepancies nor indicates how to resolve them. The latter manages exclusive concepts: object orientation and concepts linked to the syntax of the TELOS language.
Alternatives to Overcome the Identified Weaknesses

Once the weaknesses are identified, it is possible to establish the bases that a methodological framework for conceptual modeling should take into account in order to avoid them:

1. The need to pay attention to the information related to the problem (within the context of the problem). Although there exists a general and formal definition of a conceptualisation — the triplet (C, R, F) — it is not practical from the point of view of articulating a methodological framework, since (i) concepts are abstract entities (see HC1), (ii) relationships and functions are defined on the basis of concepts (which increases the aforementioned complexity), and (iii) the natural way of expression is through natural language (see HC3). Taking into account these three points, and HC4, we propose the articulation of a methodological framework for conceptualisation based on the information types that result from the analysis of the grammatical categories of natural languages. This proposal stems from the parallelism between natural language structures and conceptual modeling (Hoppenbrouwers, van der Vos, & Hoppenbrouwers, 1997). Moreover, this procedure should not obviate the aforesaid formal definition (it should be respected), and it must consider the existing interrelationships between the various types of information, which are particularly interesting when the model is evaluated.
2. The need to pay attention to the very close and natural interrelationships between information acquisition and conceptualisation activities. Special attention should be paid to the natural interaction between both activities and to the contributions that take place between them in order to define a real and integrated methodological framework. The execution of the cycle defined by both activities will undoubtedly suffer variations as the process itself progresses, so that an evolutionary process (typical of any modeling process) takes place.
3. The need to pay attention to the general and characteristic activities appertaining to a conceptualisation process. Since the purpose is to lead the conceptualisation process, apart from the aforesaid, it will be necessary to indicate which activities need to be tackled and which interactions and contributions take place between them. This is where the rules of the Cartesian method must be applied: the analysis of the activities related to these rules, and of their relations, should lead to our goal. Moreover, the execution of the identified activities should be subject to an evolution, which is determined by the one that was previously discussed. Among the identified activities, model evaluation should play a key role. The recommended procedure consists of a holistic evaluation by means of graphic techniques, considering the interrelationships among the identified information types.
4. The need to pay attention to the various stakeholders who may have different viewpoints with regard to the problem under consideration, which might in turn cause discrepancies. Such discrepancies should be managed within the process in order to elaborate a conceptual model that is valid for every individual (viewpoint) involved, since they are common phenomena from which a better understanding of the problem can be derived. Obviously, the established activities and guidelines (types of discrepancies, resolution criteria, etc.) should take into account the core of the single-viewpoint conceptual modeling that was established as a result of the previous points. Finally, the detection and resolution of the diverse discrepancy types should follow and guide the previously presented evolution (a minimal sketch of viewpoint discrepancy detection follows this list).
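As a minimal sketch of point 4, the following Python fragment shows how discrepancies between two hypothetical stakeholder viewpoints on the same concepts could be surfaced before a single conceptual model is built; the viewpoints, concepts, and attributes are invented for illustration and do not come from the chapter.

```python
# Minimal sketch of point 4: two stakeholders describe the same problem from
# different viewpoints, and the framework must surface their discrepancies
# before a single conceptual model is built. Names and attributes are hypothetical.

viewpoint_sales = {"Order": {"states": {"open", "invoiced"}},
                   "Customer": {"identified_by": "tax id"}}

viewpoint_warehouse = {"Order": {"states": {"open", "picked", "shipped"}},
                       "Pallet": {"identified_by": "barcode"}}

def discrepancies(vp_a, vp_b):
    """Report concepts that the two viewpoints describe differently."""
    report = []
    for concept in vp_a.keys() & vp_b.keys():   # concepts shared by both viewpoints
        if vp_a[concept] != vp_b[concept]:
            report.append((concept, vp_a[concept], vp_b[concept]))
    return report

for concept, a, b in discrepancies(viewpoint_sales, viewpoint_warehouse):
    print(f"Discrepancy on '{concept}': {a} vs {b}")
```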
Conclusion

This chapter has presented an overview of conceptual modeling in SE, which is an important activity in RE. The study presented in the sections on methods and techniques allows the identification of common problems shared by current SE conceptual modeling methods. These weaknesses were enumerated, and possible alternatives to overcome them were outlined in the final section. The alternatives (i) take into account the generic (not development-oriented) considerations that were presented in the section on understanding and conceptualising a problem, (ii) are in accordance with the actual orientation the conceptual modeling activity should have in SE, and (iii) constitute the basis of any problem-oriented conceptual modeling methodological framework. Once the identified weaknesses are set aside, research should focus on identifying the criteria needed to continue with software development: criteria for the selection of the most suitable development paradigm(s) in software or knowledge engineering, and criteria for the transformation of the conceptual structures into the corresponding computational structures in the selected paradigm(s).
Acknowledgments

The authors would like to thank Juan Pazos (Technical University of Madrid) for his constructive suggestions; the University of A Coruña for its financial support; and Valérie Bruynseraede (Research Transfer Office of the University of A Coruña) for her help in translating the chapter into English.
References

Ares, J., & Pazos, J. (1998). Conceptual modelling: An essential pillar for quality software development. Knowledge-Based Systems, 11, 87-104.
Blum, B. I. (1996). Beyond programming: To a new era of design. New York: Oxford University Press.
Boman, M., Bubenko Jr., J. A., Johannesson, P., & Wangler, B. (1997). Conceptual modelling. London: Prentice-Hall.
Bond, A. H., & Gasser, L. (1988). An analysis of problems and research in DAI. In A. H. Bond & L. Gasser (Eds.), Readings in DAI (pp. 3-35). CA: Morgan Kaufmann Publishers.
Bonfatti, F., & Monari, P. D. (1994). Towards a general purpose approach to object-oriented analysis. Proceedings of the International Symposium of Object-Oriented Methodologies and Systems, Italy, 108-122.
Chen, P. P., Thalheim, B., & Wong, L. Y. (1999). Future directions of conceptual modelling. In P. P. Chen, J. Akoka, H. Kangassalu, & B. Thalheim (Eds.), Conceptual modelling (pp. 287-301). Berlin: Springer-Verlag.
Davis, A. M. (1993). Software requirements: Objects, functions, and states. New Jersey: Prentice-Hall.
Davis, A. M., Jordan, K., & Nakajima, T. (1997). Elements underlying the specification of requirements. Annals of Software Engineering, 3, 63-100.
Descartes, R. (1969). Oeuvres de Descartes. Paris: Librairie Philosophique J. Vrin.
Díez, J. A., & Moulines, C. U. (1997). Fundamentos de filosofía de la ciencia. Barcelona: Ariel S. A.
European Institute for Research and Strategic Studies in Telecommunications (2001). MESSAGE: Methodology for engineering systems of software agents. Methodology for agent-oriented software engineering. Retrieved November 5, 2004 from http://www.eurescom.de/public/projectresults/P900-series/907ti1.asp.
Gómez, A., Moreno, A., Pazos, J., & Sierra, A. (2000). Knowledge maps: An essential technique for conceptualisation. Data & Knowledge Engineering, 33(2), 169-190.
Henderson-Sellers, B., & Edwards, J. M. (1990). The object-oriented systems life cycle. Communications of the ACM, 33(9), 142-159.
Hoppenbrouwers, J., van der Vos, B., & Hoppenbrouwers, S. (1997). NL structures and conceptual modelling: Grammalizing for KISS. Data & Knowledge Engineering, 23(1), 79-92.
Høydalsvik, G. M., & Sindre, G. (1993). On the purpose of object oriented analysis. Proceedings of the Conference on Object Oriented Programming, Systems, Languages and Applications, USA, 240-255.
Jackson, D., & Jackson, M. (1996). Problem decomposition for reuse. Software Engineering Journal, 11(1), 19-30.
Jackson, D., & Rinard, M. (2000). Software analysis: A roadmap. In A. Finkelstein (Ed.), The future of software engineering (pp. 135-146). New York: ACM Press.
Jackson, M. (1995). Software requirements and specifications: A lexicon of practice, principles and prejudices. New York: ACM Press.
Jackson, M. (2001a). Problem frames: Analysing and structuring software development problems. Harlow: Addison-Wesley.
Jackson, M. (2001b). Problem analysis and structure. Proceedings of the 2000 NATO Summer School, Germany, 3-20.
Jackson, R., Embley, D., & Woodfield, S. (1995). Developing formal object-oriented requirements specifications: A model, tool and technique. Information Systems, 20(4), 273-289.
Jacobson, I., Booch, G., & Rumbaugh, J. (1999). The unified software development process. Reading, MA: Addison-Wesley.
Jalote, P. (1991). An integrated approach to software engineering. New York: Springer-Verlag.
Leite, J. C. S. P., & Freeman, P. A. (1991). Requirements validation through viewpoint resolution. IEEE Transactions on Software Engineering, 17(12), 1253-1269.
Paradela, L. F. (2001). Una metodología para la gestión de conocimientos. Unpublished doctoral dissertation, Technical University of Madrid, Madrid.
Plato. (1997). Memón, Cratilo, Fedón. Barcelona: Planeta DeAgostini S. A.
Reifer, D. J. (1993). Managing the three P’s: The key to success in software management. In D. J. Reifer (Ed.), Software management (4th ed.) (pp. 2-8). Los Alamitos: IEEE Computer Society Press.
Spanish Ministry of Public Administrations (2001). Metodología MÉTRICA v.3. Retrieved November 5, 2004 from http://www.csi.map.es/csi/metrica3/index.html.
Spanoudakis, G., & Finkelstein, A. (1997). Reconciling requirements: A method for managing interference, inconsistency and conflict. Annals of Software Engineering, 3, 433-457.
Chapter V
Combining Requirements Engineering and Agents

Angélica de Antonio, Universidad Politécnica de Madrid, Spain
Ricardo Imbert, Universidad Politécnica de Madrid, Spain
Abstract

The concept of Agent is being used with different meanings and purposes in two separate fields of software engineering, namely Requirements Engineering and Agent-Oriented Software Engineering. After an introduction to Goal-Oriented Requirements Engineering (GORE) and its evolution into Agent-Oriented Requirements Engineering (AORE), this chapter provides a review of some of the main Agent-Oriented Software Engineering (AOSE) methodologies, focusing on their support for requirements modeling. The chapter then analyzes how both approaches to Agents relate to each other, what the differences between them are, and how they could benefit from each other. Problems that need to be addressed for a successful integration of both fields are identified and discussed, and recommendations are provided to advance in this direction.
Introduction

This chapter is devoted to the analysis of a growing tendency to combine requirements engineering and agents. This analysis is conducted from a double perspective.

On the one hand, agents have been recognized as an abstraction that can be useful for requirements engineering (RE). Specifically, the concept of agent can be considered a building block for structuring the description of an information system and the environment in which it will operate and with which it will interact. Agents are considered a convenient abstraction since they can be used for modeling different kinds of entities, such as software, hardware, humans, or devices. From this point of view, agents are a tool that can be used for engineering the requirements of any software system, be it agent-based or not. Agent-oriented requirements engineering (AORE) is considered an evolution of goal-oriented requirements engineering (GORE), both being social approaches to requirements engineering.

On the other hand, agent-oriented systems, also known as multi-agent systems (MAS), have been increasingly recognized over the last few years (from the mid-’90s) as just the kind of software systems that need the application of software engineering practices for their development, like any other software system, or even more so if we take into account that MAS are complex systems and are usually applied to complex domains. That is how the term Agent-Oriented Software Engineering (AOSE) was coined a few years ago, to describe a discipline that tries to define appropriate software engineering techniques and processes for these systems. The requirements of a MAS, like those of any other software system, need to be elicited, specified, analyzed, and managed, and the question that naturally arises is whether engineering requirements for a MAS is different from doing so for any other software system.

Considering the apparent dissociation between the agent concept in GORE-AORE and in AOSE, we decided to investigate to what extent it would be possible to combine both approaches. The second and third sections of this chapter describe the main approaches to the use of agents for requirements engineering, stating the principles underlying GORE and AORE. The fourth section analyzes how requirements engineering is currently being performed for agent-based systems. The last two sections reflect on the conclusions reached in our attempt to clarify how both approaches are related and how they could benefit from each other.
Goal-Oriented Requirements Engineering (GORE)

The initial requirements statements, which express customers’ wishes about what the system should do, are often ambiguous, incomplete, inconsistent, and usually expressed informally. Many requirements languages and frameworks have been proposed for the refinement of the initial requirements statements, making them precise, complete, and consistent.

Increasingly, information systems development occurs in the context of existing systems and established organizational processes. Some authors defend the need for an early requirements analysis phase, with the aim of modeling and analyzing stakeholder interests and how they might be addressed, or compromised, by various system-and-environment alternatives. This earlier phase of the requirements process can be just as important as that of refining initial requirements. However, most existing requirements techniques are intended mainly for the later phase. Considerably less attention has been given to supporting the activities that precede the formulation of the initial requirements. These “early-phase” requirements engineering activities consider how the intended system would meet organizational goals, why the system is needed, what alternatives might exist, what the implications of the alternatives are for various stakeholders, and how the stakeholders’ interests and concerns might be addressed. Early-phase RE activities have traditionally been done informally and without much tool support. Because early-phase RE activities have objectives and presuppositions that are different from those of the late phase, it seems appropriate to provide different modeling and reasoning support for the two phases.

The introduction of goals into the ontology of requirements models represented a significant shift in this direction. Previously, the world to be modeled consisted just of entities and activities. Goal analysis techniques have proved to be very useful, covering functional and non-functional goal analysis. Some of the most remarkable GORE approaches are EKD (Enterprise Knowledge Development) (Kavakli & Loucopoulos, 1998) and KAOS (Dardenne, van Lamsweerde, & Fickas, 1993; van Lamsweerde, Darimont, & Letier, 1998; van Lamsweerde, 2000).

In order to illustrate the relationship between agents and goals in GORE, we will briefly describe the KAOS approach. In KAOS, a requirement for the overall system is called a goal. A requisite is a requirement on the part of the dynamics controllable by a single agent or environment component. Overall goals are explicitly modelled, and goal refinement is then used to decompose goals into requisites via AND/OR graphs. Assignment of agents to roles is done using responsibility links. Therefore, the KAOS goal-oriented method consists of eliciting goals and refining them into subgoals until the latter can be assigned as the responsibilities of single agents such as humans, devices, and software. Domain properties and assumptions about the software environment are also used during the goal refinement process. The method supports the exploration of alternative goal refinements and alternative responsibility assignments of goals to agents.

As we can see, goals are the core concept in GORE, as they guide the process, while agents are understood as entities to which responsibilities for goals are assigned. The main benefit of GORE is the systematic derivation of requirements from goals, while goals provide the rationale for requirements. Alternative goal refinements and agent assignments allow the exploration of alternative system approaches.
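To illustrate the kind of structure just described, the following minimal Python sketch represents a KAOS-style AND/OR refinement with responsibility assignments to single agents; the goals, refinements, and agent names are hypothetical assumptions and are not taken from the KAOS literature.

```python
# Sketch of a KAOS-style goal model: goals are refined through AND/OR links
# until leaf requisites can be assigned to single agents. Goals, refinements,
# and agents are hypothetical.

goal_graph = {
    "Maintain safe train spacing": {
        "refinement": "AND",
        "subgoals": ["Track train positions", "Issue braking commands"],
    },
    "Track train positions":  {"refinement": None, "responsible": "Tracking software"},
    "Issue braking commands": {"refinement": None, "responsible": "On-board controller"},
}

def leaf_assignments(graph, goal):
    """Collect (requisite, responsible agent) pairs reachable from a goal."""
    node = graph[goal]
    if node["refinement"] is None:          # a requisite: assignable to one agent
        return [(goal, node["responsible"])]
    pairs = []
    for sub in node["subgoals"]:            # follow the AND/OR refinement
        pairs.extend(leaf_assignments(graph, sub))
    return pairs

print(leaf_assignments(goal_graph, "Maintain safe train spacing"))
```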
Agent-Oriented Requirements Engineering (AORE)

The logical next step in RE was to go from goal-oriented RE to agent-oriented RE (AORE). Viewing organizational and system components as cooperating agents offers a way of understanding their inter-relationships and how these would or should be altered as new systems are introduced. The main difference with GORE is that in goal-oriented approaches agents interact with each other non-intentionally, and they are not considered to have rich social relationships. In most goal-oriented frameworks in RE, intentionality is assumed to be under the control of the requirements engineer, who manipulates the goals and makes decisions on appropriate solutions to them. This may be more appropriate for late-phase RE, but not for the early phase.

As an example, we will describe i* (Yu, 1997), which is one of the most significant frameworks for AORE. It is used in contexts in which there are multiple parties with strategic interests that may be mutually reinforcing or conflicting. The rationale behind i* is that understanding the organizational context and rationales (the “whys”) that led up to system requirements can be just as important for the success of the system as stating what the system is supposed to do (system requirements). Without intentional concepts such as goals, one cannot easily deal with the “why” dimension in requirements. The central concept in i* is that of the intentional actor. Organizational actors are viewed as having intentional properties such as goals, beliefs, abilities, and commitments. Actors depend on each other for goals to be achieved, tasks to be performed, and resources to be furnished. The i* framework differs from previous goal-oriented approaches in that it highlights the strategic dimension of agent relationships and deemphasizes the operational aspects. i* consists of two main modeling components:

• The Strategic Dependency (SD) model is used to describe the dependency relationships among the various actors. The SD model describes the process in terms of intentional relationships among agents instead of the flow of entities among activities. These types of relationships cannot be expressed or distinguished in the non-intentional models used in most existing requirements modeling frameworks.
• The Strategic Rationale (SR) model is used to describe stakeholder interests and concerns, and how they might be addressed by various configurations of systems and environments. The SR model is a graph that describes how an agent accomplishes a goal in terms of subgoals, softgoals, resources, and tasks.
The approach adopted in i* is to introduce the notion of intentional dependencies to provide a level of abstraction that hides the internal intentional contents of an actor. The goals and criteria for such intentions will only be made explicit when reasoning about alternative configurations (in the Strategic Rationale model).
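As a minimal sketch of what a Strategic Dependency model records, the following Python fragment lists intentional dependencies among hypothetical actors, each classified by the kind of dependum (goal, task, resource, or softgoal); it illustrates the idea only, not the i* notation itself, and all actors and dependencies are invented.

```python
# Minimal sketch of an SD model: intentional dependencies among actors,
# each classified by what is delegated. Actors and dependencies are hypothetical.

from collections import namedtuple

Dependency = namedtuple("Dependency", "depender dependum kind dependee")

sd_model = [
    Dependency("Patient", "Appointment be scheduled", "goal", "Clinic staff"),
    Dependency("Clinic staff", "Patient record", "resource", "Records system"),
    Dependency("Hospital manager", "Waiting times be low", "softgoal", "Clinic staff"),
]

def dependencies_of(actor, model):
    """Who does a given actor depend on, and for what?"""
    return [(d.dependum, d.kind, d.dependee) for d in model if d.depender == actor]

print(dependencies_of("Clinic staff", sd_model))
```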
i* modeling is implemented on top of the Telos conceptual modeling language (Mylopoulos, Borgida, Jarke, & Kourbarakis, 1990), providing an extensible, object-oriented representational framework with classification, generalization, aggregation, attribution, and time. CARE (COTS-Aware Requirements Engineering), Tropos, and other methodologies have later been proposed that adopt the i* framework. CARE is a GORE and AORE approach that explicitly supports the use of COTS (Chung & Cooper, 2002). In the CARE approach, the goals of the agents may be functional (hardgoals) or non-functional (softgoals). CARE uses the i* notation for the description of goal and softgoal dependencies among agents. Tropos will be described in the next section. Another work derived from i* is described in Yu, Du Bois, Dubois, and Mylopoulos (1997), where the i* framework is used to support the generation and evaluation of organizational alternatives, while the ALBERT language (Agent-oriented Language for Building and Eliciting Real-Time requirements) (Dubois, Du Bois, Dubru, & Petit, 1994) is then used to produce a system requirements specification document. ALBERT supports the modeling of functional requirements in terms of a collection (or society) of agents interacting to provide the services necessary for the organization.

We can see that agents are the core concept in AORE, while goals are used to model the intentionality of agents. The concept of agent is used with the aim of producing a specification of system requirements.
Engineering Requirements for Agent-Oriented Software Engineering (AOSE)

As we have just seen, both in GORE and in AORE agents are abstract entities that can be used for modeling different kinds of components, such as software, hardware, humans, or devices. When we talk about MAS, however, the concept of agent takes a much narrower meaning, referring only to software entities. Agent orientation is emerging as a powerful new paradigm for computing that offers higher-level software abstractions. As agent technology matures, attention is turning to the development of a full set of complementary models, notations, and methods to cover the entire software life cycle of agent systems. That is how the term Agent-Oriented Software Engineering (AOSE) was coined. It should be mentioned that MAS are mostly implemented with object- or component-based technology and programming languages; at a detailed level, an agent is considered to be a complex object or component. Therefore UML has been widely considered a convenient starting point for an agent-oriented modeling language, as it is a standard for object-oriented software engineering. Taking into account that the object- and agent-oriented paradigms are highly compatible and that UML is extensible, an effort is under way to define the so-called AUML (Agent Unified Modelling Language).

Considering the apparent dissociation between the agent concept in GORE-AORE and in AOSE, we decided to investigate to what extent it would be possible to combine both approaches. A first step in this direction was an analysis of how AOSE methodologies currently approach RE. In this section we summarize our review of some of the main AOSE methodologies, focusing our attention on their support for requirements modeling.
Tropos

Tropos (Mylopoulos, Castro, & Kolp, 2000) is an agent-oriented software development methodology founded on i* and GRL (Goal-oriented Requirements Language), a successor of i*. Agent orientation is assumed throughout all the development stages. Tropos deals with modeling the needs and intentional aspects of the agent system, from early requirements analysis to late design, and focuses particularly on BDI agent architectures. The modeling elements in Tropos are: Actor, Goal, Plan, Resource, Dependency, Contribution, Decomposition, Capability, and Belief. There are two main analysis diagrams:

• Actor diagram: describes the actors, their goals, and the network of dependency relationships among actors (a kind of goal-based requirements business model).
• Goal diagram: shows the internal structure of an actor (goals, plans, resources, and the relationships among them).
Capability, Plan, and Agent interaction diagrams are used for detailed design, adopted from other modeling languages without changes (UML activity diagrams and AUML sequence diagrams). Tropos is a good example of how concepts coming from AORE are starting to be introduced into the AOSE world.
Gaia

Gaia (Wooldridge, Jennings, & Kinny, 2000) is an agent-oriented methodology that intends to be neutral with respect to the application domain and the agent architecture. However, Gaia imposes strong limitations on the kinds of systems that can be described and developed with it, related to the system's organizational structure, inter-agent relationships, agents' abilities, and services. One of its strengths is that it combines a dual view of the system, both at the micro-level (roles first and then agents) and at the macro-level (as a society of agents).

Although Gaia presupposes a requirements statement phase prior to the application of the methodology, it proposes a system specification activity in the analysis phase. Gaia first proposes the identification of the system roles in terms of their permissions (resources exploited by the role) and their responsibilities (functionalities pursued by the role). This perspective of the system goals is reflected in a Role Model. Then the Interaction Model is defined to capture the interactions between roles in a coarse-grained sense; its aim is not to describe any kind of message exchange between roles, but the identification of the required protocols of interaction. Both the Role Model and the Interaction Model are then refined in consecutive iterations. The Gaia methodology is very well regarded in the AOSE community, despite the scarce documentation and examples it has provided hitherto.
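The following minimal Python sketch, with hypothetical roles and protocols, illustrates the shape of a Gaia-style role schema (permissions, responsibilities, and protocols) and a coarse-grained interaction derived from shared protocols; it is an informal illustration under those assumptions, not Gaia's actual schema format.

```python
# Sketch of a Gaia-style role schema: each role is described by its permissions
# (resources it may use), its responsibilities, and the protocols it takes part
# in. Roles, protocols, and entries are hypothetical.

roles = {
    "StockMonitor": {
        "permissions": ["read stock database"],
        "responsibilities": ["detect items below reorder threshold"],
        "protocols": ["RequestRestock"],
    },
    "Purchaser": {
        "permissions": ["read supplier catalogue", "create purchase order"],
        "responsibilities": ["place restock orders"],
        "protocols": ["RequestRestock", "NegotiatePrice"],
    },
}

# Coarse-grained interaction model: roles linked by the protocols they share.
shared = set(roles["StockMonitor"]["protocols"]) & set(roles["Purchaser"]["protocols"])
print("StockMonitor and Purchaser interact via:", shared)
```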
MaSE

MaSE, the Multiagent Systems Engineering methodology, views agents merely as a convenient abstraction — intelligent or not — for building complex, distributed, and possibly dynamic systems (DeLoach, Wood, & Sparkman, 2001). This means that the system requirements analysis phase of MaSE concentrates on identifying simple software processes that interact with each other to cooperatively meet an overall system goal, setting aside the typical Artificial Intelligence perspective of agents, in which they are required to be autonomous, proactive, reactive, and social (Wooldridge & Jennings, 1995).

The process proposed by this methodology to understand the multiagent system begins with capturing system goals from detailed technical documents, user stories, or formalized specifications. Goals — always considered at system level — are first identified and structured through a Goal Hierarchy Diagram. Then the analyst translates goals into roles and associated tasks, by first drawing Use Cases and then restructuring them into Sequence Diagrams. Finally, each identified goal is associated with one specific role (although multiple goals may be mapped to a single role for convenience or efficiency). Roles are structured into a Kendall-style Role Model (Kendall, 1998) identifying the protocols among them. Task concurrency is depicted using a kind of finite state automaton called Concurrent Task Diagrams. The process and notations are supported by a CASE tool called agentTool (DeLoach, 2001).
Prometheus

System specification is a key phase of the Prometheus agent-oriented methodology (Padgham & Winikoff, 2002a). The methodology proposes an iterative life cycle in which determining the system’s environment and determining its goals and functionalities are crucial in the earlier iterations. Prometheus encompasses concepts, notations, artefacts, and a commented process to produce the system requirements.

Starting from data arising from discussions with clients, managers, and other stakeholders, Prometheus determines how the system is going to interact with the environment — usually a changing and dynamic one. That means identifying “percepts” (raw data available to the agent system) and “actions” (mechanisms for affecting the environment). The definition of the system’s environment is completed by identifying any important shared data sources.

In parallel with the environment specification, Prometheus starts with the description of the goals and functionalities, in a broad sense, of the future agent system as a way of reaching a first understanding of the system. Then the goals, identified originally and continuously refined, are associated with narrow functionalities (what other methodologies call “roles”), linking each functionality to some system goal. Each goal should result in one or more functionalities (Padgham & Winikoff, 2002b). A textual structure is proposed to describe functionalities and their associated goals, percepts, actions, data used, and so forth. Finally, to complement the partial view of the system given by individual functionalities, Prometheus proposes the definition of some scenarios, very similar to use cases, to give a more holistic view of system processing. The multiagent system engineering process proposed by Prometheus is supported by two different tools, the JACK Development Environment (JDE) and the Prometheus Design Tool (PDT); unfortunately, neither of them covers the system specification phase.
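As an informal illustration of the textual structure mentioned above, the following Python sketch describes one hypothetical functionality together with its goals, percepts, actions, and data; the field names and the example are assumptions made for illustration and do not reproduce the Prometheus descriptor templates exactly.

```python
# Sketch of a Prometheus-style functionality descriptor: each functionality is
# linked to system goals and to the percepts, actions, and data it involves.
# Field names and the example functionality are hypothetical.

functionality = {
    "name": "Handle book return",
    "goals": ["Keep loan records accurate"],
    "percepts": ["book scanned at return desk"],          # raw data from the environment
    "actions": ["update loan record", "print receipt"],   # ways of affecting the environment
    "data_used": ["loan database"],
}

# A simple completeness check in the spirit of linking every functionality
# to at least one system goal.
assert functionality["goals"], "every functionality should trace to a goal"
print(f"{functionality['name']} supports goals: {', '.join(functionality['goals'])}")
```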
Cassiopeia

Cassiopeia defines itself as an agent-oriented, role-based multiagent systems design methodology (Collinot, Drogoul, & Benhamou, 1996). It is neither targeted at a specific type of application nor does it require a given agent architecture, although it has mostly been applied to the robot soccer team domain (Drogoul & Zucker, 1998). In fact, it is particularly suitable when the designer aims to make agents behave cooperatively.

Cassiopeia proposes a development process based on iterative refinement: the same five steps are executed in each iteration until the system is finally constructed. Therefore, the system specification is not directly related to a specific phase of the methodology, but rather to the activities carried out through the first iterations of the development. Basically, the first step concentrates on the identification of the individual behaviors of the system, represented as roles (both the function that an agent is achieving at a given time and the position that it occupies at that time in the group of agents). Cassiopeia does not specify any particular technique and points out that approaches from functional or object-oriented methods or methodologies could be adopted at this stage. Steps two and three consist of analyzing the structure of the organization based on the dependencies between the individual roles being considered; the kinds of dependencies to be identified include coordination, simultaneous or sequential facilitation, conditioning, and so forth. Finally, steps four and five define, on the one hand, the potential groups that may arise in the system and, on the other hand, the organizational roles that will enable the agents to manage the formation and dissolution of the defined groups.
MESSAGE/UML

MESSAGE/UML (Caire, Leal, Chainho, Evans, Garijo, Gomez, Pavon, Kearney, Stark, & Massonet, 2000) is an AOSE methodology that covers MAS analysis and design. It contributes some concepts at the agent knowledge level, together with diagrams that extend UML class and activity diagrams. It proposes the following diagrams: Organization diagram, Goal diagram, Task diagram, Delegation diagram, Workflow diagram, Interaction diagram, and Domain diagram. All of them are defined as extensions of the class diagram, except the Task diagram, which extends the activity diagram. Concepts are also described textually by schemas. The internal architecture of an agent is assumed to be based on one of the models derived from cognitive psychology, but MESSAGE is almost independent of the internal agent architecture. The analysis model includes several views, or sub-sets:

• Organization view: shows Agents, Organizations, Roles, and Resources, and coarse-grained relationships among them.
• Goal/task view: shows goals, tasks, situations, and dependencies among them.
• Agent/role view: one for each agent.
• Interaction view: one for each interaction.
• Domain view: domain-specific concepts and relations.

The analysis process is conducted by stepwise refinement. The Organization and Goal/task views are created first, and they act as inputs to create the Agent/role and Domain views. Finally, the Interaction view is based on the previous ones. Level 0 focuses on entities and their relationships; the internal structure and behavior of the entities are refined in the subsequent levels of development.
AUML

The Agent Unified Modelling Language (AUML) (Bauer, Müller, & Odell, 2001) is a notation for multiagent systems development. AUML aims to become a de facto standard for specifying this kind of system. Its authors argue that UML provides an insufficient basis for modeling agents and agent-based systems, since UML does not cover the proactive and social agent dimensions. AUML intends to support the whole software engineering process, including planning and analysis. With this aim, when possible, AUML reuses notations coming from UML, such as interaction, state, and activity diagrams, or use cases, but it also adds new ones, covering issues such as multiagent and single-agent modeling, the specification of goals and softgoals, the modeling of social aspects, the identification of temporal constraints, and environment modeling. AUML is a promising notation for agent and multiagent systems specification and development, and an interesting effort is being carried out to propose a methodology supported by this language.
MAS-CommonKADS

MAS-CommonKADS (Iglesias, Garijo, González, & Velasco, 1998) is an agent-oriented methodology that extends the knowledge engineering methodology CommonKADS with techniques from object-oriented and protocol engineering methodologies. The methodology proposes the development of seven models:

• Agent Model, which describes the characteristics of each agent;
• Task Model, which describes the tasks that the agents carry out;
• Expertise Model, which describes the knowledge needed by the agents to achieve their goals;
• Organization Model, which describes the structural relationships between agents (software agents and/or human agents);
• Coordination Model, which describes the dynamic relationships between software agents;
• Communication Model, which describes the dynamic relationships between human agents and their respective personal assistant software agents; and
• Design Model, which refines the previous models and determines the most suitable agent architecture for each agent and the requirements of the agent network.
The analysis models are comprehensive, but the method lacks a unifying semantic framework and notation.
Discussion
Our study of GORE and AORE on the one hand and AOSE methodologies on the other has revealed some fundamental problems that need to be addressed first if we want to combine both approaches successfully.
Different Meanings for Agent
The first conclusion of our survey is that researchers in AORE and researchers in AOSE come from different backgrounds and use different terminologies and concepts. The most remarkable distinction is in the meaning of the word agent. The former consider agents-in-the-world, that is, people, organizations, existing systems, and other actors who may be involved in or affected by the system under development. The latter only consider agents-as-software, that is, rather complex software components. While agents-as-software and agents-in-the-world share conceptual features, their respective abstractions are introduced for different reasons and with different purposes. This ambiguity
in the terminology can be very confusing. It would be better if the word actor were used for real-world things and the word agent for software things.
Different Meanings for Requirement
Another significant example of the misunderstandings between the two fields is related to the concept of requirement. Many of the authors of AOSE methodologies state that their focus is mainly on the analysis and design stages, assuming that requirements are given, at least informally, as an input. Some seem not to be concerned at all with requirements. Only the methodologies that derive from the goal-oriented RE tradition seem to recognize that the analysis of the system is mainly a requirements analysis task. We agree with several authors that it is necessary to differentiate between MAS requirements and agent requirements. The problem is what is understood by MAS and agent requirements. Current AOSE methodologies consider that, at the level of the MAS, requirements concern the dynamics of interaction and cooperation patterns among the roles. The main drawback of this approach is that the identification of roles is the first step, and it is not realistic to assume that roles can be identified without knowing what is required from the system. Using this strategy, agents will probably derive more or less directly from real-world concepts, and this may not be the optimal solution. In our opinion, the choice of a certain set of roles (agents) should be considered as a design decision at the system level, once the requirements for the MAS have been established. Then, at the level of individual agents, requirements should be concerned with agent behavior. The existing AOSE methodologies mainly deal with the specification of requirements for the system as a whole and pay less attention to the specification of requirements for each individual agent. One could argue that specifying an agent's requirements is like specifying requirements for each object in an object-oriented system. That does not seem to make much sense, so why does it make sense for agents? We believe that the reason is that an object is a relatively simple software abstraction while an agent is a quite complex one. The main difference is the degree of complexity. The next subsection goes deeper into this discussion.
The Need for Specific Agent Requirements
An object has just attributes and methods, implying that it knows things and is able to perform tasks on demand. An agent also knows things and is able to perform tasks, so where is the difference? A first difference is that an attribute is a very simple knowledge representation mechanism, whereas an agent can make use of much more complex knowledge representations. Object methods, on the other hand, are usually small. The specification of the internal design of each object's methods is usually ignored by OO methodologies. Interaction diagrams, the most classical OO design technique, show the interaction among objects but hide the details of method execution. One could use the classical techniques of structured design for this, like control flow diagrams or pseudo-
code. The tasks that an agent can perform, however, are much more complex and cannot be specified with a simple control flow diagram. Moreover, agents have autonomy over their behavior; that is, they may decide to attend a request or not (“objects do it for free; agents do it for money” (Jennings, Sycara, & Wooldridge, 1998)). An additional difference between objects and agents is that an object has no perceptual capabilities. We could say that objects are not “situated,” since they are not able to perceive their environment. Agents need to perceive their environment, and they also need to decide upon the best task to be performed at a given time, or they can even decide not to act at all. An agent is a rational system that is able to make decisions and combine reactive, proactive, and social behaviors (Wooldridge, 1997). All these additional capabilities of agents, together with their higher level of complexity, make it reasonable to consider the need for a specific agent requirements analysis task. We can make an analogy with the difference between system requirements and software requirements in traditional software engineering. In the development of a complex system with hardware, software, and human components, there is a system requirements analysis stage that concludes with the identification of the several components that will make up the system and the allocation of the system requirements to the different components. Then, for the software components of the system, an additional software requirements analysis stage is required. AOSE methodologies are mainly focused on the first stage: system requirements and the allocation of system requirements to agents in the form of responsibilities. Then each agent, as a complex software component, should be subject to individualized requirements analysis and design. Agent responsibilities derived from MAS analysis should be complemented with other categories of requirements. In agent-oriented research, agents have been characterized by four essential properties (Wooldridge & Jennings, 1995) that can be considered as high-level requirement categories for an agent: •
Autonomy: An agent has control over its own actions and internal states. Its behavior is not fully predictable or controllable. An agent can act without direct intervention from humans.
•
Sociability: An agent can interact with other agents (artificial or humans) to accomplish its goals, complete its tasks, and help others. This may require communication, coordination, cooperation, competition, and/or conflict management capabilities.
•
Reactivity: An agent perceives its environment and reacts appropriately to changes in this environment.
•
Proactivity: An agent can exhibit goal-directed behaviour, taking the initiative in its actions.
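To ground the contrast drawn in this section between objects and agents, the following sketch (illustrative only; the classes, the negotiation rule, and the figures are invented) juxtaposes a passive object, which acts only when its methods are called, with a minimal perceive-decide-act agent skeleton that exhibits autonomy, reactivity, proactivity, and a rudimentary form of sociability.

```python
class Account:
    """A passive object: it acts only when a method is invoked."""
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):          # "objects do it for free"
        self.balance -= amount
        return amount


class NegotiatingAgent:
    """A minimal agent skeleton: it perceives, decides, and may refuse."""
    def __init__(self, goals):
        self.goals = goals               # proactivity: goal-directed behaviour
        self.beliefs = {}                # richer internal state than plain attributes

    def perceive(self, environment):     # reactivity: the agent is "situated"
        self.beliefs.update(environment)

    def handle_request(self, request, offer):
        # Autonomy: the agent decides whether to attend to the request at all
        # ("agents do it for money").
        if offer < self.beliefs.get("minimum_price", 10):
            return "refuse"
        return "accept"

    def step(self, environment, inbox):
        self.perceive(environment)
        # Sociability: replies to other agents' requests before pursuing goals.
        replies = [self.handle_request(r, o) for r, o in inbox]
        return replies or self.goals[:1]  # otherwise take initiative on a goal


agent = NegotiatingAgent(goals=["sell_surplus_stock"])
print(agent.step({"minimum_price": 5}, [("buy", 3), ("buy", 8)]))  # ['refuse', 'accept']
```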
Social ability is considered to be one of the most fundamental properties of agents, because in a MAS the overall functionality and properties of the system emerge out of the interplay between the agents. The social level in the analysis and design of MAS is concerned with phenomena such as cooperation, coordination, conflicts, and competition. For such phenomena it is
necessary to describe the structures and mechanisms that must be present inside the agents and the overall system to produce the desired type of behaviour. Although the social capabilities can be considered the most relevant source of requirements for an agent, from the MAS-level perspective we should not forget the other three essential characteristics. Moreover, additional properties have been proposed by other authors as typical of agents, and some or all of them could be required for a given agent: •
Learning: An agent that is capable of improving its behaviour through its experience.
•
Mobility: An agent that is able to travel across different platforms.
•
Embodiment: An agent that has any kind of physical representation in a virtual environment.
•
Character-Personality: An agent that is endowed with individual traits that influence its behaviors and reasoning processes.
•
Reflectivity: An agent that is able to reflect on its own operations.
•
Emotion: An agent that is able to react emotionally to external or internal stimuli.
We would like to see in the future more elaborate taxonomies, including these properties and possibly others, to guide the requirements engineer in the identification of the agent’s requirements.
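As a first, deliberately simple illustration of the kind of taxonomy called for here, the fragment below turns the properties listed in this section into a per-agent checklist. The prompt questions are our own paraphrases and are not drawn from any published taxonomy.

```python
# Illustrative checklist derived from the agent properties discussed above.
AGENT_REQUIREMENT_CATEGORIES = {
    "autonomy":     "Which decisions may the agent take without human intervention?",
    "sociability":  "Which agents or humans must it communicate, coordinate, or compete with?",
    "reactivity":   "Which environmental events must it perceive, and how quickly?",
    "proactivity":  "Which goals should it pursue on its own initiative?",
    "learning":     "Must its behaviour improve with experience?",
    "mobility":     "Must it migrate across platforms?",
    "embodiment":   "Does it need a physical or virtual representation?",
    "personality":  "Should individual traits shape its behaviour?",
    "reflectivity": "Must it reason about its own operation?",
    "emotion":      "Should it react emotionally to stimuli?",
}

def elicit_agent_requirements(agent_name, answers):
    """Return only the categories for which a requirement has been stated."""
    return {cat: answers[cat] for cat in AGENT_REQUIREMENT_CATEGORIES if cat in answers}

print(elicit_agent_requirements("Scheduler",
      {"autonomy": "May reorder jobs without approval",
       "reactivity": "Must detect machine failure within 1 s"}))
```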
Conclusion
In an attempt to contribute toward the objective of combining requirements engineering and agents, we would like to conclude with a list of recommendations derived from the previous reflections: •
It would be better if different terms were used for real-world things and software things. We propose to use the word actor for the first and agent for the second.
•
AOSE methodologies should recognize that the analysis of the system is mainly a requirements analysis task, and should abandon the unrealistic assumption that a list of requirements will be provided beforehand.
•
Software agents do not need to derive directly from real-world concepts. Therefore, the choice of a certain set of roles (agents-as-software) should be considered as a design decision at the system level, once the requirements for the MAS have been established by considering agents-in-the-world from the perspective of AORE.
•
It is necessary to differentiate between MAS-level requirements and agent-level requirements. The complexity of agents justifies considering them as systems whose requirements need to be analyzed carefully.
•
AOSE methodologies should pay more attention to the specification of requirements for each individual agent. In this respect elaborate taxonomies are needed to guide the requirements engineer in the identification of the agent’s requirements.
•
The specification of the social abilities of agents and the interaction among them can also benefit from the techniques and models proposed by AORE.
In addition, we feel that the gap between the models obtained with AOSE methodologies and the design of a complex rational agent is still too big. Some possible architectures have been proposed for individual agents, but the designer is left alone with the decision of selecting the best architecture for each agent in the system. And it is obvious that this decision should be based on the set of requirements that those agents have to satisfy.
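As a closing illustration of that last point, the sketch below shows how an agent's requirement profile could drive a first-cut choice of internal architecture. The mapping rules are invented placeholders intended only to show the shape of such a decision aid, not a validated selection method.

```python
# Illustrative only: a naive rule-of-thumb mapping from an agent's requirement
# profile to a candidate internal architecture.
def suggest_architecture(requirements):
    """requirements: set of requirement categories identified for one agent."""
    deliberative = {"proactivity", "learning", "reflectivity"}
    reactive = {"reactivity"}
    if requirements & deliberative and requirements & reactive:
        return "hybrid (layered) architecture"
    if requirements & deliberative:
        return "deliberative architecture (e.g., a BDI-style model)"
    if requirements & reactive:
        return "reactive architecture"
    return "no recommendation: requirements profile too sparse"

print(suggest_architecture({"reactivity", "proactivity", "sociability"}))
# hybrid (layered) architecture
```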
References
Bauer, B., Müller, J. P., & Odell, J. (2001). Agent UML: A formalism for specifying multiagent interaction. In P. Ciancarini & M. Wooldridge (Eds.), Agent-oriented software engineering (pp. 91-103). Berlin, Germany: Springer-Verlag.
Caire, G., Leal, F., Chainho, P., Evans, R., Garijo, F., Gomez, J., Pavon, J., Kearney, P., Stark, J., & Massonet, P. (2001). Agent oriented analysis using MESSAGE/UML. In M. Wooldridge, P. Ciancarini, & G. Weiss (Eds.), Second International Workshop on Agent-Oriented Software Engineering (AOSE-2001) (pp. 101-108).
Chung, L., & Cooper, K. (2002). Defining agents in a COTS-aware requirements engineering approach. Proceedings of the Seventh Australian Workshop on Requirements Engineering.
Collinot, A., Drogoul, A., & Benhamou, P. (1996). Agent oriented design of a soccer robot team. Proceedings of the Second International Conference on Multi-Agent Systems (ICMAS'96), 41-47.
Dardenne, A., van Lamsweerde, A., & Fickas, S. (1993). Goal-directed requirements acquisition. Science of Computer Programming, 20, 3-50.
DeLoach, S. A. (2001). Analysis and design using MaSE and agentTool. Proceedings of the 12th Midwest Artificial Intelligence and Cognitive Science Conference (MAICS 2001).
DeLoach, S. A., Wood, M. F., & Sparkman, C. H. (2001). Multiagent systems engineering. International Journal of Software Engineering and Knowledge Engineering, 11(3), 231-258.
Drogoul, A., & Zucker, J. D. (1998). Methodological issues for designing multi-agent systems with machine learning techniques: Capitalizing experiences from the RoboCup challenge. (Tech. Rep. No. LIP6 1998/041). Laboratoire d'Informatique de Paris 6.
Dubois, E., Du Bois, P., Dubru, F., & Petit, M. (1994). Agent-oriented requirements engineering: A case study using the Albert language. In A. Verbraeck, H. G. Sol, & P. W. G. Bots (Eds.), Proceedings of the Fourth International Working Conference on Dynamic Modelling and Information System – DYNMOD-IV. Delft University Press.
Iglesias, C. A., Garijo, M., González, J. C., & Velasco, J. R. (1998). Analysis and design of multiagent systems using MAS-CommonKADS. In M. P. Singh, A. Rao, & M. J. Wooldridge (Eds.), Intelligent Agents IV (Lecture Notes in Artificial Intelligence, Vol. 1365, pp. 313-326). Berlin, Germany: Springer-Verlag.
Jennings, N. R., Sycara, K., & Wooldridge, M. (1998). A roadmap of agent research and development. Journal of Autonomous Agents and Multiagent Systems, 1(1), 7-38.
Kavakli, V., & Loucopoulos, P. (1998). Goal driven business analysis: An application in electricity deregulation. CAiSE'98, Pisa, Italy.
Kendall, E. A. (1998). Agent roles and role models: New abstractions for multiagent system analysis and design. Proceedings of the International Workshop on Intelligent Agents in Information and Process Management, Bremen, Germany.
Mylopoulos, J., Borgida, A., Jarke, M., & Kourbarakis, M. (1990). Telos: A language for representing knowledge about information systems. ACM Transactions on Information Systems, 8(4), 325-362.
Mylopoulos, J., Castro, J., & Kolp, M. (2000). Tropos: A framework for requirements-driven software development. In J. Brinkkemper & A. Solvberg (Eds.), Information systems engineering: State of the art and research themes (Lecture Notes in Computer Science, pp. 261-273). Berlin, Germany: Springer-Verlag.
Padgham, L., & Winikoff, M. (2002a). Prometheus: A methodology for developing intelligent agents. Proceedings of the Third International Workshop on Agent Oriented Software Engineering, at AAMAS'2002, Bologna, Italy.
Padgham, L., & Winikoff, M. (2002b). Prometheus: A pragmatic methodology for engineering intelligent agents. Proceedings of OOPSLA 2002, Seattle, WA.
van Lamsweerde, A., Darimont, R., & Letier, E. (1998). Managing conflicts in goal-driven requirements engineering [Special issue]. IEEE Transactions on Software Engineering, 24(11), 908-926.
van Lamsweerde, A. (2000). Requirements engineering in the year 00: A research perspective. Invited paper, 22nd International Conference on Software Engineering (ICSE 2000).
Wooldridge, M. (1997). Agent-based software engineering. IEE Proceedings Software Engineering, 144(1), 26-37.
Wooldridge, M., & Jennings, N. R. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2), 115-152.
Wooldridge, M., Jennings, N. R., & Kinny, D. (2000). The Gaia methodology for agent-oriented analysis and design. Autonomous Agents and Multi-Agent Systems, 3(3), 285-312.
Yu, E. (1997). Towards modelling and reasoning support for early-phase requirements engineering. Proceedings of the IEEE International Symposium on Requirements Engineering (RE'97).
Yu, E., Du Bois, P., Dubois, E., & Mylopoulos, J. (1997). From organization models to system requirements: A cooperating agents approach. In M. P. Papazoglou & G. Schlageter (Eds.), Cooperative information systems: Trends and directions (pp. 293-312). Academic Press.
Chapter VI
Maturing Requirements Engineering Process Maturity Models Pete Sawyer, Lancaster University, UK
Abstract
The interest in Software Process Improvement (SPI) in the early 1990s stimulated tentative work on parallel models for Requirements Engineering (RE) process improvement in the late 1990s. This chapter examines the role of SPI and the implications of the exclusion of explicit support for RE in the most widely used SPI models. The chapter describes the principal characteristics of three RE-specific improvement models that are in the public domain: the Requirements Engineering Good Practice Guide (REGPG), the Requirements Engineering Process Maturity Model (REPM), and the University of Hertfordshire model. The chapter examines the utility of these models and concludes by considering the lessons learned from industrial pilot studies.
Introduction
The risks posed to software development projects by weak requirements engineering (RE) practice have become widely recognized during the last decade. This has spawned a great deal of investment in RE methods, tools, and training by practitioner organizations and in RE research by the wider software and systems engineering communities.
The focusing of attention on RE during the early 1990s coincided with the deployment of software process improvement (SPI) that was stimulated by Humphrey’s pioneering work in the 1980s (Humphrey, 1989). However, a European survey of organizations engaged in SPI programs during this period (Ibanez & Rempp, 1996) confirmed that the SPI models then available offered no panacea for RE problems. Indeed the organizations consulted identified requirements specification and the management of customer requirements as the principal problem areas in software development that they faced. Even enthusiastic adopters of SPI programs found that while SPI brought them significant benefits, their problems with handling requirements remained stubbornly hard to solve. The Software Engineering Institute’s Capability Maturity Model for Software (SWCMM) (Paulk, Curtis, Chrissis, & Weber, 1993), which was becoming widely deployed at this time, touched on RE practices but provided little specific guidance. To redress this, the Requirements Engineering Adaptation and IMprovement for Safety and dependability (REAIMS) project conducted the first systematic application of the principles of SPI specifically for RE. This resulted in the publication of the Requirements Engineering Good Practice Guide (REGPG) (Sawyer, Sommerville, & Viller, 1999; Sommerville & Sawyer, 1997) in 1997. This chapter reviews the state-of-the-art of process improvement for RE. It starts by reviewing the background to process improvement in the software and systems engineering industry. It then considers the nature of RE processes and the pressures and trends that have merged in recent years. It argues that for sociotechnical systems, RE practice needs to be particularly strong. It then reviews three RE process improvement methods and examines the extent to which they have been validated. The chapter concludes by summarizing the options open to organizations seeking to systematically improve their RE processes.
SPI Models and Standards
Humphrey's pioneering work on Software Process Improvement (SPI) in the 1980s (Humphrey, 1989) led to the development of the Capability Maturity Model for Software (SW-CMM) at the Software Engineering Institute under the sponsorship of the United States Department of Defense. Humphrey's work reflected a realization that the piecemeal adoption of better methods and tools would not deliver the improvements in software quality increasingly demanded by customers. Rather, the whole development lifecycle needed to be addressed in order to identify weak areas and focus improvement efforts. From the customer's perspective, SPI allows software contractors to be assessed against a common model and provides a stimulus for contractors to increase product quality and meet cost and delivery targets. From the contractor's perspective, SPI represents a strategic organizational tool for containing costs and increasing market share. The SW-CMM does this by defining a generic model of the software development process structured around five maturity levels. Process maturity represents the degree to which a process is defined, managed, measured, controlled, and effective (Paulk,
Figure 1. The five-level SW-CMM process maturity model (Level 1: Initial; Level 2: Repeatable; Level 3: Defined; Level 4: Managed; Level 5: Optimizing)
Curtis, Chrissis, & Weber, 1993). The more mature a process, the more it is possible to accurately forecast and meet targets for cost, time of delivery, and product quality. As a process becomes more mature, the range within which results fall is narrowed. The emphasis moves from understanding the software process through exerting control over the process to achieving on-going improvement. The SW-CMM maturity levels range from level 1 (initial), which is an ad hoc, risky process, to level 5 (optimizing), where a process is robust, adaptive, and subject to systematic tuning using experiential data (Figure 1). Each maturity level has a focus that is defined by a number of key process areas (KPAs). The KPAs effectively set capability goals that should be met if the supporting key practices are adopted and standardized. The set of key practices prescribed for each KPA define how the KPA can be satisfied and, since SPI is concerned with organizational maturity, institutionalized. SW-CMM level 2 (Repeatable) is interesting because its focus is on project management and is concerned with gaining control of the process. It is also the one level to explicitly address requirements engineering. It does this by defining Requirements Management as a level 2 KPA (the others are software project planning, software project tracking and oversight, software subcontract management, software quality assurance, and software configuration management). Since achieving level 2 is the initial target of almost all SW-CMM-led improvement programs, the SW-CMM has had the effect of greatly raising awareness of requirements management. It has stimulated the market for support tools and has made it more widely (though still far from universally) practiced. In this respect, therefore, it has been enormously beneficial. However, implementing requirements management is hard (Fowler, Patrick, Carleton, & Merrin, 1998), in part because it exposes weaknesses elsewhere in the requirements process. For example, one of the practices mandated for the requirements management KPA is the allocation of requirements to software subsystems. Arriving at the stage where a set of requirements are ready for allocation (by successfully eliciting, validating, negotiating, and prioritizing them) is itself a complex and error-prone process for which the SW-CMM provides no explicit help.
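The structure just described (maturity levels containing key process areas, each satisfied by a set of key practices) can be sketched as follows. This is an illustration rather than an SEI artefact: the level-2 KPA names and the single key practice shown are taken from the text above, and the remaining key practice lists are omitted for brevity.

```python
# Maturity levels nest key process areas (KPAs), which are satisfied by key practices.
SW_CMM_LEVEL_2 = {
    "level": 2,
    "name": "Repeatable",
    "focus": "project management: gaining control of the process",
    "key_process_areas": {
        "Requirements Management": [
            "Allocate system requirements to software subsystems",
        ],
        "Software Project Planning": [],           # key practices omitted for brevity
        "Software Project Tracking and Oversight": [],
        "Software Subcontract Management": [],
        "Software Quality Assurance": [],
        "Software Configuration Management": [],
    },
}

def kpa_satisfied(adopted_practices, required_practices):
    """A KPA's capability goal is met when all its key practices are adopted."""
    return all(p in adopted_practices for p in required_practices)

adopted = {"Allocate system requirements to software subsystems"}
print(kpa_satisfied(adopted,
                    SW_CMM_LEVEL_2["key_process_areas"]["Requirements Management"]))  # True
```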
The omission of detailed advice for the systematic improvement of requirements processes is true of all SW-CMMs (including the SW-CMMI (SEI, 2002)), of the draft ISO/IEC 15504 (Garcia, 1997; Konrad & Paulk, 1995) international standard for software process assessment, and of the ISO 9001-3 quality standard (Paulk, 1994). However, help for requirements processes is not entirely lacking. There are software and system engineering standards such as ESA PSS-05 (Mazza, Fairclough, Melton, De Pablo, Scheffer, Stevens, Jones, & Alvisi, 1994), which provide valuable and explicit guidance on requirements practice. However, these do not address systematic process improvement. They do not provide a method for assessing the weak points in an RE process or map out a route for the incremental adoption of their recommended practices. Few organizations can afford to make revolutionary changes to their requirements processes, so they need help in planning and controlling evolutionary improvement so as to minimize the risk that inevitably accompanies change. This is the great strength of SPI.
The RE Process
Perhaps the most orthodox model of the RE process is that represented by the following three IEEE standards (it is interesting to note that the RE process is defined implicitly in terms of the documents that are products of the process): •
IEEE std 1362-1998 Guide for Information Technology - System Definition Concept of Operations (ConOps) Document (IEEE, 1998a)
•
IEEE std 1233-1998 Guide for Developing System Requirements Specifications (IEEE, 1998b)
•
IEEE std 830-1998 Recommended Practice for Software Requirements Specifications (IEEE, 1998c)
This process begins with scoping a problem and, in very broad terms, the solution (the concept of operations, or ConOps for short). This is followed by a process in which requirements are elicited from customers, validated by developers and other technical specialists, and constrained by factors associated with the system's environment. The product of this is a System Requirements Specification document that defines the requirements for the overall system. Following this, the system requirements are allocated to components that will be configured in a system architecture that satisfies the system requirements. As part of this, subsets of system requirements are allocated to software components. For each software component, a further round of analysis is performed in which a set of software requirements are derived from the allocated system requirements. These are documented in a Software Requirements Specification (SRS). This forms the definitive set of requirements that the component must satisfy and is sufficiently detailed to allow development to commence. This process has evolved to deal with large custom systems comprising both hardware and software that are developed for single customers. These are typically developed
using a supply chain of sub-contractors responsible for delivering system components and who are managed in a hierarchical relationship with, at its root, a main contractor. Such projects still represent a substantial proportion of the economic activity in the software industry, particularly since heterogeneous systems have become increasingly software-intensive. Despite this their relative significance has declined during recent decades. This is not so much due to a decline in this sector of the industry as increases elsewhere. These include the booming market for retail software products and the development of the Internet as a medium for business and entertainment. As the industry has changed shape, the value of orthodox RE has been increasingly called into question. Clearly an RE process optimized for large heterogeneous systems is unlikely to be easily applied to, for example, small short-duration projects. Agile development methodologies, of which eXtreme Programming (XP) (Beck, 1999) is perhaps the best known, can be seen as a reaction against orthodox software engineering and, by implication, orthodox RE. Agile methodologies deviate from orthodox RE by eschewing detailed analysis, modeling, and documentation where a project might move straight from a concept of operations-like activity to development. The promise of agile methodologies to be lightweight and reactive is a seductive one. Although they are not entirely incompatible with SPI (Paulk, 2001), they take what is essentially a programming perspective on system development; but for many domains, this is inappropriate. For example, almost all large businesses are dependent on their computer systems, yet programmers are seldom equipped to find business solutions. Many systems are embedded within organizational contexts to support and manage business processes enacted by people and software. These sociotechnical systems are typically too critical to risk getting wrong, too sensitive to how the actors interact (which may be dependent on, for example, tacit understanding between the actors) to be easily understood, and too complex to make refactoring a viable option. Sociotechnical systems need careful appraisal of change options. They need careful analysis of the problem context and elicitation of user requirements. They need an overall system specification that documents the validated customer and user requirements and defines the new system in terms of its function, its quality attributes, and its context within its environment. They need responsibility for the satisfaction of the system requirements to be carefully allocated to appropriate software and human “components” (actors). And they need these components to be specified so that development work, which increasingly takes the form of selecting and integrating off-the-shelf components, can proceed. In these terms, most of the activities and products of orthodox RE processes still have a vital role to play in ensuring that systems are developed that meet their users’ real needs. However it would be naive to think that simply applying IEEE stds 1362, 1233, and 830 would guarantee success. There are many practices, techniques, methods, and tools that may be deployed to aid RE. Although they embody much accumulated wisdom, RE standards can only set out the general principles or give detailed guidance for particular activities or documents. They offer no aid for selecting appropriate methods (for example) or for designing an RE process optimized for a particular organization.
Recognition of this problem was the motivation for the RE process improvement work described in the next section.
Process Improvement for RE
Selection of the RE process improvement models for description below has been based on the following criteria: they should include advice on RE practice, they should present this advice within the framework of a maturity model, and they should provide a method for assessing existing processes. This set of criteria omits other valuable and pragmatic work (see below) that can be used for RE process improvement. However, the criteria have the effect of isolating work designed specifically to help organizations integrate RE process improvement within a general SPI program. There are currently three RE process improvement models that meet the criteria: the Requirements Engineering Good Practice Guide (REGPG), the University of Hertfordshire model, and the Requirements Engineering Process Maturity Model (REPM). Other sources of improvement advice exist. In particular, Young (2001) and Wiegers (1999) are excellent textbooks based around sets of recommended RE practices and include practical advice on how organizations can use them to improve their RE processes. These are not reviewed further here simply because they do not include a process maturity model or assessment method. However, they contain much wisdom and practical advice for organizations that do not need or wish to undertake a formal, calibrated improvement process. The review of improvement models is mainly concerned with the improvement framework and assessment rather than the validity of the practices that the models recommend. All three models have derived the set of RE practices that they recommend from a variety of sources, including practices widely used in industry, practices recommended or mandated in standards, and practices learned from direct experience. However, as Wiegers (1999) warns, "not all … have been endorsed as industry best practices." Nevertheless, the developers of each model have been careful not to recommend practices that have not been validated in some form and shown to be practical and practicable for practitioners.
The Requirements Engineering Good Practice Guide
The REGPG (Sommerville & Sawyer, 1997) was the first public-domain process improvement and assessment model for RE. Like the SW-CMM, the REGPG uses an improvement framework with several process maturity levels (Figure 2). The REGPG maturity levels are designed to mirror the SW-CMM levels. However, at the time of REGPG's design, the adoption of RE practices across the industry was patchy (El Eman & Madhavji, 1995; Forsgren & Rahkonen, 1995), and there was insufficient empirical evidence for the existence of requirements processes that could be characterized beyond defined (level 3). The characteristics of highly mature RE processes were essentially unknown, so the REGPG mirrors only the SW-CMM's bottom three levels (initial, repeatable, and defined).
Figure 2. The three-level REGPG process maturity model (Level 5: Optimizing; Level 4: Managed; Level 3: Defined; Level 2: Repeatable; Level 1: Initial)
In the REGPG model: •
Level 1 - Initial level organizations have an ad-hoc requirements process. They find it hard to estimate and control costs as, for example, many requirements have to be reworked. The processes are not supported by planning and review procedures or documentation standards. They are dependent on the skills and experience of the individuals who enact the process.
•
Level 2 - Repeatable level organizations have defined standards for requirements documents, which are more likely to be of a consistently high quality and to be produced on schedule. They also have policies and procedures for requirements management.
•
Level 3 - Defined level organizations have a defined process model based on good practices and defined methods. They have an active process improvement program in place and can make objective assessments of the value of new methods and techniques.
The key to improvement is in adopting appropriate good practices, in the right order, at the right pace, and with the required degree of strategic commitment. The REGPG distills guidance on good practice adoption into 66 guidelines (key practices in SW-CMM terms). The practices are classified according to whether they are basic, intermediate, or advanced to reflect, for example, dependencies among the practices. The practices are organized according to process activities. These are the requirements document, requirements elicitation, analysis and negotiation, describing requirements, system modeling, requirements validation, requirements management, and RE for critical systems. Although analogous to KPAs, REGPG process activities serve only to classify good practices and form no part of the maturity framework.
Unlike the SW-CMM (but like the more recent SW-CMMI and ISO/IEC 15504), the REGPG uses a continuous architecture. This means that the REGPG does not prescribe the process activities to be targeted at each improvement stage. Instead organizations can adopt practices across a range of process activities according to their priorities. Before improvement can be achieved, causality has to be established between the evident problems and weaknesses in the organization’s RE process. Discovering weaknesses and identifying priorities for targeted improvement is the role of process assessment. Process assessment is based on a system of checklists designed to reveal what practices are in use and the extent to which they are used. A checklist of the 66 REGPG guidelines is used as the starting point for the assessment. The activities used in the assessment are:
1. Prune checklist. Assign a score of 0 (see below) to each practice that is already known not to be used. This is simply a pragmatic step for streamlining the assessment.
2. Select the right people to interview. Knowledge of an organisation's RE practices may reside with many people, particularly where there are different units working on different products/projects. This activity is tasked with selecting a small group of people in order to gain a representative snapshot of RE practice within the organisation.
3. Score process against checklist. The purpose of this stage is to assign scores against each practice for which a score can be confidently assigned and to flag those practices where the extent of adoption is uncertain.
4. Resolve uncertainty. This stage focuses on seeking additional information (perhaps by interviewing other people, looking for documentary evidence, etc.) that allows a score to be assigned to the uncertain practices identified at stage 3.
5. Compute maturity. This stage derives an overall maturity score based on the results of stages 1, 3 and 4.
During the assessment process, each good practice is assessed as being: •
Standardized (score 3). The practice has a documented standard in the organization and is integrated into a quality management process.
•
Normal use (score 2). The practice is widely followed in the organization but is not mandatory.
•
Discretionary (score 1). Some project managers may have introduced the practice, but it is not universally used.
•
Never (score 0). The practice is never or rarely applied.
The maturity level is calculated by summing the numerical scores for each practice used: •
Level 1 (initial) processes score fewer than 55 in the basic guidelines.
•
Level 2 (repeatable) processes score more than 55 in the basic guidelines but fewer than 40 in the intermediate and advanced guidelines.
•
Level 3 (defined) processes score more than 85 in the basic guidelines and more than 40 in the intermediate and advanced guidelines.
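The scoring scheme and thresholds above amount to a small calculation, sketched below. The practice names are indicative examples rather than quotations of the 66 guidelines, and score combinations that the published thresholds do not explicitly cover are defaulted to level 2 here.

```python
# Illustrative REGPG maturity computation using the scoring scheme and thresholds
# given above. Practice names and scores are examples only.
WEIGHTS = {"standardized": 3, "normal use": 2, "discretionary": 1, "never": 0}

def regpg_maturity(scores):
    """scores: dict mapping (classification, practice) -> usage category."""
    totals = {"basic": 0, "intermediate": 0, "advanced": 0}
    for (classification, _practice), usage in scores.items():
        totals[classification] += WEIGHTS[usage]
    basic = totals["basic"]
    higher = totals["intermediate"] + totals["advanced"]
    if basic > 85 and higher > 40:
        return 3, totals          # defined
    if basic > 55:
        return 2, totals          # repeatable (combinations not covered default here)
    return 1, totals              # initial

example = {
    ("basic", "Define a standard document structure"): "standardized",
    ("basic", "Uniquely identify each requirement"): "normal use",
    ("intermediate", "Use checklists for requirements analysis"): "discretionary",
    ("advanced", "Specify requirements quantitatively"): "never",
}
print(regpg_maturity(example))  # (1, {'basic': 5, 'intermediate': 1, 'advanced': 0})
```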
The end result is an indicative maturity level for the organization’s RE process and a map of whether, and to what extent, each of the 66 RE practices described in the REGPG is used in the RE process. Subsequent improvement is based on identifying un- or under-used practices that: •
are likely to yield benefits that outweigh the cost of their introduction; and
•
can be feasibly introduced given dependencies on underpinning practices.
The practices’ classifications (basic, intermediate, advanced) give some help in selecting an appropriate subset for introduction. The classifications are supplemented with additional qualitative information in the good practice guidelines about benefits, how to implement them, and associated costs and problems. At the end of an REGPG process assessment, therefore, an organization should have the basis for an RE process improvement agenda in terms of areas of weakness and options for improvement. The two more recent models described below both share broadly similar aims and comprise a similar maturity framework.
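A selection step of the kind just described can be pictured as a simple ranking over un- or under-used practices, filtered by cost-benefit and by dependency feasibility. The sketch below is our own illustration of the idea, not the REGPG's prescribed procedure, and the practices, costs, and benefits shown are invented.

```python
# Rank candidate practices for introduction: un- or under-used practices whose
# estimated benefit outweighs their cost and whose underpinning practices are
# already in place. All values are invented for illustration.
candidates = [
    {"practice": "Define traceability policies", "score": 0, "benefit": 8, "cost": 3,
     "requires": ["Uniquely identify each requirement"]},
    {"practice": "Prototype poorly understood requirements", "score": 1, "benefit": 6,
     "cost": 5, "requires": []},
    {"practice": "Model the system architecture", "score": 0, "benefit": 4, "cost": 6,
     "requires": []},
]
adopted = {"Uniquely identify each requirement"}

def improvement_agenda(candidates, adopted):
    feasible = [c for c in candidates
                if c["score"] <= 1                              # un- or under-used
                and c["benefit"] > c["cost"]                    # worth introducing
                and all(r in adopted for r in c["requires"])]   # prerequisites met
    return sorted(feasible, key=lambda c: c["benefit"] - c["cost"], reverse=True)

for c in improvement_agenda(candidates, adopted):
    print(c["practice"], c["benefit"] - c["cost"])
# Define traceability policies 5
# Prototype poorly understood requirements 1
```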
The Requirements Engineering Process Improvement Model
The Requirements Engineering Process Maturity Model (REPM) (Gorschek, Svahnberg, & Kaarina, 2003) is targeted at SMEs that, its authors observe, lack the resources to apply models like the REGPG. Instead, REPM claims to take an approach to the RE process that is sufficient to "give an indication of whether a problem exists, and … where the problem areas reside" (Gorschek et al., 2003). REPM uses a six-level maturity model. It actually defines five, but since it is a staged model and level one has 11 actions (analogous to SW-CMM key practices or REGPG practices) associated with it, there is an implicit level zero that broadly corresponds to the SW-CMM's initial level. Each action is associated with one of the levels 1 to 5 and, additionally, is classified according to three process activities (called, in REPM, main process areas, or MPAs): elicitation, analysis and negotiation, and management. Each maturity level has a set of actions defined for it under each of the three MPAs. Despite their superficial similarity, MPAs are unlike SW-CMM KPAs, which are only associated with a single maturity level. However, MPAs are also different from process activities in the REGPG, where it is not obligatory to implement practices for each process activity in order to achieve a given maturity level.
MPAs may be further classified into sub-process areas (SPAs). For example, the Elicitation MPA comprises stakeholder identification, stakeholder consulting, domain knowledge, and scenario elicitation. The authors argue that this allows the model to evolve easily since a composite action may be promoted to an SPA allowing new actions to be derived from it. REPM is designed for project rather than organizational assessment. Like the REGPG, it uses checklist-driven interviews of key personnel in order to make an assessment. Because an REPM assessment is scoped at the project level, selection of personnel to interview defaults to the person responsible for RE in the project under assessment. This obviates the need to select a range of people across an organization (c.f. step 1 in the REGPG assessment model) and minimizes the interaction between assessor and practitioners. The project evaluation checklist is, like that of the REGPG, a list of all 60 actions. REPM recognizes that failing to implement an action does not necessarily fatally weaken the RE process, provided there is a sound rationale for not doing so. Producing a user manual draft (a level 2 action) may be meaningless for the development of an embedded system, for example. This allows checklist actions to be marked satisfied-explained. Actions in this category carry the same weight as completed actions. As with the REGPG, the authors of REPM have selected a set of RE practices (actions in REPM terms) and made a judgment about how fundamental they are to an RE process. Because REPM uses a staged model, each action is locked in to a particular maturity level. To achieve a maturity level n, a process needs to have completed or satisfied-explained all of the actions associated with levels one through n.
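Because REPM is staged, its maturity computation is a threshold over complete levels rather than a weighted sum. The sketch below illustrates that rule, including the satisfied-explained category; apart from the user-manual example mentioned above, the action names are placeholders rather than REPM's actual actions.

```python
# Illustrative REPM-style computation: a project reaches level n only if every
# action at levels 1..n is either completed or "satisfied-explained".
actions = {
    1: {"Identify stakeholders": "completed",
        "Record requirements sources": "completed"},
    2: {"Produce user manual draft": "satisfied-explained",   # e.g., an embedded system
        "Prioritise requirements": "completed"},
    3: {"Define requirements traceability": "not done"},
}

def repm_level(actions):
    level = 0
    for n in sorted(actions):
        if all(status in ("completed", "satisfied-explained")
               for status in actions[n].values()):
            level = n
        else:
            break     # staged model: a gap at level n caps maturity at n-1
    return level

print(repm_level(actions))  # 2
```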
The University of Hertfordshire Model
The RE process improvement model (Beecham, Hall, & Rainer, 2003) developed at the University of Hertfordshire is a direct adaptation of the SW-CMM framework for RE processes. This is intended to help dovetail RE process improvement with a wider SW-CMM-based software process improvement programme. The Hertfordshire model uses the five SW-CMM maturity levels (Figure 1) to classify RE processes. Since it is based upon the SW-CMM, it uses a staged architecture in which a set of 68 practices (called processes) is mandated for each maturity level. Like the REGPG and REPM, these are classified according to RE process activities, called phases. These are management, elicitation, analysis and negotiation, documentation and validation. Like REPM MPAs, each phase in the Hertfordshire model defines a set of processes at each maturity level. As with the other RE process improvement models reviewed here, the Hertfordshire model draws its set of processes from analysis of RE practice in industry (including Hall, Beecham, & Rainer, 2002). However, to help integrate the model with the SW-CMM, some processes are drawn directly from the SW-CMM. For example, the Hertfordshire level 2 process P1 below is essentially the same as the commitment to perform key practice of the SW-CMM level 2 requirements management KPA.
P1. Follow a written organizational policy for managing the system requirements allocated to the software project.
Process assessment is broadly similar to that of the REGPG, even to the extent that, as currently defined in the available documentation, it reflects a continuous rather than a staged improvement model. In order to assess the extent to which a process is satisfied by an organization, the Hertfordshire model allocates a score to each process against three assessment criteria. These are: •
Approach. A measure of the organizational commitment and capability.
•
Deployment. A measure of the extent to which a process is implemented across the organization.
•
Results. A measure of the success of a process’ implementation.
Each process is assessed against each of these criteria and given a rating of: •
Outstanding (10)
•
Qualified (8)
•
Marginally qualified (6)
•
Fair (4)
•
Weak (2)
•
Poor (0)
The average of the score for approach, deployment, and results is recorded for each process. Hence if a process was marginally qualified in terms of approach, fair in terms of deployment, and weak in terms of results, it would rate an average score of fair (4). The scores are then summed for each phase (for example, to assess the strength of requirements management in an organization), and the sum of all five phases yields an overall score. The Hertfordshire model is still under development. At the time of writing, the model has not been calibrated to map overall scores onto maturity levels, and the relationship of this to the staged model implied by the association of processes with maturity levels has not been worked through. It has undergone an initial validation phase that has focused on the overall framework and the processes and process descriptions used for maturity level 2. The set of processes and process descriptions for levels 3 to 5 currently exist in draft form.
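The worked example in the text (marginally qualified, fair, and weak averaging to fair) can be reproduced in a few lines; the phase entries below are likewise illustrative rather than taken from the model.

```python
# Illustrative Hertfordshire-style scoring: each process is rated on approach,
# deployment, and results, and the three ratings are averaged.
RATING = {"outstanding": 10, "qualified": 8, "marginally qualified": 6,
          "fair": 4, "weak": 2, "poor": 0}

def process_score(approach, deployment, results):
    return (RATING[approach] + RATING[deployment] + RATING[results]) / 3

# The example from the text: marginally qualified / fair / weak averages to fair (4).
print(process_score("marginally qualified", "fair", "weak"))  # 4.0

# Phase and overall scores are sums of their processes' averages (values invented).
management_phase = [("P1 follow written RE policy", 4.0), ("P2 plan RE activities", 6.0)]
print(sum(score for _name, score in management_phase))        # 10.0
```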
Table 1. Summary of RE improvement models (maturity levels; CMM key process area analogues; CMM key practice analogues; staged or continuous)
REGPG: 3 maturity levels; process activities (no direct KPA analogy); good practice guidelines (key practice analogues); continuous.
REPM: 6 maturity levels (1-5 with an implicit level 0); main process areas (no direct KPA analogy); actions (key practice analogues); staged.
Hertfordshire model: 5 maturity levels; phases (no direct KPA analogy); processes (key practice analogues); staged, but assessment is currently continuous.
Experience of RE Process Improvement
Process improvement is a costly activity that requires substantial investment. Organizations therefore require confidence in the improvement models that they use. This is not simply confidence that the set of practices that a model embodies is proven in industry. Confidence is required that the maturity framework and assessment method will not caricature the organization's processes. Organizations need to know that the process weaknesses revealed by an assessment are real weaknesses that inhibit process performance. They also need to know that the remedial action proposed by process analysts will address real organizational priorities in a cost-effective way. Validation is important for establishing confidence in improvement models, and each of the models reviewed in the previous section has undergone some form of validation.
REGPG
More than 10,000 copies of the REGPG have been sold and, as the longest-established and most widely disseminated model, REGPG has undergone the most extensive validation. It has been the subject of two projects specifically intended to validate and improve the model: Qure (Kauppinen, Aaltio, & Kujala, 2002) and Impression (Sommerville & Ransom, 2003). The Qure project, conducted by the Helsinki University of Technology, included an RE process improvement theme. The REGPG was evaluated as part of this, in which the model was piloted in four companies working in a range of application domains. The scope of the analysis was to discover whether the REGPG process analysis method was accurate and usable. The question of whether a subsequent improvement cycle based upon the results of the analysis resulted in real improvement was outside the project's scope. Qure found that, in the case studies used, the REGPG yielded useful results and was appropriate for organizations beginning an RE process improvement programme. A key weakness discovered was that the selection of good practices to address revealed
process weaknesses required more guidance and that, as a result, companies tended to be over-ambitious in the improvement programs that they undertook. This general point was echoed in the results of the Impression project, which concluded that the assessment method was too passive. The scope of the REGPG validation in Impression was wider than that of Qure. It involved nine companies from a variety of domains and ran for a full improvement cycle, from initial assessment through good practice introduction to follow-up assessment. Like Qure, assessment and improvement piloting was performed by a third-party organization. However, REGPG authors acted as trainers and consultants for the staff performing the process assessments. The passivity of the assessment method emerged when, perhaps unsurprisingly, it became apparent that the companies attached greater priority to actual improvement than to mere diagnostics and maturity classification. The assessment method was modified to include an explicit step for selecting good practices to address the revealed weaknesses. This was backed up with the development of a decision support tool that processed the analysis data and listed those best practices likely to provide solutions most cost-effectively. Other significant results included: •
Implicit dependencies between practices (for example, that intermediate practices cannot be introduced until appropriate basic practices have been deployed) were sometimes wrong. This implies greater-than-anticipated freedom of choice in the selection of practices for introduction and is something that would be hard to accommodate in a staged architecture.
•
The maturity model was not entirely generic, and different norms of practice that derive from different priorities across application domains were not accurately reflected. The REGPG evolved from the experiences of the REAIMS project’s industrial partners, who worked in dependable systems domains. The REGPG inevitably retains some of this flavor, which fits slightly awkwardly with, for example, sociotechnical systems.
At the end of the Impression assessment-improvement cycle, a follow-up assessment was performed. This established that, in terms of the REGPG maturity model, all but one of the nine companies had improved their RE processes. Four had moved from initial to repeatable. The remaining four had all improved their score but remained at the initial level. The strategies that companies used to improve ranged from concentrating on further embedding existing practices within company standard procedures to introducing large numbers of new practices (one company introduced 14). Although Impression covered a whole improvement cycle and appears to demonstrate the REGPG’s utility for improving RE processes, there was insufficient time to follow this through and establish whether increased process maturity in REGPG terms was reflected in concrete business benefits. The signs are good but unproven.
REPM
Gorschek et al. (2003) describe the evaluation of REPM using a pilot study involving four companies. These were SMEs of the size for which REPM was designed. The scope of this pilot was more restricted than that of Qure or Impression and had a dual aim of using REPM to discover norms of RE practice (which we ignore here) and of validating REPM. Despite the limited scope of the REPM validation, the results indicate that its lightweight assessment method yields useful results. REPM as a vehicle for improvement has not been demonstrated, but within the limited aims of REPM, the model appears to show promise as an assessment tool for SMEs. REPM was explicitly designed to be lightweight, and the authors report no serious problems with applying it. However, validation of the model's applicability does not appear to have been an explicit goal of the evaluation.
University of Hertfordshire Model
The methodology for validation of the Hertfordshire model differs from that used for the REGPG and REPM. The model's development was in part influenced by a study of RE practice in 12 companies (Hall et al., 2002). Validation of the model itself, however, has to date been performed by assessment by a panel of experts rather than deployment in practitioner companies. At the time of writing, the results of this exercise were being processed and will feed into maturation of the model.
Conclusion
Given the level of interest in SPI in recent years, RE process improvement has received surprisingly little attention from the SPI and RE research communities or from industry. This may be because RE process improvement does take place, but takes place adequately without the formal framework of a CMM-like maturity model. Nevertheless, RE process improvement is an important issue, as surveys such as that reported by Ibanez & Rempp (1996) clearly demonstrate. Organizations interested in RE process improvement and that, perhaps because of investment in wider SPI programs, wish to use a maturity level framework now have a choice of at least the three improvement models reviewed here. Of these, the REGPG has the benefit of being widely known. It has been validated across a range of domains and its strengths and weaknesses have been identified. The REPM and the Hertfordshire models currently occupy more specialized niches. REPM has been purposely designed as a lightweight method suitable for SMEs. It is perhaps the model most in tune with the current mood for agile and lightweight methodologies, although it has not yet completed a full validation cycle. The Hertfordshire model is a work in progress that is carefully aimed at the many companies that have already invested in SW-CMM-based improvement programs. If
subsequent development can complete the model and integrate it within the SW-CMM framework (and track CMM developments), then it has the potential to offer substantial benefits to companies.
References Beck, K. (1999). Extreme programming explained: Embrace change. Boston: AddisonWesley Longman. Beecham, S., Hall, T., & Rainer, A. (2003). Defining a requirements process improvement model. (Technical Rep. No. TR379). University of Hertfordshire. El Eman, K., & Madhavji, N. (1995). A Field Study of Requirements Engineering Practices in Information Systems Development. Proceedings of the Second IEEE International Symposium on Requirements Engineering (RE95) (pp. 68-80). York, UK. Forsgren, P., & Rahkonen, T. (1995). Specification of customer and user requirements in industrial control system procurement projects. Proceedings of 2nd IEEE International Symposium on Requirements Engineering (RE95), 81-88. Fowler, P., Patrick, M., Carleton, A., & Merrin, B. (1988). Transition packages: An experiment in the introduction of requirements management. Proceedings of Third International Conference on Requirements Engineering (ICRE’98), 138-147. Garcia, S. (1997). Evolving improvement paradigms: Capability maturity models & ISO/ IEC 15504 (PDTR). Software Process: Improvement and Practice, 3(1), 47-58. Gorschek, T., Svahnberg, M., & Kaarina, T. (2003). Introduction and application of a lightweight requirements engineering process evaluation method. Proceedings of Requirements Engineering Foundations for Software Quality ’03 (REFSQ’03), 83-92. Hall, T., Beecham, S., & Rainer, A. (2002). Requirements problems in twelve software companies: An empirical analysis, IEE Proceedings-Software, 149(5), 153-160. Humphrey, W. (1989). Managing the software process. Boston: Addison-Wesley Longman. Ibanez, M., & Rempp, H. (1996). European survey analysis. (Tech. Rep.). European Software Institute. IEEE Std 830-1998. (1998c). IEEE recommended practice for software requirements. New York: IEEE. IEEE Std 1233-1998. (1998b). IEEE guide for developing system requirements specifications. New York: IEEE. IEEE Std 1362-1998. (1998a). IEEE guide for information technology - system definition - concept of operations (ConOps) document. New York: IEEE. Kauppinen, M., Aaltio, T., & Kujala, S. (2002). Lessons learned from applying the requirements engineering good practice guide for process improvement. Proceedings of Seventh European Conference on Software Quality (QC2002), 73-81.
Konrad, M., & Paulk, M. (1995). An overview of SPICE's model for process management. Proceedings of the Fifth International Conference on Software Quality, 291-301.
Mazza, C., Fairclough, J., Melton, B., De Pablo, D., Scheffer, A., Stevens, R., Jones, M., & Alvisi, G. (1994). Software engineering guides. London: Prentice Hall.
Paulk, M. (1994). A comparison of ISO 9001 and the capability maturity model for software. CMU/SEI-94-TR-12. Software Engineering Institute.
Paulk, M. (2001). Extreme programming from a CMM perspective. IEEE Software, 18(6), 19-26.
Paulk, M., Curtis, W., Chrissis, M., & Weber, C. (1993). Capability maturity model for software, Version 1.1. CMU/SEI-93-TR-24. Software Engineering Institute.
Sawyer, P., Sommerville, I., & Viller, S. (1999). Capturing the benefits of requirements engineering. IEEE Software, 16(2), 78-85.
SEI. (2002). CMMI for software engineering, version 1.1, continuous representation (CMMI-SW, V1.1, Continuous). CMU/SEI-2002-TR-028. Software Engineering Institute.
Sommerville, I., & Ransom, J. (2003). An industrial experiment in requirements engineering process assessment and improvement. (Tech. Rep.). Lancaster University.
Sommerville, I., & Sawyer, P. (1997). Requirements engineering: A good practice guide. Chichester: John Wiley.
Wiegers, K. (1999). Software requirements. Redmond, WA: Microsoft Press.
Young, R. (2001). Effective requirements practices. Boston: Addison-Wesley Longman.
Chapter VII
Requirements Prioritisation for Incremental and Iterative Development

D. Greer, Queen's University Belfast, UK
Abstract

The problems associated with requirements prioritisation for an incremental and iterative software process are described. Existing approaches to prioritisation are then reviewed, including the Analytic Hierarchy Process, which involves making comparisons between requirements, and SERUM, a method that uses absolute estimations of costs, benefits, and risks to inform the prioritisation process. In addition to these, the use of heuristic approaches is identified as a useful way to find an optimal solution to the problem given the complex range of inputs involved. In particular, genetic algorithms are considered promising. An implementation of this, the EVOLVE method, is described using a case study. EVOLVE aims to optimally assign requirements to releases, taking into account: (i) effort measures for each requirement and effort constraints for each increment; (ii) risk measures for each requirement and risk limits for each increment; (iii) precedence constraints between requirements (where one requirement must always be in an earlier or the same increment as another); (iv) coupling constraints between requirements (where two or more must be in the same increment); and (v) resource constraints (where two or more requirements may not be in the same increment due to using some limited resource). The method also handles uncertainty in the effort inputs, which are supplied as distributions and simulated
using Monte Carlo simulation before carrying out the genetic algorithm operations. In addition to handling uncertainty, EVOLVE offers several advantages over existing methods since it handles a large range of factors. The overall implementation of the method allows the inputs to be changed at each iteration, and so better fits reality where requirements are changing all the time.
Introduction

In any given project, requirements arise from various stakeholders. These stakeholders may be users, developers, project managers, business managers, or other categories of people affected by the system. In the case of new software applications, there are typically a large number of requirements, some of which are essential, others desirable, and some relatively unimportant. For existing applications, there will be a backlog of new requirements, potential fixes, and enhancements, again with differing priorities. In both cases it is usually impractical to implement all requirements simultaneously because of the cost involved, staff limitations, and market or user pressures to have the software implemented. Thus some form of prioritisation is necessary. The output of this process will depend on the effort required for each requirement against the overall effort available, the value arising out of delivering certain requirements at a given time, the risks incurred by delivering or not delivering given requirements, and on dependencies between requirements. In addition to this we have the preferences of business and developer stakeholders, who may be of different levels of importance and may be geographically dispersed with different viewpoints about what should be delivered and when. For some of these stakeholders a given requirement may be essential to the success of the product; others may believe that it might even damage the success of the product. In between these extremes some stakeholders may hold the view that a requirement is unimportant but hold no objection to it being included (Davis, 2003). Overall this is a complex problem, which is very difficult to solve to the satisfaction of all concerned. Management of requirements, including priority assignment, has been identified as a key success factor for commercial enterprises (De Gregorio, 1999). In the case of bespoke software development, there may be a small number of stakeholders, but in the case of commercial off-the-shelf software there may be hundreds or thousands of stakeholders (Regnell, Host, Natt och Dag, Beremark, & Hjelm, 2001).
Problem Discussion

In recent years there has been an increasing recognition that an incremental approach to software development is often more suitable and less risky than the traditional waterfall approach (Greer, Bustard, & Sunazuka, 1999a). This preference is demonstrated by the current popularity of agile methods, all of which adopt an incremental approach to delivering software rapidly (Cockburn, 2002).
Figure 1. Complexity of software release planning: stakeholder preferences, effort constraints, risk constraints, precedence constraints, coupling constraints, and resource constraints all feed into release planning.
This shift in paradigm has been brought about by many factors, not least the rapid growth of the World Wide Web, the consequent economic need to deliver systems faster (Cusamano & Yoffie, 1998), and the need to ensure that software better meets customer needs. Hence, we will concentrate on this process model in our problem discussion. Planning and developing software is a very complex venture. As illustrated in Figure 1, stakeholder preferences, effort constraints, dependency constraints, resource constraints, and risk constraints all contribute to the complexity. In developing computer-based systems, the inputs of all stakeholders should be taken into account during the planning and development processes (Hart, Hogg, & Banerjee, 2002). In prioritising requirements, similarly, the viewpoints of all affected stakeholders must be taken into account (Bubenko, 1995). This is already a complicated process if there are several stakeholders with diverging viewpoints (Jiang, Klein, & Discenza, 2002). However there are other important factors, such as the available effort for a given system or release versus the effort required. There are also likely to be dependencies between requirements. This may be due to the fact that certain requirements must be in place before others. It could also be that two or more requirements must be delivered together, or indeed that they should not be delivered together in a certain timeframe due to some resource constraint. Another factor is the level of risk the organisation is exposed to in a given system or a given release (Charette, 1989). The nature of the software process is also relevant to the problem. In what follows we will assume that, whatever the process, it will result in individual releases. This is true even of the waterfall model, where the intention is to deliver a complete system, but there will be subsequent releases in the maintenance phase to cover deferred requirements, errors, and enhancements (Rajlich & Gosavi, 2002).
Stakeholders' Viewpoints and Increment Planning

A software system at any stage of its life can be described by a set R1 of requirements, that is, R1 = {r1, r2, …, rn}. Using an incremental approach, at the first stage (k = 1) a set of requirements is planned for delivery as Inc1. At subsequent stages (k > 1), new requirements are added and others are removed, and a new subset of requirements is planned for delivery in the next increment Inc2. This continues for all increments Inck. Suppose there are q stakeholders S1, S2, …, Sq who have been assigned a relative importance between 0 and 1 by a project manager. Each stakeholder Sp assigns a priority, prio(ri, Sp, Rk), to each requirement ri in the set Rk at phase k. Prio() is a function (ri, Sp, Rk) → {1, 2, …, σ}, performed by each stakeholder Sp, where σ is the maximum priority score that can be assigned. Using the scheme suggested in Davis (2003), one practical interpretation might be σ = 5. In this scenario, prio(r1, S1, R1) = 5 means that for stakeholder S1 the first release will be useless without r1; prio(r1, S1, R1) = 4 means that the requirement should be included in R1; prio(r1, S1, R1) = 3 means that S1 is neutral on the issue for r1 in R1; prio(r1, S1, R1) = 2 means that the requirement should be excluded from R1; and prio(r1, S1, R1) = 1 means that if r1 is included in R1, then the release will not be useful to S1. Thus the output from the release planning process is a definition of increments Inck, Inck+1, Inck+2, …, with Inct ⊂ Rk for all t = k, k+1, k+2, …. The different increments are disjoint, that is, Incs ∩ Inct = ∅ for all s, t ∈ {k, k+1, k+2, …} with s ≠ t. The function ωk assigns each requirement ri of set Rk the number s of its increment Incs, that is, ωk: ri ∈ Rk → ωk(ri) = s ∈ {k, k+1, k+2, …}.
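As a minimal sketch (in Python, with hypothetical requirement names and increment numbers; this is only an illustration, not part of the EVOLVE tooling), the assignment function ωk and the resulting increments can be represented as follows:

# omega maps each requirement of R_k to the number of its planned increment.
omega = {"r1": 1, "r2": 1, "r3": 2, "r4": 3, "r5": 2}

def increments(omega):
    # Group requirements into the disjoint increments defined by omega.
    plan = {}
    for req, inc in omega.items():
        plan.setdefault(inc, set()).add(req)
    return plan

print(increments(omega))   # {1: {'r1', 'r2'}, 2: {'r3', 'r5'}, 3: {'r4'}}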
Effort Constraints

Effort estimation can be described as a function assigning each pair (ri, Rk) an effort estimate effort(); that is, effort() is a function (ri, Rk) → ℜ+, where ℜ+ is the set of positive real numbers. These effort estimates are specific to a particular phase Rk, since efforts may be re-estimated following any increment. It is likely that the effort available for a given increment Inck is limited to a fixed value Sizek. Hence the sum of the effort estimated for all requirements in a planned increment must be within this limit. Thus Σri ∈ Inck effort(ri, Rk) ≤ Sizek for all increments Inck.
Dependency Constraints

Any set of requirements will contain dependencies wherein one requirement must always be before or after another. Thus for all iterations k there is a partial order Ψk on the product set Rk × Rk such that (ri, rj) ∈ Ψk implies ωk(ri) ≤ ωk(rj). In an incremental approach, a constraint that a given requirement must be before some other is also met if they are in the same increment. Similarly, there may be certain requirements that are only valuable if delivered together in the same increment. Thus for all iterations k we define a binary relation ξk on Rk such that (ri, rj) ∈ ξk implies that ωk(ri) = ωk(rj).
Additionally there are resource constraints to consider, where certain combinations of requirements may not be delivered in the same increment. This may be due to the fact that they share some limited resource. Hence resource(t) represents the resource capacity of t. In this case, there are index sets I(t) ⊂ {1, …, n} such that card({ri ∈ Incs : i ∈ I(t)}) ≤ resource(t) for all increments Incs and for all resources t.
Risk Constraints

All development activities have associated risks, and a development team may wish to limit the extent of exposure to risk in a given increment. We define risk as the likelihood that some loss will be incurred in implementing a requirement due to some event, and we introduce a risk estimate for each requirement. Thus for each pair (ri, Rk) of requirement ri as part of the set Rk, a risk value for implementing ri is estimated; that is, risk is a ratio-scaled function risk: (ri, Rk) → [0,1), where '0' means no risk at all and '1' stands for the highest risk. This idea of limiting increment risk is based on the idea of a risk referent, in the past applied at project level (Charette, 1989). Riskk refers to this maximum and denotes the upper bound for the acceptable risk in Inck. This involves summing the risk scores for all requirements in a given increment and checking the constraint Σri ∈ Inck risk(ri, Rk) ≤ Riskk for each increment k.
Problem Statement for Software Release Planning

We can now formulate our problem as follows (Greer & Ruhe, 2004): For all requirements ri ∈ Rk, determine an assignment ω* of each requirement ri to an increment ω*(ri) = m such that:

(1) Σri ∈ Incm effort(ri, Rm) ≤ Sizem for m = k, k+1, … (Effort constraints)

(2) Σri ∈ Incm risk(ri, Rm) ≤ Riskm for m = k, k+1, … (Risk constraints)

(3) ω*(ri) ≤ ω*(rj) for all pairs (ri, rj) ∈ Ψk (Precedence constraints)

(4) ω*(ri) = ω*(rj) for all pairs (ri, rj) ∈ ξk (Coupling constraints)

(5) card({ri ∈ Incm : i ∈ I(t)}) ≤ resource(t) for all increments Incm and all index sets I(t) related to all resources t (Resource constraints)

(6) A = Σp=1,…,q λp [Σri ∈ Rk benefit(ri, Sp, Rk, ω*)] ⇒ L-max, with benefit(ri, Sp, Rk, ω*) = [σ − prio(ri, Sp, Rk) + 1][τ − ω*(ri) + 1] and τ = max{ω*(ri): ri ∈ Rk}
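A minimal Python sketch of how objective (6) can be evaluated for a candidate assignment; the requirement names, stakeholder weights, and priority values below are hypothetical, and the function simply transcribes the benefit formula above:

def total_benefit(requirements, stakeholder_weights, prio, omega_star, sigma):
    # Objective (6): weighted benefit of the assignment omega_star,
    # where omega_star[r] is the increment number assigned to requirement r.
    tau = max(omega_star[r] for r in requirements)
    total = 0.0
    for s, lam in stakeholder_weights.items():        # lam plays the role of lambda_p
        for r in requirements:
            benefit = (sigma - prio[(r, s)] + 1) * (tau - omega_star[r] + 1)
            total += lam * benefit
    return total

# Hypothetical data: three requirements, two stakeholders, two increments.
reqs = ["r1", "r2", "r3"]
weights = {"S1": 0.4, "S2": 0.6}
prio = {("r1", "S1"): 5, ("r1", "S2"): 3,
        ("r2", "S1"): 4, ("r2", "S2"): 2,
        ("r3", "S1"): 1, ("r3", "S2"): 1}
omega_star = {"r1": 1, "r2": 1, "r3": 2}
print(total_benefit(reqs, weights, prio, omega_star, sigma=5))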
The function (6) maximizes the weighted benefit over all the different stakeholders. For any stakeholder, the benefit from assigning an individual requirement to an increment is higher the earlier it is released and the more important it is judged to be. L-max means computing a set of the L best solutions (L > 1), so that the uncertainty of the inputs is taken into account and the analyst retains the decision-making power.
Existing Requirements Prioritisation Approaches

The simplest approach to requirements prioritisation is where experts abstract across criteria to compare requirements and assign a ranking. This can be carried out practically by pair-wise comparison or traditional sorting methods such as a bubble sort, minimal spanning tree, and binary search tree. There are also several well-established methods that have been applied, such as card-sorting and laddering (for example, Maiden & Rugg, 1996; Rugg & McGeorge, 1997), which typically involve grouping cards according to given criteria and using the groups.
Prioritisation using AHP

One now well-documented technique for prioritisation is the Analytic Hierarchy Process (AHP) (Saaty, 1980). This has been applied to the real-world problem of requirements prioritisation (Karlsson & Ryan, 1997; Karlsson, Wohlin & Regnell, 1998). To date AHP has not been specifically applied to incremental software delivery. However, if dependencies between requirements were taken into account, it could easily be applicable. With AHP, the relative importance of the assessment criteria is first determined through their pair-wise comparison. For example, if the criteria for assessing requirements are taken to be cost-benefit ratio, impact on system quality, and risk-reduction, it might be decided that impact on system quality is four times as important as cost-benefit ratio and that risk-reduction is one-third as important as cost-benefit ratio (Table 1). The rest of the rows could be inferred from the first row, but more often the user enters all the rows, allowing a consistency calculation to be performed.

Table 1: Recording priorities in AHP

                     Cost-Benefit   Impact on Quality   Risk Reduction
Cost-Benefit         1              4                   1/3
Impact on Quality    0.25           1                   1/9
Risk Reduction       3              1/4                 1
Sum                  4.25           5.25                1.44

Using these scores, a weighting for each criterion is derived. This can be achieved by calculating the eigenvalues for each row or by using the technique of averaging over normalized columns (Saaty, 1980). In this technique each cell is divided by the sum of its column. The result of this for the example is shown in Table 2. The results for each row are summed and normalised by dividing by the number of criteria. In this case the relative values for the three criteria are 0.41, 0.11, and 0.48, respectively.

Table 2: Determination of priorities of criteria using AHP

                     Cost-Benefit   Impact on Quality   Risk Reduction   Norm. Sum
Cost-Benefit         0.24           0.76                0.23             0.41
Impact on Quality    0.06           0.19                0.05             0.11
Risk Reduction       0.71           0.05                0.69             0.48

A similar approach is then used to assess each candidate requirement in relation to the chosen criteria (Table 3). The same scoring mechanism is used for all requirement pairs, so that each requirement obtains a preference score with respect to each decision criterion. The overall rating for a requirement is obtained by summing the preference scores and multiplying by the weighting for that criterion. This rating is then used for the prioritisation. In an incremental model context, it is then a matter of assigning these requirements to releases using a greedy-type algorithm.

Table 3: Prioritising candidate requirements using AHP (Cost-Benefit criterion)

       r1     r2     r3     Norm. Sum
r1     1      3      2      0.63
r2     1/3    1      1/9    0.15
r3     1/2    1/4    1      0.21
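The averaging-over-normalized-columns step can be sketched in a few lines of Python; the matrix below is the one from Table 1, and the printed weights reproduce the values 0.41, 0.11, and 0.48 quoted above (this is only an illustration of the arithmetic, not a complete AHP implementation):

criteria = ["Cost-Benefit", "Impact on Quality", "Risk Reduction"]
matrix = [
    [1.0,  4.0,  1/3],   # Cost-Benefit row of Table 1
    [0.25, 1.0,  1/9],   # Impact on Quality row
    [3.0,  0.25, 1.0],   # Risk Reduction row
]
n = len(criteria)
col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
# Normalize each cell by its column sum, then average across each row.
weights = [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]
for name, w in zip(criteria, weights):
    print(f"{name}: {w:.2f}")   # approximately 0.41, 0.11, 0.48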
Prioritisation using Absolute Estimations

Changes can also be ordered by assessing them directly against criteria rather than relative to each other. This method can be classified as an absolute assessment approach (Greer, Bustard, & Sunazuka, 1999b). The use of such measurement has a number of distinct advantages over relative assessment. One is that fewer assessments are needed – just one for each requirement against each criterion. It also avoids the need to compare requirements with each other, which can be difficult, especially if they are unrelated. One such approach is SERUM (Greer et al., 1999b), which uses estimations for cost, benefit, development risk, and operational risk reduction to inform the prioritisation process. The process, as illustrated in Figure 2, starts with a business analysis from which the requirements have arisen. These requirements are refined by assessing risks in the current system (Stage 1) and ensuring that new requirements are created or amended to reduce those risks, where appropriate. Similarly, since the requirements define the proposed system, the risks associated with the proposed system are assessed and, where possible, the requirements amended or new ones created (Stage 2).
Figure 2: SERUM approach to prioritising requirements. Starting from business analysis models and recommendations, the stages are: (1) refine the proposed system by assessing risks in the current system; (2) refine the proposed system by assessing risks in the proposed system; (3) define changes; (4) perform cost-benefit analysis; (5) assess development risks for changes; (6) identify risk reduction activities; (7) determine the change plan; (8) prioritise changes; (9) create the product risk control plan for accepted risk; and (10) create the development risk control plan for accepted risk.
The requirements for the new system are defined in more detail in Stage 3, and cost-benefit analysis is carried out in Stage 4. This is a high-level assessment using a simple scoring system. In Stage 5 a risk assessment is made concerning the development of each requirement. Thus we have the following variables for each requirement: (i) the development risk; (ii) the risk reduction in moving from the current system to the proposed system, as calculated from Stages 1 and 2; (iii) the cost of the requirement; and (iv) the benefit from the requirement. (Note that (iii) and (iv) could be combined as a cost-benefit ratio.) Using these estimations, a prioritisation is determined in Stage 8. In practice Stage 8 is carried out by giving preference to the most important criteria, but alternative viewpoints can be taken, the choice of approach ultimately being up to the user. From industrial studies (Greer et al., 1999a), the favoured criterion is often the benefit accrued from the requirement, although in mission-critical systems risk reduction may be more important. The risk plans in Stages 9 and 10 are useful by-products of the process. SERUM does not at present formally handle dependencies between requirements, although this, and a means to better support the final decision-making process, are subjects of current research.
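As a rough sketch (Python; the scores are invented and the ranking rule is only one of the viewpoints SERUM leaves to the analyst), the absolute estimates can be recorded per requirement and ordered by the favoured criterion:

# Hypothetical absolute estimates per requirement (SERUM Stages 1-5).
estimates = {
    "r1": {"cost": 3, "benefit": 5, "dev_risk": 2, "risk_reduction": 4},
    "r2": {"cost": 4, "benefit": 2, "dev_risk": 3, "risk_reduction": 1},
    "r3": {"cost": 2, "benefit": 4, "dev_risk": 1, "risk_reduction": 3},
}
# Rank by benefit (descending); other orderings, e.g. by risk reduction,
# remain available to the analyst.
ranking = sorted(estimates, key=lambda r: estimates[r]["benefit"], reverse=True)
print(ranking)   # ['r1', 'r3', 'r2']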
Using Heuristics

In the combinatorial optimisation field, there is a group of techniques that have been collectively termed "heuristics". This grouping generally refers to techniques that try to find "near-optimal" solutions at reasonable computational cost (Reeves, 1995). Examples of techniques available for solving NP-hard problems (where deterministic means may not be feasible), such as the one outlined above for release planning, include Simulated Annealing, Tabu Search, and Genetic Algorithms. We have used Genetic Algorithms for reasons outlined in the next section.
Genetic Algorithms

Genetic algorithms have been derived from an analogy with the natural process of biological evolution (Carnahan & Simha, 2001; Holland, 1975). With genetic algorithms an initial population is created and pairs of solutions, or chromosomes, are selected according to some fitness score. In this process of selection those members of the population with higher scores are given a higher probability of being chosen. The selected parents are then mixed using an operation known as crossover. Specifically, the method EVOLVE (Greer & Ruhe, 2004), which we will describe in the next section, makes use of the "order method" as described in Davis (1991): the crossover operator selects two parents, randomly selects items in one of them, and fixes their place in the second parent (for example, items B and D in Figure 3). These are held in position, but the remaining items from the first parent are then copied to the second parent in the same order as they were in originally. In this way some of the sub-orderings are maintained. The resulting offspring is a mixture of its parents. A further operation, mutation, is applied. This is a random change to the chromosome and is intended to introduce new properties to the population and avoid merely reaching a local optimum rather than the global optimum. Since the values in the chromosome must remain constant, the normal approach to mutation, where one or more variables are randomly changed, will not work. Hence, in the order method, mutation is effected via random swapping of items in the new offspring. An example is shown in Figure 3 for the items A and C. The number of swaps is proportional to the mutation rate. The offspring is then added to the population so long as it represents a feasible solution. The population is normally maintained at a predetermined level, and so lower-ranked chromosomes are discarded. The net effect of this is a gradual movement toward an optimum solution. Termination of the algorithm can occur after a certain number of iterations, after a set time, or when improvement has ceased to be significant.

Figure 3: Illustration of crossover and mutation operators. Parent chromosomes A B C D E and E B D C A produce the offspring A B D C E by crossover (items B and D held in place), which mutation turns into C B D A E by swapping items A and C.
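A minimal Python sketch of the order-based crossover and swap mutation just described (the function names and optional parameters are ours, not part of any published implementation):

import random

def order_crossover(parent1, parent2, chosen=None, k=2):
    # Pick k items (by default at random from parent1), keep them at the
    # positions they occupy in parent2, and fill the remaining slots with
    # the other items in the order they appear in parent1.
    if chosen is None:
        chosen = set(random.sample(parent1, k))
    child = [item if item in chosen else None for item in parent2]
    rest = iter(item for item in parent1 if item not in chosen)
    return [item if item is not None else next(rest) for item in child]

def swap_mutation(chromosome, swaps=1):
    # Swap randomly chosen pairs of items; the number of swaps is
    # proportional to the mutation rate.
    child = list(chromosome)
    for _ in range(swaps):
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

# Reproducing the crossover example of Figure 3 (items B and D held in place):
offspring = order_crossover(list("ABCDE"), list("EBDCA"), chosen={"B", "D"})
print(offspring)                  # ['A', 'B', 'D', 'C', 'E']
print(swap_mutation(offspring))   # e.g. ['C', 'B', 'D', 'A', 'E'] if A and C are swapped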
The extent of crossover and mutation is controlled by the variables crossover rate and mutation rate. The choice of the "best" mutation and crossover rates is sensitive to the type of problem and its characteristics (Haupt & Haupt, 2000). In applying this to requirements prioritisation, the chromosomes are made up of ordered requirements. Selection involves picking two of these from a randomly created population. Crossover is then the mixing of requirements in the chosen chromosomes, and mutation is a swapping of two requirements in the offspring. A check is then made to see if the child meets all specified constraints and, if so, it is added to the population. If not, a backtracking operation is carried out and the process retried. The population is maintained at a predetermined number by culling the weakest-valued chromosome. A detailed algorithm is provided in the Appendix.
Solution Approach

In choosing an approach to solving the problems described in the Problem Discussion, the techniques described in the previous section could probably all be adapted for use. However there are two main issues to be resolved if we are to incorporate the viewpoints of many stakeholders, to apply effort constraints, to manage risk levels, and to take account of dependencies. Firstly, any method needs to be usable and scalable. In the case of techniques that involve pair-wise comparison there are typically O(n²) comparisons among n alternatives. For example, using AHP with 20 requirements, a total of 190 (n*(n - 1)/2) pair-wise comparisons must be performed for each criterion. Secondly, given such a complex problem domain and such a large solution space, it is impossible to incorporate all of the constraints and find an optimum solution with any deterministic prioritisation technique. Given these factors, combinatorial optimisation techniques seem very suitable. The choice of Genetic Algorithms is based on the work of others, where, for example, Genetic Algorithms have been found appropriate to general cases such as the Travelling Salesman problem (Carnahan & Simha, 2001) and, in software engineering, to system test planning (Briand, Feng, & Labiche, 2002) and network route planning (Lin, Kwok, & Lau, 2003).
EVOLVE Method

In EVOLVE software releases are planned as increments, but the planning process is repeated at each iteration. At each iteration the inputs include the current set of requirements, the constraints as described in (1)-(5) in the section on the Problem Statement for Software Release Planning, and the stakeholder priorities. The objective function (6) in that section is applied for each solution. The purpose of the genetic algorithm is to determine the best release plan.
Figure 4. EVOLVE approach to assignment of requirements to increments. At each iteration the requirements, constraints, and priorities are refined and extended, EVOLVE is re-run, the next increment is firmed, and later increments remain tentative.
At each iteration k, the next planned increment, Inck, is finalised, and all other undelivered increments, Inck+1, Inck+2, and so forth, are only tentatively planned (Figure 4).
Algorithms and Tool Support

A greedy-like procedure was applied in order to assign requirements to the increments. Precedence (3), coupling (4), and resource (5) constraints are implemented by specific rules used to check each generated solution. This is achieved via a table of pairs of requirements. If any given solution is generated that violates one of these constraints, the solution is rejected and a backtracking operation is used to generate a new solution.
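A minimal sketch (Python) of such rule-based checking over tables of requirement pairs; the pair tables below mirror the constraint sets of the sample project described next, with the resource-exclusion pair taken from the index set I(T1):

# plan maps each requirement to its increment number.
precedence = [("r2", "r3"), ("r2", "r7"), ("r6", "r15"), ("r7", "r15")]
coupling   = [("r10", "r11"), ("r17", "r8")]
exclusion  = [("r19", "r20")]   # pairs sharing a capacity-1 resource

def is_valid(plan):
    ok_prec = all(plan[a] <= plan[b] for a, b in precedence if a in plan and b in plan)
    ok_coup = all(plan[a] == plan[b] for a, b in coupling if a in plan and b in plan)
    ok_excl = all(plan[a] != plan[b] for a, b in exclusion if a in plan and b in plan)
    return ok_prec and ok_coup and ok_excl

print(is_valid({"r2": 1, "r3": 2, "r10": 1, "r11": 1, "r19": 1, "r20": 2}))   # True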
Description with Sample Project

In evaluating the method, a sample software project with 20 requirements was used. We have represented this initial set, R1, of requirements by the identifiers r1 to r20. The technical precedence constraints in our typical project are represented by the set Ψ1 shown below. This states that r2 must come before r3, r2 before r7, r6 before r15, and r7 before r15.
Ψ1 = {(r2, r3), (r2, r7), (r6, r15), (r7, r15)}

Further, some requirements were specified to be implemented in the same increment, as represented by the set ξ1. This states that r10 and r11 must be in the same release, as must r17 and r8.

ξ1 = {(r10, r11), (r17, r8)}

Resource constraints are represented by index sets for the set of those requirements asking for the same resource. In our sample project we have a resource T1 that has a capacity of 1 (resource(T1) = 1) that is used by requirements r3 and r8. The sample project had resource constraints represented by the following index set.

I(T1) = {19, 20}

Each requirement has an associated effort estimate in terms of a score between 1 and 10. The effort constraint was added that for each increment the effort should be less than
25, that is, Sizek = 25 for all releases k. Five stakeholders were used to score the 20 requirements with priority scores from 1 to 5. These scores are shown in Table 4.

Table 4: Sample stakeholder-assigned priorities

Requirement   S1   S2   S3   S4   S5
r1            1    2    3    4    5
r2            5    4    3    2    1
r3            1    1    1    1    1
r4            3    3    2    2    3
r5            4    4    4    4    5
r6            1    2    3    2    1
r7            2    1    1    2    1
r8            4    5    5    4    1
r9            3    3    3    2    4
r10           2    2    2    2    2
r11           5    5    5    4    3
r12           3    4    3    4    3
r13           2    1    1    2    3
r14           1    1    2    2    1
r15           3    5    3    5    4
r16           4    4    4    5    5
r17           5    3    3    3    3
r18           1    1    2    1    2
r19           5    5    5    5    5
r20           3    3    3    3    2

As we can see, different stakeholders in some cases assign more or less the same priority to requirements (as for r3). However the judgment is more conflicting in other cases (as for r1). The stakeholders, S1 to S5, were weighted using AHP pair-wise comparison from a global project management perspective. Using the technique of averaging over normalized columns (Saaty, 1980), we obtained the vector (0.174, 0.174, 0.522, 0.043, 0.087) assigning priorities to the five stakeholders. With regard to the genetic algorithm, we used the RiskOptimizer tool from Palisade (2001) with a default population size of 50 and an auto-mutation function that detects when a solution has stabilized and then adjusts the mutation rate. In preliminary experiments we established that it was not possible to consistently predict the best crossover rate. Hence we used a range of crossover rates for each experiment, from 0.4 to 0.8 in steps of 0.1.
Uncertainty in Effort Estimates

The model used in EVOLVE allows the use of probability distributions rather than discrete values for effort. This reflects the real-world difficulties in estimating effort. In the case study, a triangular probability distribution was created for each requirement, as shown in Table 5, and simulated using Latin Hypercube sampling.
Table 5. Definition of triangular probability functions for effort to deliver requirements

Requirement   Min   Mode   Max
r1            9     10     11
r2            3     4      5
r3            6     7      9
r4            5     6      8
r5            7     9      11
r6            1     2      3
r7            7     9      12
r8            10    12     14
r9            8     9      10
r10           4     5      6
r11           3     4      5
r12           2     3      4
r13           4     5      6
r14           6     7      8
r15           3     5      7
r16           4     6      7
r17           3     5      8
r18           6     7      9
r19           1     3      4
r20           5     8      9
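As a rough illustration of how the triangular effort estimates can be used, the following Python sketch estimates the probability that a hypothetical increment stays within the effort bound; it uses plain Monte Carlo sampling rather than the Latin Hypercube sampling applied in the case study:

import random

# (min, mode, max) effort figures from Table 5 for a hypothetical increment.
effort = {"r2": (3, 4, 5), "r6": (1, 2, 3), "r12": (2, 3, 4),
          "r13": (4, 5, 6), "r14": (6, 7, 8), "r19": (1, 3, 4)}
SIZE = 25        # effort capacity per increment
TRIALS = 10000

within = sum(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in effort.values()) <= SIZE
    for _ in range(TRIALS)
)
print(f"Estimated P(total effort <= {SIZE}) ~ {within / TRIALS:.2f}")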
Table 6. Sample results for release planning under different levels of probability of not exceeding the effort capacity bound (Effort = 25, Risk Referent = 1.1)

Probability   Benefit (6)   Risk (releases 1-4)         Assigned requirements (releases 1-4)
99%           133           0.77, 0.64, 0.38, 0.82      r2 r6 r8 r13 r3 r7 r12 r15 r10 r11 r14 r12 r17 r18 r19 r4
95%           141           0.98, 0.44, 0.41, 0.72      r2 r6 r12 r13 r14 r19 r3 r7 r20 r1 r9 r15 r4 r5 r16
90%           144           0.97, 0.36, 0.82, 0.68      r2 r7 r12 r13 r19 r3 r14 r15 r17 r4 r11 r18 r20 r6 r9 r10 r16
Figure 5. Sample results from sample project release planning – iterations 1 and 2. After iteration 1 the increment r2, r6, r12, r13, r14, r19 is firm; before iteration 2 the new requirements r21-r24 are added, r16 and r20 are deleted, and constraints and estimates are amended as required; EVOLVE then firms the next increment and tentatively plans the remainder, with some requirements omitted from the four planned increments.
This has the effect that the decision maker can plan the releases based on his or her confidence level in the estimates. In practice there is a trade-off between this level of confidence and the benefit achievable, as shown in Table 6. It also means that the adopted release plan has associated with it a percentage indicating the probability that it will adhere to its effort estimates. At each iteration, for a given effort confidence level, a solution is chosen from the L-best. The first increment is then firmed for implementation, and, at this point, any changes to the current set of requirements, their effort estimates, the stakeholder priorities, and the constraints can be made prior to planning the next and future releases. Figure 5 illustrates the process for our case study for the first two iterations. Suppose that the release plan for 95% confidence in Table 6 is selected to be initiated. After the first iteration, the first increment (r2, r6, r12, r13, r14, r19) is firm and will be implemented. After this, r21, r22, r23, and r24 are added, with r16 and r20 being deleted. At this stage the stakeholders may change their priorities, and the effort estimates and the constraints may be revisited. This could be in response to new information gathered from the first iteration. A second increment is then firmed for delivery. In both iterations, future tentative increments are also given so as to provide details of the likely product evolution. In this example the number of planned increments has been limited to four. This means that there are some requirements that are omitted from the plan and do not contribute to the expected benefit.
Conclusion

In this chapter we have reviewed some of the techniques available for prioritisation of requirements for incremental and iterative development of software systems. One category is that of relative comparison techniques: methods that require the requirements to be compared with one another to achieve a prioritisation. Once a prioritisation is achieved, a greedy algorithm taking into account the dependencies between the requirements could be applied to assign requirements to increments for release planning. This approach should work well for small numbers of requirements but is impractical with larger sets. A further approach, illustrated by the SERUM method, can be classified as absolute comparison. Here estimations are made for the factors considered important for prioritising requirements. Again, this could be applied to release planning for incremental and iterative development and has the advantage of producing useful metrics, such as risk assessments, that can be used elsewhere. One problem is in combining these estimations; to date no attempt has been made to do this, the method instead depending on analyst judgement based on the information gathered. Further, a recently developed technique has been described, EVOLVE, that uses Genetic Algorithms to optimise the prioritisation process for a number of stakeholders with differing priorities for requirements. The method takes into account differing stakeholder viewpoints, effort constraints, risk constraints, and dependencies between requirements. A case study has been used to illustrate the working of the method, and its efficacy is backed up by this and by experimentation and industrial feedback (Ruhe & Greer, 2003). Additional empirical work on this has confirmed the stability of the outputs and
usability of the methods (Amandeep, Ruhe, & Stanford, 2004; Ruhe & Greer, 2003). EVOLVE also allows for uncertainty in the effort estimates, allowing decision makers to choose the level of confidence they require for effort estimates. One of the key strengths of EVOLVE is that the iterative planning process assumes changes will occur between phases. This better reflects the reality of software projects. Future work will involve developing the method further to take account of other possible constraints and situations.
References

Amandeep, A., Ruhe, G., & Stanford, M. (2004). Intelligent support for software release planning. Proceedings of the 5th International Conference on Product Focused Software Process Improvement, LNCS 3009, 248-262.
Briand, L.C., Feng, J., & Labiche, Y. (2002). Experimenting with genetic algorithm to devise optimal integration test orders. (Tech. Rep.). Department of Systems and Computer Engineering, Software Quality Engineering Laboratory, Carleton University.
Bubenko, J.A. Jr. (1995). Challenges in requirements engineering. Proceedings of the Second IEEE International Symposium on Requirements Engineering, 160-162.
Carnahan, J., & Simha, R. (2001). Nature's algorithms. IEEE Potentials, 21-24.
Charette, R. (1989). Software engineering risk analysis and management. New York: McGraw-Hill.
Cockburn, A. (2002). Agile software development. NJ: Pearson Education.
Cusamano, M.A., & Yoffie, D.B. (1998). Competing on Internet time: Lessons from Netscape and its battle with Microsoft. New York: The Free Press.
Davis, A.M. (2003). The art of requirements triage. IEEE Computer, 42-49.
Davis, L. (1991). Handbook of genetic algorithms. New York: Van Nostrand Reinhold.
De Gregorio, G. (1999). Enterprise-wide requirements and decision management. Proceedings of the Ninth International Symposium of the International Council on Systems Engineering, 775-782.
Greer, D., & Ruhe, G. (2004). Software release planning: An evolutionary and iterative approach. Information and Software Technology, 46(4), 243-253.
Greer, D., Bustard, D., & Sunazuka, T. (1999a). Effecting and measuring risk reduction in software development. NEC Journal of Research and Development, 40(3), 378-38.
Greer, D., Bustard, D., & Sunazuka, T. (1999b). Prioritisation of system changes using cost-benefit and risk assessments. Proceedings of the Fourth IEEE International Symposium on Requirements Engineering, 180-187.
Hart, S., Hogg, G., & Banerjee, M. (2002). An examination of primary stakeholders' opinions in CRM: Convergence and divergence? Journal of Customer Behaviour, 1(2), 241-267.
Haupt, R.L., & Haupt, S.E. (2000). Optimum population size and mutation rate for a simple real genetic algorithm that optimizes array factors. Applied Computational Electromagnetics Society Journal, 15(2), 94-102.
Holland, J.H. (1975). Adaptation in natural and artificial systems. Ann Arbor: University of Michigan Press.
Jiang, J.J., Klein, G., & Discenza, R. (2002). Perception differences of software success: Provider and user views of system metrics. Journal of Systems and Software, 63(1), 17-27.
Karlsson, J., & Ryan, K. (1997). Prioritizing requirements using a cost-value approach. IEEE Software, 14(5), 67-74.
Karlsson, J., Wohlin, C., & Regnell, B. (1998). An evaluation of methods for prioritizing software requirements. Information and Software Technology, 39, 939-947.
Lin, X.H., Kwok, Y.K., & Lau, V.K.N. (2003). A genetic algorithm based approach to route selection and capacity flow assignment. Computer Communications, 26(9), 961-974.
Maiden, N.A.M., & Rugg, G. (1996). ACRE: Selecting methods for requirements acquisition. Software Engineering Journal, 11(3), 183-192.
Palisade Corporation. (2001). Guide to RISKOptimizer: Simulation optimization for Microsoft Excel Windows version release 1.0. New York: Palisade Systems Inc.
Rajlich, V., & Gosavi, P. (2002). A case study of unanticipated incremental change. Proceedings of the International Conference on Software Maintenance, 442-451.
Reeves, C. (1995). Modern heuristic techniques for combinatorial problems. New York: McGraw-Hill.
Regnell, B., Host, M., Natt och Dag, J., Beremark, P., & Hjelm, T. (2001). An industrial case study on distributed prioritisation in market-driven requirements engineering for packaged software. Requirements Engineering, 6, 51-62.
Rugg, G., & McGeorge, P. (1997). The sorting techniques: A tutorial paper on card sorts, picture sorts and item sorts. Expert Systems, 14(2), 80-93.
Ruhe, G., & Greer, D. (2003). Quantitative studies in software release planning under risk and resource constraints. Proceedings of the IEEE-ACM International Symposium on Empirical Software Engineering, 262-271.
Saaty, T.L. (1980). The analytic hierarchy process. New York: McGraw-Hill.
Appendix

This appendix presents a summary of the EVOLVE genetic algorithm.

Input:
Sseed = initial seed solution
m = population size
cr = crossover rate
mr = mutation rate

Output:
The solution with the highest fitness score from the final population

Variables:
Sn = a solution
P = current population as a set of (solution, fitness score) pairs = {(S1, v1), (S2, v2), …, (Sm, vm)}
Sparent1 = first parent selected for crossover
Sparent2 = second parent selected for crossover
SOffspring = result from the crossover/mutation operation

Functions:
NewPopulation(Sseed, m): Sseed → P, returns a new population of size m.
Evaluate(S) provides a fitness score for a given solution, S.
Select(P) chooses from population P, based on fitness score, a parent for the crossover operation.
Crossover(Si, Sj, cr) performs crossover of solutions Si and Sj at crossover rate cr.
Mutation(Si, mr) performs mutation on solution Si at mutation rate mr.
IsValid(Si) checks validity of solution Si against the user-defined constraints.
BackTrack(SOffspring) = proprietary backtracking operation on a given solution. This backtracks toward the first parent until a valid solution is created.
Cull(P) removes the (m+1)th ranked solution from the population, P.
CheckTermination() is a Boolean function that checks if the user's terminating conditions have been met. This may be when a number of optimizations have been completed, when there is no change in the best fitness score over a given number of optimizations, when a given time has elapsed, or when the user has interrupted the optimization.
Max(P) returns the solution in population P that has the highest fitness score.
Algorithm:

BEGIN
  P := NewPopulation(Sseed, m);
  TerminateFlag := FALSE;
  WHILE NOT (TerminateFlag)
  BEGIN
    Sparent1 := Select(P);
    Sparent2 := Select(P / Sparent1);
    SOffspring := Crossover(Sparent1, Sparent2, cr);
    SOffspring := Mutation(SOffspring, mr);
    IF NOT IsValid(SOffspring) THEN BackTrack(SOffspring);
    IF IsValid(SOffspring)
    BEGIN
      P := P ∪ {(SOffspring, Evaluate(SOffspring))};
      Cull(P);
    END;
    TerminateFlag := CheckTermination();
  END;
  RETURN(Max(P));
END.
Chapter VIII
A Quality Model for Requirements Management Tools

Juan Pablo Carvallo, Universitat Politècnica de Catalunya (UPC), Spain
Xavier Franch, Universitat Politècnica de Catalunya (UPC), Spain
Carme Quer, Universitat Politècnica de Catalunya (UPC), Spain
Abstract

This chapter proposes the use of quality models to describe the quality of requirements management tools. We present the COSTUME (COmposite SofTware system qUality Model dEvelopment) method, aimed at building ISO/IEC 9126-1-compliant quality models, and then we apply it to the case of requirements management. We emphasize the need to use UML class diagrams to represent the knowledge about the domain prior to the quality model construction, and also use actor-based models to represent the dependencies of requirements management tools with their environment and thus better comprehend the implications of quality factors. We show the applicability of the quality model in a real experience of selection of a requirements management tool.
Introduction

The activities embraced by the requirements engineering discipline (Kotonya & Sommerville, 1998; Robertson & Robertson, 1999), such as the capture, analysis, validation, verification, and maintenance of software systems requirements, often turn out to be very complex to carry out, especially in the case of medium- and large-scale projects. Among other factors, their success depends on the following abilities:

1. The ability to deal with a large number of requirements that are related in many ways.
2. The ability to guarantee, to an acceptable extent, that the requirements are complete and consistent.
3. The ability to organize the requirements with respect to different criteria.
4. The ability to maintain and manage several versions of requirements during the process.
Requirements management tools (RMT) provide computer-based support to overcome the complexities that stem from those activities. RMTs provide functionalities to support the abilities mentioned above, such as requirements capture and classification, traceability, version management, and generation of a requirements document. Many RMTs exist in the market nowadays (for example, Rational RequisitePro, Telelogic DOORS, Compuware Reconcile and IrQA from TCP Sistemas e Ingeniería, and so forth), usually available in the form of COTS (Commercial Off-The-Shelf) components. They differ, among other things, in the requirements capture strategies that they offer, in the way in which they structure and relate requirements, and in the additional components and resources that they require to operate. As is true in other COTS domains, selecting the most appropriate RMT for an organization or even for a particular project can be difficult. For this reason an organization selecting an RMT should aim at defining or using a framework in which RMTs could be evaluated with respect to its specific needs. This framework should embrace different kinds of features that affect software evaluation, such as managerial, political, and quality factors, the last of which are the focus of our work. In this chapter we present a framework based on the use of quality models for describing the quality aspects of RMTs that will act as software evaluation criteria. In the ISO standard 14598-1, a quality model is defined as "the set of characteristics and related relationships that provides the base for specifying quality requirements and evaluating quality" (ISO, 1999). A widespread standard on quality models is the ISO/IEC 9126-1 (ISO, 2001). Quality models compliant with this standard present three different kinds of quality factors, namely characteristics, subcharacteristics, and attributes, organized as a hierarchy. The standard itself fixes a set of six characteristics (functionality, reliability, usability, efficiency, maintainability, and portability), decomposed into a first level of subcharacteristics (such as security, interoperability, maturity, and so forth).
Figure 1. Using a quality model in the selection of a requirements management tool. Quality requirements and knowledge of the RMT domain feed the quality model, which yields formalized requirements; these, together with the RMT descriptions, are then used in the negotiation during RMT selection.
To complete a quality model applicable in a given COTS domain, it becomes necessary to further decompose these subcharacteristics, to define metrics for evaluating the measurable quality factors in which they will be directly or indirectly decomposed, and to define relationships among these quality factors. The standard is not precise at some points, and it therefore becomes necessary to take some decisions about the organization of an ISO/IEC-based quality model; see Botella, Burgués, Carvallo, Franch, Grau, Marco, & Quer (2004) for details. These decisions can be reflected in a conceptual model to represent a quality framework, as done, for instance, in Kitchenham, Hugues, & Linkman (2001). Quality models are a means to obtain exhaustive and structured descriptions of COTS domains such as RMTs. Once built, RMTs may be evaluated with respect to the quality factors included therein. During the selection of an RMT, the involved company will state its quality requirements over the tool with respect to the quality model, and the classical requirement negotiation process (Alves & Finkelstein, 2003) will be used to yield the final selection of the most appropriate one (see Figure 1). The rest of the chapter is organized as follows. The next section introduces the COSTUME method for developing quality models. The three sections after that are devoted to the application of the COSTUME activities to the particular case of RMTs. Then we apply the quality model to a real RMT selection process in which we have been recently involved. We provide in the final section the conclusions and comparison with related work.
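As a minimal illustration (Python; the factor and metric names are invented examples, not the quality model built later in this chapter), such a hierarchy of characteristics, subcharacteristics, and measurable attributes can be recorded and traversed as follows:

# Nested dictionaries: characteristic -> subcharacteristic -> attribute -> metric.
quality_model = {
    "Functionality": {
        "Interoperability": {
            "Supported import/export formats": "set of formats",
        },
        "Security": {
            "Per-project user/group access control": "yes/no",
        },
    },
    "Usability": {
        "Operability": {
            "Effort to classify a requirement": "time per requirement",
        },
    },
}

def walk(model, depth=0):
    # Print the hierarchy, showing the metric of each measurable attribute.
    for name, value in model.items():
        if isinstance(value, dict):
            print("  " * depth + name)
            walk(value, depth + 1)
        else:
            print("  " * depth + f"{name} [metric: {value}]")

walk(quality_model)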
Costume: A Method for the Construction of Software Quality Models

COSTUME (COmposite SofTware system qUality Model dEvelopment) is a method specifically suited for developing quality models for systems composed of several interconnected COTS. It comprises four activities (Figure 2).
Activity 1. Analyzing the environment of the software system. The organizational elements (that is, the environment) that surround the system are identified and modeled as actors, and the relationships among the system and this environment are identified. This activity helps to understand the role that the software system plays in the target organization and makes explicit also what the system needs from its environment. We use an actor-based model (Yu, 1997) (to be described in the next section) to represent the results of this activity.
Figure 2. The COSTUME method: (1) analysis of the environment; (2) decomposition into subsystems; (3) construction of individual quality models; (4) composition into a system quality model.
Activity 2. Decomposing the software system into actors. The system is decomposed into individual elements, each offering well-defined services. These elements are also modeled as interconnected actors. Services are identified starting from the results obtained in the previous activity, that is, the needs posed by the environment onto the system and also exploring the COTS market for discovering what type of products can cover these services. One of these actors is usually the core of the system, providing most of the services, while others provide specific support for some other services. We remark that COTS products available in the market may cover the services of more than one actor at a time; see Franch & Maiden (2003) for more details. Activities 1 and 2 are run in parallel, as proposed in most COTS-based life cycles (Comella-Dorda, Dean, Morris, & Oberndorf, 2002; Maiden & Ncube, 1998).
Activity 3. Building individual quality models for the actors. We build an ISO/IEC 9126-1-compliant quality model for each individual actor applying the method presented in Franch & Carvallo (2002, 2003). The quality factors relevant for the quality model are determined with the help of the artifacts obtained in the two previous activities. In this activity, it becomes crucial to be aware of what the quality model is being built for. If it is intended to support just a single project, we build throw-away quality models, in which not all the quality factors are explored but just those directly related with the requirements of the system; in fact, these factors are refined up to a level of detail that allows evaluating the requirements. On the contrary we may be interested in building reusable quality models, and in this case we aim at including as many quality factors as found in the COTS market, but we do not intend to refine them too much, building just the upper levels of the quality model hierarchy. Reusable quality models are refined in a particular project using its requirements.
Activity 4. Combining the individual quality models. We build an ISO/IEC 9126-1-compliant quality model for the whole system as the combination of the individual ones, obtaining therefore a single and uniform vision of the system quality. We have identified some combination patterns that can be used over and over during this combination activity.
In the rest of the chapter we will apply the first two activities of the COSTUME method to identify the environment and actors of requirements management systems. Among the identified actors, we will focus on RMTs, the core of such a system. We will apply activity 3 to this particular actor to obtain a reusable quality model. We will not apply activity 3 to the other actors, nor activity 4, for the sake of brevity. Lastly, we will use the reusable quality model in the context of a particular selection process, refining therefore some of its quality factors.
Analyzing the Environment of the Requirements Management System

The first activity in COSTUME aims at determining the environment of the software system, in this case a Requirements Management System (RMS). Since we are using an actor-based approach, the goal of this analysis is to identify the actors that will interact with such a type of system. Four different types of actors may be distinguished: human, representing different types of users; organizations, providing informational or logical resources to the system; hardware resources, as mechanical devices governed by the system or providing data to the system; and software, representing other software systems, which provide data to or collect data from the system. The actors for an RMS are detailed in Table 1. Apart from the RMS itself, this system has just human actors. Other systems we have analyzed present actors of the other types (scanners in document management, mail servers in mail client tools, and so forth). Once the actors are identified, the dependencies among them may be represented using an actor-based model. Specifically, we use Yu's (1997) i* approach to actor-based modeling, and, in particular, the Strategic Dependency (SD) model. In an SD dependency relationship, a depender, the actor that "wants," depends on a dependee, the actor that has the "ability" to give, for a dependum, the object of the dependency. There are four types of dependencies: goal (represented in i* by an ellipse), soft goal (by a curly shape; they stand for non-functional requirements), resource (by a rectangle), and task (by a hexagon).
Table 1. Environmental actors of a requirements management system (Actor, Abb., Type: Goal)
• Requirements Management System (RMS), Software: Provides mechanisms for defining and managing requirements for software development projects.
• Requirements Engineer (RE), Human: Defines requirements and relationships among them for a project.
• Requirements Engineers Head (REH), Human: Defines the rationale behind requirements in a project.
• Software Engineer (SE), Human: Develops a software system from the requirements defined for a project.
• Project Manager (PM), Human: Defines and manages projects and users.
There are some methods and processes for building i* models. In our case, we propose the following steps: •
First we identify which goals of the system’s actors depend on the system, and we represent these relationships by goal dependencies.
•
Then we identify which fundamental resources are central to these goal dependencies and we model them with resource dependencies. Note that resources may be physical or informational.
•
Next we analyze each goal dependency on the system from the perspective of the ISO/IEC 9126-1 subcharacteristics. For each goal we include in the i* model a soft goal for every subcharacteristic considered crucial for that goal. We may even decide to substitute the goal by this soft goal when we consider that satisfying the goal without satisfying the soft goal is not admissible.
We tend to avoid task dependencies in the model, since they are rather prescriptive: a task dependency represents one particular way of attaining a goal, and it can be considered a detailed description of how to accomplish that goal.
Figure 3. SD model for the environment of a Requirements Management System. A useful mnemonic for reading a dependency link is to see each half-circle as a letter "D": the direction the D faces denotes the direction of the dependency, with the depender on the left and the dependee on the right as the letter is read. (Legend: RMS = Requirements Management System; RE = Requirements Engineer; REH = Requirements Engineers Head; SE = Software Engineer; PM = Project Manager. Dependums shown include Manage Requirements and Dependencies, Flexible Capture of Requirements, Support Requirements Traceability, Rich Query Processing and Report Generation, Manage Projects and Project Versions, Manage Users/Groups per Project, Manage Versions of Requirements, Allow Remote and Concurrent Work, Provide Groupware Support, Allow Specification/Design of System Elements, and the resources Requirement, Requirements Documents, Requirements Structuring Elements, Query, Report, and System Elements.)
In Figure 3 we find the SD model for the environment of an RMS. Goals have to do with the main functionalities that the RMS offers, namely requirements management, query and report facilities, requirements capture, and so forth. They give rise to some fundamental resources, most remarkably the notion of requirement itself. Concerning soft goals, there are several: for instance, requirements capture is of interest only if it is flexible enough; therefore it appears directly as a soft goal. We highlight that, since we are only interested in the part of the environment that involves the RMS, dependencies among actors of the environment are not of interest; nevertheless, we use the is-a relationship offered by i* to relate two of them (RE and REH).
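The SD model can also be captured in a lightweight data structure for later processing, for example to derive quality factors from the goal and soft goal dependencies. The following Python sketch is ours, not part of COSTUME or i*; the depender/dependee assignments are illustrative, and only a few of the dependums of Figure 3 are shown.

```python
from dataclasses import dataclass
from enum import Enum

class DependumType(Enum):
    GOAL = "goal"           # ellipse in i*
    SOFT_GOAL = "softgoal"  # curly shape; stands for non-functional requirements
    RESOURCE = "resource"   # rectangle
    TASK = "task"           # hexagon (avoided here, too prescriptive)

@dataclass(frozen=True)
class Dependency:
    depender: str   # the actor that "wants"
    dependee: str   # the actor with the "ability" to give
    dependum: str   # the object of the dependency
    kind: DependumType

# Illustrative subset of the dependencies of Figure 3
SD_MODEL = [
    Dependency("RE", "RMS", "Manage Requirements and Dependencies", DependumType.GOAL),
    Dependency("RE", "RMS", "Flexible Capture of Requirements", DependumType.SOFT_GOAL),
    Dependency("SE", "RMS", "Support Requirements Traceability", DependumType.GOAL),
    Dependency("PM", "RMS", "Manage Projects and Project Versions", DependumType.GOAL),
    Dependency("RMS", "RE", "Requirements Documents", DependumType.RESOURCE),
]

def goals_on(actor: str, model=SD_MODEL):
    """Goal and soft goal dependencies in which the given actor is the dependee."""
    return [d for d in model if d.dependee == actor
            and d.kind in (DependumType.GOAL, DependumType.SOFT_GOAL)]

if __name__ == "__main__":
    for dep in goals_on("RMS"):
        print(f"{dep.depender} depends on RMS for: {dep.dependum} ({dep.kind.value})")
```

Listing the goal dependencies on the RMS in this way is, in our sketch, the starting point for the subcharacteristic decomposition discussed later in the chapter.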
Decomposing the Requirements Management System into Actors

The decomposition of the system into actors is done starting from the needs posed by the environment onto the system (identified in the i* SD environmental model), while at the same time exploring the COTS market to discover what types of COTS components can cover these (and perhaps other, unexpected) needs. This decomposition, of course, includes the RMT itself as an actor, in fact the core one, whose attainment of goals depends on some of the services provided by the rest of the actors. In Table 2 we show a list of actors into which an RMS may be decomposed, grouped by category. We do not include the core actor of the system, the RMT itself. The third column of the table contains the environment dependencies that point out the existence of the actors. These dependencies, together with the functionalities of the COTS components that could eventually play the roles of these actors, generate some dependencies from the RMT actor to the others, which may be found in the fourth column. These dependencies are modeled in i*, too (see Figure 4). For instance, we can combine the environment dependency Manage Requirements and Dependencies with the observation of RMTs available in the market to deduce the need for a DBMS to Provide Persistent Storage of Requirements and Dependency Relationships. As a different example, the resource dependency User is a kind of visionary one: currently RMTs are not so tightly integrated with directory services as to import/export information about users, but current trends allow us to predict that in the future this will be the case. Note that soft goal dependencies such as Flexible Capture of Requirements have turned into goal dependencies, because the non-functional ability (flexibility in this case) is the responsibility of the RMT itself, not of the tool being used.
Table 2. Actors inside the Requirements Management System (Category, Actor: Environment dependencies / Inter-actor dependencies stemming from RMT)
• Software System Development, CASE Tools (CASE): Allow Specification/Design of System Elements; Support Traceability of Requirements versus System Elements / Definition of System Elements; Traceability of Changes in Requirements versus System Elements; System Element.
• Resources, Directory Services (DS): Manage Users/Groups per Project / Authentication of Users; User.
• Data Management, Data Base Management Systems (DBMS): Manage Requirements and Dependencies; Manage Versions of Requirements; Manage Projects and Project Versions / Persistent Storage of Requirements and Dependency Relationships; Persistent Storage of Requirement and Project Versions.
• Office Suites, Word Processor Tools (WPT): Rich Query Processing and Report Generation; Flexible Capture of Requirements / Generation of Reports; Capture of Requirements from an External File.
• Client, Web Browser (WB): Allow Remote and Concurrent Work / Web Interface Offered.
• Groupware, Chatting Tools (CT): Provide Groupware Support / Discussion among Requirement Engineers.
Figure 4. SD model for the dependencies among Requirements Management System actors. (Legend: RMT = Requirements Management Tool; CASE = CASE Tools; DS = Directory Services; DBMS = Data Base Management Systems; WPT = Word Processor Tools; WB = Web Browser; CT = Chatting Tools. The dependums shown are those of the fourth column of Table 2, for example Definition of System Elements and System Element towards CASE, Authentication of Users and User towards DS, Persistent Storage of Requirements and Dependency Relationships towards DBMS, Capture of Requirements from an External File towards WPT, Web Interface Offered towards WB, and Discussion among Requirements Engineers towards CT.)
As mentioned in the section on COSTUME, actors are not supposed to be mapped directly onto individual COTS components. It may be the case that a single COTS component covers goals from more than one actor, that an actor is covered by two COTS components working together, or even that the goals of an actor are covered by two COTS components at the same time for survivability reasons. In Franch & Maiden (2003) we have explored this issue. The fundamental point about actors is that they are the relevant unit that the quality models are to be built for.
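A simple way to make this many-to-many mapping explicit is to record, for each candidate COTS product, which actors of Table 2 it covers, and then check that a given configuration covers every actor. The sketch below is only our illustration of the idea; the product names and coverage data are invented.

```python
# Hypothetical coverage data: candidate COTS products and the system
# actors (abbreviations from Table 2) whose goals each one covers.
COVERAGE = {
    "ProductA": {"RMT", "WPT"},           # one product covering two actors
    "ProductB": {"DBMS"},
    "ProductC": {"DBMS"},                 # redundant DBMS, e.g., for survivability
    "ProductD": {"CASE", "DS", "WB", "CT"},
}

REQUIRED_ACTORS = {"RMT", "CASE", "DS", "DBMS", "WPT", "WB", "CT"}

def uncovered_actors(selection, coverage=COVERAGE, required=REQUIRED_ACTORS):
    """Actors whose goals are not covered by any product in the selection."""
    covered = set().union(*(coverage[p] for p in selection)) if selection else set()
    return required - covered

print(uncovered_actors(["ProductA", "ProductB"]))               # {'CASE', 'DS', 'WB', 'CT'}
print(uncovered_actors(["ProductA", "ProductC", "ProductD"]))   # set(): full coverage
```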
An ISO/IEC 9126-1-Compliant Quality Model for Requirements Management Tools

In Franch & Carvallo (2002, 2003) we have proposed a seven-step method to build ISO/IEC 9126-1-compliant software quality models. The first step is a domain-analysis activity, which yields as a result a UML class diagram. The other six steps refine the starting ISO/IEC 9126-1 quality model by adding new quality factors (characteristics, subcharacteristics, and attributes), defining their relationships, and proposing metrics for them.
Analyzing the Domain of Requirements Management Tools

The main goal of this analysis is to gain more knowledge about the domain. Other benefits of the analysis and the resulting class diagram are, on the one hand, to fix the lack of uniformity in the terminology used in COTS domains and, on the other hand, the eventual use of the classes, attributes, and associations appearing in the diagram to provide some rationale for identifying the quality factors of the quality model. In the case of RMTs, due to our previous knowledge of the domain, we have not been as confused by the differences in the terminology used as in previous experiences. During this analysis the knowledge acquired in the two previous COSTUME activities is completed with some classical tasks (literature reviewing, web page analysis, and so forth). Then the most relevant concepts and their relationships are represented in a clear way by means of a UML class diagram (Rumbaugh, Jacobson, & Booch, 1999). Finally the class diagram may be complemented with other structured or written information. Figures 5, 6, and 7 show the UML class diagram for the RMT case. In Figure 5 note that resources in the i* SD environmental diagram such as Requirement, Project, and Dependency have each become a class. Many others have been introduced. Requirements are related to Attributes and give Values to them. Each Requirement is of one single Requirement Type. Requirements are related by Dependencies, which are of one Dependency Type (decomposition, refinement, conflict, synergy, and so forth).
Figure 5. UML class diagram for the requirements management tools domain: requirements. (The diagram relates the classes Project, Defined Element, Requirement, System Element, Dependency, Dependency Type, Requirement Type, Attribute, Value, and Possible Value through associations such as subproject-of, belongs-to, version-of, of-a, in-terms-of, sense-for-requirements-of, and can-take.) Constraints:
- The version of a requirement belongs-to its same project.
- A requirement must be expressed in-terms-of attributes that are defined-for the project which the requirement belongs-to, or that are bound-to the template from which the project has been defined-from.
- A requirement may be expressed in-terms-of attributes that have sense-for-the-requirements of its requirement type.
- The value of an attribute in-terms-of which a requirement is expressed must be one of the possible values that the attribute can-take.
Figure 6. UML class diagram for the requirements management tools domain: structuring elements. (Structuring Elements, which generalize Requirement Types, Dependency Types, and Attributes, are partitioned into Predefined and Particular; Predefined structuring elements are bound-to Templates, Particular ones are defined-for Projects, and each Project is defined-from a Template.)
Figure 7. UML class diagram for the requirements management tools domain: entities defined. (Users own Entities, which generalize Templates, Projects, Requirement Types, Dependency Types, Attributes, Requirements, and Dependencies.)
System Elements are also related by Dependencies; since system elements are defined by the CASE tools (see Figure 4), their attributes will not be defined here but in the UML diagram for that domain (that is why the class name appears in italics in the figure). Both Requirements and System Elements are bound to Projects. Versions of Projects and Requirements are supported. We show some representative integrity constraints of the model with Figure 5. In Figure 6 we distinguish between Predefined and Particular Requirement Types, Dependency Types, and Attributes; we generalize the instances of these three classes as Structuring Elements. Predefined structuring elements are bound to project Templates, while Particular structuring elements are bound to particular Projects, which are defined from Templates. Last, in Figure 7, we show that the most relevant concepts are owned by Users; again the italics show that the concept of user is external to the RMT (we consider it imported from the directory service).
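To make the class diagram concrete, the following sketch (ours, not part of the chapter's tooling) encodes a few of the classes of Figure 5 together with one of its integrity constraints: the value given to an attribute must be one of the possible values the attribute can take. Class and attribute names follow the diagram; the example data is invented.

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    possible_values: set            # "can-take" association

@dataclass
class RequirementType:
    name: str

@dataclass
class Requirement:
    identifier: str
    req_type: RequirementType       # each requirement is of one single type
    values: dict = field(default_factory=dict)   # attribute name -> value ("in-terms-of")

    def set_value(self, attribute: Attribute, value):
        # Integrity constraint from Figure 5: the value must be one of the
        # possible values that the attribute can take.
        if value not in attribute.possible_values:
            raise ValueError(f"{value!r} is not a possible value of {attribute.name}")
        self.values[attribute.name] = value

priority = Attribute("Priority", {"high", "medium", "low"})
functional = RequirementType("Functional")
r1 = Requirement("R-001", functional)
r1.set_value(priority, "high")      # accepted
# r1.set_value(priority, "urgent")  # would raise ValueError
```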
Constructing the Quality Model for the Requirements Management Tools Domain

For the sake of reducing the cost of constructing new quality models, we take as a starting point not the original ISO/IEC 9126-1 hierarchy but an extended ISO/IEC 9126-1 quality model that consists of roughly 50 quality factors, organized into a three-level hierarchy, that have appeared repeatedly in our previous quality model building experiences in different domains. During the construction of a quality model for a new domain, quality factors specific to this domain will be added to the starting quality model and, if necessary, others will be redefined or eliminated. In order to help in the identification of new quality factors, the artifacts obtained in the previous steps are used. In the extended ISO/IEC 9126-1 quality model, the Suitability subcharacteristic, present in the original ISO/IEC 9126-1 model, is not decomposed, because its refinement is mostly domain-dependent. It is possible to decompose it into a set of subcharacteristics corresponding to some of the goal and soft goal dependencies identified in the i* SD environment model. Table 3 presents an extract of this decomposition and Table 4 shows the decomposition of one of the new subcharacteristics appearing in Table 3. Note that goals and soft goals that depend absolutely on capabilities offered by the dependee (such as requirements capture) do not appear in these decompositions.
Table 3. Decomposition of the Suitability subcharacteristic for requirements management tools (No., Subcharacteristic: Definition)
1. Requirements Management: Options related to the definition and management of requirements.
2. Requirements Classification: Options related to the structuring of requirements into a project.
3. Requirements Relationships: Options related to the definition of relations among requirements.
4. Requirements Traceability: Options related to the traceability relations among requirements and with system elements.
5. Concurrent Work Support: Support offered by the tool for users' concurrent work.
6. Versioning and History Control: Options related to the management of requirements and baseline versioning.
7. Queries and Searches: Support offered by the tool to perform basic and advanced queries and searches of requirements.
8. Reports and Views: Options related to the generation of reports and specification documents.
Table 4. Decomposition of the Requirements Management subcharacteristic for requirements management tools (No., Subcharacteristic: Definition)
1. Requirements Input: Options related to the input of requirements through the RMT itself.
2. Requirements Deletion: Options related to the removal of requirements.
3. Requirements Edition: Options related to the modification of existing requirements.
4. Requirements Numbering: Options related to numbering capabilities.
5. Requirements Attributes: Options related to the definition of attributes for requirements.
6. Requirements Approval and Rejection: Options related to the approval and rejection of requirements.
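The decomposition shown in Tables 3 and 4 lends itself to a simple tree representation. The sketch below is our own illustration of an ISO/IEC 9126-1-style hierarchy populated with a small subset of the subcharacteristics from the tables; it is not the authors' tool support.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QualityFactor:
    """A node of the quality model: characteristic, subcharacteristic, or attribute."""
    name: str
    definition: str = ""
    children: List["QualityFactor"] = field(default_factory=list)

    def add(self, child: "QualityFactor") -> "QualityFactor":
        self.children.append(child)
        return child

    def show(self, indent: int = 0):
        print("  " * indent + self.name)
        for child in self.children:
            child.show(indent + 1)

# Functionality -> Suitability, decomposed as in Tables 3 and 4 (extract only)
functionality = QualityFactor("Functionality")
suitability = functionality.add(QualityFactor("Suitability"))
req_mgmt = suitability.add(QualityFactor(
    "Requirements Management",
    "Options related to the definition and management of requirements"))
suitability.add(QualityFactor("Requirements Traceability"))
req_mgmt.add(QualityFactor("Requirements Input"))
req_mgmt.add(QualityFactor("Requirements Attributes"))

functionality.show()
```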
Concerning attributes, it is clear that we cannot aim at including all of them in the quality model (Dromey, 1996); however, it is certainly possible to identify a set of the most relevant ones. Many of the attributes to be added to the quality model may be inferred from the UML class diagram (that is, from the classes, attributes, and associations therein), as it contains all the relevant concepts identified in the sources consulted when studying the domain. Because of the iterative nature of the method, the model may be refined even during the evaluation of the products, up to the point at which adding new attributes would not make a difference in COTS component assessment. The use of derived attributes (attributes whose values depend on other attributes) at several levels is encouraged, as it makes the model more structured and reusable. Table 5 presents an extract of the attributes included in one of the subcharacteristics identified in Table 4, namely Requirements Attributes.
Table 5. Extract of the attributes for the Requirements Attributes subcharacteristic of requirements management tools (Attribute: Definition)
1. Requirements Attributes: Capabilities for requirements-related attributes.
  1.1 Default Requirements Attributes: Set of default attributes related to requirements provided by the tool.
  1.2 User-defined Requirements Attributes: Possibility to define new requirements attributes.
  1.3 Linkage to Requirement Types: Possibility to bind requirements attributes to requirement types.
2. Types of Attributes: Capabilities for types of attributes.
  2.1 Default Types of Attributes: Set of default types of attributes provided by the tool.
  2.2 User-defined Types of Attributes: Possibility to define new types of attributes.
Metrics for some quality attributes may be difficult to define; however, they are the only way to obtain an exhaustive and fully useful quality model that can be used in the evaluation of COTS components. Some metrics can be as simple as Boolean (for example, attributes 1.2 and 1.3 in Table 5), numerical, or label values, while some other attributes require a more complex representation, yielding structured metrics such as sets (for example, attributes 1.1 and 2.1 in Table 5) or functions. Metrics for basic attributes are usually objective and quantitative. For derived attributes there is more diversity: for instance, the metric for attribute 1 in Table 5 is likely to be subjective and qualitative. Another important aspect when building quality models is the identification of relationships among attributes. The model becomes more exhaustive and, as an additional benefit, user requirements may get implicitly extended once they have been expressed in terms of quality attributes. Many types of relationships between attributes can be defined, for example collaboration, damage, and dependency. An example of a dependency relationship between attributes in the RMT domain appears between the attribute Traceability Matrix and the attributes Requirements Relationships and Requirements Hierarchy, because traceability matrices cannot be built if there is no way to define the relationships among the requirements.
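Metrics of different kinds (Boolean, label, or set-valued) and the relationships among attributes can both be recorded explicitly. The sketch below is only our illustration of that idea; the attribute names come from Table 5 and from the dependency example in the text, while the values themselves are invented.

```python
# Illustrative metric values for some attributes (Table 5 and the text).
metrics = {
    "User-defined Requirements Attributes": True,                 # Boolean metric
    "Linkage to Requirement Types": True,                         # Boolean metric
    "Default Requirements Attributes": {"status", "priority"},    # set-valued metric
    "Requirements Relationships": False,
    "Requirements Hierarchy": True,
}

# Dependency relationships among attributes: the first attribute only makes
# sense if all the attributes it depends on are supported.
dependencies = {
    "Traceability Matrix": ["Requirements Relationships", "Requirements Hierarchy"],
}

def satisfiable(attribute: str) -> bool:
    """A dependent attribute cannot be offered if any attribute it depends on is absent."""
    return all(bool(metrics.get(dep)) for dep in dependencies.get(attribute, []))

print(satisfiable("Traceability Matrix"))   # False: Requirements Relationships is absent
```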
A Real Case of Selection of a Requirements Management Tool

PRISMA is an ongoing project taking place at the Universitat Politècnica de Catalunya (UPC) aiming at replacing a legacy system that supports the management of academic data and records. GESSI (the Spanish acronym of Software Engineering Group for Information Systems), the research group we belong to, is participating in some tasks, among which we mention participation in software quality assessment based on CMM level 2 Key Process Areas (KPA). One of these KPAs is Requirements Management. The goals of this KPA are to control requirements in such a way that a baseline for the software engineering and management activities can be established, and to make sure that software plans, products, and activities are kept consistent with the requirements. As mentioned in the introduction, complex projects such as PRISMA have many requirements to deal with; therefore the need for a tool to support their management is well justified. GESSI decided, therefore, to select an RMT; in fact, this selection was the trigger that led us to build the RMT quality model. The process to select the tool consisted of seven tasks:
1.
Construction of a quality model for the domain. To complete our knowledge of the domain we decided to build first the quality model for the prospective composite system using COSTUME as explained in the previous sections. We built, therefore, an SD model, a UML class diagram, and individual quality models. The individual quality models for the software actors were not very detailed (for example, metrics were not considered); just the higher levels of the hierarchy were completed. The resulting quality model was used and further refined in the next steps.
2.
Elicitation of requirements and determination of evaluation criteria. The quality model was used as a guideline. As each of the sections of the quality model was reviewed, older requirements were made clearer, and new requirements emerged in a very structured way, resulting from the understanding of the capabilities that products in the domain have to offer. Also the decomposition of the RMT system into actors (see the section on decomposing RMS into actors) came in handy in some situations. As an example the fact that an approved requirements specification document was already available (because CMM level 2 compliance was decided once the project was issued) introduced a risk. The people responsible for the elaboration of this document were uncomfortable with the idea of redoing all their work because of the new tool. Thus they were happy to discover that some kind of integration with external word processors to capture requirements was possible. As a result new requirements were proposed, such as the provision of some sort of semi-automatic capture of requirements from most popular word processors or the possibility to import from XML or RTF formats. During this elicitation process evaluation criteria were identified and the corresponding metrics were defined.
3.
Selection of an initial set of RMTs to be evaluated. Based on the previous screening of the members of GESSI, and also some market exploration made by the PRISMA personnel, we selected five initial candidates. The availability of local support and the existence of previous successful installations in Spanish companies, together with coverage of some high-level suitability attributes, were used to perform this first screening.
4.
Hands-on experimentation of RMTs. Demo versions of the products were installed and tested more thoroughly than before. Here again the quality model became a
cornerstone of the task. After an initial familiarization with the interface, each tool was explored to identify the factors related to the attributes in the quality model. Products were also tested to find out the degree of coverage of the dependencies with COTS that were already part of the PRISMA architecture. 5.
Assessment of the RMTs using the quality model. Once the requirements and the RMT evaluations were input into the quality model, products were compared with each other and with respect to the requirements (a brief illustrative sketch of this comparison is given after the task list). Two of the candidates were discarded for non-technical reasons (mainly poor supplier support during interaction). A third candidate was discarded because of its poor requirements capture capabilities. The two remaining products were successfully evaluated with respect to all of the attributes in the quality model. Their mismatches with respect to the requirements were identified and documented.
6.
Conclusions and proposal of the final candidates. A final report containing a summary of the advantages and disadvantages of each of the products, and their evaluation with respect to the quality model and to the requirements, was addressed to those responsible for the final selection. This document included an invitation to a final demonstration of the two finalist candidates.
7.
Demonstration of the final RMTs and selection of the final one. Again the quality model was used as a guideline to present the benefits and disadvantages of each RMT with respect to the quality factors and the requirements. The final selection was made based on the cost/benefit relationship offered by the products.
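Task 5 boils down to comparing product evaluations against the requirements through the quality model. The following sketch is our simplified illustration of that comparison, using a weighted coverage score over Boolean evaluations; it is not the procedure actually used in PRISMA, and the tool names, attributes, and weights are invented.

```python
# Hypothetical evaluations: for each candidate RMT, whether it satisfies each
# quality attribute relevant to the elicited requirements.
EVALUATIONS = {
    "Tool1": {"Requirements Input": True,  "Capture from External File": True,  "Web Interface": False},
    "Tool2": {"Requirements Input": True,  "Capture from External File": False, "Web Interface": True},
}

# Weights express how important each attribute is for the project requirements.
WEIGHTS = {"Requirements Input": 3, "Capture from External File": 2, "Web Interface": 1}

def score(tool: str) -> float:
    """Fraction of the total weight covered by the attributes the tool satisfies."""
    evaluation = EVALUATIONS[tool]
    total = sum(WEIGHTS.values())
    achieved = sum(w for attr, w in WEIGHTS.items() if evaluation.get(attr, False))
    return achieved / total

for tool in EVALUATIONS:
    print(tool, f"{score(tool):.2f}")   # Tool1 0.83, Tool2 0.67
```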
Conclusion

Requirements management tools play an increasingly important role in requirements engineering, especially in large software projects. It becomes necessary to know in depth which services these kinds of tools may offer to their users, and also which criteria may be used to compare the different alternatives available in the market. In this chapter we have presented COSTUME, a method aimed at representing the quality of software systems as a means to attain these goals, and we have applied it to the domain of RMTs. We think that the main contributions of this chapter are: •
We have provided (an overview of) the description of RMT quality factors. The resulting model can be used for two purposes: first, to know exactly the type of functionalities and the quality of service that RMTs can be expected to provide; second, to improve RMT selection processes, that is, to make these selection processes less time-consuming and less error-prone.
•
The resulting model keeps clearly separated those functionalities and quality of service issues that are the responsibility of the RMTs themselves and those for which the RMTs depend on other software tools. We have identified the most relevant of these other tools, namely CASE tools, directory services, word processors, and databases, and we have made explicit the way in which RMTs depend on
them. From our point of view, this is a crucial difference with most other existing approaches, which rely on the ISO/IEC 9126-1 interoperability subcharacteristic for keeping track of these dependencies, making them obscure and difficult to trace. •
The description of quality may be used to drive selection processes, as we have illustrated with a particular case study. We note that building the quality model in advance, at least up to the most important quality factors, helps in driving the requirements elicitation process. Also, the explicit distinction of RMTs from other actors points out clearly which responsibilities are really inherent to this type of tool and which others are fulfilled by other tools. Last, the evaluation of candidate COTS components is also driven by the quality factors identified in the quality model of RMTs.
•
The method presented here, COSTUME, is not bound to RMTs; it can be applied to other software domains.
Other proposals of quality models for RMTs exist. Probably the most relevant is the one proposed by INCOSE (2003). Although some commonality with our approach exists (for example, both define a multilevel hierarchy of attributes, and both aim at having a common framework and criteria to evaluate the products), many significant differences remain: •
The starting catalogue. INCOSE uses an ad-hoc hierarchy; it is not based on any existing standard. Our approach is based on the commonly used and widespread ISO/IEC 9126-1 standard.
•
The hierarchy decomposition. INCOSE proposes a three-level taxonomy with a top level of 14 attributes and their decomposition into two lower levels. Our approach further decomposes the ISO/IEC 9126-1 standard with two additional levels of subcharacteristics and up to four levels of attributes. This leads to a very structured, easy-to-tailor, and thus reusable hierarchy. Furthermore, the INCOSE approach is mostly concerned with the functional aspects of the tools, missing other important aspects.
•
The attributes. Although many attributes are common to both approaches, the attributes included in the INCOSE catalogue have been identified mostly on the basis of common sense. There is no rationale (at least none provided) for identifying and later classifying the attributes. In our approach we have built UML and i* models of the domain and its context, which have been used as the rationale for identifying the attributes and their categorization in the model.
•
The metrics. INCOSE proposes only a four-value leveling system. Therefore the direct comparison of products is not possible or, at best, not fully reliable. Attributes need to be further decomposed and products explored in order to be fully evaluated. As an example, some RMT evaluations stated that the tools fulfilled some attribute by themselves (for example, creation of UML models) while in practice additional products are needed. In our approach basic attributes are directly measurable with well-defined metrics, allowing direct comparison among products and analysis of matching with the requirements.
The construction of quality models is a time-consuming and costly activity. However, once available, they become a powerful tool that provides a general framework for obtaining uniform descriptions of COTS in a domain. In order to facilitate the return on the investment in their construction, we are currently exploring the use of several artifacts to organize and reuse the knowledge available from previous experiences. One could be an evolving taxonomy of COTS categories and domains, in which quality models could be associated with each category and domain. Thus domain quality models could be reused in new selection processes for COTS of that domain, and category quality models could be reused in the construction of quality models of any domain belonging to the category.
Acknowledgments

This work has been partially supported by the Spanish project TIC2001-2165. We also thank the people from PRISMA, especially Nuria Rodríguez, for their support in this work.
References

Alves, C., & Finkelstein, A. (2003). Investigating conflicts in COTS decision-making. International Journal of Software Engineering and Knowledge Engineering, 13(5), 473-493.
Botella, P., Burgués, X., Carvallo, J.P., Franch, X., Grau, G., Marco, J., & Quer, C. (2004). ISO/IEC 9126 in practice: What do we need to know? Proceedings of the 1st Software Measurement European Forum, 297-306.
Comella-Dorda, S., Dean, J.C., Morris, E., & Oberndorf, P. (2002). A process for COTS software product evaluation. Proceedings of the 1st International Conference on COTS-Based Software Systems, Springer-Verlag, LNCS 2255, 86-96.
Dromey, R.G. (1996). Cornering the chimera. IEEE Software, 13(1), 33-43.
Franch, X., & Carvallo, J.P. (2002). A quality-model-based approach for describing and evaluating software packages. Proceedings of the 10th IEEE Joint International Conference on Requirements Engineering, 104-111.
Franch, X., & Carvallo, J.P. (2003). Using quality models in software package selection. IEEE Software, 20(1), 34-41.
Franch, X., & Maiden, N.A.M. (2003). Modeling component dependencies to inform their selection. Proceedings of the 2nd International Conference on COTS-Based Software Systems, LNCS 2580, 81-91.
INCOSE (2003). http://www.incose.org/tools/tooltax.html
ISO (1999). ISO/IEC Standard 14598-1 software product evaluation: General overview.
ISO (2001). ISO/IEC Standard 9126-1 software engineering, product quality, Part 1: Quality model.
Kitchenham, B., Hughes, R., & Linkman, S.G. (2001). Modeling software measurement data. IEEE Transactions on Software Engineering, 27(9), 788-804.
Kotonya, G., & Sommerville, I. (1998). Requirements engineering: Processes and techniques. John Wiley & Sons.
Maiden, N., & Ncube, C. (1998). Acquiring requirements for COTS selection. IEEE Software, 15(2), 46-56.
Robertson, S., & Robertson, J. (1999). Mastering the requirements process. Addison-Wesley.
Rumbaugh, J., Jacobson, I., & Booch, G. (1999). The unified modeling language user guide. Addison-Wesley.
Yu, E. (1997). Towards modeling and reasoning support for early-phase requirements engineering. Proceedings of the 3rd IEEE International Symposium on Requirements Engineering, 226-235.
Section II: Challenges
Chapter IX
Composing Systems of Systems: Requirements for the Integration of Autonomous Computer Systems
Panayiotis Periorellis, University of Newcastle upon Tyne, UK
Abstract

Information Systems in general carry, or have embedded in their structure, elements that stem from the organization's strategic, tactical, and operational goals. Finding elements of an organization's strategic, tactical, or operational goals embedded in computer systems is not at all surprising, since most developers and programmers were taught how to successfully map such goals into the Information System. We are, however, in an era where technology allows us to develop systems that are composed of smaller autonomous parts (sometimes complete systems themselves) that are integrated together despite being bound by their corresponding organizational boundaries. Therefore integration is not only a technical challenge but an organizational one, too. In this chapter we address a number of issues, namely system composition, regulation, evolution, and dependability, using examples from the two case studies we worked on for three years.
Introduction

Information Systems in general carry, or have embedded in their structure, elements that stem from the organization's strategic, tactical, and operational goals. One can easily "see" the level of trust an airline system has in its clients when it asks for a credit card prior to making the booking (and therefore updating the database), as opposed to another that updates the database and asks the user to pay over the phone within a time limit. Finding elements of an organization's strategic, tactical, or operational goals embedded in computer systems is not at all surprising, since most developers and programmers were taught how to successfully map such goals into the Information System. We are, however, in an era where technology allows us to develop systems that are composed of smaller autonomous parts (sometimes complete systems themselves) that are integrated together despite being bound by their corresponding organizational boundaries. Many of the technical challenges have indeed been solved to allow the deployment of purpose-built modular pieces of code that provide some valuable service. Architectures such as Web services, which various organizations have started exploiting by exposing parts of their services over the Web, will reach commercial maturity or critical mass in the forthcoming years. The benefit such architectures add is that systems can be broken into smaller pieces not only for the purpose of analyzing their complexity, but also for the purpose of exploiting the value of the individual components. A company, for example, that runs a Web site for selling books could now "lease" its shopping cart component while at the same time maintaining the component as part of the overall business. There is, however, an additional level of complexity that has not been addressed yet. In order to understand and consequently tackle the organizational challenges of building systems, we need to distinguish between distributed processing and distributed control. Distributed processing implies distribution of a task over a number of resources, whereas distributed control implies control of a task by a number of unrelated parties. This is what has not been addressed so far and where the originality of this work lies. The conclusions of this chapter have been drawn from the two case studies that we developed throughout the course of our three-year project. The first case study involves a travel agency composed of individual autonomous Web-based booking services (that is, car rentals, hotels, airlines, insurance companies) to provide the user, via a Web front-end, with the ability to book a full trip. The case study (Periorellis & Dobson, 2001) was technical in nature. It involved the integration of autonomous booking systems of airlines, car rental systems, and hotels to provide an integrated travel agency. At times we refer to it as the TA case study in the chapter. The second case study (Ferrante & Diu, 2002) involves a pan-European electrical power distribution system composed of national electricity grids. Sometimes we refer to it as the EXAMINE case study. It was a rather rich case study given the political, economic, and cultural diversity that has to be reflected in the software that will carry out the actual distribution. We use examples from these case studies to emphasize certain points throughout the chapter.
The chapter raises a number of issues (which have so far been overlooked), and they all stem from one main question: "What happens when we integrate autonomous systems (that is, systems of systems) when the organizational elements embedded in them have conflicting interests?" In a nutshell, three possible outcomes can occur: good things (added extra value), bad things (organizational failures), and unexpected things (new emerging properties). The chapter addresses these issues to reveal the "novel" problem space the new technologies create. Furthermore we examine the characteristics of this new problem space and discuss how it can be addressed, and more specifically how we engineer requirements for these new types of systems. We discuss in the chapter (and in this particular order) notions such as regulation, composition, transactions, evolution, dependability, and all the related properties that we have found crucial in requirements engineering.
Definitions

Let us start on the right foot by looking at some definitions of the most common terms used throughout the chapter. •
“System of Systems” is the overall system that encompasses the sum of the individual autonomous systems. A point to note is that we want to distinguish ourselves from component systems or virtual enterprises, since we are talking about integration of autonomous systems (controlled by a number of parties) that have a particular customer base, a service history, a working culture, a physical presence. In many places the term is abbreviated to SoS.
•
“Distributed Control” refers to a service whose composition, operation, and termination are regulated by more than one organizational entity.
•
“Emerging Service” is the service provided by the SoS itself and not any of the individual autonomous systems. The attributes of the emerging service may be different than the sum of attributes of the autonomous systems.
•
“Dependability” (Laprie, 1992) is that property of a computer system such that reliance can justifiably be placed on the service it delivers. The term encompasses a number of non-functional properties such as reliability and availability.
•
“Organizational Failure” (Periorellis & Dobson, 2002) is an exception thrown by the business logic of an SoS. In most cases it does not manifest itself as a software error, although software is the cause of it.
Background

In the next paragraphs we explore the notion of systems of systems, which implies the deployment of autonomous systems to build larger, more complex ones, while at the same time we move from the organizational context of single control to a global domain where control is distributed. The originality of the notion of systems of systems lies in the fact that we are not simply talking about distributed processing but, more importantly, distributed control.
Let us turn our attention to a simple process model (Figure 1).
Figure 1. A simple process model. The circles indicate distinct organisational entities while lines indicate dependencies. (Labels: market, suppliers, customers; processes: scoping, resourcing, delivery.)
This is a model of the kinds of process that take place in an organisation. The scoping-related processes are market-facing, dealing with questions such as deciding on position within the market (and indeed which market), monitoring the movement of the market and deciding on appropriate responses, and, for some kinds of business, actually making the market. Resourcing processes are supplier-facing. They are concerned with acquiring sufficient resources (including, of course, human resources) to run the business, maintaining those resources, monitoring their quality, managing suppliers, and so on. Delivery processes are customer-facing. They are concerned with obtaining and fulfilling orders, enlarging the customer base, obtaining feedback as a useful form of input to the scoping processes, and evaluating the resourcing processes. Again it is easy to see a number of possible failure modes immediately, such as inappropriate market positioning, inadequate resourcing and delivery processes, and failures in communication both within and between these processes. A typical failure in the travel agency case study is that of customer base, that is, individual booking services targeting different types of customer.
One very common way in which these processes are composed is in a supply chain or network (Figure 2), in which the unit of composition is linking the resourcing and delivery processes.
Figure 2. A supply chain of process models. The circles indicate distinct organisational entities while lines indicate dependencies; the bold line indicates a supply-demand channel. (The figure shows two organisations, each with scoping, resourcing, and delivery processes.)
We can now see additional failure modes concerned with mismatches of various kinds between organisations, represented by the bold links (Figure 3).
Figure 3. In autonomous systems, more often than not, we assume certain aspects regarding the scope of a component (that is, the customer base targeted) or its quality (the reliability of its resources). None of these are made explicit at the level of the interface, hence the question mark on the dotted line. (The figure shows three organisations, each with scoping, resourcing, and delivery processes.)
This is a type of data that is not accessible via the interface. The point we want to bring across is that information passed via interfaces is largely inadequate, and a large amount of information is assumed (Periorellis & Dobson, 2001). In the case of a supply chain where one is concerned with extracting value from a market, there needs to be some agreement on the apportionment of value, and this can only occur within the scoping processes. There is also the possibility of mismatch between the scoping decisions. For example, a travel agent will normally try to match the market position of the hotel with the market position of the airline, preferring to fly travellers to cheap hotels using no-frills carriers or to expensive hotels using full-service carriers, and indeed may choose to specialize in one end of the market or the other.
So What Can Go Wrong?

Although the scoping/resourcing/delivery model describes an organisation in terms of processes, these processes do not always (or indeed hardly ever) employ distinct mechanisms. Thus the delivery mechanism will embody aspects of the (results of the) scoping process and so on, all the way around. This means that we cannot assume that we can provide a failure-proof (where failure implies inability to offer the intended service) travel agent simply by connecting together delivery mechanisms from component suppliers. A well-known example in the travel agent domain occurred a couple of years ago, when customers of a well-known no-frills airline realized that although the flights were inexpensive, that was not the case for the hotels. Although the airline that provided the online booking had successfully connected to a hotel broker that provided
rooms in well-known hotel chains, it failed to realize that the broker aimed at a different customer base than the airline. Therefore, although the airline tickets were within the no-frills price range, the hotels were not. Other examples of scoping mismatch include marketing policies, operational policies (that is, cancellations), and systems with differing models of trust. These policy mismatches should not be seen as mere technical glitches to be overcome by ingenious Java programming. Rather they are policy mismatches that constitute a fault that may result in a failure of the travel agent to deliver an adequate service to the customer, and recovery management needs to be addressed at that level.
Architecting an SoS to Create an Emerging Service

We have learned so far that building systems of systems is by no means only a technical challenge. In fact many of the major problems in the case studies we examined stem from organizational incompatibilities. From the case studies we learned the important areas to look at in order to build requirements, how one can gain extra added value from systems that move outside organizational (or, in some cases, national) boundaries, and at what cost. Organizational boundaries that surround various systems can have an immediate effect on the way the system is perceived, developed, and consequently delivered. The property of distributed control is what distinguishes an SoS from other component-based distributed systems. By control we of course imply the authority to regulate and impose structure and operational policy on the emerging service, to initiate and terminate transactions, and to regulate access to the sub-systems of an SoS. By emerging service we do not simply imply the sum of all services of the autonomous systems. A surprising outcome of such integration is that an SoS may indeed exhibit properties that do not necessarily stem from any of the participating systems. Ashby (1956) illustrates this better with his example of the elastic band, where he describes how the property of elasticity is found only in the band and not in the individual threads themselves that form the elastic band:
For years physical chemists searched for what made the molecule contractile. It is now known that the rubber molecule has no inherent contractility. Why then does rubber contract? The point is that the molecules, when there are more than one, jostle each other and thereby force the majority to take lengths less than their maxima. The result is that a shortening occurs, just as if, on a crowded beach, a rope fifty feet long is drawn out straight: after a few minutes the ends will be less than fifty feet apart (p. 112).
In systems of systems one can observe similar examples of properties that were part of the emerging service only and not properties of the individual systems. These properties
can yield hidden treasures that all participating systems can benefit from. In the example of the Electrical Power Distribution case study one surprising attribute was that of waste management. Waste management was not something that all systems had catered for. The composition of the individual national systems into a pan-European network had this attribute. It was a result of re-directing electrical power (that is, it stemmed from the operation of the emerging service), but nonetheless it was not part of the individual systems. However, before attempting to build an SoS, and before this whole new trend can bear fruit, we need to avoid certain traps. Given that a system's behavior can reflect political, economic, or cultural interests, the diversity of these interests in particular can hinder the emerging new service. What we want to bring forward through this chapter, then, are the key issues related to the integration of largely autonomous systems and the boundaries that can hinder that integration; furthermore, we discuss the notion of emerging system properties. We believe that by dealing with these issues early on in the development process of a system we can avoid what we call organizational failures and, furthermore, take advantage of the added extra value that emerging properties can offer. From our practical experience we have derived that there are basically five areas that we need to address before putting together requirements, namely regulation of the emerging service, composition, transactions (in their broader sense), evolution, and dependability. Each of these reveals a number of issues important in engineering requirements for integration. As many readers who work in the area will no doubt know, there are several layers of requirements engineering, ranging from high-level requirements that consist of strategic objectives to lower-level network and engineering-related tasks. In our model, which is presented from here on, we are concerned with the requirements surrounding the emerging service itself, which is, in other words, what the system of systems delivers.
Regulation of Emerging Service

The rationale behind this is that, since we have a number of controlling organizational entities, we also need an overall agreement for the composition, operation, and consequent termination of the emerging service. In the travel agent case study this manifested itself as an overall agreement regarding cancellation and booking policies. In the EXAMINE case study it manifests itself as a common economic policy. Since we have talked about the need to compose the emerging service according to some rationale, we should stress that the rationale of such a composition should stem from the regulations of the emerging service. We identified during the development of the travel agent case study that the emerging service is more than just a front end between the consumer and the service providers (autonomous systems) that manages transactions via exception handling schemes. Similarly, in the electric power distribution case study, a Europe-wide electrical power distribution system cannot simply be a network of interconnected national grids. There has to be regulation regarding its operation. Regardless of the time of its physical existence it needs to be regulated in terms of scope and operational policies. It is only then that intelligent composition can really take place. This is something that is
on many occasions omitted in service-oriented engineering, especially in cases where the services are in fact small processes. The scope of the emerging service determines aspects such as the type of consumer. This immediately enables the composition process to reason about the type of autonomous systems that can be utilized, since the consumer types would need to be similar. Also, operational policies that regulate how the emerging service should be provided would allow one not only to reason about the compositional semantics but also to determine the type of functionality the SoS would need to provide. Composing an emerging service has proved not to be as easy as putting together (or even wrapping) methods borrowed from autonomous systems. Additional consideration is needed when the SoS "adopts" a method from a component system in order to carry out a service. Although wrapping allows us, to an extent, to control the I/O of the method, it does not resolve problems stemming from the mismatches between operational policies that may exist between the SoS and the autonomous systems. An example of this can be the cancellation method of hotel booking systems. Although a large number of them would only require the booking reference number, policies regarding cancellations vary considerably. All sorts of problems arise when these policies do not match the policies of the SoS, namely maintaining the state, coordinating the transactions (as we shall see later), carrying out compensation, and so forth. We therefore argue that the emerging service itself needs to be regulated. In technical terms, policies regarding scope and operations would need to be represented in data structures that would in turn be used to reason about (accept or reject) certain configurations. This need is even more apparent when one considers evolutionary aspects of autonomous systems. We need some way of capturing those and making sure that the composition of the emerging service from the individual autonomous systems (and therefore their policies) reflects what the emerging service is regulated to provide.
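The preceding paragraph suggests representing scope and operational policies as data structures against which candidate configurations can be accepted or rejected. The sketch below is our own minimal interpretation of that idea; the policy fields (customer segment and cancellation notice) and the component policies are invented examples drawn from the spirit of the travel agent case study.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    customer_segment: str            # scope, e.g., "budget" or "premium"
    cancellation_notice_days: int    # operational policy

# Regulation of the emerging service (the SoS level)
SOS_POLICY = Policy(customer_segment="budget", cancellation_notice_days=2)

def acceptable(component_policy: Policy, sos_policy: Policy = SOS_POLICY) -> bool:
    """Accept a component only if its policies do not conflict with the SoS regulation."""
    return (component_policy.customer_segment == sos_policy.customer_segment
            and component_policy.cancellation_notice_days <= sos_policy.cancellation_notice_days)

no_frills_airline = Policy("budget", 1)
luxury_hotel_broker = Policy("premium", 7)

print(acceptable(no_frills_airline))     # True: compatible with the SoS regulation
print(acceptable(luxury_hotel_broker))   # False: scoping and cancellation mismatch
```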
Composition

We view the composition of the "emerging service" as the process that ensures that goals embedded in the design and execution of individual autonomous systems do not bring the services of the SoS into conflict. We consider this view appropriate since the composition of the emerging service is based on systems of diverse objectives, operational policies, architectural styles, and implementation. In other cases (EXAMINE) these conflicts may take the form of political or social disputes. Having already reported on the issues regarding the crossing of organizational boundaries and fault-tolerant policies in SoS in Dobson & Periorellis (2002), we need to look at how goals embedded into an autonomous system can affect the composition of the SoS. In particular, we want to identify, eliminate, and reconcile conflicting goals while at the same time promoting complementary ones. Note here that we use the term "goal" in order to abstract from both case studies, since in the TA case study what we regard as goals maps onto organizational strategy, and in the case of EXAMINE it maps onto national interest. In both cases, however, it is important to acknowledge the need for composing emerging services according to a
147
rationale. Bear in mind that the purpose of a system of systems is to compose an “emerging service” by making use of existing services offered by autonomous systems, each of which has its own goals, culture, history, and so forth. It has already been discussed by Lamsweerde (2001) that goal-oriented approaches to system composition can increase the visibility of the goal relationships that can exist. This is an important benefit of such a compositional approach, as such goal relationships will be subtly specific to the particular compositional context and require greater insight into their evaluation and design. However, more generally, it can be seen that three principal relationships can exist between associated goals that may result from contributing autonomous components in an SoS. First there are complementary goal relationships that have a reinforcing effect upon each other. As an example consider integrating two systems that target the same market base. Such relationships require little, if any, recomposing and can not only be readily integrated but also increase the dependability and value of the emergent service. One of the requirements of the European Union Electrical Power System is a common market environment (open market) where all participating systems adhere to the laws of trading and competition. Secondly, in some situations, while goal relationships may not be complementary in a pre-existing sense, they may still be compatible in terms of being more readily reconcilable with each other through conceiving alternative compositional approaches to achieving an emergent service. Compatible communication protocols would allow integration between components. Continuous monitoring of the performance could reveal further information regarding embedded goals. Lastly, and inevitably, there exist conflicting goal relationships. It is important that such relationships between autonomous components are identified early so that increased compositional effort can be focused on these areas, either to influence the appropriate selection of alternative (and more accommodating) components or to ensure that such relationships have the major influence on the architecting of the envisaged coordination system. In some situations this may involve assigning conflicting goals a higher or lower prioritization status.
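The chapter stays at the conceptual level here; as a hedged illustration only, the three relationship types could be recorded explicitly so that conflicting pairs surface first during composition. The enum, record, and method names below are invented for this sketch.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: pairwise goal relationships between component systems
// recorded as data, so that conflicting pairs can be prioritized early.
public class GoalAnalysis {

    enum Relationship { COMPLEMENTARY, COMPATIBLE, CONFLICTING }

    record GoalPair(String systemA, String systemB, Relationship relationship) {}

    // Return the pairs needing most compositional effort: conflicts first.
    static List<GoalPair> conflictsFirst(List<GoalPair> pairs) {
        List<GoalPair> ordered = new ArrayList<>(pairs);
        ordered.sort((p, q) -> q.relationship().compareTo(p.relationship()));
        return ordered;
    }

    public static void main(String[] args) {
        List<GoalPair> pairs = List.of(
            new GoalPair("AirlineA", "AirlineB", Relationship.COMPLEMENTARY),
            new GoalPair("AirlineA", "HotelChain", Relationship.CONFLICTING));
        conflictsFirst(pairs).forEach(System.out::println); // conflicting pair printed first
    }
}
```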
Transactional Handling

In integrating systems to create systems of systems we are primarily concerned with multi-party transactions that are distributed over many locations and that may require a considerable time to complete. Each party in such a transaction has a set of preconditions and a set of post-conditions that must be met before the transaction is judged to have been successful from that party’s point of view. Thus, for a transaction to be judged to be well-formed, the evidence, embodied in a set of instruments, must reliably reflect the intended acts of remote parties. For this to be the case, there are three characteristics of the instruments and the operations on them that must be assumed:
• Atomicity: Specific actions occur exactly once or not at all, and the parties are able to confirm completion of an action.
• Persistence: Once information is generated it does not disappear; it may be changed, but the instrument(s) must record the original and the changed information.
• Security: In this case, security also refers to the authenticity and integrity of information represented in instruments.
Our experience tells us that not all transactional problems can be solved at the machine level. In fact, in cases where atomicity cannot be guaranteed, we need to address the problems that arise at the business level. A structuring suggestion that has been extensively tested is that of using Coordinated Atomic actions (Zorzo, Periorellis, & Romanovsky, 2003). This approach uses multi-party transactions to carry out specific functions, and we place these within a context (which can be thought of as a box) inside which exception handling occurs. We need, therefore, to recognize that with transactions that cross multiple domains of management, notions such as persistence and atomicity cannot be assumed. There are two configurations of the relationships of a multi-party transaction at the structural level. In this discussion the term “structural” applies to the roles and responsibilities of the actors directly participating in the SoS, while “infrastructural” roles and relationships are those associated with the deployment of reusable, generic resources that are exploited in the execution of structural relationships.
• A centralized transaction monitor, in which each of the participants has a direct relationship with one particular participant in the role of transaction manager. The logical point of co-ordination is also a physical point of control.
• Distributed transaction management, in which each participant undertakes transaction management responsibilities, and the logical point of co-ordination is, in fact, replicated and distributed.
In the first approach, which is implemented by transaction monitoring functionality, all transacting parties must have a pre-defined relationship with one particular party responsible for the co-ordination. “Pre-defined” here means that these relationships were established outside the context and infrastructure in which the transactions will be executed. In the second, which is implemented in distributed transaction management, each party depends on all the others and must be able to monitor and interpret their acts. These two approaches to the allocation of responsibilities in a distributed transaction result in a different relationship between structural and infrastructural responsibilities. Atomicity of operations, persistence of information and security, and authenticity and integrity of messages are dependabilities, or qualities of service, that are delivered at the structural level either as end-to-end or centralized mechanisms. In the case of a distributed transaction the economies of provision are quite different. Since each participant takes responsibility for components of the transaction and needs to be able to monitor remote activities and states, each needs to be able to rely on the quality of a set of services and application components within their own domain and in each of those of the other participants. In this case the pre-established relationship must be with the infrastructural suppliers, and it is possible that the transacting parties are
establishing a new context as well as a new instance of commerce. In this case, new instruments, which arise from the characteristics of the new context, may well be required. In the distributed approach to transaction management, and here we are concerned not merely with distribution over time and space but, more significantly, distribution over the boundaries of different enterprises, each enterprise must have the option and capability of replicating all of those aspects of transaction co-ordination that are relevant to their particular interests. They must also be able to rely on the provider of the infrastructural environment to ensure that their view of the current state of any transaction is coherent with the views of all the other participants of that transaction. Thus atomicity, persistence, and security become responsibilities of the environment provider and infrastructural in nature, and it is these qualities of service and application that dictate the characteristics of the instruments of the structural conversations.
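The chapter does not give code for either configuration. The sketch below illustrates only the centralized configuration in the abstract: one coordinator checks every participant's preconditions before any party commits, so the multi-party transaction either completes atomically or is compensated. It is a generic prepare/commit outline and an assumption of this example, not the Coordinated Atomic action mechanism of Zorzo, Periorellis, and Romanovsky (2003); all interface and method names are invented.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the centralized configuration: one co-ordinator asks
// every participant to check its preconditions (prepare) and only then to
// commit, so the multi-party transaction completes exactly once or not at all.
public class CentralCoordinator {

    interface Participant {
        boolean prepare();   // check preconditions, reserve resources
        void commit();       // make the change durable (persistence)
        void rollback();     // compensate if any party cannot proceed
    }

    static boolean runTransaction(List<Participant> participants) {
        List<Participant> prepared = new ArrayList<>();
        for (Participant p : participants) {
            if (p.prepare()) {
                prepared.add(p);
            } else {
                prepared.forEach(Participant::rollback);   // undo partial work
                return false;
            }
        }
        prepared.forEach(Participant::commit);
        return true;
    }

    public static void main(String[] args) {
        Participant flight = new Participant() {
            public boolean prepare()  { System.out.println("flight reserved"); return true; }
            public void commit()      { System.out.println("flight booked"); }
            public void rollback()    { System.out.println("flight released"); }
        };
        Participant hotel = new Participant() {
            public boolean prepare()  { return false; }    // e.g., room no longer available
            public void commit()      { }
            public void rollback()    { }
        };
        System.out.println(runTransaction(List.of(flight, hotel)));  // prints false
    }
}
```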
Evolution

This section discusses the possibility of the individual parts of an SoS evolving independently. An SoS integrates systems that might be under the control of organizations totally separate from that commissioning the overall SoS. In this situation it is unrealistic to assume that the SoS will be notified of all changes to the interfaces of such components. In fact, in many interesting cases, the organization responsible for the components may not be aware of (all of) the systems using its component. One of the most challenging problems faced by researchers and developers constructing dependable systems of systems is, therefore, dealing with online (or unanticipated) upgrades of component systems in a way that does not interrupt the availability of the overall SoS. It is useful to contrast evolutionary (unanticipated) upgrades with the case where changes are programmed (anticipated). In the spirit of other work on dependable systems, the approach taken here is to catch as many changes as possible with exception handling mechanisms. Dependable systems of systems are made up of loosely coupled, autonomous component systems whose owners may not be aware of the fact that their system is involved in a bigger system. The components can change without giving any warning (in some application areas, for example, Web services, this is a normal situation). The drivers for online software upgrading are well known: correcting bugs, improving (non-)functionality (for example, improving performance, replacing an algorithm with a faster one), adding new features, and reacting to changes in the environment. When a component is upgraded without correct reconfiguration or upgrading of the enclosing system, problems similar to ones caused by faults occur. Changes to components can occur at both the structural and semantic level. For example, changes of a component system can result in a revision of the units in which parameters are measured (for example, from Francs to Euros), in the number of parameters expected by an operation (for example, when an airline introduces a new type of service), and in the sequence of information to be exchanged (for example, after upgrading, a hotel booking server requires that a credit card number is introduced before the booking starts). Although there are several existing partial approaches to these problems, they are not generally applicable in our context. For example, some solutions deal only with
programmed change, where all possible ways of upgrading are hard-wired into the design and information about upgrading is always passed between components. This does not work in our context, in which we deal with pre-existing component systems but still want to be able to deal with interface upgrading in a safe and reasonable fashion. Other approaches that attempt to deal with unanticipated or evolutionary change in a way that makes dynamic reconfiguration transparent may be found in the AI field. We believe that we need to use fault tolerance as the paradigm for dealing with interface changes: specific changes are clearly abnormal situations (even if the developers accept that their occurrence is inevitable), and we view them as errors of the integrated SoS in the terminology accepted in the dependability community. Error detection aims at the early detection of interface changes to assist in protecting the whole system from the failures that they can cause. For example, it is possible that, because of an undetected change in the interface, an input parameter is misinterpreted (a year is interpreted as the number of days the client is intending to stay in a hotel), causing serious harm. Error recovery follows error detection and can consist of a number of levels: in the best case dynamically reconfiguring the component/system and in the worst resulting in a safe failure notification and offline recovery. We also believe that we need a structured approach to dealing with interface changes that relies on exception handling, which in turn should be incorporated into an SoS. The general idea is straightforward: during SoS design or integration, the developer identifies errors that can be detected at each level and develops handlers for them. If handling is not possible at this level, an exception is propagated to the higher level and responsibility for recovery is passed to this level. In addition to this general scheme, study of some examples suggests classifications of changes that can be used as checklists. At this stage we would also like to acknowledge the need for communicating semantic information between component systems. Being able to communicate additional semantic information may not only resolve some of the conflicts discussed earlier but also enable better handling of interface upgrades. In the initial stages we found that in order to communicate semantic information between two components we need a structured collection of information (meta-data) as well as a set of inference rules that can be used to conduct automated reasoning. Traditionally knowledge engineering (Hruska & Hashimoto, 2000), as this process is often called, requires all participants to share the same definitions of concepts. The use of knowledge bases, however, does not necessarily make the system more flexible; quite the contrary. Requests would have to be performed under strict rules for inference and deduction. The SoS would have to process its metadata (globally shared data descriptions) in order to infer how to make a request for a particular method (that is, booking a flight) and furthermore infer what parameters accompany this method and their meaning.
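To make the fault-tolerance view concrete, the hypothetical sketch below (not taken from the DSoS project) shows how a wrapper around an autonomous component might detect the misinterpreted-parameter example given above and raise an exception that is either handled locally or propagated to a higher level. All class and method names are assumptions of this illustration.

```java
// Illustrative sketch: treating an unexpected interface change of an
// autonomous component as an error of the integrated SoS. The wrapper
// validates what the component returns; if local recovery is not possible,
// the exception is propagated to the next level.
public class UpgradeAwareWrapper {

    static class InterfaceChangeException extends Exception {
        InterfaceChangeException(String msg) { super(msg); }
    }

    // Hypothetical component call: returns the length of a hotel stay in days.
    static int stayLengthFromComponent(String rawValue) {
        return Integer.parseInt(rawValue);
    }

    // Error detection: a value that looks like a year signals that the
    // component's interface (or its semantics) has changed underneath us.
    static int checkedStayLength(String rawValue) throws InterfaceChangeException {
        int days = stayLengthFromComponent(rawValue);
        if (days > 365) {
            throw new InterfaceChangeException(
                "Suspicious stay length " + days + ": possible interface change");
        }
        return days;
    }

    public static void main(String[] args) {
        try {
            System.out.println(checkedStayLength("2005"));
        } catch (InterfaceChangeException e) {
            // Error recovery: here only a safe failure notification;
            // a real SoS might attempt dynamic reconfiguration first.
            System.out.println("Rejected booking request: " + e.getMessage());
        }
    }
}
```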
Dependability

Let us finally address some dependability concerns. Change or evolution has always been a difficult issue to deal with in technical terms, although it is vital to the survival of systems. Open systems architectures and frameworks serve well as vehicles for dealing
with this issue. The SoS, however, challenges the main assumption. Since we put together autonomous systems, that is, systems with existing strategic, tactical, and operational goals, issues such as common interest or common goals (if they exist) have to be made explicit. So where are these issues made explicit? From a dependability point of view we are trying to provide a framework for putting together systems in a dependable manner. The notion of SoS, however, challenges this issue. In dependability terms, an assumption we have been taking for granted is that of a universal (or system-wide) judge, a judge that identifies faults and errors, and acknowledges deviation of intended service. This notion of judgment exists within the same organizational boundary as the information system and is exercised by the organization itself. When we put together systems of systems we also bring together these judging systems in a network of goals mapped onto computer systems, assessed by autonomous judgment systems. Bearing in mind that we have such a diversity of goals and “intended behavior”, who can judge the “intended behavior” of Systems of Systems and, furthermore, who is the judge of the system of systems? Putting systems together implies a common interest reflected on what we call “emerging service.” Deviating from the emerging service can be regarded as a failure. This, however, does not imply that the individual autonomous system failed. So we are moving from deviation of intended behavior to deviation of common interest. Maintaining a common interest would require analyzing the roles and responsibilities of all participating systems in an effort to maintain a state of affairs. A system of systems can reveal properties that are not found in any of the autonomous sub-systems. This can really increase the value of the SoS and generate new revenues for the individual autonomous systems through the SoS interface. So in a sense maintaining the SoS by dependably maintaining the individual systems is in the interest of the system provider. The emerging properties of the SoS are, in fact, the common interest. At this point in time we lack enough tools that would allow us to study this problem space. We need, however, to address it at the conceptual level in order to clarify the dependability requirements of the system.
Conclusion

We hope that, within this limited space, we have managed to give an idea of the risks but at the same time the vast potential of putting systems together. Current architectures do not address organizational issues when it comes to systems integration via technologies such as Web services. We have studied these issues for a number of years, and we firmly believe that it would benefit anyone to read about the organizational pros and cons of such technologies as opposed to their technological advancements. We concentrated on five distinct areas that are of greatest importance during the integration process in order to help developers understand some of the issues surrounding integration. We showed how the boundaries that surround various systems can have an immediate effect on the way the system is perceived, developed, and consequently delivered. We showed one of the most important attributes to be that of control. This is what distinguishes an SoS from other component-based distributed systems. By control we mean the authority to
regulate and impose the structure and operational policy of the emerging service, but also to foresee the future behavior of a system, initiate and terminate transactions, as well as access all components of an SoS. Given this additional dimension we believe that addressing these issues would enable developers or regulators of an SoS to capture certain types of non-technical failures (organizational failures) early, prior to the requirements analysis stage. We drew our conclusions and examples primarily from the TA case study and, to a lesser extent, EXAMINE. Although there is a lot of literature on architecture and design, we felt that there is little or no work available to address SoS-specific issues. The most important of these relate to the way the emerging service is composed, regulated, delivered, and how it evolves. Since there are a number of tools and methodologies addressing structuring techniques and design patterns, we concentrated on concepts that would allow us to look at this new problem space that an SoS addresses.
References

Ashby, R.W. (1956). Introduction to cybernetics. Methuen & Co.
Dobson, J., & Periorellis, P. (2002). Models of organisational failure. Dependable Systems of Systems Project (IST-1999-11585). University of Newcastle upon Tyne, UK. Online: www.newcastle.research.ec.org/dsos/.
Ferrante, A., & Diu, A. (2002). Needs Expression: Revised Version. Technical Deliverable. EXaMINE Project (IST-2000-26116).
Hruska, T., & Hashimoto, H. (Eds.) (2000). Knowledge-based software engineering. IOS Press.
Lamsweerde, A. (2001, August). Goal oriented requirements engineering: A guided tour. Proceedings of RE'01, 5th IEEE International Symposium on Requirements Engineering, Toronto.
Laprie, J.C. (1992). Dependability: Basic concepts and terminology. In T. Anderson, A. Avizienis, & W.C. Carter (Eds.), Series: Dependable computing and fault-tolerant systems, vol. 5. New York: Springer-Verlag.
Periorellis, P., & Dobson, J.E. (2001). Case study problem analysis: The travel agency problem. Dependable Systems of Systems Project (IST-1999-11585). University of Newcastle upon Tyne, UK.
Periorellis, P., & Dobson, J.E. (2002). Organisational failures in dependable collaborative enterprise systems [Special issue]. Journal of Object Technology, 1(3), 107-117.
Zorzo, A.F., Periorellis, P., & Romanovsky, A. (2003, January 15-17). Using coordinated atomic actions for building complex Web applications: A learning experience. Proceedings of the 8th IEEE International Workshop on Object-oriented Real-time Dependable Systems (WORDS 2003), Guadalajara, Mexico.
Chapter X
Requirements Engineering for Technical Products:
Integrating Specification, Validation and Change Management

Barbara Paech, University of Heidelberg, Germany
Christian Denger, Fraunhofer Institute for Experimental Software Engineering, Germany
Daniel Kerkow, Fraunhofer Institute for Experimental Software Engineering, Germany
Antje von Knethen, Fraunhofer Institute for Experimental Software Engineering, Germany
Abstract

Over the last few years the functionality and complexity of technical products have increased dramatically. This is reflected in the complexity of the development process. In this chapter we discuss in detail the resulting challenges that have to be faced by requirements engineering. We identified these challenges in interviews conducted at a German car manufacturer. The main part of this chapter presents the QUASAR requirements engineering process, which faces all identified challenges. In particular, it supports: (1) a set of views and abstraction levels tailored to the stakeholders, (2)
communication about these views through understandable notations, (3) efficient access based on tools and traces that make relationships between views explicit, (4) explicit feedback based on inspection and simulation, and (5) overall quality by integrating a formal specification technique with informal, textual specification techniques as well as through guidelines, checklists and tailored review techniques.
Introduction

Technical products, such as automobiles, washing machines, or video recorders, play an important role in our daily lives. Over the last few years the functionality and complexity of these products have increased dramatically. Reasons for this include a technology push through reduced hardware costs, but also the growing share of software. Technical products are specific sociotechnical systems. Typical sociotechnical systems, for example, service engineer support systems, are characterized by many stakeholders in different roles that interact with the system. To understand and design such a system adequately, stakeholders and their relationships, as well as the influence of the system on the usage environment, have to be analyzed in detail (Sutcliff & Minocha, 1998).

For a technical product, typically, there is only a small set of users. In addition the context of the user is already well known because products are typically developed in series. Therefore requirements engineering (RE) can concentrate on the interaction of the users with the new functionalities of the product. However the development process has to be reflected during RE. Technical products are sold on the market. They integrate hardware, software, and mechanics. For complex technical products such as cars, several suppliers are involved. Thus viewpoints of various stakeholders with differing backgrounds have to be elicited and integrated. Typically these stakeholders are distributed over several companies and locations and work in a fast-changing environment. This induces challenges for the whole product development process, and in particular for identifying, collecting, managing, and validating requirements (Grimm, 2003). In terms of the onion model of stakeholder relationships proposed in Alexander & Robertson (2004), this means that most stakeholders are found in the outer-most layers. Roles and often even persons involved in the development process of technical products are typically quite stable and well known because of the series development. To reflect this situation, the emphasis during RE of technical products is on fostering long-term relationships based on efficient communication between the stakeholders and not so much on identifying the stakeholders.

In the following we discuss in detail the challenges that have to be faced by RE for technical products. We identified these challenges in interviews that we conducted at a German car manufacturer. The main part of this chapter presents a prescriptive RE process that faces all identified challenges. This process has been developed and evaluated during the last three years and is the major result of the QUASAR project funded by the German ministry for research. Throughout this chapter we use an example (that is, the development of a door control unit of a car) to illustrate the challenges and
all aspects of our RE process. This example is part of a typical industrial specification for a car control unit (Houdek & Paech, 2002).
Challenges of Technical Product Development

The purpose of this section is to characterize the development context of technical products, identify the challenges, and motivate the importance of an adequate RE process. We also discuss how existing RE approaches deal with these challenges.

Technical products consist of embedded systems where mechanics, hardware, and software together deliver functionality. The complexity and functionality of such systems have increased dramatically over the last few years. In particular, the quality aspects of these systems are of ever-growing importance, since the systems enable innovative usage of highly sophisticated technology. The automotive domain can serve as a reference example for this trend. In this domain complex embedded systems, such as steer-by-wire, electronic stabilization systems, antilock braking systems, or seat positioning, are nowadays standard components of a car. Thereby they take over and enhance activities that seemed to be inherent to human car usage. This, of course, highlights the importance of quality attributes such as reliability or safety. To provide such complex functions and high quality, an increasing number of software components are needed in the automobiles. These components are coupled by several networks and connected to the world outside the car, for example, via radio link. One example is the current Mercedes-Benz S-class, which contains more than 50 controllers and 600,000 lines of code coupled by three different bus systems (Grimm, 2003). Nowadays cars contain more software than the first Apollo rocket that flew to the moon. All of this has to be developed within extremely short time and low cost frames. Furthermore, car manufacturers need to involve suppliers on all levels of mechanics, hardware, and software. As software is the basis for competitive differentiation, they have to maintain a delicate balance of in-house development and procurement. This emphasizes the importance of the RE process, where all stakeholders need to communicate effectively about the system to be built. In-house experience from DaimlerChrysler has shown that more than 40 percent of errors occurring in automotive functions are attributable to requirements errors caused by immature specifications (Grimm, 2003).

Figure 1 shows the typical stakeholders involved in car software development.

Figure 1. Stakeholders in car control software development

As the control software is embedded, it directly interacts with the electronics and the mechanics. Together they deliver the functionality of a component, which again interacts with other car components or directly with the user. In this case the main user is the driver. Sometimes also the passengers are involved. In addition car repair personnel have a special interface to the software. People initiating, managing, and developing such a component are shown in the two outermost layers. These include — either in-house or at the suppliers — on the one hand the engineers responsible for the realization, namely
hardware, software, or system engineers, and on the other hand the project roles involved in initiating and managing the project, namely marketing, sales, requirements engineers, quality engineers, maintenance engineers, and project managers.

An RE process for technical products has to take into account all the characteristics sketched above. In the literature, one can mainly find approaches that focus on the quality and complexity aspects. They aim at providing comprehensive tool support for precise specification and quality assurance, and thus focus on formal notations based on state transition diagrams (for example, Glinz, 2002; Harel & Politi, 1998; Heitmeyer & Bharadwaj, 2000; Thompson, Heimdahl, & Miller, 1999). Such notations, however, are not suitable for stakeholders such as product marketing or management. Typically experts are needed to handle these notations. This in turn complicates the involvement of small and medium suppliers that cannot afford such experts. On the other side of the spectrum are approaches focusing only on textual documents such as use cases (Cockburn, 2001). This supports the communication of different stakeholders distributed over different companies but does not support the precision and rigor needed to achieve high quality. Therefore in QUASAR we aimed at an approach that combines these two extremes and thus supports informal and creative communication, as well as precise specification and tool-based quality assurance.
Requirements Engineering for Technical Products

In this section we motivate and present the most important facets of the QUASAR requirements process for technical products. In the first subsection we discuss detailed
issues in the requirements process. The second subsection presents our process, in particular the integration of specification, change, and quality management.
Issues in the RE Phase of Technical Products

In this subsection, again, we take the example of the automotive industry to illustrate a typical RE process and its challenges. This is based on interviews conducted at a German car manufacturer. It details the challenges resulting from the complexity of the product and process and the high demands on quality. Then we discuss the techniques that can address these challenges.
A Typical RE Process

Management and product marketing of the car manufacturer decide on innovative features of the next car series. These considerations are incorporated into requirements documents and handed over to several suppliers. Typically these documents capture the context of the system functions and many technical details, but no coherent view on the functionality or required quality. In particular the user view is not explicit. Often the documents comprise up to several hundred pages. Sometimes the suppliers capture their understanding of the requirements in specification documents as the basis for discussion and negotiation with the procurers. This specification also serves as a detailed instruction for their developers. Often the suppliers use only informal notes on the basis of the requirements document to define the functionality and communicate it to the developers. Both sides try to reuse as much as possible from earlier projects to save time and effort. Of course this bears the risk of introducing inconsistencies and omissions into the documents. All stakeholders pursue their own interests and adhere to different constraints. This includes a product perspective as well as a human- or technology-oriented perspective. For example, marketing wants to specify innovative products, project managers have to cope with time restrictions, users want to have comfortable products, and suppliers want to push their own technology. These views are typically implicit and scattered throughout the documents. Furthermore all documents are subject to frequent change, since often requirements are not clear at the beginning (due to innovative features) or they evolve due to changes in the organizational and technical project context.
Improving the Process

This process can be improved wrt. managerial challenges such as time and cost as well as wrt. sociotechnical challenges: handling the sociotechnical complexity of the product, handling the quality of the product, and handling the communication between the stakeholders.
Obviously, to cope with the complexity of the product and to enhance the communication between the stakeholders, one should make all the different views explicit in one or several documents and use requirements management tools to facilitate access, reuse, and change. To represent the different views, different notations, levels of abstraction, and degrees of detail must be dealt with: for example, sales produces high-level product specifications for the offer, which focus on business issues; usage descriptions can serve as high-level goal descriptions from the user point of view; a software engineer needs detailed technical requirements. In particular it is important to make the user view explicit. This is typically not communicated to the engineers but is essential to ensure the sociotechnical adequacy of the product.

To ensure consistency between the different views and abstraction levels it is important to systematically and efficiently establish relationships between them. Such relationships are needed in particular to bridge the gap between the high-level user requirements and the more detailed technical requirements.

To cope with frequent changes and short development cycles, efficient traceability throughout the whole RE process and to the later phases is needed. This supports change-impact analyses (that is, identification of the entities affected by proposed changes) and efficient communication of these impacts to the relevant stakeholders. Clearly traceability can only be executed efficiently through tool support. As traceability induces new effort for the requirements engineer who has to establish traces, it is important to take motivational factors into account.

Quality can be improved by using more precise specification techniques. Quality as well as practicability and repeatability can be supported by constructive quality assurance in terms of guidelines and templates, and by analytic methods such as inspections and simulation. Both enhance the communication process between the different stakeholders as a side effect.

Of course there are several approaches for all these techniques (for example, Sommerville & Sawyer, 1997). The main challenge, however, is to integrate all of them efficiently. So, for example, documents and included notations have to be tailored to the stakeholders. Inspections, traceability guidelines, and management tools have to be adapted to the documents. Clearly an improvement to this process is only helpful if it gains the acceptance of all stakeholders in the RE phase and requires only minimal additional effort and training from them. In particular this implies that novices and experts can apply the approach and that tools are used efficiently. Therefore detailed guidelines capturing the experience of the experts have to be provided.
An Integrated RE Approach for Technical Products

In this subsection we present the QUASAR process in detail. We focus on three roles. Of course there are many activities associated with these roles. In the following we only concentrate on the activities most important for the issues identified above:
• Requirements engineer: This role is representative for the issue of supporting the communication between the different stakeholders. In particular it needs to bridge the gap between user requirements and technical requirements.
• Quality engineer: This role is representative for the issue of ensuring quality continuously for all requirements as early as possible.
• Maintainer and project manager: These roles are representative for the need to efficiently cope with change.
While presenting these roles separately, we discuss the integration of the corresponding techniques throughout. Figure 2 depicts the different elements of the QUASAR-RE process. In the following we describe the basic steps in this process.

Document creation (Steps 1 and 4): The requirements document (RD) and the specification document (SD) are created by the requirements engineers of the car manufacturer and the suppliers, respectively. This creation step is supported by guidelines and templates. In addition, the CASE tool “Rhapsody” (Rhapsody) and the requirements management (RM) tool “RequisitePro” (RequisitePro) are used as supporting tools. We chose “Rhapsody” over “STATEMATE” as it also supports object-oriented structuring, which greatly improves readability of the diagrams. We chose “RequisitePro” because it allows working directly with WORD documents. Therefore it was close to the existing working environment of the requirements engineers. Of course it is possible to use any other requirements management tool, such as “DOORS” (Telelogic).

Figure 2. The QUASAR-RE process
Traceability (Steps 2 and 5): Traces between documentation entities within the RD or the SD, or between entities in the SD and the RD, need to be established to support project and change management. QUASAR provides tracing guidelines and a tool environment. In contrast to other traceability techniques (Ramesh & Jarke, 2001), our technique gives detailed guidance on what traces should be established to support later impact analyses (von Knethen & Grund, 2003). Furthermore the technique defines who should establish traces and when, as well as who should analyze traces and how. In addition the tool environment provides facilities to perform (semi-)automatic impact analyses and to guide the implementation of changes. These facilities support the maintainer and the project manager.

Inspection and Simulation (Steps 3 and 6): The RD and SD are inspected for defects to assure the quality of the requirements documents early in the development process. We provide tailored perspective-based inspection techniques that facilitate defect detection and emphasize knowledge and experience transfer. This includes the use of the simulation facilities of the tool “Rhapsody.” It also includes test-case derivation as part of the tester reading scenario.

In the following sections, we show how requirements engineers, quality engineers, and maintainers use the different QUASAR techniques and motivate our choice of notations and tools.
Supporting Different Views

The requirements engineer of the car manufacturer creates the RD. The RD must support communication between management, marketing, and user experts about the functionality (in particular innovations). In addition the supplier must be able to understand why the specified system functions are needed. Thus the requirements engineer captures the context of the functions through use cases (UC). She abstracts from detailed system behavior, that is, system-internal technical details that are not visible to the user. This means she abstracts from sensors, actuators, or user interface details. Instead she uses monitored and controlled variables to describe inputs and outputs (Parnas & Madey, 1995). This abstraction in particular helps to focus on the essentials of the sociotechnical aspects of the product. The UCs also support quality assurance, as they are a good basis for test-case derivation.

The requirements engineer at the supplier creates the SD. The SD must support communication between developers about the functionality. In addition developers at the car manufacturer need to verify that the required system functions from the RD are covered in the SD. The requirements engineer captures the details of the system functions with Statecharts (SC). SCs efficiently describe the possible events and the system reactions. Developers are used to this notation in the car industry. SCs are also
a good basis for quality assurance, as there are powerful tools to support their specification and analysis.
Document Templates

During the creation steps the requirements engineers use QUASAR templates and QUASAR creation guidelines for the RD and the SD. The document templates are an adaptation of IEEE Std. 830 (IEEE 830, 1998) for the domain of automotive systems (Kamsties, von Knethen, & Paech, 2001). The template captures experience with existing documents. It requires detailed contents, such as scope, purpose, customers of the product, functionality block description, production, process, and domain constraints. This helps to ensure the completeness of the documents.
UC Creation Guidelines

To specify the UCs in the RD, the requirements engineer follows specific guidelines. These guidelines collect ideas from the literature and refine them for technical products. First the requirements engineer creates a UC diagram by identifying the relevant actors and their tasks. This diagram gives an overview of the system functionality from a user point of view; that is, the main tasks the system shall support. Moreover the diagram helps to identify relationships between different user tasks by means of relationships between UCs. Then the requirements engineer refines each UC identified in the UC diagram by a textual description that clarifies the general task. She fills in a UC template for each identified UC. Figure 3 shows the most important facets of such a template. It is an excerpt of a UC for totally moving a window in a car (up or down). In addition to the shown facets, the name of the UC, the actors involved in the UC, the goal of the actor, and quality requirements related to the UC are collected. The requirements engineer takes the name and actors directly from the UC diagram. Then she elaborates the name with the actor goal. She details this goal with the precondition and the postcondition. Next she collects the monitored and controlled variables relevant for the UC. Controlled variables describe the system parts controlled in this UC as well as system data created. Monitored variables capture the different possibilities of the actors to trigger the system reaction as well as other system data needed in the UC. Now the requirements engineer focuses on the normal course of interaction between actor and system and the possible exceptions. This clarifies how exactly the user task should be supported by the system. Here she uses the essential UCs from Constantine & Lockwood (2001). To support a compact interaction description, the requirements engineer captures details of the system reaction in the rules facet. To address the safety-critical characteristics of technical products, she puts particular emphasis on the specification of exceptions. Therefore she analyzes four types of exceptions explained in the guidelines:
• Exceptions resulting from actor inputs outside of the UC (for example, the actor triggers partial window movement),
• Exceptions resulting from reaching limit positions,
• Exceptions resulting from system behavior outside of the UC (for example, window positioning might be dependent on the current speed),
• Exceptions resulting from technical problems in carrying out the system reaction.
These exceptions induce different system behavior. In case of a limit position, for instance, the system reaction is stopped. Another input triggers a new use case execution. Finally she documents all important traceability relationships between UCs and to other requirements (see section on change management).
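The QUASAR template is a document template rather than code; purely for illustration, the facets described above can be captured in a simple data structure, populated here with values from the excerpt shown in Figure 3 below. The record and field names are assumptions of this sketch, not part of QUASAR.

```java
import java.util.List;

// Illustrative sketch: the facets of the QUASAR use-case template captured as
// a simple record, populated with the "Total window movement" excerpt (Figure 3).
public class UseCaseTemplate {

    record UseCase(String name,
                   List<String> description,
                   List<String> exceptions,
                   List<String> rules,
                   List<String> monitoredVariables,
                   List<String> controlledVariables,
                   String precondition,
                   String postcondition) {}

    public static void main(String[] args) {
        UseCase totalMovement = new UseCase(
            "Total window movement",
            List.of("1. Actor totally moves the window into a certain direction",
                    "2. System reacts accordingly"),
            List.of("2.1 Partial movement: Use case \"Partial Movement\"",
                    "2.2 Technical problem: System does not react accordingly",
                    "2.3 Safety opening: System moves the window into its lower end position"),
            List.of("Safety opening is activated if the actor moves the window upwards "
                    + "but no change of the window position is recognized"),
            List.of("Window_Position", "Actor_Input"),
            List.of("Window_Position"),
            "",                                   // precondition not shown in the excerpt
            "Window has new position");
        System.out.println(totalMovement.name() + " has "
                + totalMovement.exceptions().size() + " exceptions");
    }
}
```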
SC Creation Guidelines

The SC guidelines give detailed instructions for the derivation of SCs from UCs; that is, for the derivation of the developer’s view (formal) from the user’s view (informal) on the requirements (Denger et al., 2003a). The main characteristic of these guidelines is that the SC structure mirrors the UC structure in order to facilitate lean traceability and easy understandability.
Figure 3. Excerpt of the UC “Total window movement”

Description: 1. Actor totally moves the window into a certain direction. 2. System reacts accordingly. [Exception 2.1: Actor moves the window partially] [Exception 2.2: Technical problem] [Exception 2.3: Safety opening]
Exceptions: 2.1 Partial movement: Use case “Partial Movement”. 2.2 Technical problem: System does not react accordingly. 2.3 Safety opening: System moves the window into its lower end position.
Rules: The system activates the “Safety opening” in the case that the actor moves the window upwards but no change of the window position is recognized.
Mon. Variables: Window_Position, Actor_Input: movement type (partially/total) and movement direction (up, down)
Cont. Variables: Window_Position
Precondition:
Postcondition: Window has new position
In the first step the requirements engineer builds a class structure of the system based on the UC structure. She models each UC, each monitored, and each controlled variable as a class and maps relations between UCs to associations between these classes. In a second step she creates SCs for every class describing a monitored or controlled variable. She represents the possible values of the variables as states. The transitions between the states are triggered by events occurring in the environment of the system, that is, triggered by the actors of the system. She identifies such events in the facet description of the UC template. In a third step she builds an SC for each UC-class. Initially this SC contains two states (Figure 4):
1. an “idle-state” representing that the UC is not active and
2. a “system-reaction-state” representing that the UC is active.
This simple structure of the SCs establishes lean traceability between the UCs and the SCs. The start of a UC is represented as a transition from the “idle-state” to the “systemreaction-state” (Figure 4, the external event “ev_Start_Total_Movement”), and the end of a UC is represented as a transition from the “system-reaction-state” to the “idle-state” (event “ev_Stop_Input”). The requirements engineer finds the events triggering these transitions in the UC template facet description. The different exceptions are realized as additional transitions from the “system-reaction-state” to the “idle-state” as well as guard conditions restricting the transition from the “idle-state” to the “system-reaction-state.” The requirements engineer does not focus on minimality of the class structure or the SCs. Therefore it might be necessary to restructure the class diagram and the SCs during the development of the system design. During RE the direct traceability between RD and SD
is more important than minimality. As shown in the next section, UCs and SCs are also a good basis for quality assurance.

Figure 4. Statechart of the Use Case “Total window movement”
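Rhapsody generates executable code from such SCs; the hand-written sketch below is not that generated code, but it illustrates the two-state structure prescribed by the guidelines, using the event names visible in Figure 4 (ev_Start_Total_Movement, ev_Stop_Input, ev_Limit_Position). The guard and the action comments are simplified placeholders assumed for this example.

```java
// Illustrative sketch of the two-state structure prescribed by the SC creation
// guidelines (cf. Figure 4): an idle state and a system-reaction state, with a
// guarded entry and exceptional exits.
public class TotalMovementStatechart {

    enum State { IDLE, SYSTEM_REACTION }

    private State state = State.IDLE;
    private boolean windowAtLimit = false;   // stands in for the Figure 4 guard

    // ev_Start_Total_Movement: enter the system-reaction state unless the
    // window is already at the limit position in the requested direction.
    public void evStartTotalMovement() {
        if (state == State.IDLE && !windowAtLimit) {
            state = State.SYSTEM_REACTION;   // o_Start_Window(direction)
        }
    }

    // ev_Stop_Input: the actor releases the control, the use case ends.
    public void evStopInput() {
        if (state == State.SYSTEM_REACTION) {
            state = State.IDLE;              // o_Stop_Window()
        }
    }

    // ev_Limit_Position: exception "limit position reached" ends the use case.
    public void evLimitPosition() {
        if (state == State.SYSTEM_REACTION) {
            windowAtLimit = true;
            state = State.IDLE;              // o_Stop_Window(); o_Notify_UC()
        }
    }

    public State current() { return state; }

    public static void main(String[] args) {
        TotalMovementStatechart sc = new TotalMovementStatechart();
        sc.evStartTotalMovement();
        sc.evLimitPosition();
        System.out.println(sc.current());    // IDLE
    }
}
```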
Quality Management Based on Inspections and Simulation

The quality manager is responsible for early quality assurance. She uses perspective-based inspection (Basili et al., 1996; Laitenberger, 2000) to ensure that all important stakeholders give early feedback on the documents and that each document is scrutinized wrt. a wide variety of viewpoints. The most important stakeholders of the RD and SD are marketing of the car manufacturer, requirements engineers, system, hardware, and software engineers, testers, and maintainers. Each stakeholder uses QUASAR reading scenarios that are tailored to the quality needs of each perspective (Paech & Denger, 2004; Denger & Ciolkowski, 2003). A reading scenario consists of a detailed description of activities an inspector should perform and a set of questions that should be answered while and after performing these activities. For example, the reading scenarios of the tester describe step by step how to derive test cases from the RD or the SD. Consequently defects that reduce the testability of the RD or the SD are detected. As the inspectors perform activities typical for the stakeholder roles, the created results can subsequently be used as sketches for the real work products. With the reading scenarios the quality manager can also involve novices as inspectors, as they have detailed guidance on what to look for. Inspections also support knowledge transfer between experts and novices, as experiences about typical defects are used to enhance the scenarios. Thus novices quickly learn about important quality aspects and typical defects. Finally novices learn about expert knowledge in the inspection meeting where the findings of all inspectors are collected.

As part of the inspections the quality manager uses simulation. The SCs are developed within the tool Rhapsody, which allows the behavior of the SCs to be executed and visualized. Thus the quality manager checks for complex defects resulting from the dynamic interaction of specified functionalities. For example, typical scenarios from the UCs are checked against the SC to detect defects in the SCs due to wrong refinement of the UCs. In addition the visualization gives feedback on the behavior of the UCs. This supports the communication about the user point of view.
Change Management Based on Traceability Guidelines and Requirements Management Tools

Change management involves various roles: the requirements engineer who has to establish traces that can be analyzed, the project planner or project manager who has to
analyze traces in order to estimate the costs of proposed changes, and the maintainer who has to analyze traces to implement proposed and agreed changes consistently. There are other roles that are also involved in change, such as the quality engineer who has to verify the quality of changed documents. However these roles are not yet supported by the change management techniques of QUASAR.

The requirements engineers create the different requirements documents with support of the QuaTrace tool (Figure 5).

Figure 5. The QuaTrace tool environment

They apply the tracing guidelines to establish traces between related requirements (von Knethen, 2001b). The QUASAR tracing guidelines reduce the necessity for explicit link setting by using implicit links such as name traces or relationships given by the documentation structure. This reduces the effort for establishing traces. The requirements engineers establish traces between textual requirements and model elements (for example, UCs, SCs) with the help of RhapsodyInterface (that is, the first component of our QUASAR tool environment). RhapsodyInterface allows model elements to be exported to RequisitePro, where the requirements engineers are now able to establish traces. As soon as all implicit and explicit traces of a requirements document are established, the requirements engineer extracts these traces (for example, a name trace between a natural language paragraph and a use case) by using the Relation-Finder. Relation-Finder stores all extracted traces in a relation file. The project planner and the maintainer use the Relation-Viewer to analyze the impact of changes (semi-)automatically. The Relation-Viewer analyzes and visualizes the traces stored in the relation file. It helps to rate the effort and costs of changes (there is an export functionality to Excel) and to change the requirements documents consistently. The maintainer uses tracing guidelines to update established traces and add new traces, too.

The tracing guidelines and the tool environment support efficient establishing of traces. However, due to the uncertainty of upcoming changes, the requirements engineer is often not motivated to document his or her knowledge (see also work from organizational psychology about the importance of experienced meaningfulness, experienced responsibility, and knowledge about the results of work activities; Cook & Salvendy, 1999; Peters & Peters, 1999). Therefore the project manager has to make a tradeoff between overspecification and demotivation of the requirements engineer and underspecification
and overload of the maintainer. This tradeoff decision is supported by the following additional QUASAR practices:
• The requirements engineers anticipate change by eliciting possible changes to requirements. They highlight these requirements and capture dependencies to and from these requirements.
• The requirements engineers capture the rationale of a requirement based on a Socratic dialogue by asking three times “why” (Dutoit & Paech, 2001).
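The internals of QuaTrace, Relation-Finder, and Relation-Viewer are not described in the chapter. As a minimal sketch under that caveat, an impact analysis over explicit traces can be understood as a transitive traversal of a trace graph, as the hypothetical fragment below illustrates; all names are assumptions of this example.

```java
import java.util.*;

// Illustrative sketch: traces between documentation entities (use cases,
// statecharts, textual requirements) stored as a directed graph, and the
// impact set of a proposed change computed by following traces transitively.
public class ImpactAnalysis {

    private final Map<String, Set<String>> traces = new HashMap<>();

    public void addTrace(String from, String to) {
        traces.computeIfAbsent(from, k -> new HashSet<>()).add(to);
    }

    // Breadth-first traversal over the trace graph from the changed entity.
    public Set<String> impactOf(String changedEntity) {
        Set<String> impacted = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>(List.of(changedEntity));
        while (!queue.isEmpty()) {
            String current = queue.poll();
            for (String next : traces.getOrDefault(current, Set.of())) {
                if (impacted.add(next)) {
                    queue.add(next);
                }
            }
        }
        return impacted;
    }

    public static void main(String[] args) {
        ImpactAnalysis analysis = new ImpactAnalysis();
        analysis.addTrace("UC Total window movement", "SC Total window movement");
        analysis.addTrace("SC Total window movement", "Test case TC-17");
        System.out.println(analysis.impactOf("UC Total window movement"));
    }
}
```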
Evaluation

In this section, we summarize empirical studies on the QUASAR process. The first empirical study was an experiment to evaluate the understandability of UCs compared to unstructured natural language documents. One group of six practitioners read an unstructured RD of the seat positioning part of the case study. A second group of six practitioners read the same content but specified with UCs. After reading the documents each subject had to answer questions regarding the content of the document. In order to compare the understandability of the two notations, we measured the time for reading the documents, the time for answering the questions, and the number of correct answers regarding the content (see comparable experiment designs, for example, in Kamsties, von Knethen, & Reussner, 2003). The experiment showed that there was no difference in the correctness of the answers, it took more time to read the UC document, and it took less time to answer the questions for the UC document. We did not check the statistical significance of our results because of the low number of participants. However we think the results support the usefulness of UCs.

The guidelines for the creation of the UCs and the UC inspection process were evaluated in a case study in a practical course at the Technical University of Kaiserslautern. In this case study 12 students used the guidelines to create UCs of a building automation system. After the completion of the UC document, the students had to answer a questionnaire regarding the usefulness and applicability of the guidelines. The guidelines were perceived as useful and applicable by the subjects (Denger et al., 2003b). Furthermore the students used three tailored reading scenarios to inspect the UCs. We analyzed the effectiveness and the efficiency by measuring the number of detected defects and the time needed to perform the inspection compared to another inspection approach (checklist-based reading). Additionally the students participated in a survey regarding the perceived usefulness of the reading scenario. On average a higher number of defects was found with our inspection approach but more time was needed to perform the inspection. The reading scenario advises the inspectors to perform real work activities on the document. The results of the questionnaire indicate that the detailed guidance given in the reading scenarios is perceived as helpful to support the inspection compared to a pure checklist.

Finally the guidelines to derive SCs from UCs were successfully applied in a seminar with four students at the Technical University of Kaiserslautern. The students reported that
Requirements Engineering for Technical Products
167
the clear relationship (lean traceability) between the UCs and SCs imposed by the guidelines was very helpful. They appreciated the concrete guidance to develop an executable SC model from the UCs. Altogether these evaluations show that the templates and guidelines for the integration of specification and quality assurance are helpful. So far we have not evaluated the change management techniques in QUASAR. However similar techniques for a comparable documentation structure were evaluated in a controlled experiment and two case studies. These results strongly suggest that the techniques have a beneficial effect on the efficiency of impact analyses and the quality of the resulting set of change impacts (see von Knethen, 2001a; von Knethen, 2001b). Therefore we also have the indication that the change management techniques are helpful. Clearly it is difficult to judge the applicability in industry from student experiments. In our view the main obstacle to widespread use of the QUASAR process is that the detailed guidelines have to be adapted to the company context. This additional effort should be outbalanced by the benefits on the quality of the requirements.
Future Trends
We have presented an RE approach to cope with the major sociotechnical challenges of technical products. Future trends concern, on the one hand, improved RE and, on the other hand, innovations in the technical products. An example of the former is the tight integration of inspection, simulation, and test case derivation as a means to enhance the quality of requirements. This can be achieved based on a defect classification, which allows the defect detection technique best suited to each defect class to be chosen. In addition, measurement of the defects found can be used to focus quality assurance effort. First results in this direction can be found in Freimut & Denger (2003). An example of the latter is the trend toward telematics and infotainment in the car. Because of the innovative user interfaces, this requires in particular usability engineering techniques tightly integrated with RE (Paech & Kohler, 2003). In addition, it increases the number of stakeholders involved. For example, in the case of telematics, many drivers interact with a central authority. For such systems, detailed analysis of the system context and the stakeholders, as is typical for collaborative embedded systems, needs to be emphasized.
Acknowledgments
We thank our project partners at Fraunhofer FIRST and colleagues at DaimlerChrysler for fruitful discussions. This approach was developed in the QUASAR project supported by the BMBF under the label VFG0004A.
References
Alexander, I., & Robertson, S. (2004). Understanding project sociology by modeling stakeholders. IEEE Software, (January/February), 23-27.
Basili, V.R., Green, S., Laitenberger, O., Lanubile, F., Shull, F., Sorumgard, S., & Zelkowitz, M. (1996). The empirical investigation of perspective-based reading. Empirical Software Engineering, 1, 133-164.
Cockburn, A. (2001). Writing effective use cases. Addison Wesley.
Constantine, L.L., & Lockwood, L.A.D. (2001). Structure and style in use cases for user interface design. In M. van Harmelen (Ed.), Object-oriented user interface design (pp. 1-27). Addison-Wesley.
Cook, J.R., & Salvendy, G. (1999). Job enrichment and mental workload in computer-based work: Implications for adaptive job design. International Journal of Industrial Ergonomics, 24(1), 13-23.
Denger, C., & Ciolkowski, M. (2003). High quality statecharts through tailored perspective-based inspections. Proceedings of the EuroMicro Conference (316-323).
Denger, C., Kerkow, D., von Knethen, A., & Paech, B. (2003a). A comprehensive approach for creating high-quality requirements and specifications in automotive projects. Proceedings of the International Conference on Systems and Software Engineering and Application.
Denger, C., Paech, B., & Benz, S. (2003b). Guidelines – Creating use cases for embedded systems (IESE-Report 033.03). Kaiserslautern, Germany: Fraunhofer Institute for Experimental Software Engineering.
Dutoit, A.H., & Paech, B. (2001). Rationale management in software engineering. In S.K. Chang (Ed.), Handbook of software engineering and knowledge engineering (787-815). World Scientific.
Freimut, B., & Denger, C. (2003). A defect classification scheme for the inspection of QUASAR requirement documents (IESE-Report 076.03). Kaiserslautern, Germany: Fraunhofer Institute for Experimental Software Engineering.
Glinz, M. (2002). Statecharts for requirements specification – As simple as possible, as rich as needed. Proceedings of the ICSE Workshop on Scenarios and State Machine Models, Algorithms and Tools.
Grimm, K. (2003). Software technology in an automotive company – Major challenges. Proceedings of the International Conference on Software Engineering, IEEE, 498-503.
Harel, D., & Politi, M. (1998). Modeling reactive systems with statecharts: The Statemate approach. McGraw Hill.
Heitmeyer, C., & Bharadwaj, B. (2000). Applying the SCR requirements method to the light control case study. Journal of Universal Computer Science (JUCS), August, 650-679.
Houdek, F., & Paech, B. (2002). Das Türsteuergerät. Eine Beispielspezifikation (IESE-Report 002.02/D, in German). Kaiserslautern, Germany: Fraunhofer Institute for Experimental Software Engineering.
IEEE Recommended Practice for Software Requirements Specification. (1998). Standard 830-1998.
Kamsties, E., von Knethen, A., & Paech, B. (2001). Structure of QUASAR requirements documents (IESE-Report 073.01). Kaiserslautern, Germany: Fraunhofer Institute for Experimental Software Engineering.
Kamsties, E., von Knethen, A., & Reussner, R. (2003). A controlled experiment to evaluate how styles affect the understandability of requirements specifications. Information and Software Technology, 45(14), 955-965.
Laitenberger, O. (2000). Cost effective detection of software defects through perspective-based inspections (PhD Theses in Experimental Software Engineering, Fraunhofer IRB Verlag).
Paech, B., & Denger, C. (2004). An integrated quality assurance approach for use case based requirements. Modellierung 2004, Lecture Notes in Informatics (45), 59-74.
Paech, B., & Kohler, K. (2003). Usability engineering integrated with RE. Proceedings of the ICSE Workshop Bridging the Gap between HCI and SWE.
Parnas, D.L., & Madey, J. (1995). Functional documents for computer systems. Science of Computer Programming, 25, 41-61.
Peters, T.J., & Peters, T. (1999). The project 50 (reinventing work): Fifty ways to transform every "task" into a project that matters. Knopf.
Ramesh, B., & Jarke, M. (2001). Towards reference models for requirements traceability. IEEE Transactions on Software Engineering, 27(1), 58-93.
RequisitePro. Retrieved on November 11, 2004 from http://www.rational.com/products/reqpro/index.jsp.
Rhapsody. Retrieved on November 12, 2004 from http://www.ilogix.com/products/rhapsody/index.cfm.
Sommerville, I., & Sawyer, P. (1997). Requirements engineering – A good practice guide. Addison-Wesley.
Sutcliffe, A., & Minocha, S. (1998). Analysing sociotechnical system requirements. CREWS Report 98-37.
Telelogic. Retrieved on November 12, 2004 from http://www.telelogic.com.
Thompson, J.M., Heimdahl, M.P.E., & Miller, S.P. (1999). Specification-based prototyping for embedded systems. ESEC/FSE, 163-179.
von Knethen, A. (2001a). A trace model for system requirements changes on embedded systems. Proceedings of IWPSE'01 (17-26).
von Knethen, A. (2001b). Change-oriented requirements traceability: Support for evolution of embedded systems (PhD Theses in Experimental Software Engineering, Fraunhofer IRB Verlag).
von Knethen, A., & Grund, M. (2003). QuaTrace: A tool environment for (semi-)automatic impact analysis based on traces. Proceedings of the International Conference on Software Maintenance, ICSM (246-255).
Chapter XI
Requirements Engineering for Courseware Development
Ines Grützner, Fraunhofer Institute for Experimental Software Engineering, Germany
Barbara Paech, University of Heidelberg, Germany
Abstract
Technology-enabled learning using the Web and the computer, and courseware in particular, is becoming more and more important as an addition to, an extension of, or a replacement for traditional further education measures. This chapter introduces the challenges and possible solutions for requirements engineering (RE) in courseware development projects. First the state-of-the-art in courseware requirements engineering is analyzed and confronted with the most important challenges. Then the IntView methodology is described as one solution to these challenges. The main features of IntView RE are: support of all roles from all views on courseware RE; focus on the audience, supported by active involvement of audience representatives in all activities; comprehensive analysis of the sociotechnical environment of the audience and the courseware as well as of the courseware learning context; coverage of all software RE activities; and development of an explicit requirements specification documentation.
Courseware: A Typical Sociotechnical System
Continuing professional education and life-long learning are both vital in order to maximize competitiveness, introduce innovative technologies, and prepare for new challenges in all branches of industry. Traditional strategies for education and training such as seminars are not able to fulfill the growing demand for further education in a topical and efficient way. Therefore technology-enabled learning using the Web and the computer, and courseware in particular, is becoming more and more important as an extension or a replacement of traditional further education (Levis, 2002; Ochs & Pfahl, 2002).
We denote as courseware any instructional system delivering content or assignments via computers in order to support learners as well as teachers in technical and instructional ways. In other words, from the users' point of view, courseware may be seen as educational material (content and instructional guidelines) that is distributed via the Web for training purposes. From the developers' point of view, courseware can be perceived as a collection of multimedia documents interrelated by means of (perhaps restricting) navigational structures, which is supplemented with community functionalities.
Courseware is usually not developed by software companies but by content experts such as publishing companies, universities, or companies that want to use the courseware for their internal further education. This is mainly due to the fact that courseware is perceived as an instructional product, although it is definitely software. Courseware is a special kind of software, adding an instructional dimension to the content, functional, nonfunctional, and user interface dimensions of traditional software. Its main goal is to support the learners in achieving their learning objectives in an effective and efficient way. The software and user interface features of courseware are essential in achieving this goal but also have to fit into and support the overall instructional strategy of the courseware. Furthermore courseware is very often integrated into a larger educational program (that is, a curriculum). The courseware is used to achieve only a few of the learning objectives of the program; the other objectives are achieved by seminars, talks, workshops, virtual communities, and so forth.
Thus, courseware is one particularly complex example of a sociotechnical system that requires equal support for user needs and technological innovations. Requirements engineering processes for such systems typically build on a user- and usage-centered process (for example, Constantine & Lockwood, 1999). They start from task and user profiles and gradually develop an understanding of the domain, of the technological options, and of the interactions between users and software (for example, the TORE approach (Paech & Kohler, 2003)). The challenge for courseware development is that, in addition to the procurers, users, domain experts, and software developers, there are also instructional experts as stakeholders. Thus, two levels of tasks have to be reflected: on the one hand the learning task (interest of the instructional experts) and on the other hand the working task, whose performance should be supported through the course (interest of the procurers, users, and domain experts). Both influence the functionality, quality, content, and presentation of the courseware. In addition they are embodied in an instructional strategy that drives the courseware.
The working tasks of the learners define which knowledge, skills, and attitudes have to be taught in the courseware in order to enable the learners to perform these tasks successfully. Following a taxonomy adapted from Gagné, Briggs, & Wager (1992), this means that:
• Knowledge includes both declarative and procedural knowledge.
• Skills demonstrate that the learners are able to perform a learning task or a working task.
• Attitudes amplify positive or negative reactions toward objects, persons, or events/situations.
The learning tasks define how the knowledge, skills, and attitudes have to be taught in order to enable effective and efficient learning in the sociotechnical environment of the learners and the courseware. The sociotechnical environment consists of the places where the learners learn during an educational program or work with a courseware, as well as of the technical equipment and the environmental situation in which learning takes place.
The fact that courseware RE has to support two different types of tasks increases its complexity. For example, it requires the introduction of requirements elicitation and quality assurance methods from pedagogy and instructional design. Traditional software requirements elicitation and quality assurance methods like prototyping can only be applied to elicit or assure information regarding the learning tasks. However, they cannot be used to elicit and assure that all required knowledge, skills, and attitudes are taught and that they are taught efficiently in the specified environment and learning context. Therefore instructional elicitation and quality assurance methods, and evaluations used to measure the impact on and the gains for the audience (Reinmann-Rothmeier, Mandl, & Prenzel, 1994), have to be applied additionally in courseware RE.
In the following, we focus on elicitation, analysis, and documentation of requirements but omit requirements management. For the latter, traditional RE techniques like traceability can be used. In the next section we discuss the state-of-the-art and state-of-the-practice of RE for courseware development. We also introduce important terminology and identify the main challenges for courseware RE. In order to illustrate the challenges and possible solutions, we present how the IntView process developed at Fraunhofer IESE addresses these challenges. We close with a short summary, lessons learned from the application of IntView, and a discussion of future research.
State-of-the-Art of Requirements Engineering for Courseware Development
As of today, many approaches for courseware development have been published. However these approaches have several shortcomings regarding RE. To identify these shortcomings we have analyzed more than 30 courseware development approaches published in books, journals, or on Web sites, and their RE activities. Summarizing the results of this literature review, we have identified the following five challenges courseware RE has to deal with:
1. Equal support of all stakeholder views
Taking care of two different kinds of tasks increases not only the complexity but also the number of roles involved in courseware RE. All roles involved can be clustered into the following four stakeholder views:
• The content-instructional view, dealing mainly with the content and the instructional design of courseware, for example, Hannafin & Peck (1988), Lee & Owens (2000);
• The technical-graphical view, dealing mainly with user interface design and the technical implementation of courseware, for example, Hall (1997), McCormack & Jones (1998), Weidauer (1999);
• The managerial view, dealing mainly with project management aspects, for example, Bruns & Gajewski (1999), Schanda (1995); and
• The usage-oriented view, dealing with the task aspects.
2. Comprehensive RE dealing with both learning and working tasks
This challenge has to be split into two interdependent parts. First, courseware development has to support all RE activities. The categories for classifying the approaches in the review regarding this challenge are:
• Comprehensive RE including all RE activities,
• Partial RE including some RE activities, and
• No RE.
Second, courseware RE has to deal with both working and learning tasks. The categories for classifying the approaches in the review regarding this challenge are:
• Both tasks,
• Learning tasks only, and
• Working tasks only.
3. Real user- and learning objective-centered RE
Representatives of the potential audience have to participate actively in the RE activities in order to enable the planned courseware to achieve its goal, namely the support of the learners in achieving their learning objectives in an effective and efficient way.
4. Gradual development of an understanding of the domain, of the technological options, and of the sociotechnical environment
Attaining this understanding is very important, since the learning tasks of the educational program and, eventually, its courseware have to be designed in accordance with the sociotechnical environment in order to guarantee learning success. The presence of a specific learning task, which was a major success factor of one educational program or courseware, does not result in learning success when applied in a new sociotechnical environment. To make matters worse, its presence could even be counter-productive (for example, working through animations with spoken explanations works well when learners work in a single-person office but not in an open-plan office). For the literature review we distinguish between the following three options:
• Comprehensive,
• Partial, or
• No incrementality.
5. Development of a comprehensive, explicit requirements specification documentation
A comprehensive, explicit requirements specification documentation is an important prerequisite for complete processing of the analysis results and, therefore, for a common understanding of the courseware requirements as a basis for the design. Of course common understanding could also be reached by intensive communication among representatives of the roles. However, in courseware RE, there are more different roles involved than in RE for "traditional" software. Furthermore these roles speak very different languages. Therefore it is almost impossible to achieve a common understanding by communication alone. In addition, if there is no requirements specification documentation, the decisions made on the basis of the analysis results will not be documented. Then it is almost impossible to change the courseware when new requirements arise or the sociotechnical environment changes. The categories of documentation identified in the literature review are:
• Specification required,
• Documentation of the results of the requirements activities, and
• No documentation required.
Table 1 provides an overview of how current courseware development approaches as well as the IntView RE methodology meet the challenges identified. This enables a comparison of the IntView courseware RE solutions to the current state-of-the-art.
Table 1. Overview of typical courseware development approaches and of IntView (the entries refer to challenges 1 to 5 as defined above)
Hannafin & Peck, 1988: (1) mainly content-instructional view; (2) partial RE dealing with both tasks; (3) no; (4) partial analysis; (5) documentation of the results of the requirements activities
Lee & Owens, 2000: (1) mainly content-instructional view; (2) partial RE dealing with both tasks; (3) no; (4) partial analysis; (5) documentation of the results of the requirements activities
Hall, 1997: (1) mainly technical-graphical view; (2) partial RE dealing with learning tasks only; (3) no; (4) partial analysis; (5) no documentation required
McCormack & Jones, 1998: (1) mainly technical-graphical view; (2) partial RE dealing with learning tasks only; (3) no; (4) partial analysis; (5) documentation of the results of the requirements activities
Weidauer, 1999: (1) mainly technical-graphical view; (2) comprehensive RE dealing with learning tasks only; (3) no; (4) partial analysis; (5) explicit requirements specification
Bruns & Gajewski, 1999: (1) mainly managerial view; (2) partial RE dealing with learning tasks only; (3) no; (4) partial analysis; (5) no documentation required
Schanda, 1995: (1) mainly managerial view; (2) no RE; (3) no; (4) no analysis; (5) no documentation required
IntView: (1) support of all roles from all views on courseware RE; (2) covers all software RE activities (as underlying software RE approach, the task-driven RE approach TORE by Paech & Kohler (2003) was chosen); (3) focus on the audience supported by active involvement of audience representatives in all activities; (4) comprehensive analysis of the sociotechnical environment of the audience and the courseware as well as of the courseware learning context; (5) development of an explicit requirements specification documentation (development of a specification optional)
Courseware Requirements Engineering with the IntView Methodology
The IntView courseware engineering methodology was developed by Fraunhofer IESE in order to address the challenges identified in the previous section. As a comprehensive development methodology, IntView integrates all of the important views of high-quality courseware development. It combines best practices of courseware development approaches with software engineering approaches (for example, Endres & Rombach, 2003; Kruchten, 1998) in order to meet these challenges; the lifecycle-encompassing quality assurance methodology, for example, is adapted from software engineering. At a high level of abstraction, the resulting IntView courseware methodology can be described as a V-Model-like, product-centered life-cycle model (Grützner, Pfahl, & Ruhe, 2002).
The RE activities are part of the second IntView phase, which produces the courseware requirements specification. The courseware requirements specification, which refines the problem statement, consists of several dependent elements. In addition, the project team and the plans for both the RE activities and the whole courseware development are established in order to perform IntView RE successfully. An overview of the products of the IntView RE activities is given in Figure 1.
The elements of the requirements specification follow the TORE approach (Paech & Kohler, 2003). The task level serves to identify the user roles and their tasks. On the domain level the tasks are analyzed to determine the to-be activities and data of the user roles and the support of the software. The support is detailed on the interaction level by devising a logical user interface structure, use cases, interaction data, and system functions. On the system level, the architecture and internal details of the functions are specified.
Figure 1. The products of the IntView RE activities and their dependencies (the courseware requirements specification comprises the analysis results on the task level, the instructional specification on the domain level, the interaction specification on the interaction level, and the courseware architecture on the system level; further products are the requirements engineering and courseware development project teams, the requirements engineering and courseware development plans, and the test cases)
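Read as a data model, the product structure sketched in Figure 1 could be captured roughly as follows; the class and attribute names are illustrative assumptions and are not prescribed by IntView or TORE.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AnalysisResults:              # task level
    audience_characterization: str = ""
    working_task_definitions: List[str] = field(default_factory=list)
    educational_needs: List[str] = field(default_factory=list)

@dataclass
class InstructionalSpecification:   # domain level
    main_learning_objective: str = ""
    subjects_to_be_taught: List[str] = field(default_factory=list)
    sociotechnical_environment: List[str] = field(default_factory=list)
    instructional_strategy: List[str] = field(default_factory=list)

@dataclass
class InteractionSpecification:     # interaction level
    functional_requirements: List[str] = field(default_factory=list)
    nonfunctional_requirements: List[str] = field(default_factory=list)

@dataclass
class CoursewareArchitecture:       # system level
    components: List[str] = field(default_factory=list)
    interfaces: List[str] = field(default_factory=list)

@dataclass
class CoursewareRequirementsSpecification:
    analysis_results: AnalysisResults
    instructional_specification: InstructionalSpecification
    interaction_specification: InteractionSpecification
    architecture: CoursewareArchitecture
    test_cases: List[str] = field(default_factory=list)
```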
The products are explained in the following subsections. To illustrate the products, we use examples from a courseware development project we have performed at Fraunhofer IESE. The goal of this project was to develop courseware that teaches how to develop technical documentation. The project was performed together with a local education organization acting as the customer. The customer also provided the working task experts, who are, in addition, the subject matter experts in the project team. Furthermore, participants of a current further education program in the field of technical documentation run by this education organization participated in the project as audience representatives. The courseware is to be used in programs that the education organization will organize for the further education of professionals as well as for unemployed people.
RE and Courseware Project Structure and Plans
First the plans (that is, a project plan, a quality assurance plan, and a risk management plan) and the project team for the RE activities have to be established. The project team consists of people representing roles from the four views, for example:
• Project manager and quality manager as managerial roles;
• Subject matter expert, instructional courseware designer, courseware author and editor, teachers/tutors, and human factors expert as content and instructional roles;
• Graphic designer and artist, multimedia developer, programmer, and courseware programmer as technical and graphical roles; and
• Members of the potential audience, representatives of the customer (for instance, the responsible project manager from the customer or a steering manager), and working task experts as roles representing the usage-oriented view.
The roles representing the usage-oriented view participate in the project team in order to involve the audience actively in the RE and to assure strict learner orientation. Similarly these roles ensure that all the required knowledge, skills, and attitudes are taught and that the sociotechnical environment of the learners and the courseware is specified correctly. These plans have to be consolidated at the end of the courseware RE, because the requirements and the architecture of the courseware have a great impact on the development process and the structure of the project team.
Analysis Results
In order to take the strong learner orientation into account, IntView RE starts with an analysis of the audience and its educational needs. The main goals of these analyses are to learn more about the potential learners and their learning and working tasks, as well as to specify the knowledge, skills, and attitudes required to perform the working tasks successfully. The results of both intertwined analyses are documented in an analysis results documentation consisting of the elements depicted in Figure 2.
Figure 2. The elements of the analysis results documentation and their dependencies (related to the audience: audience characterization, requested educational profile, and as-is educational profile; related to the educational needs: working task definition, required educational profile, target educational profile, and educational needs)
The focus of the audience analysis and the resulting audience characterization is on the learning tasks on the task level. In detail, the audience characterization provides insights into the characteristics of the potential learners and into their prerequisites for performing learning tasks, especially for performing learning tasks with new educational media. One of the most important characteristics is the expectation of the audience regarding the educational program, since this reveals a lot about the hopes and fears of the audience, which should, in turn, be dealt with in the following specifications (realize hopes, remove fears). Table 2 shows a section of the audience characterization from the example project. These audience characteristics are elicited with questionnaires or in interviews from both potential learners and customer representatives.
The educational needs analysis and its resulting documentation provide a detailed overview of the working tasks of the audience. The analysis starts with the definition and, if necessary, the refinement of all working tasks that should be supported by the educational program (see Table 3). In the next step, those knowledge, skills, and attitudes are specified that are required for performing each of the defined working tasks successfully from the working task experts' point of view. They are collected in the required educational profile (see Table 4). This required educational profile represents the objective educational need of the potential learners.
Table 2. Audience characterization in the area "Technical documentation" (target group: participants of further education measures)
Demographical data: Mix of male and female learners (15 up to 20 in each measure); age 25-55; mainly university graduates from all fields, but also college dropouts (!!) or persons with good writing skills; …
Motivation: Career opportunities (intrinsic); professional reorientation because of unemployment (intrinsic/extrinsic); …
Professional experiences: Almost no knowledge or experience in writing technical documentation
Learning experience: Experiences with traditional education from school and university; experiences with self-directed learning from university; …
Computer and Internet skills: Good command of word processors; rudimentary Internet skills (know how to surf the net and how to write e-mails); …
Expectations: A good interactive multimedia presentation that has an added value compared to books; …
Table 3. Section of the working task definition from the area "Technical documentation"
Working task 1: Evaluation of software to be documented. Refined working tasks (optional): 1.1 Evaluation of software functions, processes, and structures; 1.2 Evaluation of the graphical user interface; 1.3 Evaluation of the terminology used in the software
Potential sources for eliciting this profile are the representatives of the customer and of the working task experts, written job descriptions, or artifacts used, modified, or developed during the working tasks. Example methods for elicitation are: observations of experts while performing the working tasks (especially for elicitation of required skills); interviews or other kinds of questioning of customer and working task experts (especially for elicitation of knowledge and attitudes); or analysis of documents (especially for elicitation of knowledge and skills).
The required educational profile has to be supplemented by the subjective educational needs of the audience, that is, by the set of knowledge, skills, and attitudes the learners think they have to possess in order to perform their working tasks (requested educational profile, see Table 5). This is a measure of active learner participation in the RE. The example shows that the potential learners raise additional educational needs. Potential sources for eliciting this profile are the representatives of the potential audience, who are interviewed or asked to fill in questionnaires. Therefore it is a good choice to do this elicitation together with the audience analysis. Together with the requested educational profile, the "as-is" educational profile of the potential learners is elicited, which represents the set of knowledge, skills, and attitudes that the learners have already acquired (see Table 6).
The target educational profile is specified by merging the required and the requested educational profiles in one table. If there are opposing required and requested needs, the conflict has to be solved by negotiations between the working task experts and the representatives of the audience. The target educational profile is then compared to the "as-is" educational profile. The result of this gap analysis is the set of knowledge, skills, and attitudes that should be taught by the educational program (the educational need, see Table 7). Typically the educational program cannot teach all of them because of time restrictions. Therefore a selection has to be made (underlined elements in Table 7).
Table 4. Section of the required educational profile from the area "Technical documentation" (refined working task "1.1 Evaluation of software functions, processes, and structures")
Knowledge required: Procedures to run the evaluation; scenarios when to use which procedure; structure of bug reports; structure of terminology lists
Skills required: Analytical skills; …
Attitudes required: "Communication is helpful"; …
Table 5. Section of the requested educational profile from the area "Technical documentation" (refined task "1.1 Evaluation of software functions, processes, and structures")
Knowledge requested: Problems from practice and their solution
Skills requested: Applying a systematic methodology to evaluate software
Attitudes requested: None
Table 6. Section of the "as-is" educational profile from the area "Technical documentation" (refined task "1.1 Evaluation of software functions, processes, and structures")
Knowledge "as-is": Some procedures to run the evaluation; structure of bug reports
Skills "as-is": Analytical skills; applying a systematic methodology in general
Attitudes "as-is": None
Table 7. Section of the educational needs from the area "Technical documentation" with the marked needs (underlined) that have to be taught (refined task "1.1 Evaluation of software functions, processes, and structures")
Knowledge: Additional procedures to run the evaluation; scenarios when to use which procedure; structure of terminology lists; problems from practice and their solution
Skills: Applying systematic methodologies, especially in software evaluations; …
Attitudes: "Communication is helpful"; …
The analysis results documentation is verified by a perspective-based inspection (Laitenberger & DeBaud, 2000). Besides providing proof of the completeness and correctness of the elicited information, the main goal of this inspection is information dissemination among all team members.
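The merge of the required and requested profiles and the subsequent comparison with the "as-is" profile essentially amount to set operations. The following minimal sketch illustrates this for the knowledge entries of the example (taken from Tables 4 to 7); the representation as plain string sets is an assumption made only for this illustration.

```python
# Knowledge entries for refined working task 1.1 (see Tables 4-6); skills and
# attitudes would be handled in the same way.
required = {
    "Procedures to run the evaluation",
    "Scenarios when to use which procedure",
    "Structure of bug reports",
    "Structure of terminology lists",
}
requested = {"Problems from practice and their solution"}
as_is = {"Some procedures to run the evaluation", "Structure of bug reports"}

# Target profile: union of the objective and subjective needs (conflicts between
# the two sets would have to be negotiated before this step).
target = required | requested

# Educational need: whatever the target profile demands beyond what the learners
# already possess. A plain set difference only catches literally identical
# entries; partially covered items ("some procedures ...") still need a manual check.
educational_need = target - as_is

for item in sorted(educational_need):
    print("to be taught:", item)
```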
Instructional Specification
The decisions dealt with during the instructional specification are part of the domain level. Their goal is to specify the sociotechnical environment of the educational program and its courseware as well as the rough design of the educational program itself in a comprehensive way. The results will be documented in the elements of the instructional specification, which are shown in Figure 3. The instructional specification contains the rough design of the learning tasks. First the social part of the courseware is specified, which defines the constraints for the definition of the technical part of the courseware, including the development of the interaction specification documentation.
Figure 3. The elements of the instructional specification and their dependencies (the instructional specification comprises the main learning objective, the subjects to be taught, the description of the sociotechnical environment with the definition and description of the learning places, and the instructional strategy with the definition and specification of its phases)
Table 8. Learning objectives in the area "Technical documentation"
Knowledge (declarative): The learners can state the structure of a terminology list. …
Knowledge (procedural): The learners know all of the suitable procedures to perform a software evaluation.
Skills (working task): The learners are able to apply systematic methodologies in a software evaluation on their own.
Skills (learning task): The learners use the button "Additional information" to get more detailed information on a topic.
Attitudes: The learners choose to communicate with other technical writers when writing their documentation.
Table 9. Section of the educational program content in the area "Technical documentation" (refined task "1.1 Evaluation of software functions, processes, and structures")
Knowledge: Additional procedures to run the evaluation
• What are alternative procedures to evaluate software (in addition to the already known procedures)?
• What steps are to be followed when applying them?
• Which procedure should be applied in which context?
• …
The first result of the instructional specification activities is the main learning objective of the educational program. It specifies, in a testable way, the main results the learner has to achieve (Wambsganß, Eckert, Latzina, & Schulz, 1997). Taxonomies that can be applied in specifying learning objectives are presented in Engelhart, Bloom, & Krathwohl (1971) and Krathwohl, Bloom, & Masia (1971) as well as in Gagné et al. (1992). In Table 8, an adaptation of the latter taxonomy is applied to the educational needs defined in Table 7. However, the main learning objective of an educational program does not deal with learning objectives on the level of detail presented in Table 8, but subsumes all of them in one single goal on a much more general level. As an example, the main learning objective of the educational program in the area of technical documentation is: "The learners are able to write high-quality technical software documentation on their own in a user-oriented, systematic way."
The subsequent specification of the educational program content refines the educational needs to be satisfied in compliance with the main learning objective of the educational program (see Table 9).
Besides the specification of the content of the educational program, the sociotechnical environment of the educational program is analyzed in detail. The goal of this analysis is to identify constraints that the sociotechnical environment imposes on the program. Therefore the analysis identifies the places where the potential learners will learn during their participation in the educational program and describes the technical equipment at these places, the motivation of the learners to learn at these places, and the environmental situation at the places while the learners are learning (see Table 10).
Table 10. Section of the sociotechnical environment description for the area "Technical documentation"
Learning place 1: Computer classroom
Technical equipment: PC with Pentium II 333 MHz, 256 MB RAM, …
Environmental situation: Learners will sit next to each other in one room without any partition. Open for learning during the office hours, but learning mainly takes place during the lessons (timetable for the lessons). Possible factors of disturbance: chatting neighbors, … Goals of learning: acquisition of new knowledge and skills, training of new skills, … Kinds of learning: following pre-defined learning paths (learning step-by-step), looking up information, …
Motivation for learning (why do the learners learn at this place?): Scheduled lessons; performing learning tasks upon request by the teacher/tutor; information acquisition while performing working tasks that have to be performed upon request by the teacher/tutor
Learning place 2: Workplace
…
Table 11. Section of the instructional strategy in the area "Technical documentation"
Phase 1 of the educational program: Learning and Training I
Learning place: Computer classroom
Learning objective: The learners know how to evaluate software to be documented. …
Content: Procedures to perform an evaluation and the context for their application; …
Teaching methods: Traditional instruction with its different methods and with courseware as special learning material; courseware for explorative, self-directed, step-by-step learning of knowledge (guided tour required); …
Duration: 2 months
Learner support concept: Face-to-face support by teachers/tutors who are present in the computer classroom
Communication concept: Face-to-face communication between teachers/tutors and learners as well as between learners
Collaboration concept: Not required
Phase 2 of the educational program: Project I
…
The instructional strategy combines these inputs and specifies how the performance of the learning tasks will be supported by the program and its courseware (see Table 11). The instructional specification is also verified by a perspective-based inspection. Besides providing proof of the completeness and correctness of the documentation, the goal of this verification is to reach a broad consensus on the specification among the project team members, especially the representatives of the customer and of the working task experts, and to get their final commitment that the instructional specification is feasible (that is, this verification establishes a consolidated basis for the subsequent specification of the requirements).
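The way the environment description of Table 10 constrains later requirements (for example, the audio restriction for shared rooms that reappears as an implementation requirement in Table 12) can be illustrated with a small rule check; the attribute names and the single rule shown here are assumptions made for this illustration, not part of IntView.

```python
# Simplified learning-place descriptions in the spirit of Table 10.
learning_places = [
    {"name": "computer classroom", "shared_room": True, "headphones_available": False},
    {"name": "workplace (open-plan office)", "shared_room": True, "headphones_available": False},
]

def derive_media_constraints(places):
    """Derive courseware constraints from the sociotechnical environment description."""
    constraints = set()
    if any(p["shared_room"] and not p["headphones_available"] for p in places):
        constraints.add("Use audio sequences only if there is no other way to teach a topic")
        constraints.add("Supplement any audio sequence with a written version of the spoken text")
    return constraints

for constraint in sorted(derive_media_constraints(learning_places)):
    print(constraint)
```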
Interaction Specification
Starting with the interaction specification, IntView RE deals only with the courseware that is part of the educational program. Functional and non-functional requirements specify which courseware functionalities and characteristics are required in order to enable efficient performance of the learning tasks specified in the instructional specification. Experience from projects shows that this specification is developed more efficiently when one or two experienced team members develop an initial draft, which is later discussed and consolidated in the team.
The courseware interaction specification mainly deals with the learning tasks as they were specified in the instructional specification. The functional requirements specify, for example, navigational functionalities, functionalities for orientation in the courseware space, and functionalities for the support of cooperation and collaboration, including a rough dialog and user interface design. As representation, IntView proposes the use of structured, clearly written natural language complemented with use cases and flow charts. Besides the functional requirements, well-known non-functional requirements fields like performance and usability have to be covered. In addition, these well-known requirements fields have to be extended by courseware-specific fields such as:
• Data requirements regarding the content of the courseware (for example, the topics that have to be covered by the courseware and characteristics of these topics, that is, completeness, up-to-date-ness, and consistency);
• Data requirements that specify how the content has to be modularized and organized (for example, the number of modules to be developed, the maximum online learning time a module is allowed to comprise, or sequences of content that have to be realized in the courseware);
• Implementation requirements, which deal with restrictions on the media usage in the courseware and on the parameters of the media types (for example, audio sequences are not allowed because the audience will learn in open-plan offices); and
• Instructional requirements, which refine the instructional specification (for example, writing, language, and presentation styles, usage of examples, as well as exercises and their feedback mechanisms).
Table 12. Section of the interaction specification for the courseware in the area "Technical documentation"
Functional requirements
F1: Step-by-step forward guidance of the learner through all pages of the courseware (guided tour forward)
…
FA: Provide a message board enabling teachers/tutors to make asynchronous announcements
…
Data requirements
NF 1: Topics to be covered by the courseware: 1) Evaluation of the software to be documented; 2) …
NF 2: Structure of the courseware to be followed: 1) Knowledge presentation; 2) Exercises to test how much of the knowledge has been learned; 3) …
…
Data requirements
NF X: Number of modules to be developed: 13
NF X+1: Maximum length of a module: 30 minutes without exercises
…
Implementation requirements
NF Y: Use of audio sequences only if there is no other way to teach a topic
NF Y+1: If audio sequences have to be used, they have to be supplemented by a written version of the spoken text in order to allow audio to be switched off
…
Instructional requirements
NF Z: Use of a gender-independent writing and visualization style
NF Z+1: Use of a single example with a practical background in all modules
…
Table 12 presents example requirements for each of these requirements categories as well as for the functional requirements. The methods for verifying the courseware requirements are inspections or reviews as well as prototyping. The goal, aside from assuring the quality of the requirements, is to gain every project team member's commitment to the requirements. In addition, the specification of acceptance test and system test cases is used as a quality assurance measure. However, these test cases cannot cover all elements of the courseware that have to be validated in the tests, because there are non-testable items, for example, media parameters. Therefore a checklist for such non-testable items has to be developed, which is applied together with the test cases.
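To keep track of which requirements are covered by acceptance or system test cases and which end up on the checklist of non-testable items, a very simple bookkeeping structure is sufficient. The identifiers below follow the pattern of Table 12; everything else in this sketch is an illustrative assumption.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Requirement:
    rid: str        # e.g. "F1" or "NF Y", as in Table 12
    text: str
    testable: bool  # non-testable items go on the checklist instead

requirements = [
    Requirement("F1", "Step-by-step forward guidance through all pages (guided tour forward)", True),
    Requirement("NF 1", "Topics to be covered by the courseware", True),
    Requirement("NF Y", "Use of audio sequences only if there is no other way to teach a topic", False),
]

# Mapping from requirement id to hypothetical acceptance/system test case ids.
test_cases: Dict[str, List[str]] = {"F1": ["AT-01"], "NF 1": ["ST-07"]}

uncovered = [r.rid for r in requirements if r.testable and not test_cases.get(r.rid)]
checklist = [r.text for r in requirements if not r.testable]

print("testable requirements without a test case:", uncovered)
print("checklist items for non-testable requirements:", checklist)
```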
Courseware Architecture
The decisions made during the architecture specification are part of the system level. The goal of the courseware architecture specification activity is to specify how the courseware interacts with the components required to run the courseware. Examples of such components are a learning management system (LMS) or, if no LMS is used, a user management component, a chat system, and a content management system. The activity also supports the selection of suitable hardware to run the courseware. The architecture documentation contains a specification of the architecture components (hardware and software), the functionalities they provide or support, and their interfaces. It is verified by an inspection or a review in order to assure that the architecture can be implemented and that it is able to fulfill the functional as well as the non-functional requirements in an efficient way.
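As an illustration of what the interface part of the architecture documentation has to pin down, the following sketch declares a possible courseware-side view of an LMS interface together with a stand-in implementation. The operations shown are assumptions for this example and would have to be replaced by the interface of the concrete LMS (or of separate user management, chat, and content management components).

```python
from typing import Protocol, List

class LearningManagementSystem(Protocol):
    """Interface the courseware expects from the surrounding LMS (illustrative)."""
    def authenticate(self, user: str, password: str) -> bool: ...
    def record_progress(self, user: str, module_id: str, completed: bool) -> None: ...
    def post_announcement(self, text: str) -> None: ...   # e.g. for the message board (FA)
    def list_modules(self) -> List[str]: ...

class DummyLMS:
    """Stand-in implementation, useful for prototyping the courseware without a real LMS."""
    def __init__(self) -> None:
        self.progress = {}
        self.announcements: List[str] = []

    def authenticate(self, user: str, password: str) -> bool:
        return bool(user)

    def record_progress(self, user: str, module_id: str, completed: bool) -> None:
        self.progress[(user, module_id)] = completed

    def post_announcement(self, text: str) -> None:
        self.announcements.append(text)

    def list_modules(self) -> List[str]:
        return ["Module 1: Evaluation of the software to be documented"]
```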
Conclusions, Experiences, and Future Research
This chapter introduced the IntView courseware RE activities. These systematic activities are designed to meet the challenges of courseware RE. IntView courseware RE activities have been applied in several projects. Experience shows that the methodology fulfills its promises and facilitates fast, efficient RE for courseware. Findings from the projects show, for example:
• A good way to specify the educational program content is to ask questions, as shown in Table 9, that have to be answered in the educational program in order to achieve the overall learning objective. This allows the content to be verified later on in the development project, because it can be checked whether all questions have been answered or not.
• A good way to develop the instructional specification is a workshop with all members of the project team. During such a workshop the team members can discuss their viewpoints face to face and decide on a joint instructional specification instead of doing this in a time-consuming and often drawn-out off-line process.
• The discussion and verification of the different elements of the courseware requirements specification leads to a stable requirements specification. In addition it creates a broad consensus in the project team, which is an important prerequisite for an efficient design phase.
IntView RE will be applied and improved in future projects. In addition, it will be adapted to meet challenges arising from new approaches to courseware development such as the semi-automatic, on-demand assembly of courseware from already existing courseware fragments. For example, the RE activities have to be performed much faster and more efficiently to create courseware just in time. In addition, the requirements specification has to be made by a single person, that is, by the person who will create the courseware by applying a tool. Unfortunately, in most cases, this person will not be able to fill all the roles of courseware RE. Therefore requirements scenarios have to be specified in advance by a project team. While running the semi-automatic assembly system, the responsible person can use these scenarios to establish the requirements for her/his courseware. IntView RE activities will have to be adapted in order to meet such new challenges.
Acknowledgments
The development of the IntView lifecycle model and of IntView RE was partly funded by the "e-Qualification Framework (e-QF)" project under grant 01AK908A of the German Federal Ministry of Education and Research (BMBF). The comprehensive example and the experiences in applying IntView RE are taken from the project "Entwicklung und Erprobung modularisierter Lerneinheiten zum Profil ‚IT Technical Writer'", funded under grant 01NM244A of the German Federal Ministry of Education and Research (BMBF).
References
Bruns, B., & Gajewski, P. (1999). Multimediales Lernen im Netz: Leitfaden für Entscheider und Planer. Berlin, Heidelberg, New York: Springer (in German).
Constantine, L., & Lockwood, L. (1999). Software for use. Addison Wesley.
Endres, A., & Rombach, H.D. (2003). A handbook of software and systems engineering: Empirical observations, laws and theories. New York: Addison-Wesley.
Engelhart, M.D., Bloom, B.S., & Krathwohl, D.R. (1971). Taxonomy of educational objectives, vol. 1, "Cognitive domain," 16th print. New York: MacKay.
Gagné, R.M., Briggs, L.J., & Wager, W.W. (1992). Principles of instructional design (4th ed.). New York: Holt, Rinehart and Winston.
Grützner, I., Pfahl, D., & Ruhe, G. (2002, May). Systematic courseware development using an integrated engineering style method. Proceedings of the World Congress "Networked Learning in a Global Environment: Challenges and Solutions for Virtual Education."
Hall, B. (1997). Web-based training cookbook. New York: Wiley & Sons.
Hannafin, M.J., & Peck, K.L. (1988). The design, development, and evaluation of instructional software. New York: Macmillan.
Krathwohl, D.R., Bloom, B.S., & Masia, B.B. (1971). Taxonomy of educational objectives, vol. 2, "Affective domain," (1st ed., reprint). New York: MacKay.
Kruchten, P. (1998). The rational unified process: An introduction. Reading, MA: Addison-Wesley.
Laitenberger, O., & DeBaud, J.M. (2000). An encompassing life-cycle centric survey of software inspection. Journal of Systems and Software, 50(1), 5-31.
Lee, W.W., & Owens, D.L. (2000). Multimedia-based instructional design: Computer-based training, Web-based training, distance broadcast training. San Francisco: Jossey-Bass/Pfeiffer.
Levis, K. (2002). The business of (e)learning: A revolution in training and education markets (extract). Screen Digest. Retrieved July 11, 2003, from http://www.screendigest.com/content/R.E-LEARN_12_2002-more.html/view.
McCormack, C., & Jones, D. (1998). Building a Web-based education system. New York: Wiley & Sons.
Ochs, M., & Pfahl, D. (2002). eLearning market potential in the German IT sector: An explorative study. Kaiserslautern, Germany: Fraunhofer IESE. Retrieved November 2, 2003, from http://www.iese.fhg.de/market_survey.
Paech, B., & Kohler, K. (2003). Task-driven requirements in object-oriented development. In J. Leite & J. Doorn (Eds.), Perspectives on RE. Kluwer Academic Publishers.
Reinmann-Rothmeier, G., Mandl, H., & Prenzel, M. (1994). Computerunterstützte Lernumgebungen: Planung, Gestaltung und Bewertung. Erlangen: Publicis-MCD-Verlag (in German).
Schanda, F. (1995). Computer-Lernprogramme: Wie damit gelernt wird; wie sie entwickelt werden; was sie im Unternehmen leisten. Weinheim, Basel: Beltz (in German).
Wambsganß, M., Eckert, S., Latzina, M., & Schulz, W.K. (1997). Planung von Weiterbildung mit multimedialen Lernumgebungen. In H.F. Friedrich, G. Eigler, H. Mandl, W. Schnotz, F. Schott, & N.M. Seel (Eds.), Multimediale Lernumgebungen in der betrieblichen Weiterbildung: Gestaltung, Lernstrategien und Qualitätssicherung. Neuwied, Kriftel, Berlin: Luchterhand (in German).
Weidauer, C. (1999). Ein Vorgehensmodell für die industrielle Entwicklung multimedialer Lehr- und Lernsysteme. Forschungsgruppe SofTec NRW, Softwaretechnische Anforderungen an multimediale Lehr- und Lernsysteme. Retrieved December 13, 2000, from http://www.uvm.nrw.de/News/AktuellesFS (in German).
Chapter XII
Collaborative Requirements Definition Processes in Open Source Software Development
Stefan Dietze, Fraunhofer Institute for Software and Systems Engineering (ISST), Germany
Abstract
This chapter discusses typical collaborative requirements definition processes as they are performed in open source software development (OSSD) projects. In the beginning, some important aspects of the entire OSSD approach are introduced in order to enable an understanding of the subsequent description of the feedback-based requirements definition processes. Since the OSSD model seems to represent a successful way of dealing with the significant distribution and heterogeneity of its actors, some opportunities to adopt this approach in other (software) industries as well are discussed. Nevertheless the entire OSSD model still exhibits several improvement opportunities, which are also addressed in this chapter. In order to overcome possible weaknesses, several approaches to improve the described requirements definition process are introduced. These improvements help to assure a higher level of efficiency and quality assurance for both the processes and the developed artifacts, and they also enable the consideration and acceptance of this approach in other domains and industries.
Introduction
Open Source Software (OSS) has achieved remarkable popularity in many different application domains over the last years. The success of well-known OSS products like the Linux Kernel or the Apache HTTPD Web Server suggests that the development processes in general, and especially the requirements definition processes, are well suited to the demands of the users and the developers of OSS. Since the heterogeneous communities of established OSS projects typically consist of a large number of globally distributed actors who collaborate almost exclusively through the Internet, OSS projects should be perceived as complex sociotechnical systems. Whereas typical requirements engineering (RE) processes often are not designed to deal with an increasing level of complexity, heterogeneity, and distribution of their organizational structures (Herlea, 1998), the collaborative OSS development methodologies seem to have overcome these issues. This suggests the hypothesis that the underlying OSS development model, which evidently is able to produce successful software products, should be considered a reliable and viable approach in the areas of software engineering (SE) and of cooperative work in general. Despite the growing popularity of OSS, this paradigm of software development has received far less research attention than proprietary SE processes. Therefore these practices, including the software support deployed for the identified RE processes, should be analyzed in detail in order to determine whether the advantages of these methods can contribute to non-software-related industries as well. This chapter outlines and interprets some results of a comparative case study of OSS development processes within the Apache HTTPD1, the Linux Kernel2, and the Mozilla project3. These research activities were aimed at the identification and formalized specification of a descriptive process model for OSS development, based on case studies that were performed by participating in the projects, analyzing the projects' information sources, conducting interviews, and reviewing the literature. The research focused on the processes, roles, artifacts, and the deployed software infrastructure that is used to support the whole development approach and especially the requirements engineering practices presented in this chapter. The identification and formal presentation of a descriptive process model enable the further improvement of the identified processes and their software infrastructure, and furthermore open the opportunity to consider the integration of these practices into traditional software development approaches. The following describes the results of this approach and focuses on the requirements definition processes in typical OSS environments. Section two provides background information about collaborative RE and typical characteristics of OSS and OSS development (OSSD) processes, whereas the third section introduces some important aspects of the identified OSSD process model. Section four focuses on the requirements definition approach in OSS projects, and section five discusses its adaptability to other industries, which is the basis for the description of possible improvement opportunities in section six. Finally, the core results of the chapter are summarized in section seven.
Background
This section provides a brief overview of collaborative requirements definition processes and the open source software domain.
Collaborative Requirements Definition
As Herbsleb & Kuwana (1998) state, the social and organizational environment is an important factor for successful software development projects. In general the requirements for software systems are constructed and negotiated in a complex, iterative, and co-operative requirements definition process (Loucopoulos & Karakostas, 1995). Involving the users and all the stakeholders of a system early in the development process ensures that the product can satisfy all involved actors. Therefore requirements definition processes are typically performed by different stakeholders such as users, customers, domain experts, project managers, and developers (Boehm, Grünbacher, & Briggs, 2001), who often have different perspectives and backgrounds (Mehandjiev, Gaskell, & Gardler, 2003). According to Herlea (1998), a considerable problem of RE processes is that the involved actors often work in globally distributed environments and across organizational boundaries. Thus the requirements definition process can be perceived as highly collaborative and interactive. Since an increasing number of projects in the software industry, and also in other businesses, are performed in distributed and heterogeneous environments, these characteristics apply to RE in general. This has led to the development of different approaches for handling the increasing distribution of RE actors, for example, groupware-based methods (Boehm et al., 2001; Herlea, 1998) or the more comprehensive model in Mehandjiev et al. (2003). Since the OSS community seems to have found a way to deal with its high level of heterogeneity and distribution, analyzing the OSS approach to requirements definition could benefit the software development industry as a whole and perhaps other industries.
Open Source Software
An appropriate explanation of the open source term is provided by the Open Source Initiative (OSI), which has developed the Open Source Definition (OSD). This definition contains a set of criteria that the software license models used for OSS have to satisfy in order to comply with the OSD (Open Source Initiative, 2002):
• Free distribution and redistribution
• Publicly available source code
• Possibility of source code modification and redistribution
• No discrimination against certain users/groups or fields of endeavor
All license models that follow the criteria defined in the OSD can be considered compatible with the understanding of OSS as defined by the OSI. In addition, the OSI provides a list of all certified software licenses (Open Source Initiative, 2003). These characteristics have significantly shaped the evolution of the entire OSS development model and especially the requirements definition processes.
Open Source Software Development
Although many existing OSS projects have successfully developed individual practices and specific processes, it is possible to define some common characteristics that can be identified in most OSS development projects (Cubranic, 2002; Fogel & Bar, 2002; Gacek, Lawrie, & Arief, 2002; Scacchi, 2001; Vixie, 1999):
• Collaborative development
• Globally distributed actors
• Voluntariness of participation
• High diversity of capabilities and qualifications of all actors
• Interaction exclusively through web-based technologies
• Individual development activities executed in parallel
• Dynamic publication of new software releases
• No central management authority
• Truly independent, community-based peer review
• "Bug-driven" development
According to Raymond (2001) these characteristics lead to the metaphor of a “bazaar” that represents the characteristics of the OSS development practices in contrast to a “cathedral” representing the centralized and strictly controlled traditional software development. The OSS development processes are often characterized as “bug-driven.” This results from the typical practice that every software modification is based on a specific bug report or more generally on a change request that represents the central requirements artifact within the OSSD approach. This characteristic also clarifies the importance of the requirements definition processes within OSSD projects.
Identified OSSD Process Model
This section summarizes some of the key aspects of the process model, which was identified during the preliminary research activities.
Key Processes and Roles
The initial release of the OSS prototype can be perceived as the starting point of the OSSD process, which then proceeds as a gradual process of software improvement. The OSS approach can be divided into the following key processes:
• Management Processes
• Environment Processes
• Development Processes
Whereas the development processes describe the collaborative and highly distributed activities to improve the source code and to develop all project artifacts, particularly the RE artifacts, the management and environment processes support the development activities from a central position. The main development processes are executed in parallel and largely independently of each other. Figure 1 represents these core process categories and the roles involved in this model in a UML-based use case view. All collaborative development activities are performed by distributed actors who can be aggregated into the following roles:
• User
• Contributor
• Developer
• Reviewer
• Committer
Figure 1. Identified key processes and roles
These roles are usually not defined explicitly but describe a certain set of actors who fulfill a defined set of functions and tasks. A common set of characteristics can also be defined, for example, user privileges with which all actors fulfilling a certain role are associated. Usually an actor is associated with more than one role. For example, developing source code as a developer implies the use of the OSS as a passive user, and the submission of patches makes a developer also become a contributor.
Maintainers and Core Developers
The processes for maintaining the infrastructure and for supporting and coordinating all development activities are typically performed by a central maintainer or a maintenance committee. In fact, the initiator of the project is in most cases the same person as the primary maintainer. The maintenance actors provide central services that cannot be provided by distributed and global actors, for example, a common software infrastructure that provides central access to all shared resources. In many OSS projects a group of core developers could be identified (Dietze, 2003). These actors are typically responsible for the majority of source code modifications and interact very closely with the maintainer(s) of a project. Furthermore, the core developers are often responsible for activities that require enhanced qualifications, skills, privileges, or knowledge of the source code, for example, committing source code (Mockus, Fielding, & Herbsleb, 2000; Reis & Pontin de Mattos Fortes, 2002).
Overall Process Model
The overall process model at its highest level of abstraction is represented in Figure 2. The figure contains the key processes of the OSSD process model and is presented for informal purposes in order to provide a brief overview of the central processes and their relations. After the initial development and release of the prototype, environmental and management activities first have to be accomplished in order to establish a project community and to set up an initial rudimentary software infrastructure, for example, a Web site, a central source code repository (CVS), and a bug tracking system. These environmental and management processes are primarily executed by the maintainers of an OSS project and are necessary to enable the collaborative RE processes and all further development processes by a distributed project community. For the sake of simplification, only the requirements definition and the patch development processes are visualized as explicit development processes, since both represent the primary activities of OSS development. A very important aspect of the distributed
Figure 2. Overall process model of the OSSD approach
activities within the development processes is the fact that most processes are performed continuously, in parallel, and largely independently of each other by autonomous and distributed actors. This separates the OSSD approach from any traditional SE approach and from typical development practices within commercial environments. For example, a developer who is going to develop and submit a patch acts completely independently of other developers and also of the contributors who submit change requests. The only relation between these two processes is the fact that the database of change requests is the starting point for every patch development cycle.
Collaborative Requirements Definition
The typical process of requirements engineering in OSSD projects is based on the collaborative and iterative development of requirements artifacts within the process of gradual software improvement. These requirements artifacts ("Change Requests") are used to capture the requirements for all further development activities.
Scope of the OSSD Requirements Definition Processes
As shown in Figure 2, the requirements definition processes are part of the process of gradual software improvement and always refer to an existing software version.
Figure 3. Use case view of requirements definition process
The typical OSSD processes start after the release of a software prototype as OSS and are aimed solely at improving and maintaining this prototype. Thus an already-existing software prototype seems to be a prerequisite for the requirements definition processes that typically occur in OSSD projects. This indicates that these requirements artifacts are, in general, not appropriate for exhaustively describing the requirements of entirely new software products. The requests developed in the RE processes by contributors of the distributed community describe requirements for improving the OSS, that is, for removing bugs ("Corrective Maintenance") or for adding new features to the software ("Perfective Maintenance"). Therefore the requirements artifacts can be divided into two categories:
• Bug Reports
• Enhancement Requests
Whereas a bug report describes an observed software bug, an enhancement request defines the extension of the OSS in terms of functionality or system compatibility.
Figure 4. Activity view of requirements definition process
Process of Contributing Requirements
The process of contributing requirements is performed by all actors of the community of an OSS project, who thereby take on the role of active "Contributors." Figure 3 represents this process in a UML-based use case view. Change requests are typically managed by a central bug tracking system that represents a central repository for all change requests and enables their structured description based on metadata. A popular bug tracking system is Bugzilla, which was developed by the Mozilla community and is now used, for example, for the management of change requests in the Linux Kernel4 as well as in the Mozilla5 and the Apache6 projects. RE processes in OSSD projects are performed autonomously and independently of other processes. This is another major difference from traditional RE processes as they can be identified in proprietary (software) industries, besides the fact that the RE process in OSSD is aimed at the gradual improvement of an already-released software product. Figure 4 presents the activities of the requirements definition process in a UML-based activity view. After a need for a software modification has been identified, the observed software misbehavior has to be verified by the user. This is generally done by ensuring that the latest release of the OSS is installed and by subsequently trying to reproduce the problem in this software version. If the bug can be reproduced, or if a need for a general software enhancement can be identified, the actor reviews the existing change request repository in order to verify that the request has not already been submitted. After ensuring the novelty of the requirement, the contributor can optionally communicate the request via a dedicated mailing list to the community of the project. This enables the whole community to review and discuss the request and is aimed at ensuring that the misbehavior is not specific to the system or software installation of one single actor and that no developer is already working on the implementation of this requirement. After completing this process, the contributor creates an entry in the bug tracking system and describes the change request based on structured metadata. In most projects this process is described in a guideline document, typically called the "Bug Reporting Guideline," which defines the process of developing and submitting a change request. This document ensures a common understanding of this process and is used by the community as a prescriptive guide, thus supporting a higher level of quality assurance by creating a consensus about how the whole requirements definition process has to be performed and which information should be provided.
Metadata of a Change Request
As mentioned above, the change requests are captured in a bug tracking system and described using certain attributes — the metadata of a requirements artifact. This metadata may be specific to each OSS project but contains some generalizable attributes, examples of which are listed below:
• Summary
• Description
• Keywords
• Status
• Affected Software Version
• Attachments
• Comments
• Owner
• Severity
• Priority
The Status attribute is of special importance because it is used to document the current state of the change request, thus controlling the progress of the patch development activities. The Severity attribute is typically used to separate enhancement requests from bug reports by assigning the value "Enhancement."
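As an illustration of how such a request might be represented, the following minimal sketch models the generalizable attributes listed above as a simple data structure. It is hypothetical: the class, default values, and helper method are our own and are not taken from Bugzilla or any other bug tracking system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChangeRequest:
    """Minimal sketch of a change request (bug report or enhancement request)."""
    summary: str                      # one-line description of the request
    description: str                  # detailed description of the observed behavior
    keywords: List[str] = field(default_factory=list)
    status: str = "New"               # current state within the request lifecycle
    affected_version: Optional[str] = None
    attachments: List[str] = field(default_factory=list)   # e.g. log files, patches
    comments: List[str] = field(default_factory=list)
    owner: Optional[str] = None       # actor currently responsible for the request
    severity: str = "Normal"          # "Enhancement" marks an enhancement request
    priority: str = "Medium"

    def is_enhancement(self) -> bool:
        # Convention described in the text: the Severity attribute separates
        # enhancement requests from bug reports.
        return self.severity == "Enhancement"

# Example: a contributor files a bug report against an existing release.
request = ChangeRequest(
    summary="Server crashes on malformed request header",
    description="Reproduced with the latest release; see attached log.",
    affected_version="2.0.47",
)
print(request.is_enhancement())  # False: this is a bug report
```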
Collaborative Requirements Review
Processes for reviewing existing change requests supplement the requirements definition processes and are mostly executed independently of the process of requirements definition. In some OSS projects these processes are also known as "Bug Triage" (Mockus et al., 2000; Reis & Pontin de Mattos Fortes, 2002). They are aimed at ensuring a consistent, redundancy-free, and always up-to-date repository of change requests. During these review activities new and unconfirmed change requests are validated, and validated change requests that are currently not in development by a specific developer are assigned. If a change request is assigned to a developer, or selected for implementation by a developer himself, an individual patch development cycle aimed at implementing the described requirement is started. Figure 5 represents the requirements review process and its activities. It is important to emphasize that these review processes are implicit processes performed voluntarily by the distributed actors, as most typical OSSD processes are. Thus the intensity with which these processes are performed is not centrally planned, and a defined level of quality assurance regarding the repository of requirements cannot be guaranteed at every point in time.
Lifecycle of a Change Request
The attribute "Status" is used to document the current state of an individual request. Every actor performing an activity that is related to a particular change request can make
Figure 5. Activity view of requirements review
this information available to the whole community by setting the Status attribute of the change request to a certain value that describes what he is going to do. Thus this attribute provides a very important functionality for the documentation and coordination of the patch development cycle.
Figure 6. Typical lifecycle of a change request
Figure 6 describes the typical states of a change request and therefore illustrates the lifecycle of this artifact, which is directly related to the lifecycle of the patch that implements the request. This information can be supplemented by setting additional attributes (Attachments, Comments) that can contain supplementary information about ongoing development activities.
Adaptability and Issues of the OSSD Approach
Today most software development projects, including proprietary ones, are based on the collaboration of a more or less distributed developer and user community. Therefore certain elements of the OSSD process model could be considered an appropriate approach for distributed (software) development in general. However, many open questions arise in this context. In this section the adaptability of the described RE approach to other domains is discussed and several issues are outlined that should be considered when establishing elements of the OSSD approach.
Adaptability to Other Businesses
The OSSD approach of gradual software improvement based on collaborative requirements engineering can be perceived as a promising and valuable process for software development that has evolved through the practice of users and developers of OSS. It seems to be well suited to detecting the requirements of globally distributed users of a software product once a prototype has been released. Thus the described requirements definition process, which has already proven its ability to produce successful products, may have the potential to be adopted in traditional software development projects aimed at producing proprietary, closed source software, and perhaps also in other industries. Involving all stakeholders of a product as early as possible in the RE process ensures that the developed products are suitable for all actors. Therefore similar RE approaches based on user feedback are already in use in the software industry and also in other businesses. For example, user service and support hotlines are often used to collect customer feedback on specific products. Furthermore, several Web-based approaches for enabling customers to provide feedback or request product changes are popular in different domains and should be perceived as informal RE methods. Often these methods are less structured than the OSSD approach and do not cover the RE processes or the subsequent implementation activities as comprehensively as the OSSD model. Therefore it seems possible to improve these approaches by establishing a request tracking method and a requirements artifact lifecycle similar to the OSSD model. Regardless of this, the OSSD approach is aimed at the improvement of an existing product and is not suited to thoroughly developing and defining the requirements for entirely new (software) products. Since a software prototype is always released at the beginning of a typical OSSD project, this seems to be a prerequisite for most OSSD processes, as they can all be perceived as part of a gradual process of software improvement. Therefore this requirements definition approach cannot completely replace an entire RE methodology, but it seems appropriate for enhancing or replacing traditional product maintenance processes. For example, software maintenance processes could be supported by the OSSD approach of requirements definition by providing all users and developers of the proprietary software product with access to a central request tracking system and encouraging them to submit change requests and discuss existing requests.
This could enable the appropriate elicitation of the demands of all stakeholders for further product improvement activities and could reduce the barrier between developers and users, which is a typical issue in many industries. For a successful integration of such practices into existing business processes it is important to establish effective processes and to provide all actors with an appropriate infrastructure. Since the OSSD model still exhibits several issues, these should be considered before establishing elements of this approach.
Issues of the Identified Approach
The analysis of the requirements definition processes that can be observed within the OSSD approach led to the identification of some issues and weaknesses of this process:
• Varying intensity of requirements review
• No implementation of dedicated user privileges
• Inappropriate implementation of the RE artifact lifecycle
• No integrated artifact management
As the process of requirements definition is usually performed by a huge community of distributed actors with varying skills, it is supported by prescriptive bug reporting guidelines to ensure that the most necessary activities are accomplished. But since the observance of these process guidelines cannot be enforced by the maintainers of the project, the activities for reviewing and updating the change requests are important for ensuring the consistency and correctness of all information within the requirements artifact repository. Because of the voluntariness of these processes, the intensity with which these activities are performed cannot be ensured. Thus the quality and relevance of all artifacts within the bug tracking system cannot be guaranteed at a defined level at every point in time. In most OSSD projects no dedicated user privileges regarding the request artifacts are implemented, and thus all actors can modify existing change requests independent of their current state. This also leads to the issue that the correctness of the metadata of RE artifacts cannot be ensured, which is especially problematic for the Status attribute. Consequently it is possible that the Status attribute does not represent the actual state of the artifact. Another issue is the inadequacy of the identified artifact states. The values of the Status attribute of a change request as shown in Figure 6 enable the description of certain states within its lifecycle, but they are not sufficient to exhaustively document every relevant state of a change request and the resulting patch. Therefore the whole process could be better supported by improved artifact metadata. Furthermore, the artifacts that could be identified within the OSSD model are managed by different, heterogeneous systems. Currently, in typical OSSD software infrastructures no
integrated management of all artifacts is deployed. This leads to redundant artifacts and means that artifacts that are related to each other cannot be linked. For example, a developer who is going to fix a bug in the development source code typically has to review information in different information objects, which are organized independently in different repositories — for example, the bug tracking system, the source code repository, or mailing list archives.
Improvement Approaches
Despite the many advantages of the described RE approach, some improvement opportunities could be identified. In this section some possible approaches for resolving the issues described above are introduced. These are especially important when considering the OSSD model in commercial development environments.
Dedicated Request Review Processes
Enhancing the process model by establishing additional roles and processes can be perceived as an improvement approach for many issues regarding the OSSD model (Dietze, 2003), because an inherent tendency of OSS developers to focus on implementation tasks has been observed. This leads to the problem that certain activities, for example, documentation development, software testing, or user support, are often not accomplished sufficiently by the OSS community and therefore should be supplemented by the core developers or the maintainers of a project. Thus additional, supportive processes and roles could be defined, which should not replace but supplement the already-described RE processes. As described above, the sufficient accomplishment of the requirements review activities cannot be guaranteed at every point in time within OSSD projects. Therefore, periodic request review activities could be assigned to a specific actor to ensure the appropriateness of all request artifacts. This actor then takes on the supplementary role of a dedicated "Request Reviewer" and should be granted all privileges necessary to create, modify, or delete requirements artifacts. This requires that such an actor be familiar with all aspects of the software, its architecture, and the past evolution of the source code base. Consequently, this actor should be part of the already-mentioned group of core developers. Since the core developers typically interact very closely with the maintainers of a project, the coordination and scheduling of such tasks is much easier and will find a higher level of acceptance within this group compared to the heterogeneous community of "ordinary" actors.
Improved and Role-Based Artifact Lifecycles
As already mentioned in "Issues of the Identified Approach," the implemented lifecycle of RE artifacts does not seem appropriate for supporting and documenting every state of the
requirements definition and patch development process. Therefore appropriate metadata, especially the assignable values for the Status attribute, should be defined in order to enable the exhaustive representation of the entire lifecycle of a change request and the resulting patch. This would enable the appropriate documentation of the progress of all ongoing implementation processes. To achieve these goals, the processes for development and review of change requests were analyzed in the context of the subsequently performed patch development and patch review processes, which enabled the definition of the following, more appropriate states:
• New: Unconfirmed change request
• Verified: Request verified by request reviewer
• Not Verified: Request could not be verified
• Assigned: Developer is implementing a patch
• Patch Review: Patch submitted for review
• Patch Approved: Patch approved by reviewer
• Patch Committed: Patch committed into the development source code repository
• Patch Released: Patch released (as part of a software release or self-contained)
Figure 7. Suggested lifecycle of a change request
Figure 7 represents this improved lifecycle in a UML-based state view. Preventing incorrect state modifications would enforce a higher level of quality assurance. Therefore, every state is associated with a certain role that should be assigned the privilege to promote the state of the artifact to the next value. The permission to modify other artifact metadata should also depend on the role of the specific actor.
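A minimal sketch of such a role-based lifecycle is given below. The states follow the list above; the specific allowed transitions, the role names, and the permission mapping are illustrative assumptions, not a prescription derived from Figure 7 or from any existing OSSD project.

```python
# Allowed transitions of the suggested change request lifecycle, each gated by
# the role that may promote the artifact to the next state (illustrative only).
TRANSITIONS = {
    ("New", "Verified"): "request_reviewer",
    ("New", "Not Verified"): "request_reviewer",
    ("Verified", "Assigned"): "developer",
    ("Assigned", "Patch Review"): "developer",
    ("Patch Review", "Patch Approved"): "reviewer",
    ("Patch Approved", "Patch Committed"): "committer",
    ("Patch Committed", "Patch Released"): "maintainer",
}

def promote(current_state: str, new_state: str, actor_role: str) -> str:
    """Promote a change request to a new state if the actor's role permits it."""
    required_role = TRANSITIONS.get((current_state, new_state))
    if required_role is None:
        raise ValueError(f"Illegal transition: {current_state} -> {new_state}")
    if actor_role != required_role:
        raise PermissionError(f"Role '{actor_role}' may not perform this transition")
    return new_state

# Example: a request reviewer verifies a new request, then a developer takes it on.
state = "New"
state = promote(state, "Verified", "request_reviewer")
state = promote(state, "Assigned", "developer")
print(state)  # Assigned
```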
Appropriate Software Support
The primary software development functionalities identified in OSSD projects are provided by source code control and bug tracking systems (Dietze, 2003). These functionalities can be significantly enhanced by defining an appropriate software infrastructure that integrates all functionalities necessary to support the entire process model and to partly automate some of the core development processes. In general, an appropriate software infrastructure should provide transparent and central access to all artifacts and information sources relevant for the involved actors. This information should be organized consistently and without redundancies to enable the collaborative usage of common information resources and artifacts. Furthermore, adequate communication channels are a prerequisite for distributed software development processes. In the context of the requirements definition processes, the software infrastructure has to implement the improved artifact metadata and the entire artifact lifecycle described above. This leads to the need to implement role-based user privileges related to request artifacts in order to enforce a role-based lifecycle model. By implementing well-defined user rights within such an infrastructure, an appropriate role concept could be enforced. In addition, routing artifacts to appropriate actors based on the current state of the artifact lifecycle, or automated e-mail notification of certain roles triggered by defined events, can be used to implement workflow functionalities and to partly automate the development processes. Furthermore, an integrated management of all artifacts that are created during OSSD projects provides a benefit to all actors and avoids redundancy (Dietze, 2003). The logical or physical organization of all artifacts in one unique repository enables the linkage of related artifacts. Besides that, the structured description of all artifacts based on appropriate metadata could support such an overall artifact management and the integrated linkage of all artifacts that feature semantic relationships. This could increase transparency and significantly reduce the effort necessary for obtaining information.
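The following sketch illustrates how such an infrastructure could link related artifacts in one logical repository and notify role holders when a request changes state. Everything in it — the artifact kinds, identifiers, and the notification hook — is a hypothetical illustration of the ideas above, not the interface of any existing tool.

```python
from collections import defaultdict

# A single logical repository indexing heterogeneous artifacts by identifier.
artifacts = {}            # artifact id -> artifact record (request, patch, thread, ...)
links = defaultdict(set)  # artifact id -> ids of semantically related artifacts

def register(artifact_id: str, kind: str, data: dict):
    artifacts[artifact_id] = {"kind": kind, **data}

def link(a: str, b: str):
    """Record a semantic relationship between two artifacts (bidirectional)."""
    links[a].add(b)
    links[b].add(a)

def notify_on_state_change(artifact_id: str, new_state: str, subscribers: dict):
    """Route a notification to the role subscribed to this state (workflow support)."""
    role = subscribers.get(new_state)
    if role:
        print(f"notify {role}: {artifact_id} entered state '{new_state}'")

# Example: a change request, the patch implementing it, and the discussion thread.
register("CR-101", "change_request", {"summary": "Crash on malformed header"})
register("PATCH-17", "patch", {"files": ["protocol.c"]})
register("ML-2043", "mailing_list_thread", {"subject": "Re: crash report"})
link("CR-101", "PATCH-17")
link("CR-101", "ML-2043")

notify_on_state_change("CR-101", "Patch Review", {"Patch Review": "reviewer"})
print(links["CR-101"])  # related artifacts reachable from the change request
```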
Conclusion
This chapter has presented certain aspects of the OSSD model and explained an alternative approach to requirements definition that can be observed in typical OSSD
projects. In addition, it has discussed opportunities to improve this approach and to adopt it in traditional industries and also in other businesses to supplement their existing RE methods. Besides the advantages of the OSSD approach for defining requirements, this methodology still exhibits several weak points and issues. Before implementing OSSD RE practices in conventional SW development projects, these weaknesses should be considered and minimized, for example, by applying the improvement strategies outlined in the previous sections. In this way, a higher level of quality regarding the entire OSSD process model could be achieved, which could open the opportunity to utilize general OSSD methodologies, and especially the described requirements definition practices, in commercial software development domains as well as in other industries. Thus, some possible approaches for achieving an improved RE process were presented, which could lead to a higher level of efficiency and quality for both the processes and the developed artifacts and products. This could be achieved by assigning supplementary roles and tasks to core developers of an OSSD project, aimed at introducing formal quality assurance processes into the described requirements definition approach and at making it a reliable starting point for further implementation activities. The implementation of an appropriate software infrastructure that supports these distributed processes in all of their facets could enable process automation and a high level of process autonomy and parallelism. Thus, this could provide a significant benefit to further OSSD projects and could support or supplement distributed organizational structures in general.
References
Boehm, B., Grünbacher, P., & Briggs, R. O. (2001). Developing groupware for requirements negotiation: Lessons learned. IEEE Software, 18(3).
Cubranic, D. (2002). Open source software development. Retrieved March 26, 2002, from http://sern.ucalgary.ca/~maurer/ICSE99WS/Submissions/Cubranic/Cubranic.html.
Dietze, S. (2003). Improvement opportunities for the open source software development approach and how to utilize them. Proceedings of the NetObjectDays 2003. NODe Transit GmbH.
Fogel, K., & Bar, M. (2002). Open source-projekte mit CVS. MITP.
Gacek, C., Lawrie, T., & Arief, B. (2002). The many meanings of Open Source. Retrieved May 28, 2002, from http://citeseer.nj.nec.com/485228.html.
Herbsleb, J. D., & Kuwana, E. (1998). An empirical study of information needs in collaborative software design. Information Processing Society of Japan Journal, 39(10).
Herlea, D. (1998). Computer supported collaborative requirements negotiation. Retrieved October 10, 2003, from http://ksi.cpsc.ucalgary.ca/KAW/KAW98/herlea/.
Loucopoulos, P., & Karakostas, V. (1995). System requirements engineering. McGraw-Hill.
Mehandjiev, N., Gaskell, C., & Gardler, R. (2003). Live multi-perspective models for collaborative requirements engineering. Retrieved October 23, 2003, from http://www.co.umist.ac.uk/~ndm/Papers/MehAWRE.pdf.
Mockus, A., Fielding, R., & Herbsleb, J. (2000). A case study of open source software development: The Apache server. Proceedings of the 22nd International Conference on Software Engineering.
Open Source Initiative. (2002). Open source definition. Retrieved December 12, 2003, from http://opensource.org/docs/definition.php.
Open Source Initiative. (2003). OSI certified software licenses. Retrieved January 15, 2003, from http://opensource.org/licenses/index.php.
Raymond, E. S. (2001). The cathedral and the bazaar. UK: O'Reilly.
Reis, C. R., & Pontin de Mattos Fortes, R. (2002). An overview of the software engineering process and tools in the Mozilla project. Retrieved May 17, 2002, from http://www.async.com.br/~kiko/papers/mozse.pdf.
Scacchi, W. (2001, May 15). Software development practices in open software development communities: A comparative case study. Position paper for the First Workshop on Open Source Software Engineering at the 23rd International Conference on Software Engineering (ICSE 2001).
Vixie, P. (1999). Software engineering. In C. Dibona, S. Ockman, & M. Stone (Eds.), Open sources – Voices from the open source revolution. O'Reilly & Associates.
Endnotes
1 http://httpd.apache.org/
2 http://www.kernel.org
3 http://www.mozilla.org
4 http://bugzilla.kernel.org/
5 http://bugzilla.mozilla.org/
6 http://nagoya.apache.org/bugzilla/
Chapter XIII
Requirements Engineering for Value Webs Jaap Gordijn, Free University Amsterdam, The Netherlands
Abstract
Value webs are cooperating, networked enterprises and end-consumers that create, distribute, and consume things of economic value. The task of creating, designing, and analyzing such webs is a prototypical example of a multi-disciplinary task. Business-oriented stakeholders are involved because the way an enterprise creates economic value is discussed. But representatives responsible for business processes (many innovative value webs require changes in processes) and inter-organizational information systems (enabling value webs from a technical point of view) also play a key role, as well as end-consumers. To facilitate exploration and analysis of such value webs, we propose an approach called e3value that utilizes terminology from business sciences, marketing, and axiology but is founded on methodology seen in requirements engineering, such as semi-formal, lightweight graphical conceptual modeling, multiple viewpoints, and scenario techniques. We have developed and tested this methodology in a series of e-business consultancy projects. In this chapter we will present lessons learned in developing value webs, which stem from our consultancy experience. Then we present the e3value methodology, with a focus on modeling and understanding what parties offer each other of economic value. Analyzing value webs from such an economic value perspective is the main contribution of our approach; business science approaches contain the right terminology but are far too sloppy to be usable in practice, whereas requirements engineering and conceptual modeling approaches are sufficiently rigorous but do not provide adequate terminology. For educational purposes, we illustrate the methodology with an easy-to-understand, inline example. Finally we discuss related approaches and conclusions.
Introduction
Over the past few years many innovative e-business ideas have been considered. Innovative ideas are characterized by one or more new economic value propositions yet unknown to the market. A value proposition is something offered by a party for consideration or acceptance by another party. In the recent past, industry clearly showed that it is not easy to understand and analyze such e-business ideas (Shama, 2001). Many initiatives have fallen apart. One of the problems with e-business development is that many stakeholders from different backgrounds (CxO, business development, ICT) representing different enterprises are involved, who do not understand each other too well and who have different and sometimes conflicting concerns. To enhance a shared understanding of the e-business idea at stake, requirements engineering (RE) and, more specifically, conceptual modeling (CM) approaches can be of use. Such approaches offer support in defining aspects of the world (in our case e-business ideas) around us with the aim of understanding and analyzing it. Although RE/CM is strongly developed in the realm of information systems, there is to our knowledge no such approach focusing on the exploration of an e-business idea. In this chapter we combine an RE/CM way of working with business science terminology to understand a network of enterprises creating, exchanging, and consuming objects of economic value — in short, a model representing an e-business idea. Our methodology is called e3value, reflecting that it is important to understand an e-business idea from an economic value perspective before thinking over business process and information systems consequences. This chapter is structured as follows. The next section introduces e-business development and the role RE/CM plays in more detail. Then we present the description techniques offered by the e3value methodology, and thereafter we provide guidelines for how to make these descriptions. Additionally these sections contain an educational case study (for real-life projects, see Gordijn & Akkermans (2003)). A series of other, sometimes ontologically founded approaches are discussed. Finally we present our conclusions.
An RE Approach for Value Webs
Over the past few years we have learned a number of lessons while doing a series of projects on innovative e-business case development in the realm of banking, insurance, telecom, Internet service provisioning, news, music, and electricity supply and distribution (Gordijn & Akkermans, 2003). The most important lesson is that in such projects, initially exploring a business model from an economic value perspective is crucial. A business model explains why an e-business idea is potentially profitable for the enterprises involved. Because we assume the business model under consideration is innovative and thus hardly known, such a model initially can only be articulated vaguely, resulting in misunderstandings between participating enterprises. Additionally a vaguely
articulated business model is an insufficient starting point for a requirements engineering track with a focus on information system requirements (in e-business cases such systems always play an important enabling role). In sum, the business model should be better understood. Developing a business model in more detail involves many different stakeholders, having varying concerns and representing different enterprises. Typical stakeholders are CxOs, marketers, people responsible for business processes, and ICT experts. Additionally stakeholders represent different enterprises with potentially different interests; e-business initiatives are characterized by a web of enterprises offering a proposition rather than a single enterprise. We have experienced that such a group of stakeholders does not have a shared interpretation of a value proposition at all. Additionally industry itself has demonstrated clearly that proper analysis of e-business models has not been done (Shama, 2001), especially before the dot-com bubble exploded. To facilitate a shared understanding of an e-business model as well as to design and to evaluate it, we propose to use a conceptual modeling and requirements engineering way of working combined with domain terminology from economic and business sciences. Since a conceptual modeling approach comprises the activity of formally defining aspects of the physical and social world around us for the purpose of understanding and communication (Mylopoulos, 1992), it is likely to contribute to a shared understanding of a business model. Requirements engineering is the process of developing requirements through an iterative co-operative process of analyzing the problem, documenting the resulting observations in a variety of representation formats, and checking the accuracy of the understanding gained (Loucopoulos & Karakostas, 1995) and can be of help during elicitation, design, and assessment of e-business models. What makes our approach new is the application of a CM/RE way of thinking in an economic value-driven business development process. Consequently the e3value methodology cannot be compared to the UML, because the UML assumes an entirely different domain terminology allowing different kinds of requirement statements. From a business perspective the UML is mainly of practical use to model business processes (using activity diagrams). In contrast our approach models who exchanges what of economic value with whom and requests what in return, which are not statements about business processes but rather expressions from which processes can be derived. An RE approach typically consists of a number of description techniques and guidelines on how to work with these. The e3value methodology consists of three such CM-like techniques for modeling value hierarchies, value webs, and profitability sheets.
The e3value Description Techniques
The e3value methodology utilizes three related description techniques. First a value hierarchy presents a consumer need and how such a need can be fulfilled by obtaining objects (things) of economic value. Second we introduce value webs, representing who creates, distributes, and consumes such objects. Finally we create profitability sheets that show the financial effects of the e-business idea at hand.
Figure 1. A need for transportation can be satisfied by a train trip or airplane flight, a taxi trip, and some food (legend in grayed boxes is not part of the notation)
Value Hierarchy
One of the lessons learned from our consultancy experiences is that easily understandable description techniques are needed for the exploration of an e-business idea. People are involved with no background knowledge of conceptual modeling techniques at all and with neither the time nor the inclination to learn these techniques. Hence all our notations are simple. We start elaborating an e-business idea with the elicitation of a value hierarchy. Figure 1 shows a value hierarchy for an illustrative case. A consumer need for transportation can be satisfied by either an air flight or a train trip but always requires a taxi trip and some food. In general a value hierarchy is a rooted acyclic directed graph whose root represents a consumer need. Children of a node represent value objects used to satisfy this need. A value object is a good or service of economic value to some actor. The edges of a value hierarchy represent a contributes-to relationship. The reverse relationship is called consists-of. An AND-node represents the fact that all children are needed for the higher-level one, and an OR-node represents the fact that only one of the children is needed. The leaves (in Figure 1 indicated by "…") of the value hierarchy are the boundary of our value descriptions. We assume that one or more actors can produce these leaf objects against known expenses, and can do so in a profitable way, so in order to elaborate the e-business idea, we do not need to decompose these leaf objects further. Value hierarchies are similar to goal hierarchies known from RE (Antón, 1997; Dardenne, van Lamsweerde, & Fickas, 1993; Yu & Mylopoulos, 1998). Both are means-end hierarchies. The difference is that the nodes in a value hierarchy represent value objects to be produced or to be exchanged between business actors, whereas the nodes in a goal
Figure 2. A value web showing the value objects exchanged by actors, including economic reciprocal value objects (the numbers marked with "#" and the legend are not part of the notation)
Value Webs A value web shows which actors are involved in the creation and exchange of the value objects shown in the value hierarchy. Figure 2 shows a value web for our running example and contains the following constructs. An actor (for example, Railway) is an entity perceived by itself and by its environment as an independent economic (and often also legal) entity. An actor makes a profit or increases its utility by performing activities. In a sound, sustainable value web each actor should be capable of making a profit. Actors are represented by rectangles with sharp corners. Sets of actors with similar properties, called market segments (for example, Traveler), are represented by stacked rectangles.
Copyright © 2005, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
214 Gordijn
To satisfy a consumer need, or to produce a value object for others, an actor should perform a value activity, for which it may be necessary to exchange value objects with other actors. A value activity (for example, Flying) is an operation that can be performed in an economically profitable way by at least one actor. It is depicted by a rounded rectangle. An important design decision represented by a value web is whether a value object is to be obtained from other actors by means of a value exchange or to be produced by the actor itself by means of a value activity. A value exchange, depicted by an arrow, shows that actors are willing to exchange objects of value with each other. Value exchanges are between actors or between value activities performed by actors. So in the example a train trip is exchanged between a railway company and a traveler. Each value web expresses economic reciprocity. That means we assume that our actors are rational economic entities that are only willing to offer a value object if they acquire another value object in return that is of reciprocal value. Reciprocity is shown by value interfaces and value ports. A value port is a willingness of an economically rational actor to acquire or provide a value object. A value interface is a collection of value ports of an actor that is atomic. By this we mean that an actor is willing to acquire or provide a value object through one port of a value interface if and only if it is willing to acquire or provide values through all ports of the interface. In other words, in the case of a value interface with an out-going and an in-going port, it is not possible to obtain an object via one port of the interface without offering another, reciprocal object via the other port. For example, Figure 2 shows that a traveler is willing to offer money but wants a train trip in return. Note that the requirement of reciprocity causes us to introduce value objects not mentioned in the value hierarchy. The reason that these reciprocal objects are not mentioned in the value hierarchy is that their introduction is a design choice. Different elaborations of the value hierarchy contain different choices. In most cases value interfaces of actors are identical to value interfaces of activities performed by the actors; they exchange the same objects. For these cases we only show the value interfaces of the activities and not of the actors. Apart from modeling economic reciprocity, the value interface construct also serves the purpose of modeling mixed bundling. The value interface of the airline in Figure 2 shows that an air trip and food are sold as a bundle. In other words it is not possible for the traveler to buy the air trip and the food served during the trip separately (a reminder that a value interface expresses atomicity with respect to the objects it offers and requests; either all ports in an interface exchange objects, or none at all). Note that in real life this bundling decision does not hold for every airline, but it is exactly these decisions that we want to model with value webs. Finally we have the dependency element and connection element constructs jointly forming a dependency path (derived from Buhr (1998)). If an actor exchanges objects of value via one of its value interfaces, that same actor may need to exchange value objects via one of its other value interfaces.
Considering Figure 2, a Railway company that provides food to its customers (value interface marked #9) needs to obtain the food from someone else (value interface marked #11) or has to produce the food itself. Such internal relationships between the interfaces of an actor are shown by dependency elements and connection elements.
Table 1. Profitability sheet for the "Railway" actor

Actor: Railway
Start stimulus: Transportation from Amsterdam to Paris

Value interface | Quantity | Out-going value objects | In-going value objects                        | Net Flow
#8              | 100      | (Train trip)            | Money = startfee + kilometerfee × kilometers  | 100 × (In-going - Out-going)
#9              | 100      | (Food)                  | Money = …                                     | 100 × (In-going - Out-going)
#10             | 100      | …                       | …                                             | …
#11             | 100      | …                       | …                                             | …
                |          |                         | Total Net Flow                                | ?
Connection elements connect dependency elements. We distinguish various forms of dependency elements: the start stimulus, modeling a consumer need that triggers the exchange of value objects; the end stimulus, representing that no further value exchanges are considered; the AND fork/join, showing that a dependency path continues in multiple sub-paths; and the OR fork/join, representing that a selection has to be made from a set of paths for continuation.
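To make these constructs concrete, the following fragment is a minimal, purely illustrative sketch in Python (it is not part of any e3value tool, and all class and attribute names are our own). It records actors, value ports, and value interfaces, and shows how the reciprocity requirement can be checked mechanically: an interface is acceptable to an economically rational actor only if it contains at least one in-going and one out-going port.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ValuePort:
    value_object: str   # for example, "Train trip" or "Money"
    direction: str      # "in" (acquire) or "out" (provide)

@dataclass
class ValueInterface:
    ports: List[ValuePort] = field(default_factory=list)

    def is_reciprocal(self) -> bool:
        # Economic reciprocity: the interface must offer and request something,
        # so it needs at least one out-going and one in-going port.
        directions = {port.direction for port in self.ports}
        return {"in", "out"} <= directions

@dataclass
class Actor:
    name: str
    interfaces: List[ValueInterface] = field(default_factory=list)

# The traveler of Figure 2: money is offered, a train trip is wanted in return.
traveler = Actor("Traveler", [ValueInterface([ValuePort("Money", "out"),
                                              ValuePort("Train trip", "in")])])
assert traveler.interfaces[0].is_reciprocal()

Because a value interface is atomic, such a structure also captures bundling: putting two out-going ports (for example, an air trip and food) and one in-going money port in a single interface expresses that the objects are exchanged together or not at all.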
Profitability Sheets

Profitability sheets serve to assess whether an e-business idea is likely to be profitable. Because we assume that for a successful e-business idea each participating actor should be capable of making a profit, we construct a profitability sheet for each actor involved. Such a sheet shows, for a particular actor, the in-coming and out-going value objects. In addition, a profitability sheet presents how that actor assigns economic value to an obtained or delivered object using valuation functions. Such functions calculate a fee based on some properties. For instance, the fee for a train trip may be calculated as a fixed start fee plus a kilometer-dependent fee. If we then know the number of objects exchanged during a timeframe (expressed by quantity, depending on the number of start stimuli and the fractions associated with OR forks), we can calculate the Net Flow (in monetary units such as Euros) for each value interface and the Total Net Flow for each actor. This Total Net Flow should be positive for each actor. Note that a positive Total Net Flow does not directly imply that an actor is able to make a profit (for that, potential investments and other expenses have to be taken into account as well), but a negative value for the Total Net Flow indicates that the actor will not be able to make a profit.
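The shape of the calculation can be written down directly. The following sketch uses hypothetical fee parameters and a hypothetical distance (the chapter gives no concrete numbers); it only illustrates the valuation function Money = startfee + kilometerfee × kilometers and the per-interface Net Flow = quantity × (in-going - out-going), with non-money objects valued at zero for an enterprise actor.

def train_trip_fee(kilometers: float, start_fee: float = 2.0,
                   kilometer_fee: float = 0.1) -> float:
    """Valuation function: Money = startfee + kilometerfee * kilometers."""
    return start_fee + kilometer_fee * kilometers

def net_flow(quantity: int, in_going: float, out_going: float) -> float:
    """Net Flow of one value interface: quantity * (in-going - out-going)."""
    return quantity * (in_going - out_going)

# 100 exchanges in the timeframe; the delivered train trip is a non-money
# object and therefore contributes 0 on the out-going side for the Railway.
flow_interface_8 = net_flow(100, train_trip_fee(kilometers=450.0), 0.0)
total_net_flow = flow_interface_8  # + the flows of interfaces #9, #10, #11, ...
print(total_net_flow > 0)          # the Total Net Flow should be positive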
How to Create e3value Descriptions of an E-Business Idea

Making e3value descriptions of an e-business idea is a far from trivial task. Therefore we present in this chapter suggestions on how to do so. It is based on Gordijn & Akkermans
(2003) and Kartseva, Soetendal, & Gordijn (2003) (in the latter we developed a domain-specific approach for the power/electricity sector). A starting point for making e3value descriptions is a promising e-business idea. The e3value methodology contains no guidelines for finding such ideas themselves. Rather the scope of the e3value methodology is more modest: exploring, understanding, and analyzing such ideas once they have been stated. The following sections present guidelines for exploring an (often vaguely) stated e-business idea, consisting of the following steps: (1) construction of value hierarchies, (2) construction of value webs, (3) finding new webs by reconstruction of earlier found webs, and (4) construction and evaluation of profitability sheets. For reasons of brevity we only discuss steps one, two, and four; for step three please refer to Gordijn & Akkermans (2001).
Constructing Value Hierarchies

To understand how to construct a value hierarchy, it is first important to state which kinds of design choices are made during its construction. A value hierarchy presents three design choices:
1. Which consumer needs exist? By asking stakeholders to formulate a consumer need they plan to satisfy, we increase the chance that products and services are really wanted by consumers. It is our experience that many stakeholders have products or services in mind that they themselves want, rather than those wanted by their customers. A similar approach is also suggested by Tapscott, Ticoll, & Lowy (2000).
2. Which (alternative) value objects satisfy a need, and which (alternative) value objects contribute to the creation of another value object? Value objects are goods or services that can be produced by an enterprise or by a collaboration of enterprises. An important upcoming design choice is who is going to produce and consume a value object. Consequently a first step is to identify those objects that can be produced and consumed by an actor independently from other objects.
3. What is the scope of the e-business idea; which value objects are included and which are not? In a value object hierarchy, leaves indicate the scope of the collaboration under study; we call such objects contextual value objects. The value objects that are leaves are assumed to be available already and are not part of the e-business idea exploration study. As a consequence, value objects needed to produce the leaf objects are also not in the scope of the hierarchy.
We use a number of guidelines to make these design choices. We indicate design guidelines with a tick (√) symbol.
√ Use well-defined criteria for value object identification. If a consumer need has been textually stated, value object(s) must be found. Value objects should contribute to satisfaction of a need or should allow production of another object, should be of economic value to someone, and may be possessed, rented, granted, or experienced. We show the satisfaction of a consumer need by a value object by relating the need and the object with a contributes-to relationship.
√ Find fine-grained value objects by deconstructing coarse-grained objects. Value objects can be split up into smaller objects that still satisfy the above criteria. Finding such smaller objects is of interest because these objects might be produced by different enterprises, thus resulting in different value webs. Consequently, to find smaller objects, we ask whether the candidate smaller objects can be supplied and/or consumed by different actors.
√ Stop value object deconstruction if the object under consideration falls outside the scope of analysis. A value object need not be deconstructed further if we can reasonably assume that at least one actor can produce that object, and/or we are not interested in analyzing which other value objects are needed to produce the value object under consideration. Alternatively, it is possible that a given value object cannot be deconstructed because no finer-grained objects can be found that comply with the aforementioned criteria for value object identification.
Note that the development of a value hierarchy and a related value web is a process of step-wise refinement. It is common to start with a more coarse-grained hierarchy, which results in a value web with a few actors and value activities.
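As an illustration only, a value hierarchy can be recorded as a set of consists-of relations; the decomposition below is hypothetical (the chapter's actual hierarchy is not reproduced here) and merely shows how the leaves, i.e., the contextual value objects, can be identified mechanically.

# Hypothetical consists-of relations, loosely inspired by the train-trip example.
consists_of = {
    "Transport from Amsterdam to Paris": ["Train trip", "Food"],
    "Train trip": ["Transportation capacity"],
}

def contextual_objects(hierarchy: dict) -> set:
    """Leaves of the hierarchy: objects assumed to be available already."""
    children = {child for kids in hierarchy.values() for child in kids}
    return children - set(hierarchy)

print(contextual_objects(consists_of))  # Food and Transportation capacity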
Constructing Value Webs

As was the case for the value hierarchy, a value web shows a number of design choices:
1. Who offers/requests which value object to/from its environment? Each value object, taken from the value hierarchy, can potentially be offered by different actors. By assigning a value object to an out-going or in-going port of an actor, we decide which actor offers or requests such an object.
2. What are the value activities? An actor offering value objects must perform activities to obtain these objects, for instance, by manufacturing objects or by trading them. Since these activities generate profit, it is important to decide on the performing actor.
3. What are the economic reciprocal value objects? A value hierarchy only states which objects satisfy a need, not how someone who is offering such an object is
compensated for that. We call such objects economic reciprocal objects. Economic reciprocity is shown by the value interface construct.
4. Is there (un)bundling of value objects? Apart from economic reciprocity, the value interface may also show bundling decisions. For instance, an actor may offer two (different) value objects as one offering to its environment. An actor may do so because he believes that he will generate more revenue by selling the objects as a package deal rather than by selling them separately.
5. Which partnerships exist? To offer a specific coarse-grained value object, actors may decide to bundle more fine-grained objects. The important design decision here is that, from a customer perspective, the coarse-grained object is offered by one (virtual) enterprise and that the companies constituting the virtual enterprise are invisible to the customer.
The following guidelines may contribute to answering the above design questions:
√ The consists-of/contributes-to relationships between value objects/start stimuli in the hierarchy indicate value activities. A value object with consists-of relations to other value objects/start stimuli indicates a value activity. Such a value activity should produce a value object, using the value objects related by consists-of as inputs.
√ Leaves in the value object hierarchy indicate contextual value activities. We have stated before that the leaves in the value object hierarchy correspond to contextual value objects; we are not interested in how these objects are created and consumed, nor in profitable ways to do so. We assume that they are available. To be able to draw value networks, and more specifically value transfers, we do, however, need contextual value activities (and actors) that produce the contextual value objects. So contextual value objects result in contextual value activities.
√ Value objects/start stimuli related by consists-of/contributes-to relationships are each assigned to a separate value interface of the same actor/value activity. This follows from the logic of value activities and interfaces.
√ Each offered/requested value object has a reciprocal requested/offered value object. Value interfaces model economically rational behaviour: "one good turn deserves another." So an interface contains at least one requested (in-going) and one offered (out-going) value object. For each value object, one asks what the reciprocal value object is.
√ If value objects are related in the hierarchy, they are related in the value web by means of dependency paths. In a value network we show a consists-of/contributes-to relationship stated in the value object hierarchy by a dependency path. This allows us to do profitability assessments. If we know the number of consumer needs per timeframe, we can calculate the total number of objects leaving and entering an actor for that timeframe.
√ Bundle objects if it is likely that they generate more profit in combination than when sold separately. The value interface construct can be used to express the notion
of bundling by grouping two or more offered value objects (or two or more requested value objects) into one value interface. By doing so we may represent that an actor believes that by selling two objects as one package, he will create more revenue than by selling both objects separately (known as mixed bundling (Choi, Stahl, & Whinston, 1997)), or that an actor only assigns value to having two objects in combination rather than having them separately.
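The revenue argument behind mixed bundling can be illustrated with a toy calculation. The willingness-to-pay figures below are entirely hypothetical and are not taken from the chapter; they only show why an actor might believe that a bundle of an air trip and food generates more revenue than selling the two objects separately.

# Two hypothetical traveler segments with different willingness to pay.
travelers = [
    {"Air trip": 100.0, "Food": 5.0},   # values the trip, barely the food
    {"Air trip": 80.0,  "Food": 25.0},  # values both moderately
]

def revenue_separate(buyers, prices):
    # Each object is bought only by buyers who value it at least at its price.
    return sum(price for buyer in buyers
               for obj, price in prices.items() if buyer[obj] >= price)

def revenue_bundle(buyers, bundle_price):
    # The bundle is bought only by buyers whose combined valuation covers it.
    return sum(bundle_price for buyer in buyers
               if sum(buyer.values()) >= bundle_price)

print(revenue_separate(travelers, {"Air trip": 80.0, "Food": 25.0}))  # 185.0
print(revenue_bundle(travelers, bundle_price=105.0))                  # 210.0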
Construction and Analysis of Profitability Sheets

The aim of profitability sheets is to determine, on a per-actor basis, whether the e-business idea seems to be profitable. Construction and analysis of profitability sheets consists of the following steps: (1) construction of a base-line sheet consisting of the number of value objects flowing into and out of an actor, (2) quantification of the economic value of value objects in monetary units, (3) quantification of other variables such as the number of start stimuli, (4) calculation of profitability sheets, and (5) sensitivity analysis. The steps are concisely presented below.
1. Construction of a base-line sheet consisting of the number of value objects flowing into and out of an actor. A base-line profitability sheet (one for each actor) is constructed by following the dependency path, starting at the start stimulus. Table 1 presents a profitability sheet for the value web (actor Railway) in Figure 2, and it is constructed as follows. Start at the start stimulus and walk down the path. The AND fork (marked #1) splits the path into two sub-paths, which are both executed. For brevity, we only continue the path in the left direction (marked #4) of the OR fork (marked #2), but for a complete picture all possible paths have to be explored. Suppose that 50% of the paths take this direction and the other 50% continue along the other path (marked #3). Next we encounter an AND fork (marked #5) on the path, and thereafter we cross two value interfaces (marked #6 and #7). If we cross a value interface, we update the profitability sheet of the actor the interface belongs to: in-going objects are captured under the column "In-going," whereas out-going objects are presented under the column "Out-going." Then we traverse the value exchanges and update the profitability sheet of the Railway actor accordingly (because we cross the value interfaces marked #8, #9, #10, and #11). (A small computational sketch after step 5 below illustrates this traversal together with the sensitivity analysis.)
2. Quantification of the economic value of value objects in monetary units. Table 1 shows the sheet only in terms of in-going and out-going objects. The next step is to assess the number of times objects are exchanged (based on an estimate of the number of start stimuli for a given timeframe). Thereafter we calculate the profitability. For enterprise actors we only take into account objects that directly represent money (for example, fees). This is in line with investment theory (Horngren & Foster, 1987). In short, we assume that all non-money objects flow into and out of an actor, and consequently are not relevant for the profitability analysis (these objects are shown between parentheses). If we consider end-consumers (for example, private persons), we also include non-money objects. In brief, actors are asked to assign economic value in accordance with utility theory and Holbrook's consumer value framework (Holbrook, 1999).
3. Quantification of other variables, such as the number of start stimuli. All other variables in the profitability sheets that are still unassigned need to be given a value. A difficult decision is to determine the number of start stimuli or consumer needs. This models the expected need for the products delivered by the e-business idea to the end-consumer. It is not uncommon to use alternative numbers, for example, to model an increasing or decreasing need for the product during its lifetime. Models of diffusion of innovation (Rogers, 1995) can also be used to make estimates of how the number of consumer needs evolves (recall that the e-business ideas we consider are about innovative products; otherwise it would not be necessary to do such an extensive exploration track).
4. Calculation of profitability sheets. If we have assumptions on all the numbers, calculation of the sheets themselves is a trivial task. Finally we are interested in the Total Net Flow per actor. If this flow is negative, we have for that actor an unsustainable e-business idea. On the other hand, if it is a positive number, it should be sufficient to cover other (for example, operational) expenses of that actor plus the investments required to put the idea into operation.
5. Sensitivity analysis. It is our experience that profitability numbers themselves are not very useful for the stakeholders involved, because it is not possible to predict profitability numbers for innovative e-business ideas accurately. Results of exploiting such innovative ideas are unknown by definition, which makes it very difficult, if not impossible, to estimate the numbers that determine profitability, for example, the number of start stimuli per timeframe. What is important for stakeholders, however, is to reason about profitability and to do a sensitivity analysis. This contributes to a better understanding of the e-business idea, in this case from a profitability perspective. To reason about profitability, we employ evolutionary scenarios. In contrast to operational scenarios, which describe behavioral aspects, evolutionary scenarios describe events that may occur in the future. As such, the effects of events underlying risks and structural uncertainties are analysed, as well as the effects of wrong estimations.
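The following sketch (referred to in step 1) pulls the steps together for the Railway actor. All fees, distances, and fractions are hypothetical; the point is only to show how the quantity reaching an interface follows from the number of start stimuli and the OR-fork fraction, how the Total Net Flow is obtained, and how a simple sensitivity analysis varies the estimated number of consumer needs.

def railway_total_net_flow(start_stimuli: int,
                           or_fraction_train: float = 0.5,
                           kilometers: float = 450.0,
                           start_fee: float = 2.0,
                           kilometer_fee: float = 0.1,
                           food_fee: float = 7.5) -> float:
    # Quantity of exchanges reaching the Railway interfaces per timeframe:
    # the OR fork routes only a fraction of the start stimuli along this path.
    quantity = start_stimuli * or_fraction_train
    money_for_trip = start_fee + kilometer_fee * kilometers
    money_for_food = food_fee
    # Only money objects count for an enterprise actor, so the delivered
    # train trip and food contribute 0 on the out-going side.
    return quantity * (money_for_trip - 0.0) + quantity * (money_for_food - 0.0)

# Sensitivity analysis: vary the estimated number of consumer needs (start stimuli).
for needs in (50, 100, 200):
    print(needs, railway_total_net_flow(needs))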
Related Approaches

A number of related approaches exist to aid e-business development. We first discuss ontological business approaches that tend to focus on a single enterprise, and more recent ontologies that take a multi-enterprise perspective into account. Finally we briefly review some approaches from the business sciences that are not founded on ontologies.
Business Ontologies: AIAI Enterprise Ontology & TOVE Ontology

The AIAI enterprise ontology (Uschold, King, Moralee, & Zorgios, 1998) defines a collection of terms and definitions relevant to business enterprises. Two enterprise ontology concepts relate to our ontology but have a different interpretation: (1) activity and (2) sale. In the enterprise ontology, activity is the notion of actually doing something, the how. Our related definition, value activity, abstracts from the internal process and instead stresses the externally visible outcome in terms of created value, independent of the nature of the operational process. The enterprise ontology further defines a sale as an agreement between two legal entities to exchange one good for another good. In our ontology, the concept of sale roughly corresponds to a set of reciprocal value exchanges. The TOVE ontology (Fox & Gruninger, 1998) identifies concepts for the design of an agile enterprise. An agile company integrates its structure, behaviour, and information. The TOVE ontology currently spans knowledge of activity, time and causality, resources, cost, quality, organization structure, product, and agility. However, the interfaces an enterprise has to its environment are lacking in TOVE. In general, the notion of the creation, distribution, and consumption of value in a stakeholder network is not present in the TOVE ontology. Hence the TOVE ontology concentrates on the internal workflow of a company, whereas our ontology captures the external value exchange network.
Recent E-Business Ontologies: REA Ontology and Osterwalder's Ontology

The Resource Event Agent (REA) ontology (Geerts & McCarthy, 1999) shows, from an ontological perspective, many similarities to the e3value ontology. REA calls actors agents. Agents offer or request resources (called value objects in e3value) via economic events. The latter can be compared to value ports in e3value. REA relates economic events of different actors by exchanges, which correspond to e3value value exchanges. Finally, economic events of an agent are related by a duality relation. This models economic reciprocity, which is handled in e3value by the notion of value interface. From an ontological perspective, e3value and REA differ with respect to the notion of value activity. This concept is lacking in REA, but it is important for discussing alternative assignments of such activities to actors. From a methodological point of view, REA is not an approach for business development, whereas e3value provides a methodology for doing so, for example, by value model construction and reconstruction, and by profitability-based sensitivity analysis. Osterwalder proposes an ontology for business models consisting of four pillars: (1) product innovation, (2) customer relationship, (3) infrastructure management, and (4) financial aspects (Osterwalder & Pigneur, 2002). Ontologically, Osterwalder's ontology is rather comprehensive but not sufficiently lightweight. The latter is, for instance,
important for having a tractable instrument in workshops. From a methodological viewpoint, the ontology currently lacks a convenient way of visualizing business models, which is important for using the ontology in a practical way. Additionally, the ontology seems intended not so much for designing business models themselves as for ontologically stating what a business model actually is.
Value Chains (Porter) and Value Constellations (Tapscott)

Value chain/system analysis (Porter, 1985; Porter, 2001) and the more recent value constellation analysis (Normann & Ramírez, 1994) can be viewed from a conceptual modeling perspective, although these approaches have not been developed with that perspective in mind. Value chains can be graphically depicted. A main drawback of value chains is that they do not precisely represent what is needed for proper business development: they do not show who is doing business with whom but rather show the physical track a product travels. Additionally, value objects and exchanges are not shown, and consequently economic reciprocity is not modeled. Tapscott's value constellations (Tapscott et al., 2000) do not follow the physical transportation of goods but mix various concerns to be modeled, for instance, products, knowledge, and sometimes the information needed to carry out business processes. In our experience, choosing too broad a range of concerns may hamper the development of a business model due to an unfocussed stakeholder group. Additionally, Tapscott's value constellations do not provide facilities to model economic reciprocity and bundling.
Conclusion

The e3value methodology is about the economic value-aware exploration of innovative e-business ideas; it utilizes principles from both requirements engineering and conceptual modeling and focuses on the exploration of an IT-intensive value proposition. We call such an exploration track value-based requirements engineering. Based on observations made during e-business idea exploration tracks, we motivate the need for an e-business model, rather than a vaguely described idea. Development of such a model serves two goals: (1) enhancing agreement and a common understanding of an e-business idea amongst a wide group of stakeholders and (2) enabling validation of the e-business idea in terms of evaluating economic feasibility. Although the development of an e-business model focuses on business requirements in general and the potential profitability of the idea in particular, the model can also be used as a starting point for a more software-oriented requirements engineering process. A business model then contributes to a better understanding of the e-business idea by system architects and software developers and thereby frames the software requirements engineering process. To represent and analyze an e-business model, we have developed three description techniques, which can be used to represent a multi-actor network exchanging objects of value.
Operational scenarios are used to analyze the model for profitability, in conjunction with evolutionary scenarios to do a sensitivity analysis on expected profits. Thereby we recognize that for innovative e-business ideas it is nearly impossible to predict profitability; rather, we aim at the more modest goal of reasoning about the factors influencing this profitability. Finally, tool support is needed for drawing and checking models (for example, for compliance with the e3value ontology), as well as for evaluating value models. Tool support (a value modeling CASE tool) is currently being developed in the EC-IST project OBELIX (Obelix, 2001). The latest version can be downloaded from http://www.cs.vu.nl/~gordijn.
Acknowledgment

This work has been partly sponsored by the Stichting voor Technische Wetenschappen (STW), project VWI.4949, EU-IST project IST-2001-33144 Obelix, and EU-EESD project NNE5-2001-00256 BusMod.
References

Antón, A.I. (1997). Goal identification and refinement in the specification of information systems. Unpublished doctoral dissertation, Georgia Institute of Technology, Atlanta, GA.
Buhr, R.J.A. (1998). Use case maps as architectural entities for complex systems. IEEE Transactions on Software Engineering, 24(12), 1131-1155.
Choi, S.Y., Stahl, D.O., & Whinston, A.B. (1997). The economics of doing business in the electronic marketplace. Indianapolis, IN: Macmillan Technical Publishing.
Dardenne, A., van Lamsweerde, A., & Fickas, S. (1993). Goal-directed requirements acquisition. Science of Computer Programming, 20, 3-50.
Fowler, M. (1997). Analysis patterns. Reading, MA: Addison Wesley Longman.
Fox, M.S., & Gruninger, M. (1998). Enterprise modelling. AI Magazine, 19(3), 109-121.
Geerts, G., & McCarthy, W.E. (1999). An accounting object infrastructure for knowledge-based enterprise models. IEEE Intelligent Systems and Their Applications, 89-94.
Gordijn, J., & Akkermans, J.M. (2001). Ontology-based operators for e-business model de- and re-construction. Proceedings of the First International Conference on Knowledge Capture, 60-67.
Gordijn, J., & Akkermans, J.M. (2003). Value-based requirements engineering: Exploring innovative e-commerce ideas. Requirements Engineering, 8(2), 114-134.
Holbrook, M.B. (1999). Consumer value: A framework for analysis and research. New York: Routledge.
Horngren, C.T., & Foster, G. (1987). Cost accounting: A managerial emphasis (6th ed.). Englewood Cliffs, NJ: Prentice-Hall.
Kartseva, V., Soetendal, J., & Gordijn, J. (2003). Distributed generation business modeling. Deliverable D3.1, BusMod consortium, EC-EESD funded project.
Loucopoulos, P., & Karakostas, V. (1995). System requirements engineering. Berkshire, UK: McGraw-Hill.
Mylopoulos, J. (1992). Conceptual modeling and Telos. In Conceptual modelling, databases and CASE: An integrated view of information systems development (pp. 49-68). New York: Wiley.
Normann, R., & Ramírez, R. (1994). Designing interactive strategy: From value chain to value constellation. Chichester, UK: John Wiley & Sons.
Obelix consortium (2001). Obelix Project IST-2001-33144: Ontology-Based ELectronic Integration of CompleX Products and Value Chains: Annex I - Description of Work. See also http://obelix.e3value.com.
Osterwalder, A., & Pigneur, Y. (2002). An e-business model ontology for modeling e-business. Proceedings of the 15th Bled Electronic Commerce Conference, Bled, Slovenia.
Porter, M.E. (1985). Competitive advantage: Creating and sustaining superior performance. New York: Free Press.
Porter, M.E. (2001). Strategy and the Internet. Harvard Business Review, 63-78.
Rogers, E.M. (1995). Diffusion of innovations. New York: Free Press.
Shama, A. (2001). Dot-coms' coma. The Journal of Systems and Software, 56(1), 101-104.
Tapscott, D., Ticoll, D., & Lowy, A. (2000). Digital capital: Harnessing the power of business webs. London: Nicholas Brealey Publishing.
Uschold, M., King, M., Moralee, S., & Zorgios, Y. (1998). The enterprise ontology. The Knowledge Engineering Review, 13(1), 31-89.
Yu, E.S.K., & Mylopoulos, J. (1998). Why goal-oriented requirements engineering. In E. Dubois, A.L. Opdahl, & K. Pohl (Eds.), Proceedings of the 4th International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ 1998). Namur, Belgium: Presses Universitaires de Namur.
Section III: Approaches
Chapter XIV
Requirements Engineering in Cooperative Systems

J.L. Garrido, University of Granada, Spain
M. Gea, University of Granada, Spain
M.L. Rodríguez, University of Granada, Spain
Abstract

Technology is increasing the possibilities for working in groups and even changing the way in which this has traditionally been performed. This chapter reviews models and techniques for obtaining and describing requirements in cooperative systems. The features and diversity of this kind of system imply an inherent complexity in studying and developing them. Therefore methodologies and techniques aimed at enhancing requirements and software engineering processes should be applied. The chapter also presents a new methodology (called AMENITIES), based on behaviour and task models, specifically intended to study and develop these systems. The focus is on the part of the methodology that is concerned with the requirements engineering discipline. Several models are assembled under new conceptual and methodological frameworks in order to allow more complete requirements elicitation, description, and negotiation.
Introduction

New technological challenges provoke continuous improvements in society, thus changing our conception of the world around us. Nowadays communication and collaboration activities play an important role in modern work organisation. Technology is increasing the possibilities for working in groups (McGrath, 1993) and even changing the way in which this has traditionally been performed. In this context several users can cooperate to perform their tasks in order to achieve better productivity and performance. Thus users are part of a shared environment where distributed systems (called groupware) support and promote human-human interaction, and where social issues acquire more relevance (Dix, Finlay, Abowd, & Beale, 1998). A cooperative system can be defined as "a combination of technology, people and organisations that facilitates the communication and coordination necessary for a group to effectively work together in the pursuit of a shared goal, and to achieve gain for all its members" (Ramage, 1999). The discipline called CSCW (Computer Supported Cooperative Work) (Greenberg, 1991; Greif, 1988) studies and analyses coordination mechanisms for effective human communication and collaboration as well as the systems supporting them. Groupware has been defined as "a computer-based system that supports groups of people engaged in a common task (or goal) and that provides an interface to a shared environment" (Ellis, Gibbs, & Rein, 1991, p. 40). To date groupware has comprised various kinds of systems: Workflow Management Systems (WfMS), computer-mediated communication (CMC) (for example, e-mail), shared artefacts and applications (for example, shared whiteboards, collaborative writing systems), meeting systems, and so forth. From the groupware definition, the notions of common task and shared environment are also crucial for the understanding of such systems. Thus groupware systems exhibit and support the following features (Ellis et al., 1991, p. 40):
• Communication. This activity emphasizes the exchange of information between remote agents by using the available media (text, graphics, voice, and so forth).
• Collaboration. It is an inherent activity in groups. An effective collaboration demands that people share information. The information content is shared in the group context.
• Coordination. The effectiveness of communication/collaboration is based on coordination. It is related to the integration and harmonious adjustment of individual work efforts toward the accomplishment of a greater goal. It is an activity in itself, influenced by technological and social protocols.
One well-known classification for groupware systems is the time-place matrix (Johansen, 1988). The time dimension varies from same-time (synchronous) to different-time (asynchronous) cooperative work. The place dimension distinguishes same-place cooperative work (participants are located in the same place) from different-place cooperative work (geographically distributed). Other requirements arise from the sociotechnical model
in which users are involved (social roles, responsibilities, organisation rules, and so forth) when they are collaborating. The features and diversity of groupware systems make requirements extraction inherently complex. Development of groupware systems is more difficult than that of a single-user application because requirements related to social aspects and group activities must be taken into account for a successful design (Grudin, 1993). Therefore methodologies and techniques aimed at enhancing group interaction activities should be applied. Requirements engineering (RE) is defined as "a systematic process of developing requirements through an iterative cooperative process of analyzing the problem, documenting the resulting observations in a variety of representation formats, and checking the accuracy of the understanding gained" (Pohl, 1993). The problem has to be understood completely by means of its analysis. Certain stages of the process, such as analysis, viability, and modeling, should provide a full description of the requirements, including the functional and non-functional requirements to be satisfied. The chapter is organized as follows. Briefly, the background section presents a review of methods and techniques used to specify and analyze cooperative systems. In the motivations section we describe the main motivations for providing a new proposal. The next section introduces the methodology AMENITIES (Garrido, Gea, Padilla, Cañas, & Waern, 2002), providing a general description of the notation used (UML), the conceptual model, and the specific models and stages involved. The emphasis is placed on the process of requirements engineering. Then we introduce a case study in order to show how the methodology addresses the RE process. Finally, the main contributions are summarized.
Background

RE is concerned with capturing and describing information, identifying what should be designed (Macaulay, 1996). Techniques such as interviewing, brainstorming, and so forth have been widely used to obtain requirements in many kinds of systems. However, due to the specific nature of cooperative systems, the more relevant techniques applied to this kind of system are applied ethnography, participatory design, and scenarios. Ethnographic techniques (for example, video recording) allow specialists to understand and document functional and non-functional requirements (Ehrlich, 1999). Ethnographers are more objective than stakeholders in finding out the circumstances that provoke dynamic changes in the system (for example, a new incoming actor in the system must assume some work previously assigned to others). Ethnography covers non-functional requirements that are related to socio-cultural issues, domain constraints, work practices, group dynamics, and organisations. These requirements can be used for design specifications as well as to evaluate existing techniques. The focus is on the collaborative assembly of work in interaction, which leads ethnographers to speak of the social organisation of work or, more simply, of cooperative work (Crabtree, 2003).
Participatory design (Muller & Kuhn, 1993) is a complementary method in which stakeholders and developers are involved in the design process of the system. It is a way of understanding users' needs and reflecting this better understanding in the design (Newman & Lamming, 1998). Scenario-based design is another well-known technique widely used in cooperative systems (Carroll, 1995). A scenario is a concrete description of activities that users engage in when performing specific tasks. Such a detailed description is also used as a requirements specification from which to infer different design aspects. On the other hand, RE also embraces the development of abstract system models (as part of a detailed system specification). To date several approaches address the analysis and construction of abstract models for cooperative systems: task analysis (Dix et al., 1998) and task modeling (Nardi, 1995; van der Veer, van Vliet, & Lenting, 1995). Task-based approaches study cooperative systems from the most convenient level, the user's point of view. They describe the user's cognitive skills to be acquired for correct use; the specification is focused on representing and modeling users' tasks (Markopoulos, Johnson, & Rowson, 1997). In these approaches the system specification is a collection of user goals, where each one is defined by the sequence of tasks that allows a desired objective to be achieved. Several notations have been proposed that can be used for different purposes. For example:
• GOMS and NGOMSL (John & Kieras, 1996) measure system performance, and they are suitable for expressing the user's knowledge.
• GTA (Groupware Task Analysis) (van der Veer & van Welie, 2000) is a method for specifying groups. It proposes an ontology-based system study for task world models, that is, a framework in which participants (agents and users' roles), artefacts (objects), and situations (goals, events) take place. Moreover, relationships between these elements are clearly identified (uses, performed-by, play, and so forth).
• CTT (ConcurTaskTrees) (Paternò, 2000) provides a hierarchical graphical notation to describe concurrent tasks and allows us to specify cooperation by adding a hierarchical specification with temporal constraints for each cooperative task. This extension and others aim to establish common tasks for several users and the relationships between them.
Motivations

The methods and techniques in the previous section are considered correct ways of capturing and describing requirements in cooperative systems. Nevertheless, they also present some disadvantages:
Ethnography and scenarios produce extensive, informal system descriptions with too many details to be managed efficiently. Task-based approaches have a common foundation but different levels of abstraction and notations. Usually each task-based technique only addresses a few objectives (user performance, skills, task allocation, and so forth), and the translation from one model to another in order to study different aspects of the same system is difficult. In some cases task-based models cannot describe certain information about the coordination of work. Usually coordination is based on knowledge of what the members of the group are doing (presence, actions, and so forth). The term group awareness (Dourish & Bellotti, 1992) is used to define this situation. These requirements allow us to identify mechanisms that should be implemented to support coordination. In general, non-functional requirements (Wilson, 1991) such as organisations, group dynamics, workload, and so forth escape task-based models, or at least they cannot be represented in a suitable way within the same model. For example, the user's role may change in real situations (for example, a change of responsibilities in an office department), as may the ways in which objectives are achieved (for example, a new commercial strategy or a different work organisation). In order to create software for group support, it is necessary to know the way in which a group works. The main feature of a group is the interaction amongst its members. A group is a social aggregate that involves mutual awareness and potential interactions. Group performance can be affected, from a social point of view, by human factors regarding:
• Different interpretations of a given aspect.
• Suitability of concrete tasks.
• Tasks to be carried out by groups.
• Relationships amongst individuals.
• Properties of the environment in which they work.
• New models establishing effective work scenarios.
In addition, groupware should preserve the social protocols that are present in the group context (and the surrounding social context). It should manage social relationships within the group (social roles, constraints, and so forth). Therefore different proposals are required to develop this kind of system. Thus the main motivations for devising a new proposal are the following:
1. It is important to describe requirements in an appropriate way; for example, the same language should be used for describing all requirements in order to facilitate their communication.
2. Processes to study and develop CSCW systems require a unified conceptual framework that includes the most relevant concepts and their relationships.
3. A methodological framework should combine the techniques and methods used in the best possible way.
4. More complete system models, specifically devised for this kind of system, are required.
AMENITIES

Starting from the motivations presented in the previous section, this section presents a methodological proposal to be applied to cooperative systems. The first subsection describes the main ideas behind this proposal. The next subsection defines the conceptual framework that is its basis. The last subsection presents, from the point of view of RE, a general description of the methodology.
Introduction

The proposal states a system engineering process in which requirements engineering has a specific weight (as is shown below). It adopts the standard UML language (OMG, 2001) for requirements specification, system modeling, and software development (see Figure 1 for a comparison with the traditional detached approaches described in the two previous sections). UML can be defined as a general-purpose visual modeling language to specify, visualize, construct, and document the artefacts of a software system. The strengths of UML we are interested in are summarized as follows:
• The language is a standard that includes semantic concepts, notation, and guidelines.
• It has dynamic, environmental, and organizational parts.
• It is supported by visual modeling tools that usually include code generators.
• It is not intended to be a complete development method.
• It allows a graphical specification for aspects that can be better stated in this way.
• Specification at different levels of abstraction is possible.
• One model may be translated into others automatically to achieve a more suitable representation (for example, activity and state diagrams are isomorphic).
UML provides a technological focus for system development. However we are especially interested in requirements description and system modeling, so the UML notation is used as follows:
Figure 1. Engineering process for cooperative systems
1. UML use case diagrams model the functionality of the system as perceived by outside users. A use case is a coherent unit of functionality expressed as a transaction between the user and the system. Sometimes in the literature a use case has been considered equivalent to a set of scenarios. Thus UML use case diagrams allow us to elicit and describe functional requirements.
2. The more abstract kinds of UML diagrams (activity and state) are used to build an abstract system model. These diagrams are combined with textual notation in order to simplify the specification and increase its expressiveness.
Moreover the system engineering process will be supported on the basis of a unique conceptual framework (defined in the following subsection).
Conceptual Framework

Cognitive frameworks and theories, such as activity theory (Fjeld, Lauche, Bichsel, Voorhorst, Krueger, & Rauterberg, 2002; Vygotsky, 1979) and distributed cognition (Rogers & Ellis, 1994), analyze concepts within cooperative systems. We define the following conceptual framework (shown in Figure 2 by means of a UML class diagram) in order to gather and connect most of these concepts. Thus the framework gives a higher common abstraction level at which cooperative systems can be described in terms of general concepts and the relationships between them. According to this conceptual framework, an action is an atomic unit of work. Its event-driven execution may require/modify/generate explicit information. A subactivity is a set of related subactivities and/or actions. A task is a set of subactivities intended to achieve certain goals. A role is a designator for a set of related tasks to carry out. An actor is a user, program, or entity with certain acquired capabilities (skills, category, and so forth) that can play a role in the execution (using artefacts) of, or responsibility for, actions.
Figure 2. Conceptual framework (UML class diagram relating the concepts Cooperative System, Organisation, Group, Actor, Role, Capability, Law, Task, Subactivity, Action, Work Unit, Event, Information Object, Artefact, and Interaction Protocol through relationships such as play, connect, assign/remove, do, use, trigger, send/receive, check, and interrupt)
A group performs certain subactivities depending on interaction protocols. A cooperative task is one that must be carried out by more than one actor, playing either the same or different roles. A group is a set of actors playing roles and organized around one or more cooperative tasks. A group may be composed of, that is, formed from, related subgroups. A law is a limitation or constraint imposed by the system that allows it to adjust the set of possible behaviors dynamically. An organisation consists of a set of related roles. Finally, a cooperative system consists of organisations, groups, laws, events, and artefacts. The two key concepts defined above are task and group. We use the notion of task to structure and describe the work that must be performed by the group. This provides the way to translate work, that is, something that is tacit and implicit, into something that is concrete and explicit. Nonetheless, tasks are also considered at a very abstract level, as
noted above. On the other hand, a group can be more or less explicit. Sometimes organizational aspects determine the way people work, but in other cases personal and/or operational aspects are the basis for organizing people in order to perform an activity. The notion of role, in any case, allows us both to specify groups as needed and to establish dynamic relations between actors and tasks.
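The conceptual framework lends itself to a straightforward encoding. The sketch below is illustrative only (it is not the AMENITIES tooling and its names are our own); it records the main concepts and relationships defined above and instantiates them with elements of the case study presented later in the chapter.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Action:                 # atomic unit of work
    name: str

@dataclass
class Subactivity:            # set of related subactivities and/or actions
    name: str
    actions: List[Action] = field(default_factory=list)

@dataclass
class Task:                   # set of subactivities intended to achieve goals
    name: str
    subactivities: List[Subactivity] = field(default_factory=list)
    cooperative: bool = False # carried out by more than one actor

@dataclass
class Role:                   # designator for a set of related tasks
    name: str
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Actor:                  # user, program, or entity with capabilities
    name: str
    capabilities: List[str] = field(default_factory=list)
    plays: List[Role] = field(default_factory=list)

@dataclass
class Organisation:           # a set of related roles, constrained by laws
    roles: List[Role] = field(default_factory=list)
    laws: List[str] = field(default_factory=list)

# Elements of the Emergency Coordination Centre example used later on.
resolve = Task("ResolveEmergency", cooperative=True)
operator = Role("Operator", tasks=[resolve])
centre = Organisation(roles=[operator], laws=["FreeOperators = 0"])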
Methodology

AMENITIES (an acronym for A MEthodology for aNalysis and desIgn of cooperaTIve systEmS) is a methodology based on the integration of behaviour and task models for the analysis and design of generic cooperative systems (Garrido & Gea, 2001). Figure 3 shows a general scheme of the methodology (the main models and stages involved). The set of behaviour and task models constitutes a system model. This model specifies functional and non-functional requirements collected using ethnographical techniques and represented semi-formally by using UML use case diagrams. This cooperative model allows us to carry out system validation and property verification (Garrido & Gea, 2002) by means of the Coloured Petri Net (CPN) formalism (Jensen, 1996), as well as the development of the software subsystem. The methodology is based on the conceptual framework described in the previous subsection.
Figure 3. General scheme of AMENITIES
It provides a simple process that embraces phases and models. The main stages of this process are (see the dashed lines in Figure 3): (1) requirements compilation and elicitation, and analysis of the system by means of ethnographic techniques and UML use case diagrams; (2) construction of the abstract system model (called the Cooperative Model) for requirements description and negotiation; (3) property verification and system validation using a formal model derived from the system model; (4) system design by introducing new elements and changes into the system model, probably as a result of the previous analysis; and (5) software development fulfilling the requirements. Just like most methodologies, AMENITIES follows a simple iterative process allowing us to refine and review these models on the basis of requirements elicitation, negotiation, and analysis. In the following subsections we describe in detail the main objectives of the models and stages directly involved in RE.
Requirements Models

Benefits can be obtained from combining ethnography and UML use cases. Some requirements obtained and elicited by means of ethnography (especially functional requirements and social roles that users play) can be structured and specified in an abstract way by using the UML use case diagrams. Thus these diagrams:
• define stable characteristics (for example, services provided by the system), or at least those that are maintained for long periods, and
• identify and classify social roles that users play in the system.
In this context, the most interesting advantages of using the UML use case notation are:
1. use cases can be described at different levels of detail,
2. they can be factored and described in terms of other, simpler use cases on the basis of include and extend relationships, and
3. roles can be classified by means of generalization/specialization relationships.
Cooperative Model

The objective of developing groupware systems is to support processes based on interactions between users. As part of requirements elicitation, these processes should be described by using, for example, workflows and/or role models. In addition, requirements described in the above stage should be discussed and negotiated with stakeholders. To address these issues the methodology proposes a system model structured in a hierarchical way.
This system model allows us to represent and connect instances of all the concepts defined in the conceptual framework according to the requirements of each specific system. It is formed by a set of behavioural and task models that are useful for specifying a CSCW system from the point of view of its structure and behaviour. The cooperative model describes the system (especially on the basis of coordination, collaboration, and communication) irrespective of its implementation. Thus it provides a better understanding of the problem domain. To build the cooperative model, a structured method consisting of four stages is proposed: specification of the organisation, role definition, task definition, and specification of interaction protocols. This method has been specifically devised to make it easier to connect all the concepts according to the conceptual framework defined previously. The notation proposed, called COMO-UML (Garrido, 2003), is based on UML state and activity diagrams. Hence it is basically a graphical notation that integrates small declarative and operational sections. Its expressive power (promoting participatory design) makes it easy for stakeholders to be involved in the requirements negotiation process.
Formal Model

An operational, formal semantics, using the Coloured Petri Net (CPN) formalism (Jensen, 1996), is provided in Garrido and Gea (2002) for the COMO-UML notation proposed for building the cooperative model. The Petri Net derived from the cooperative model allows us to analyze the system, validate certain properties (consistency, completeness, and so forth), and evaluate usability (in terms of tasks) by means of automatic and/or guided executions. These simulation techniques can also support performance analysis by calculating transaction throughputs and so forth. Moreover, by applying other analysis techniques (reachability graphs, algebra), it is possible to verify safety and liveness properties in order to complement the simulation. In the problem domain, for instance, it is possible to demonstrate that the specified tasks can be performed by means of the existing human resources. Therefore requirements can be negotiated in detail.
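The analysis relies on executing the derived net. As a rough illustration of the underlying token game (not of the CPN semantics of COMO-UML itself, which uses coloured tokens and a far richer structure), the following sketch fires a single transition of a plain place/transition net with hypothetical place names taken from the emergency-handling domain.

# Marking: number of tokens per place.
marking = {"call_received": 1, "operator_free": 1, "emergency_resolved": 0}

# Each transition consumes one token from each of its input places and
# produces one token in each of its output places.
transitions = {
    "resolve_emergency": (["call_received", "operator_free"],
                          ["emergency_resolved", "operator_free"]),
}

def enabled(name: str) -> bool:
    inputs, _ = transitions[name]
    return all(marking[place] > 0 for place in inputs)

def fire(name: str) -> None:
    assert enabled(name), f"{name} is not enabled"
    inputs, outputs = transitions[name]
    for place in inputs:
        marking[place] -= 1
    for place in outputs:
        marking[place] += 1

fire("resolve_emergency")
print(marking)  # {'call_received': 0, 'operator_free': 1, 'emergency_resolved': 1}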
Software Development Models

The cooperative model of AMENITIES provides most of the requirements that groupware system development needs (namely functionality, behaviour, and deployment). In particular, requirements related to the design of the interactive system and to usability can be derived and analyzed, respectively, from the cooperative model in a way similar to Paternò (2000). The development models must satisfy, for each system, the specific requirements collected in previous stages and the following general requirements/objectives in the software engineering field:
• To simplify system development, splitting the whole system into components (called subsystems).
• To satisfy functional and behavioural requirements, by the orderly mapping of services onto subsystems.
• To fulfil certain software engineering objectives, such as reusability, portability, interoperability, and maintenance.
• To facilitate the application of new technologies (Java, objects, Web services, and so forth) making use of distributed platforms (that is, middleware such as RMI, CORBA, etc.) for the system implementation.
Case Study

This section shows how AMENITIES addresses the main RE aspects, namely requirements elicitation, description, and negotiation, by means of an example: the Emergency Coordination Centre (ECC) in Sweden (Artman & Waern, 1999). This system was designed and implemented to fulfil the control requirements of extreme situations. The Centre distributes tasks and is responsible, in the case of large-scale accidents, for the coordination of the organisations involved (police, fire brigade, and medical help) until all units have arrived at the scene of the accident. The main goal is to assign and manage resources as fast as possible, as well as to assess the particular conditions of each emergency.
Requirements Models

Ethnography allows us to identify crucial requirements for the cooperative system. With regard to the actors, ethnography can determine the current and future roles they play and the dynamic changes between them, on the basis of capabilities or laws. Moreover, in relation to work, it is possible to identify the real practices to be carried out by members of the group. For example, it is possible to determine whether the work finishes when an emergency call is made and then rejected, or whether the system waits for new user reactions in order to reconsider the previous decision. A typical scenario is the reception of an emergency call, which is dealt with by an operator. This main operator receives information, evaluates the situation, and identifies the citizen, while another operator (the assistant) dispatches the required resources. A third operator (the resource responsible) coordinates the available resources for each accident. From other scenarios like this one it is possible to describe functional requirements and identify roles, as is shown in Figure 4 by means of a UML use case diagram. This diagram allows us to describe in semi-formal notation the basic relationships between tasks (include, extend ...), between roles/actors (generalization, specialization ...), and the tasks in which each role is involved. This is the preliminary step of the group task analysis.
Figure 4. UML use case diagram for the ECC (Emergency Coordination Centre)
Cooperative Model The structured method for the construction of the system model aims to represent the requirements gathered in the previous stage. These requirements can be negotiated with stakeholders during this construction. Some requirements, such as maintainability (that is, expected changes in the computer-based system), might be derived from this model but not shown explicitly in it. In order to build this model the method proposes the following four stages (Garrido & Gea, 2001).
Specification of the Organisation The information obtained from ethnography is relevant to system modeling because it imposes organizational constraints on group behaviour. The organisation is described by using COMO-UML organisation diagrams (based on UML statecharts), as shown in Figure 5.

Figure 5. COMO-UML organisation diagram for group CentreStaff

The internal structure of the group, based on social roles, is identified. Relevant relationships between roles are identified on the basis of social and cognitive constraints. That is, capabilities (identified as ?) state skills that actors can acquire, and laws
(identified as []) represent general constraints that the organisation imposes to control role changes. Both concepts are very helpful for defining and analyzing possible strategies in group organizations (for example, behaviour protocols) and group dynamics (representing the evolution of the group). The example comprises only one group with three members. They play the roles Operator, AssistantOperator, and ResourceResponsible. Initially, laws checking capabilities (for example, [Operator?]) determine the role played by each actor according to his/her professional category, specialization, and so forth. Later on, actors may change the roles they play as a result of various circumstances. An example of an organizational requirement is the following: the social organization imposes laws such as [FreeOperators = 0], that is, an actor playing the role ResourceResponsible can become an Operator (or AssistantOperator) if no operators are available to resolve a new emergency call.
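A rough sketch of this capability/law mechanism in Python is given below. It is purely illustrative (the class and attribute names are invented, and AMENITIES prescribes no particular implementation): an actor may adopt a new role only if he or she has the corresponding capability and the organisational law guarding that role holds in the current group state.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional, Set


@dataclass
class Actor:
    name: str
    capabilities: Set[str]        # roles the actor is able to play, e.g. {"Operator"}
    role: Optional[str] = None    # role currently being played
    busy: bool = False            # currently handling a call?


@dataclass
class Group:
    actors: List[Actor]
    # laws: target role -> predicate over the group that must hold for the role change
    laws: Dict[str, Callable[["Group"], bool]] = field(default_factory=dict)

    def free_operators(self) -> int:
        return sum(1 for a in self.actors if a.role == "Operator" and not a.busy)

    def change_role(self, actor: Actor, new_role: str) -> bool:
        """Permit the change only if the actor is capable and the relevant law allows it."""
        capable = new_role in actor.capabilities
        permitted = self.laws.get(new_role, lambda g: True)(self)
        if capable and permitted:
            actor.role = new_role
            return True
        return False


# Illustrative staffing of the centre: two busy operators and one resource responsible.
staff = Group(
    actors=[
        Actor("A1", {"Operator"}, role="Operator", busy=True),
        Actor("A2", {"Operator", "AssistantOperator"}, role="AssistantOperator", busy=True),
        Actor("A3", {"Operator", "ResourceResponsible"}, role="ResourceResponsible"),
    ],
    laws={"Operator": lambda g: g.free_operators() == 0},  # the law [FreeOperators = 0]
)
# No free operators are available, so the resource responsible may legally become an operator.
print(staff.change_role(staff.actors[2], "Operator"))  # True
```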
Role Definition Each group divides the workload among actors, whereas each role establishes a connection between these actors and tasks. Actors' knowledge is described by the role being played at a given time. Thus the next step is to define each previous role by the set of tasks that can/must be performed (see Figure 6). The tasks involved are specified taking into account relevant requirements that might affect participants' behaviour. For instance, relevant information would be the event that interrupts an activity (denoting task priorities) or triggers a task. A task can be defined as individual or cooperative.

Figure 6. COMO-UML role diagrams for Operator and AssistantOperator

Figure 6 describes the roles Operator and AssistantOperator by using COMO-UML role diagrams. The common task they are both involved in is the cooperative task ResolveEmergency (previously identified in the above use case diagram). This form of specification allows us to associate different context elements (events, actions) with each role involved in the task. For example, the events triggering this task are EmergencyCall (for the role Operator) and AssistanceCall (for the role AssistantOperator). Moreover, for the role AssistantOperator, the task being performed can be interrupted (section interruptible-tasks) if there is a new emergency. To fulfil this coordination requirement, the actor will behave as an operator in order to respond to the new emergency.
Task Definition Tasks define the goals of individuals or groups. In this step each task specified previously is broken down into the related subactivities and actions required to achieve the objective. COMO-UML task diagrams (based on UML activity diagrams) are used to define individual/group tasks. They describe the cognitive capabilities that participants need to accomplish the work. Figure 7 defines the task ResolveEmergency. This is a cooperative task because more than one participant is required to accomplish it; therefore, it specifies a collaboration requirement to be satisfied. The notation allows us to specify temporal ordering constraints on subactivities by means of sequential (arrows) and concurrent (thick bars) constructions. Bifurcations (labelled with a diamond) denote a context-based decision in the organization, whereas triggering and receiving events are included as in/out rectangles. This notation assigns roles to subactivities. It is a clearer alternative to the UML swimlane notation because it avoids more complicated diagrams. A subactivity is a composite state that can be described in greater detail. Each subactivity/action includes the specification of the responsible roles and the optional roles needed to accomplish it. Optional roles are shown between brackets, and the symbol '|' specifies an inclusive-or relationship. This information is relevant for specifying structural and political requirements of the group. Note that this task definition includes relevant information about user tasks, coordination mechanisms, and organisation politics (decision making, protocols, workload, and so forth).
Figure 7. COMO-UML task diagram for ResolveEmergency
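To show how such a task definition might be captured as structured data, here is an illustrative sketch (not the COMO-UML metamodel; the subactivity names are paraphrased loosely from the ResolveEmergency task described above):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Subactivity:
    name: str
    responsible: List[str]                                # roles that must take part
    optional: List[str] = field(default_factory=list)     # roles shown between brackets
    protocol: str = ""                                     # interaction protocol, if collaborative
    after: List[str] = field(default_factory=list)         # sequential ordering constraints


@dataclass
class CooperativeTask:
    name: str
    subactivities: List[Subactivity]

    def is_cooperative(self) -> bool:
        roles = {r for s in self.subactivities for r in s.responsible}
        return len(roles) > 1          # more than one participant is required


resolve = CooperativeTask("ResolveEmergency", [
    Subactivity("InterviewCaller", ["Operator"], optional=["AssistantOperator"],
                protocol="Request-Reply"),
    Subactivity("RegisterInformation", ["AssistantOperator"], after=["InterviewCaller"]),
    Subactivity("DecideRescueUnits", ["Operator", "AssistantOperator"],
                protocol="conversational", after=["InterviewCaller"]),
])
print(resolve.is_cooperative())        # True: a collaboration requirement must be satisfied
```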
Specification of Interaction Protocols In this last stage the interaction protocols in the above task definition are described (see Figure 7). For each collaborative activity, the type of protocol used between participants to accomplish the objective is explicitly specified. For instance, the subactivity InterviewCaller in Figure 7 specifies that a Request-Reply protocol should be used, as well as the participants involved. On the other hand, the activity DecideRescueUnits specifies a conversational protocol, that is, participants can take part in this activity in any order and with the same degree of responsibility. The identification of such protocols is very helpful because they reveal non-functional system requirements such as the type of communication required (synchronous or asynchronous) and the type of communication channel (link, mailbox, and so forth) needed to support collaboration. In this case, ethnography allows the interaction between participants to be identified. For instance, decision-making is done by both operators (a negotiation protocol), while in other organisations the task allocation is clearly separated (the first operator registers the information while the second one decides the rescue resources). These differences can also be identified in the way people interact in the scenario. Shared
decisions require two combined contexts: one for the input channel between citizen and operator (which is shared with the assistant operator) and another channel (based on voice and gestures) for internal communications between operators so that they can discuss and decide the best solution.
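The step from identified protocols to non-functional requirements can be pictured as a simple mapping. The fragment below is hypothetical; the protocol names and the channel/synchrony choices are illustrative rather than prescribed by AMENITIES.

```python
# Map each interaction protocol found in the task model to the communication support it implies.
PROTOCOL_REQUIREMENTS = {
    "Request-Reply":  {"synchrony": "synchronous",  "channel": "direct link"},
    "conversational": {"synchrony": "synchronous",  "channel": "shared voice/gesture context"},
    "notification":   {"synchrony": "asynchronous", "channel": "mailbox"},
}


def derive_communication_requirements(subactivities):
    """Collect one non-functional communication requirement per collaborative subactivity."""
    requirements = []
    for s in subactivities:
        mapping = PROTOCOL_REQUIREMENTS.get(s.get("protocol"))
        if mapping:
            requirements.append(
                f"{s['name']}: {mapping['synchrony']} communication over a {mapping['channel']}")
    return requirements


print(derive_communication_requirements([
    {"name": "InterviewCaller", "protocol": "Request-Reply"},
    {"name": "DecideRescueUnits", "protocol": "conversational"},
]))
```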
Conclusion The importance of CSCW systems has been growing in recent years because of their presence in different types of scenarios (collaborative e-learning, shared knowledge, workflow, mobile computing, and so forth). As a methodological proposal, AMENITIES aims to address the main activities in the field of requirements and software engineering for CSCW systems: the collaboration process itself and the dynamic context in which it takes place. In relation to RE, the aim is to obtain, document, and maintain functional and non-functional requirements. With this objective in mind, the methodology combines requirements techniques and models on the basis of a conceptual framework. Significant questions are whether collaboration (between actors) in the environment is necessary (compulsory) or recommended, how it should be done, and so forth. Requirements concerning the appropriate artefacts to guarantee the correctness of the work undertaken are also taken into account. This knowledge, therefore, is obtained from an in-depth study of group organisation and the rules that govern its behaviour. The notation used throughout the process is based on the standard UML language. Current research is focused on the development of tools to support the engineering process presented: guided construction of the cooperative model, execution of this model for simulations, and automatic translation to software models for groupware development. In addition, the notation is being extended to cope with other relevant aspects related to requirements: information objects, new concepts to be added (for example, job), task scheduling, and so forth. Finally, AMENITIES is being applied to several real systems in which technology plays an important role (ubiquitous computing).
References

Artman, H., & Waern, Y. (1999). Distributed cognition in an emergency co-ordination center. Technology and Work, 1, 237-246.
Carroll, J.M. (1995). Scenario-based design. John Wiley & Sons.
Crabtree, A. (2003). Designing collaborative systems – A practical guide to ethnography. Springer.
Dix, A., Finlay, J., Abowd, G., & Beale, R. (1998). Human computer interaction (2nd ed.). Prentice Hall.
Dourish, P., & Bellotti, V. (1992). Awareness and coordination in shared workspaces. Proceedings of ACM CSCW’92 Conference on Computer Supported Cooperative Work, 107-114.
Ehrlich, K. (1999). Designing groupware applications: A work-centered design approach. In M. Beaudouin-Lafon (Eds.), Computer supported cooperative work (pp. 1-28). Wiley.
Ellis, C.A., Gibbs, S.J., & Rein, G.L. (1991). Groupware: Some issues and experiences. Communications of the ACM, 34(1), 38-58.
Fjeld, M., Lauche, K., Bichsel, M., Voorhorst, F., Krueger, H., & Rauterberg, M. (2002). Physical and virtual tools: Activity theory applied to the design of groupware. Computer Supported Cooperative Work, 11, 153–180.
Garrido, J.L. (2003). Especificación de la notación COMO-UML. (Tech. Rep. No. LSI2003-2). Granada, Spain: University of Granada, Departamento de Lenguajes y Sistemas Informáticos.
Garrido, J.L., & Gea, M. (2001). Modelling dynamic group behaviours. In C. Johnson (Eds.), Interactive systems - design, specification and verification (pp. 128-143). Revised papers. Lecture Notes in Computer Science No.2220. Springer.
Garrido, J.L., & Gea, M. (2002). A coloured petri net formalisation for a UML-based notation applied to cooperative system modelling. In P. Forbrig, et al. (Eds.), Interactive systems - design, specification and verification (pp. 16-28). Revised papers. Lecture Notes in Computer Science No.2545. Springer.
Garrido, J.L., Gea, M., Padilla, N., Cañas, J.J., & Waern, Y. (2002). AMENITIES: Modelado de entornos cooperativos. In I. Aedo, P. Díaz, & C. Fernández (Eds.), Actas del III Congreso Internacional interacción persona-ordenador 2002 (Interacción’02), (pp. 97-104). Madrid, Spain.
Greenberg, S. (1991). Computer-supported cooperative work and groupware. London: Academic Press Ltd.
Greif, I. (Ed.) (1988). Computer supported cooperative work: A book of readings. San Mateo: Morgan Kaufmann Publishers.
Grudin, J. (1993). Groupware and cooperative work: Problems and prospects. In R.M. Baecker (Eds.), Readings in groupware and computer supported cooperative work (pp. 97-105). San Mateo, CA: Morgan Kaufmann Publishers. (Reprinted from)
Jensen, K. (1996). Coloured petri nets - Basic concepts, analysis methods and practical use (2nd ed.). Springer.
Johansen, R. (1988). Current user approaches to groupware. In R. Johansen (ed.): Groupware: Computer support for business teams (pp. 12-44). New York: Free Press.
John, B.E., & Kieras, D.E. (1996). The GOMS family of user interface analysis techniques: Comparison and contrast. ACM Transactions on Computer-Human Interaction, 3(4).
Macaulay, L.A. (1996). Requirements Engineering. Springer.
Markopoulos, P., Johnson, P., & Rowson, J. (1997). Formal aspects of task based design. In Design, specification and verification of interactive systems (DSV-IS’97). Springer Computer Science.
McGrath, J. (1993). Time, interaction and performance: A theory of groups. In R. Baecker (Eds.), Readings in groupware and computer-supported cooperative work. San Mateo, CA: Morgan Kaufmann Publishers.
Muller, M., & Kuhn, S. (Eds.) (1993). Special issue on participatory design. CACM 36(4).
Nardi, B. (Ed.) (1995). Context and consciousness: Activity theory and human computer interaction. Cambridge, MA: MIT Press.
Newman, W., & Lamming, M. (1998). Interactive system design. Reading, MA: Addison Wesley.
Object Management Group. (2001). Unified modelling language specification. Online http://www.omg.org.
Paternò, F. (2000). Model-based design and evaluation of interactive applications. Springer-Verlag.
Pohl, K. (1993). The three dimensions of requirements engineering. Proceedings of 5th International Conference of Advanced Information Systems Engineering 1993, 275-292.
Ramage, M. (1999). Evaluation of cooperative systems. PhD Thesis. Lancaster University, UK. Online http://www.dur.ac.uk/~dcs1mr/thesis/.
Rogers, Y., & Ellis, J. (1994). Distributed cognition: An alternative framework for analysing and explaining collaborative working. Journal of Information Technology, 9, 119–128.
van der Veer, G.C., & van Welie, M. (2000). Task based groupware design: Putting theory into practice. Proceedings of Symposium on Designing Interactive Systems (DIS 2000), 326-337.
van der Veer, G.C., van Vliet, J.C., & Lenting, B.F. (1995). Designing complex systems – A structured activity. In G.M. Olson & S. Schuon, (Eds.), Symposium on designing interactive systems (DIS’95) (pp. 207-217). New York: ACM Press.
Vygotsky, L.S. (1979). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Wilson, P. (1991). Computer supported cooperative work: An introduction. Oxford, UK: Intellect Ltd., in co-publication with Kluwer Academic Publishers, Dordrecht.
Chapter XV
RESCUE: An Integrated Method for Specifying Requirements for Complex Sociotechnical Systems

Sara Jones, City University, UK
Neil Maiden, City University, UK
Abstract This chapter describes RESCUE (Requirements Engineering with Scenarios for a User-centred Environment), a method for specifying requirements for complex sociotechnical systems that integrates human activity modeling, creative design workshops, system goal modeling using the i* notation, systematic scenario walkthroughs, and best practice in requirements management. This method has been, and is being, applied in specifying requirements for three separate systems in the domain of air traffic control. In this chapter we present examples showing how the method can be applied in the context of a case study involving the specification of requirements for Countdown, a system to provide bus passengers with information about expected bus arrival times. While this system shares some important similarities with systems used in air traffic control, we hope it is small and familiar enough to readers to provide meaningful insights into the application of the RESCUE process.
Introduction Despite recent advances in software engineering we still lack systematic and scalable requirements engineering processes for complex sociotechnical systems. The domain of particular interest in this chapter is that of air traffic control, in which human air traffic controllers and technical, software-intensive systems are both integral parts of what can be seen as a complex sociotechnical system controlling the movement of air traffic. One problem is that established requirements techniques have emerged from single disciplines – use cases from software engineering and task analysis from human-computer interaction are two obvious examples. Safety-critical sociotechnical systems such as those used in air traffic control demand rigorous analyses of controller work, software systems that support this controller work, and the complex interactions between the controllers, the air traffic, and the software systems. To do this we need new hybrid processes that integrate best practices from the relevant disciplines. The RESCUE (Requirements Engineering with Scenarios for a User-centred Environment) process has been developed to address this need in the domain of air traffic control. Academic researchers have worked with staff at Eurocontrol (the European Organisation for the Safety of Air Navigation) to design and implement an innovative process to determine stakeholder requirements, which is specifically targeted toward the needs of that domain. Thus RESCUE focuses on specification of requirements for critical systems, where development of new systems is evolutionary rather than revolutionary and where the emphasis is on getting requirements right, rather than speed to market. The RESCUE process was initially developed to specify requirements for a system called CORA-2 ("CORA" stands for Conflict Resolution Assistant). CORA-2 is a system that will provide computerised assistance to air traffic controllers to resolve potential conflicts between aircraft. CORA-2 is part of a complex sociotechnical system in which controllers and computers depend on each other to bring about the desired effect of avoiding collisions between aircraft in the sky. The RESCUE process is now being applied in the specification of requirements for two further systems in the domain of air traffic control: DMAN (Departure Manager), which is a system to support controllers in managing the departure of aircraft from major European airports, and MSP (Multi-Sector Planning), which is for scheduling aircraft from gate to gate across multiple, multinational sectors. This chapter will provide a description of the RESCUE process, together with examples taken from the application of RESCUE to a small case study. We then provide a brief review of related literature. Each of the major components of the RESCUE process will be explained, with references to the literature on which it has been based. Examples of artefacts, or models, generated during the course of the process will be given with reference to a small case study. The chapter ends with a brief consideration of future trends, as well as some overall conclusions.
Background There has recently been a trend in requirements engineering toward combining techniques in order to complement the deficiencies of one with the strengths of another. For example, Leveson, de Villepin, Srinivasan, Daouk, Neogi, Bachelder, Bellingham, Pilon, and Flynn (2001) describe a safety and human-centred approach to developing ATM tools that integrates human factors and systems engineering work. Several authors have combined use cases with other techniques for this reason. For example, Rolland, Souveyet, and Ben Achour (1998) explored the possibilities of linking goal modeling and scenario authoring, and Santander and Castro (2002) presented guidelines for using i* organisational models in the development of use cases. RESCUE is a process that continues this tradition and combines use cases with a number of different techniques in a concurrent engineering approach in an effort to increase coverage of use cases and provide some validation of use case models through the use of other models that can be checked against them. Use cases have been argued to provide a good basis for developing sociotechnical systems as they enable inter-disciplinary learning of the kind that may be necessary when development team members are drawn from the somewhat disparate disciplines of, for example, ethnography, human-computer interaction, and software engineering (Wiedenhaupt, Pohl, Jarke, & Haumer, 1998). They are acknowledged to provide an intuitive "middle-ground abstraction" that encourages stakeholder participation (Jarke & Kurki-Suonio, 1998) and are currently used in requirements elicitation by around half of all organisations included in a recent survey (Neill & Laplante, 2003). However, a number of difficulties have been identified for those working with use cases alone. For example:
• We cannot specify a new system to support work without understanding how that work is currently done (for example, Haumer, Heymans, Jarke, & Pohl, 1999) — in RESCUE we use human activity modeling to build this understanding;
• We cannot write detailed use cases without establishing the system boundaries — in RESCUE these are explored through the development of context and i* models (Yu, 1997);
• We cannot write use cases without knowing about dependencies between actors described in the use cases — in RESCUE these dependencies are thoroughly explored using i* models;
• We cannot write use cases without making at least some high-level design decisions — in RESCUE we use creativity workshops to do this;
• We cannot write testable requirements without knowing the context in which those requirements arise (Robertson & Robertson, 1999) — in RESCUE we provide this context by linking requirements to scenarios and use cases.
Thus RESCUE aims to complement the deficiencies of use cases with a range of different techniques in order to better support the development of large sociotechnical systems
where confidence in the correctness and completeness of use cases, and hence requirements, is important.
The RESCUE Process The RESCUE process consists of a number of sub-processes, organised into four ongoing streams. These streams run in parallel throughout the requirements specification stage of a project, and are mutually supportive. The streams focus on the areas of:
• Analysis of the current work domain using human activity modeling (based on work described in Diaper, 1989; Schraagen, Ruisseau, Graff, Annett, Strub, Sheppard, Chipman, Shalin, & Shute, 2000; Vicente, 1999);
• System goal modeling using the i* goal modeling approach (Yu, 1997);
• Use case modeling and specification, followed by systematic scenario walkthroughs and scenario-driven impact analyses using the CREWS-SAVRE and CREWS-ECRITOIRE approaches (Sutcliffe, Maiden, Minocha, & Manuel, 1998); and
• Requirements management using VOLERE (Robertson & Robertson, 1999) implemented in Rational's requirements management tool RequisitePro in current rollouts of RESCUE.
In addition to these four streams, the RESCUE process uses the ACRE framework to select techniques for requirements acquisition (Maiden & Rugg, 1996), and creative design workshops, based on models of creative and innovative design (Maiden & Gizikis, 2001), to discover candidate designs for the future system, and to analyse these designs for fit with the future system's requirements. Both ACRE and the creative design workshops have implications for all streams. ACRE is used to support the selection of methods for requirements acquisition at any point in the RESCUE process where there is a need for further requirements information. The creative design workshops use inputs from all streams as a baseline for creative thinking and provide outputs that inform the development of i* and use case models, as well as the identification of requirements for the future system. Consistency between the various artefacts and deliverables produced at different stages in the RESCUE process is checked at five different points during the process. These five "synchronisation points" provide the project team with different perspectives from which to analyse system boundaries, goals, and scenarios, as follows:
• At the first point — the boundaries point — the team establishes first-cut system boundaries and undertakes creative thinking to investigate these boundaries;
• At the second point — work allocation — the team allocates functions between actors (human and technical) according to boundaries and describes interaction and dependencies between these actors;
• At the generation point, required actor goals, tasks, and resources are elaborated and modeled, and scenarios are generated;
• At the completeness point, stakeholders have walked through scenarios and express all requirements so that they are testable; and
• At the consequences point, stakeholders have undertaken walkthroughs of the scenarios and system models to explore impacts of implementing the system as specified on its environment.

Figure 1. Overview of the RESCUE process
An overview of the process is provided in Figure 1. The rest of this section provides a “stream by stream” view of activities carried out in the RESCUE process. These activities will be illustrated by reference to the Countdown system, and a short overview of this system is given.
Countdown System Overview Countdown is the system currently used by London Buses to provide bus arrival times on indicators at bus stops all over London. While sharing some important similarities with
systems used in air traffic control, this system is small and familiar enough to readers to provide meaningful insights into the application of the RESCUE process. An example of the format used on current bus stop indicators is shown in Figure 2.

Figure 2. A simple depiction of a bus stop indicator
  1  207  ACTON MKTPL      1 min
  2  83   GOLDERS GREEN    3 mins
  3  207  SHEPHERDS BUSH   4 mins
  ...Delays due to London Mayor's Show

Countdown carries a number of pieces of information to waiting passengers in a clear, easy-to-understand form:
• The order in which buses will arrive at the stop;
• The number of each bus;
• The destination of each bus – this information originates from the driver who keys a two-digit code into the system at the start of the journey;
• The time until the bus arrives – based on how long the central computer estimates it will take the bus to reach the bus stop from where it currently is; and
• Base-line messages – the base line of the Countdown display can scroll messages across the screen from left to right every 90 seconds – messages convey general information on matters such as night buses and congestion.
Drivers are directed to start or sometimes curtail their journeys by route controllers, who monitor the position of all buses on a number of different routes on the central computer system. The location of the bus is currently calculated by the automatic vehicle location (AVL) system using information from roadside beacons placed at intervals along the bus route. However, this system has some disadvantages – for example, when a bus is diverted (maybe because of road works or an accident) onto a different route where there are no beacons, information held on the AVL system and displayed on bus stop indicators quickly becomes out of date and incorrect. To overcome this a new system is planned that will use GPS technology to more accurately locate the position of buses at all times. An additional enhancement planned in the new system is that bus arrival information should be made available to potential passengers in a number of different formats and on a number of different devices such as mobile phones and PDAs. This will enable passengers to plan their travel wherever they are – they will no longer need to be at the bus stop in order to know when the next bus is due.
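The chapter does not describe how the central computer actually computes its estimates, so the following is only a sketch of the general idea: given a GPS-derived position on the route and historical average speeds for the remaining stretches, the time to the stop can be estimated by summing the expected traversal times. All figures and names here are invented.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class RouteLink:
    """A stretch of the route between two consecutive waypoints/stops."""
    length_m: float
    avg_speed_mps: float          # historical average speed on this link, metres per second


def estimate_arrival_seconds(remaining_links: List[RouteLink],
                             progress_on_current_link: float) -> float:
    """Seconds until the bus reaches the stop at the end of the last link.

    `progress_on_current_link` is the fraction (0..1) of the first link already covered,
    as derived from the latest GPS fix.
    """
    if not remaining_links:
        return 0.0
    first, rest = remaining_links[0], remaining_links[1:]
    seconds = (1.0 - progress_on_current_link) * first.length_m / first.avg_speed_mps
    seconds += sum(link.length_m / link.avg_speed_mps for link in rest)
    return seconds


# A bus 40% of the way along a 600 m link, with two further links before the stop.
links = [RouteLink(600, 5.0), RouteLink(450, 6.0), RouteLink(300, 4.0)]
print(round(estimate_arrival_seconds(links, 0.4)))   # 222 seconds (a little under 4 minutes)
```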
Human Activity Modeling1 In this RESCUE stream (shown as “Activity Modeling” in Figure 1) the project team develops an understanding of the current sociotechnical system to inform specification
of a future system. Human activity modeling focuses on the human users of the technical system, aiming to build understanding of the controllers' current work – its individual cognitive and non-cognitive components and social and co-operative elements, as well as the environment in which it takes place – in order to specify a technical system that can better support that work. It draws on the literature of task analysis (for example, Diaper, 1989), cognitive task analysis (for example, Schraagen et al., 2000) and cognitive work analysis (for example, Vicente, 1999). The stream consists of two sub-processes – data gathering and human activity modeling. During the first sub-process (shown as "gather data on human processes" in Figure 1) data about all components of the activity model is gathered and recorded. Techniques to gather this data include observation of current system use; informal scenario walkthroughs, using scenarios representative of how the current system is used; interviews with representative human users; analysis of verbal protocols or recordings of users talking through scenarios or tasks; and ethnographic techniques. In the second sub-process (shown as "model human activity" in Figure 1) the project team creates a "human activity model" by generating a number of "human activity descriptions" corresponding to each of the major types of activity in the current system. An activity model is a repository of information about various aspects of the current system, including the following (a minimal data-structure sketch follows the list):
• Goals: desired states of the system
• Human actors: people involved in the work of the system
• Resources: means that are available to actors to achieve their goals
• Resource management strategies: how actors achieve their goals with the resources available
• Constraints: environmental properties that affect decisions
• Actions: undertaken by actors to solve problems or achieve goals, and
• Contextual features: situational factors that influence decision-making.
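As noted above, a minimal data-structure sketch of such a repository is shown below (Python; the field names are our paraphrase of the template, not an official schema), using the passenger activity of making travel decisions as the example.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Action:
    description: str
    physical: List[str] = field(default_factory=list)
    cognitive: List[str] = field(default_factory=list)
    communicative: List[str] = field(default_factory=list)


@dataclass
class HumanActivityDescription:
    name: str
    actors: List[str]
    goals: List[str]
    resources: List[str]
    resource_management_strategies: List[str]
    constraints: List[str]
    contextual_features: List[str]
    normal_course: List[Action]


make_travel_decisions = HumanActivityDescription(
    name="Make travel decisions",
    actors=["Passenger"],
    goals=["Passenger gets to destination"],
    resources=["Countdown indicator"],
    resource_management_strategies=[
        "Passenger checks the indicator every few minutes while waiting at the bus stop"],
    constraints=[],
    contextual_features=["In wet weather the passenger may decide not to wait for a bus"],
    normal_course=[Action(
        description="The passenger seeks bus information from the Countdown indicator",
        physical=["passenger looks at Countdown indicator"],
        communicative=["passenger reads Countdown indicator"],
        cognitive=["passenger recognises the relevant route numbers",
                   "passenger remembers the expected arrival times"])],
)
print(len(make_travel_decisions.normal_course))   # 1
```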
Further information may be found in Maiden and Jones (2004a). Extracts from a human activity description relating to the passenger activity of making travel decisions in the current Countdown system are shown in Figure 3.

Figure 3. Extracts from a human activity description for the Countdown system: Make travel decisions
  Author: ……   Date: ……   Source: …..
  Actors: Passenger
  Precis: Passengers at the bus stop want to know when buses will arrive.
  Goals: Passenger gets to destination. …….
  Semantic knowledge: Passengers know which route will take them closest to their destination; ……
  Triggering event: Passenger seeks bus information from the Countdown indicator.
  Preconditions: Passenger is at bus stop. ……
  Assumptions: Passenger has normal eyesight.
  Normal Course:
    1. The passenger seeks bus information from the Countdown indicator.
       Resources: Countdown indicator
       Physical actions: passenger looks at Countdown indicator
       Communication actions: passenger reads Countdown indicator
       Cognitive actions: passenger recognises which route number(s) will take them closest to their destinations; passenger remembers expected arrival time(s) for bus(es) on route(s) of interest
       Resource management strategies: passenger checks information on the indicator every few minutes while waiting at the bus stop
    2. The Countdown indicator shows the bus information for the relevant route(s). ……
  Differences due to variations: 1. For passengers with mobility restrictions, passenger recognises which route number(s) has mobility buses ……
  Differences due to contextual features: 4. If wet weather, then passenger may decide not to use a bus if expected waiting time is too long ……
  Constraints: N/A

As can be seen from the figure, the human activity description template provides placeholders for each of the types of information identified above. It has been designed in a similar way to our use case description template (shown in Figure 8) in which we describe the desired behaviour of the future system so that information about how the current system supports the work of human actors can be used quite easily to help develop and check proposals for the future system. Note that actions in the normal course of the human activity description are broken down into their physical, cognitive, and communicative components. This information is used in generating scenarios to walk through as described later in the chapter. Activity models developed in this way provide important sources of data for the development of use case models and use case authoring, as well as data for fit criteria for system requirements. In writing these activity descriptions, the team also obtains a better understanding of the work and application domains, which is essential for effective requirements acquisition.
System Modeling In this RESCUE stream (shown as “system goal modeling” in Figure 1) the project team models the future system’s actors (human and otherwise), dependencies between these actors and how these actors achieve their goals, to explore the boundaries, architecture, and most important goals of the sociotechnical system. RESCUE adopts the established i* approach (Yu, 1997) but extends it to model complex technical and social systems, establish different types of system boundaries, and derive requirements. The system modeling stream requires three analyses to produce three models. The first model, generated in the first sub-process of this stream (“determine system boundaries” in Figure 1) is a context model, similar to that used in the REVEAL and VOLERE processes (see Hall, 2001; Robertson & Robertson, 1999) but extended to show different candidate boundaries for:
• The technical systems, expressed in terms of software and hardware actors within the inner boundary (in the case of the Countdown system shown in Figure 4, the technical system is made up of two sub-systems);
• The redesigned work system, expressed primarily in terms of human actors, within the middle boundary (in Figure 4 there are two human actors who interact directly with technical sub-systems and one technical actor, which also receives data directly from one of the technical sub-systems);
• Other actors that are strongly influenced by the redesign of the new system, although they do not interact directly with it, are shown within the outer boundary (in Figure 4 the only actors of this kind are passengers); and
• Systems that interact with elements of the new sociotechnical system but are not strongly influenced by its redesign are shown outside the outer boundary (in Figure 4 this includes the GPS system, which provides data about bus locations, the communication system used in communications between drivers and route controllers, and the London Transport central computer system).

Figure 4. Context model for the Countdown system
A completed example of a context model for the Countdown system is shown in Figure 4. The second model is the i* Strategic Dependency (SD) model, which describes a network of dependency relationships among actors identified in the context model. A first cut of this model is produced in the second sub-process of the system goal modeling stream (“determine system dependencies, goals and rationale” in Figure 1) and then refined in the third sub-process (“refine system dependencies, goals and rationale”). In an SD
Figure 5. The Countdown Strategic Dependency Model
model, a depender can depend upon a dependee to achieve a goal, undertake a task, obtain or use a resource, and achieve a soft goal in a particular way. For further explanation of the i* notation see Yu (1997). An SD model showing the main dependencies between actors relevant to the Countdown system is shown in Figure 5. The third model is the i* Strategic Rationale (SR) model, which provides an intentional description of how each actor achieves its own goals and soft goals. First cut and refined versions of this model are developed in the second and third sub-processes of the system goal modeling stream as described above. The SR model includes goals, tasks, resources, and soft goals from the SD model, as well as task decomposition links, means-end links, and contributes-to-soft goal links that provide a more detailed view of each individual actor’s behaviour (Yu, 1997). An SR model for the Countdown system is shown in Figure 6. To support i* modeling, we have developed REDEPEND, a graphical modeling tool developed as a plug-in to Microsoft Visio that enables the team to construct and analyse i* SD and SR models (Maiden, Pavan, Gizikis, Clause, Kim, & Zhu, 2002). This stream provides key inputs to use case modeling. Context and i* models define the system boundaries that enable use case modeling and authoring. Furthermore the i* SR models provide a basis for validating use case descriptions prior to scenario walkthroughs.
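Readers unfamiliar with i* may find it helpful to see an SD model reduced to its bare data: a set of depender/dependee pairs, each labelled with a dependum of one of four kinds. The sketch below paraphrases a few of the Countdown dependencies for illustration only; it is not how REDEPEND represents models internally.

```python
from dataclasses import dataclass
from enum import Enum


class Dependum(Enum):
    GOAL = "goal"            # a state of affairs the dependee must bring about
    TASK = "task"            # an activity the dependee must carry out in a particular way
    RESOURCE = "resource"    # an entity the dependee must provide
    SOFTGOAL = "softgoal"    # a quality to be achieved "well enough"


@dataclass(frozen=True)
class Dependency:
    depender: str
    dependee: str
    kind: Dependum
    name: str


# A few Countdown dependencies, paraphrased from the SD model.
sd_model = [
    Dependency("Passenger", "Countdown Display", Dependum.RESOURCE, "Bus Arrival Information"),
    Dependency("Passenger", "Countdown Display", Dependum.SOFTGOAL, "Information be Accurate"),
    Dependency("Route Controller", "AVL System", Dependum.GOAL, "Buses Located"),
]


def depends_on(model, actor):
    """Actors this actor depends on; useful when analysing boundaries and vulnerabilities."""
    return sorted({d.dependee for d in model if d.depender == actor})


print(depends_on(sd_model, "Passenger"))   # ['Countdown Display']
```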
Creativity Workshops In this RESCUE process the team carries out some high-level creative design activities in parallel with on-going requirements work. This process, which takes inputs from, and provides outputs to, sub-processes in each of the RESCUE streams, is shown as 'creative design workshops' in Figure 1. Further information about how creativity workshops are run can be found in Maiden and Jones (2004a) and Maiden, Manning, Robertson, and Greenwood (2004). A brief summary is provided below. Workshop activities were designed based on three reported models of creativity from cognitive and social psychology. First we design each workshop to support the divergence and convergence of ideas described in the CPS model (Daupert, 2002). As such each workshop period, which typically lasts half a day, starts from an agreed current system model, diverges, then converges toward a revised agreed model that incorporates new ideas at the end of the session. Second we design each workshop period to encourage one of three basic types of creativity identified by Boden (1990) – exploratory, combinatorial, and transformational creativity. Third we design each period to encourage four essential creative processes reported in Poincare (1982): preparation, incubation, illumination, and verification. Exploratory creativity is encouraged by asking stakeholders to reason about the future system using analogies from different domains such as textile design and musical composition, and combinatorial creativity is triggered by random idea generation and parallels with, for example, fusion cooking. The use case model and descriptions are used as essential inputs and structuring mechanisms for all new requirements and design ideas. Throughout a workshop each use case is displayed on a separate pin board. The facilitators instruct participants that all new requirements and other ideas generated
during the workshop should be related to one or more use cases, indicated by the posting of the requirement or idea on the relevant board. After each half-day session, the use cases, requirements, and ideas are reviewed, leading to some rewriting of the use cases prior to the next session. As such the outputs from the workshop are better structured to enable a RESCUE project team to write detailed use case descriptions more effectively. This concurrent design process benefits the requirements process in two ways. First the candidate design space reduces the number of requirements to consider by rejecting requirements that cannot be met by current technologies. Second high-level decisions about a system’s boundaries enable the team to write more precise use cases and generate more precise scenarios that, in turn, enable more effective requirements acquisition and specification.
Scenario-Driven Walkthroughs In this RESCUE stream (shown as “use case modeling” in Figure 1) the team develops a use case model, writes use case descriptions, then generates and walks through scenarios to discover and acquire stakeholder requirements. We have applied results from the European Union-funded CREWS project (Sutcliffe et al., 1998) to provide method guidance for use case authoring, software tools for scenario generation and walkthroughs, and rich traceability to link and contextualise requirements in scenarios. There are five sub-processes. The first sub-process (“develop use case model” in Figure 1) employs inputs from the context model (developed in the system modeling stream), to investigate different system boundaries. The outcome is a use case diagram with use cases and short descriptions that are input into use case authoring. An example of a use case diagram for the case study is shown in Figure 7. In the second sub-process (“describe use cases” in Figure 1), the team writes detailed use case descriptions using a structured template derived from use case best-practice (for example, Cockburn, 2000). Use cases are described as in UML, but remembering that they should define interactions between human and other actors at levels two, three, and four of the context diagram, as well as interactions with the technical system. Extracts from a completed use case description relating to the Countdown system are shown in Figure 8. Authoring is guided using use case style and content guidelines from the CREWS-ECRITOIRE method (Ben Achour, Rolland, Maiden, & Souveyet, 1999), temporal semantics expressed as action-ordering rules, and, for our air traffic management (ATM) projects, an extensive lexicon of ATM nouns and verbs elicited from controllers. To write each description the team draws on outputs from the other streams – human activity models, i* strategic dependency and rationale models, stakeholder requirements, and innovative design ideas from the creativity workshops. Once each use case description is complete and agreed, the team produces a parameterised use case specification from it (this is the “specify use cases” sub-process in Figure 1) to generate scenarios automatically using the CREWS-SAVRE software tool (Sutcliffe et al., 1998). Different types of action in the normal course of the use case lead to the
generation of different alternative courses, which will be used to guide the stakeholders to consider how the future system should respond in the event of, for example, cognitive slips or communication errors as described below. A more detailed description of how this is done in the ATM domain is given in Mavin and Maiden (2003).

Figure 6. The Countdown Strategic Rationale Model

Figure 7. Use case diagram for the Countdown system

The fourth sub-process ("walkthrough scenarios" in Figure 1) is pivotal to RESCUE and involves walking through each generated scenario with stakeholders using bespoke software tool support. Each scenario may be delivered for walkthrough in two forms – either through the Web-based Scenario Presenter tool shown in Figure 9 or as an interactive Microsoft Excel spreadsheet that can be downloaded by the session facilitator. Facilitators walk through the scenario with relevant stakeholders, guided by the Scenario Presenter, to consider each normal course event and each alternative course linked to that normal course event in turn. The same scenario may be considered in a number of different "walkthrough contexts" in which stakeholders are asked to make different assumptions about the human or environmental context in which it takes place; for example, considering how passengers may act differently at night or in bad weather. A scribe uses the tool to document all requirements and comments relating to each event. For example, when considering the event "The passenger looks at the Countdown display" and the alternative "What if passenger has some unusual physical characteristics that affect his/her behaviour during this action?" the scribe is asked to add a new functional requirement that "The Countdown system shall provide an audio facility," as shown in Figure 9. Further guidance on how to conduct a scenario walkthrough can be found in Maiden and Jones (2004a) and Maiden (2004).
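The mechanics of pairing normal-course actions with generic alternative-course questions can be sketched very simply. The fragment below is a hypothetical illustration of that idea; the action classes and prompt wordings are ours, not those built into CREWS-SAVRE or the Scenario Presenter.

```python
# Each class of normal-course action suggests generic "what if" alternative courses
# for the facilitator to put to stakeholders during the walkthrough.
ALTERNATIVE_COURSE_PROMPTS = {
    "human_perception": [
        "What if the actor has unusual physical characteristics that affect this action?",
        "What if environmental conditions (night, bad weather) make the information hard to perceive?",
    ],
    "human_cognition": [
        "What if the actor makes a cognitive slip during this action?",
    ],
    "machine_communication": [
        "What if the communication fails or is delayed?",
    ],
}


def generate_walkthrough_prompts(normal_course):
    """normal_course: list of (event text, action class) pairs -> list of (event, question)."""
    prompts = []
    for event, action_class in normal_course:
        for question in ALTERNATIVE_COURSE_PROMPTS.get(action_class, []):
            prompts.append((event, question))
    return prompts


use_case_uc5 = [
    ("The passenger looks at the Countdown display", "human_perception"),
    ("The passenger reads the bus information from the Countdown display", "human_cognition"),
]
for event, question in generate_walkthrough_prompts(use_case_uc5):
    print(f"{event} -- {question}")
```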
Figure 8. Extracts from a use case description for the Countdown system: Provide information for travel decisions
  Use Case ID: UC5
  Author: …..   Date: …..   Source: …..
  Actors: Passenger, Countdown display, AVL system
  Problem statement (now): Arrival information is only available at bus stops, which means that some travel decisions can only be made at that time.
  Precis: The passengers will be able to use various different types of Countdown displays to find out the arrival times of buses.
  Functional Requirements: FR23: All types of Countdown display shall provide the passenger with bus arrival information …..
  Non-functional Requirements: AR4: All types of Countdown display shall be available at all times …..
  Added Value: Passengers will be able to access bus arrival times from various locations, not just at bus stops.
  Justification: Passengers will be more likely to use buses if they are able to access bus arrival information from a range of locations.
  Triggering event: The passenger seeks bus information from the Countdown display
  Preconditions: The Countdown display is functioning correctly
  Assumptions: Passenger has normal eyesight
  Successful end states: The use case is successful if the passenger receives the required bus information
  Unsuccessful end states: The use case is unsuccessful if the passenger does not receive the required bus information
  Different walkthrough contexts: Wet weather …..
  Normal Course:
    1. The passenger looks at the Countdown display
    2. The Countdown display shows the bus information for the relevant route(s)
    3. The passenger reads the bus information from the Countdown display
    …..
  Variations:
    • If the Passenger uses a PDA or mobile phone to access the Countdown display, then
      3.1. The Passenger enters the start and destination locations of his or her desired journey
      3.2. The Countdown system validates the locations
      3.3. The Countdown system identifies the relevant routes
      3.4. The Countdown system displays the relevant bus routes and the arrival times for buses due within the next hour
  Alternatives: N/A
The final sub-process (“impact analysis” in Figure 1) uses a sample of these scenarios in order to investigate how the system, as specified, will impact key social and environmental factors such as job security, actor roles and responsibilities, and access to information. This is done in a series of impact inspection meetings using ‘reading techniques’ such as those identified in Travassos et al. (2002). Questions about the potential impact of the proposed future system have been identified based on work by Heath, Jirotka, Luff, & Hindmarsh (1993), Hughes, O’Brien, Rodden, & Rouncefield (1997) and Viller and Sommerville (1999). The main outcome of this stream is a set of more complete requirements that can be traced to the originating scenario, and hence use case, and specified in context to remove ambiguity and make them more testable.
Figure 9. Screen from the Scenario Presenter tool
Managing Requirements In this fourth RESCUE stream (“requirements management” in Figure 1) the project team documents, manages, and analyses requirements generated from the other three streams. Each requirement is documented using a modified version of the VOLERE shell (Robertson & Robertson, 1999), a requirement-attribute structure that guides the team to make each requirement testable according to its type. Use cases and scenarios are essential to making requirements testable. Each new requirement is specified either for the whole system, one or more use cases of that system, or one or more actions in a use case. This RESCUE requirement structure links requirements to use cases and use case actions and places them in context, thus making it much easier to write a measurable fit criterion for each requirement. This use case-driven requirement structure carries over into the requirements document itself, to improve both the readability of the document and the understandability of each requirement statement. The document is divided into a series of use case descriptions using the RESCUE use case template, with requirement statements inserted into normal and alternative course descriptions next to the relevant use case and use case actions, as shown in Figure 10.
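A simplified picture of the requirement structure described above is sketched below. The field names are invented for illustration (the real VOLERE shell has more attributes); the example content is taken from the Countdown extracts in Figure 10.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Requirement:
    """A much-simplified, VOLERE-style requirement shell with invented field names."""
    identifier: str
    kind: str                    # e.g. "functional", "reliability"
    description: str
    fit_criterion: str           # the measurable test that the requirement has been met
    rationale: str = ""
    use_cases: List[str] = field(default_factory=list)         # traces to whole use cases...
    use_case_actions: List[str] = field(default_factory=list)  # ...and to individual actions


rr2 = Requirement(
    identifier="RR2",
    kind="reliability",
    description="Estimated bus arrival times shall be accurate.",
    fit_criterion="95% of estimated bus arrival times shall be correct to within 30 seconds.",
    rationale="Passengers make travel decisions based on the displayed arrival times.",
    use_cases=["UC5 Provide information for travel decisions"],
    use_case_actions=["UC5/2 The Countdown display shows the bus arrival information"],
)
print(rr2.fit_criterion)
```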
Figure 10. Extracts from the Countdown requirements document
  1. The passenger looks at the Countdown display
     FR28: The Countdown system shall provide an audio facility
  2. The Countdown display shows the bus arrival information for the relevant route(s)
     FR3: The Countdown system shall display the order in which the next three buses will arrive at the journey starting point, the number of each bus, the destination of each bus, and the estimated time until the arrival of each bus
     RR2: 95% of estimated bus arrival times shall be correct to within 30 seconds
In ATM projects carried out to date, RESCUE requirements have been documented using Rational’s RequisitePro. Outputs from other streams, such as use case, context, and i* models are all included in the document. The team also applies the VOLERE Quality Gateway (Robertson & Robertson, 1999) to all requirements to be entered into the document. One member of the team is allocated to play the Gatekeeper role, asking a number of questions of each requirement to ensure that only complete and correct requirements enter the document. Questions seek to establish whether the requirement is testable, viable, solution-independent, and of value to stakeholders.
Managing the RESCUE Process
RESCUE is a complex process and depends crucially on managing the activity carried out in the different streams, to ensure consistency between the different artefacts shown above and to ensure that work in each stream can draw on the others as needed. Central to this are the checks carried out at the five synchronization points identified at the beginning of this section and shown in Figure 1. In overview these checks are as follows; further information and examples can be found in Maiden and Jones (2004b).
At the end of stage one, data about human activities and the context model are used to check the completeness and correctness of the use case model. System-level requirements are used to check use case summaries.
Most cross-checking is done at stage two, in order to bring the human activity and first-cut i* models to bear on the development of correct and complete use case descriptions. Components of the human activity descriptions are checked against the i* models and use case descriptions, with particular attention paid to areas where the future system will differ from the existing one. The i* models and use case descriptions are checked against each other, and a first set of requirements is derived from both.
At the end of stage three, use case specifications are checked using the i* models, and the refined i* models are used to check the requirements database. Checks carried out at the end of stages four and five relate solely to the internal structure of the requirements database, as no new artefacts are generated during these stages.
Future Trends
To identify requirements for future sociotechnical systems we need integrated requirements processes that draw on human- as well as techno-centric disciplines. This has implications both for the teams of people carrying out the work and for the tools, techniques, and artefacts that are a part of the processes used. Teams must be drawn from a variety of disciplines such as human-computer interaction, ethnography, cognitive task analysis, and software engineering (see, for example, Viller & Sommerville, 1999). Tools, techniques, and artefacts must enable the intertwining of inputs from each of these disciplines. We believe that our scenarios provide a valuable tool for capturing insights regarding current work practices as well as detailed knowledge about human-computer interaction and integrating them into a framework that is now familiar to most software engineers. To further explore the application of these techniques in new contexts we have recently developed a version of our scenario walkthrough tool for Personal Digital Assistants (PDAs), so that scenario walkthroughs can be done in the work context, thereby linking contextual inquiry and structured walkthrough techniques for requirements discovery (Seyff, Grunbacher, Maiden, & Tosar, in press).
Conclusion
In this chapter we have presented RESCUE, a concurrent engineering approach to requirements specification that combines use cases with a number of different techniques from both software engineering and other disciplines. We have learned from our experience with RESCUE that use cases, when complemented by other techniques – i* modeling, creativity workshops, CREWS-SAVRE scenario walkthroughs, and so forth – do indeed provide a solid foundation for the specification of requirements for complex sociotechnical systems. We do not attempt to capture all of the information derived through this process in any single formalism; rather, we see the strength of our approach as its use of multiple, integrated representations, which support a systematic, analytic, yet creative approach to the development of a final requirements document.
On the strength of our experience to date we argue that RESCUE focuses more attention on the human elements of a sociotechnical system, provides better support for the identification of sociotechnical system boundaries, and embodies a more systematic approach to the discovery of requirements through scenario walkthroughs than other mainstream approaches based on use cases alone.
RESCUE has already been successfully applied in specifying requirements for the CORA-2 system introduced earlier (see Maiden, Jones, & Flynn, 2003a, 2003b), and at the time of writing we have almost completed the application of RESCUE to the specification of requirements for Eurocontrol’s future DMAN Departure Management system, working together with the U.K. National Air Traffic Service. Applying the RESCUE process in CORA-2 led to the generation of an operational requirements document that had the confidence of stakeholders and passed reviews carried out by staff who were outside the CORA-2 requirements team. The requirements document for CORA-2 contained approximately 400 requirements, structured using 22 use cases. Numbers for DMAN are likely to be similar.
Such an approach is not cheap, however. For both CORA-2 and DMAN, considerable training was required before the process could be applied. In each case, the requirements phase of the project has taken around nine months, with two full-time requirements engineers as well as additional effort from stakeholders and domain experts. The process has worked, however, and the investment in training is now yielding a good return as RESCUE is rolled out to two other projects in the ATM domain. We look forward to reporting fully on our experiences in these two projects in due course.
References

Ben Achour, C., Rolland, C., Maiden, N.A.M., & Souveyet, C. (1999). Natural language studies on use case authoring. Proceedings of the 4th IEEE Symposium on Requirements Engineering, 36-43.
Boden, M.A. (1990). The creative mind. London: Abacus.
Cockburn, A. (2001). Writing effective use cases. Addison-Wesley.
Daupert, D. (2002). The Osborn-Parnes creative problem solving manual. Retrieved from www.ideastream.com/create.
Diaper, D. (Ed.) (1989). Task analysis for human-computer interaction. Ellis Horwood.
Hall, A. (2001). A unified approach to systems and software requirements. Proceedings of the 5th IEEE International Symposium on Requirements Engineering, 267.
Haumer, P., Heymans, P., Jarke, M., & Pohl, K. (1999). Bridging the gap between past and future in RE: A scenario-based approach. Proceedings of the 4th IEEE Symposium on Requirements Engineering, 66-73.
Heath, C., Jirotka, M., Luff, P., & Hindmarsh, J. (1993). Unpacking collaboration: The interactional organisation of trading in a city dealing room. Proceedings of the Third European Conference on Computer-Supported Cooperative Work. Kluwer.
Hughes, J., O’Brien, J., Rodden, T., & Rouncefield, M. (1997). Designing with ethnography: Making work visible. Proceedings of DIS’97, 147-159.
Jarke, M., & Kurki-Suonio, R. (1998). Guest editorial: Introduction to the special issue on scenario management. IEEE Transactions on Software Engineering, 24(12), 1033-1035.
Leveson, N., de Villepin, M., Srinivasan, J., Daouk, M., Neogi, N., Bachelder, E., Bellingham, J., Pilon, N., & Flynn, G. (2001). A safety and human-centred approach to developing new air traffic management tools. Proceedings of the Fourth USA/Europe Air Traffic Management R&D Seminar.
Maiden, N.A.M. (2004). Systematic scenario walkthroughs with ART-SCENE. In I. Alexander & N.A.M. Maiden (Eds.), Scenarios in practice. John Wiley.
Maiden, N.A.M., & Gizikis, A. (2001). Where do requirements come from? IEEE Software, 18(4), 10-12.
Maiden, N.A.M., & Jones, S. (2004a). An integrated user-centred requirements engineering process, Version 4.1. RESCUE process project document.
Maiden, N.A.M., & Jones, S. (2004b). RESCUE process: Examples, Version 2.1. RESCUE process project document.
Maiden, N.A.M., & Rugg, G. (1996). ACRE: Selecting methods for requirements acquisition. Software Engineering Journal, 11(3), 183-192.
Maiden, N.A.M., Jones, S., & Flynn, M. (2003a, June 23-27). Innovative requirements engineering applied to ATM. Proceedings of ATM 2003, 5th USA/Europe R&D Seminar, Budapest.
Maiden, N.A.M., Jones, S., & Flynn, M. (2003b, September 8-12). Integrating RE methods to support use case based requirements specification. Proceedings of the 11th International Requirements Engineering Conference, Monterey.
Maiden, N.A.M., Manning, S., Robertson, S., & Greenwood, J. (2004). Integrating creativity workshops into structured requirements processes. To appear in Proceedings of DIS2004.
Maiden, N.A.M., Pavan, P., Gizikis, A., Clause, O., Kim, H., & Zhu, X. (2002, September 9-10). Making decisions with requirements: Integrating i* goal modelling and the AHP. Proceedings of the REFSQ’2002 Workshop, Essen, Germany.
Mavin, A., & Maiden, N.A.M. (2003). Determining sociotechnical systems requirements: Experiences with generating and walking through scenarios. Proceedings of the 11th International Conference on Requirements Engineering.
Neill, C., & Laplante, P. (2003). Requirements engineering: The state of the practice. IEEE Software, 20(6), 40-45.
Poincaré, H. (1982). The foundations of science: Science and hypothesis, the value of science, science and method. University Press of America.
Robertson, S., & Robertson, J. (1999). Mastering the requirements process. Addison-Wesley Longman.
Rolland, C., Souveyet, C., & Ben Achour, C. (1998). Guiding goal modeling using scenarios. IEEE Transactions on Software Engineering, 24(12), 1055-1071.
Santander, V., & Castro, J. (2002). Deriving use cases from organisational modeling. Proceedings of the IEEE Joint International Conference on Requirements Engineering, 32-39.
Schraagen, S., Ruisseau, J., Graff, N., Annett, J., Strub, M., Sheppard, C., Chipman, S., Shalin, V., & Shute, V. (2000). Cognitive task analysis (Tech. Rep. No. 24). North Atlantic Treaty Organisation, Research and Technology Organisation.
Seyff, N., Grunbacher, P., Maiden, N.A.M., & Tosar, A. (In press). Requirements engineering tools go mobile. Submitted for publication.
Sutcliffe, A.G., Maiden, N.A.M., Minocha, S., & Manuel, D. (1998). Supporting scenario-based requirements engineering. IEEE Transactions on Software Engineering, 24(12), 1072-1088.
Vicente, K. (1999). Cognitive work analysis. Mahwah, NJ: Lawrence Erlbaum Associates.
Viller, S., & Sommerville, I. (1999). Social analysis in the requirements engineering process: From ethnography to method. Proceedings of the 4th IEEE International Symposium on Requirements Engineering, 6-13.
Weidenhaupt, K., Pohl, K., Jarke, M., & Haumer, P. (1998). Scenario usage in systems development: A report on current practice. IEEE Software, 15(2), 34-45.
Yu, E. (1997). Towards modelling and reasoning support for early-phase requirements engineering. Proceedings of the Third IEEE International Symposium on Requirements Engineering, January 6-8, Washington, D.C. IEEE Computer Society Press, 226-235.
Endnote
1. Note that the term “human activity modeling” (or sometimes “activity modeling” for short) as used here is distinct from the term “activity modeling” as used by the HCI community in relation to design reasoning.
Chapter XVI
Using Scenarios and Drama Improvisation for Identifying and Analysing Requirements for Mobile Electronic Patient Records
Inger Dybdahl Sørby, Norwegian University of Science and Technology, Norway
Line Melby, Norwegian University of Science and Technology, Norway
Gry Seland, Norwegian University of Science and Technology, Norway
Abstract
This chapter presents two different techniques for elicitation and analysis of requirements for a mobile electronic patient record (EPR) to be used in hospital wards. Both techniques are based on human-centred and participatory design principles. The first technique uses observational studies as a basis for identifying and analysing requirements for a mobile EPR. The observations are structured and systematised through a framework. The second technique is named “Creative system development through drama improvisation”, and it enables users (in this case healthcare professionals) to contribute to the requirements engineering (RE) process by acting out everyday work situations in one-day workshops. Both techniques presented in this chapter focus on user requirements elicitation, and we believe that they are promising and complementary contributions to more traditional requirements elicitation and analysis methods, not only for hospital information systems but for a wide variety of complex, sociotechnical systems.
Introduction
Advanced clinical information systems have great potential for systematising and structuring the large amounts of information that exist in modern hospitals. At the same time these systems may also simplify and coordinate the endless streams of communication that take place. A well-designed system has to be intuitive, effective, and flexible enough to meet the specific information and communication needs of a wide range of healthcare professionals. However, the high information intensity and the complexity of the organisation make the system design process particularly challenging. Both the social features of current work practice and the technical features of the system have to be considered when performing requirements gathering and analysis (Reddy, Pratt, Dourish, & Shabot, 2003). One approach to such sociotechnical requirements analysis is to involve users more actively in the design process through methods such as participatory design.
In this chapter we introduce and discuss two different techniques for elicitation and analysis of requirements for a mobile electronic patient record (EPR) to be used in hospital wards. Both techniques are based on human-centred and participatory design principles, and they have been developed and used as parts of the MOBEL1 (Mobile Electronic Patient Record) project at the Norwegian University of Science and Technology (NTNU).
An EPR is a computer system designed to support clinicians by providing accessibility to complete and accurate patient data. It may also include alerts, reminders, clinical decision support systems, links to medical knowledge, and other aids (Coiera, 2003; Dick, Steen, & Detmer, 1997). Numerous EPR systems exist, most of them developed for stationary computers, but also for various other devices such as handheld computers.
The first of the techniques presented in this chapter uses observational studies as a basis for identifying and analysing requirements for a mobile EPR. Observational studies are frequently used within the social sciences, and during the last decades computer science researchers have also acknowledged such methods as useful for understanding the complexity of organisations and the various information needs of different users. Yet system developers may not always be able to transform rich observations into requirements and design decisions. In this chapter we present a framework for structuring and formalising scenarios obtained from observational studies at a hospital ward. Further, we discuss how the outcome of characterising these scenarios may be used to produce requirements for a mobile electronic patient record.
The second technique, Creative system development through drama improvisation, has been introduced by product designers and software engineers as a method for developing and testing ideas for functionality. However, most of them have only reported results of the method without providing any detail on how the drama sessions were actually performed. We have developed and tested a procedure for how healthcare professionals can contribute to the requirements engineering (RE) process by acting out everyday work situations at a hospital ward. The procedure description is accompanied by a presentation of the findings, including the advantages and limitations of the technique.
The next section of this chapter focuses on the hospital as a complex organisation and hence a challenging site for introducing new information and communication systems. We briefly address how traditional RE methods fall short in integrating social processes and work practices in system development. Furthermore, we discuss the tradition of user-centred design and some approaches to requirements elicitation methods for system development in complex organisations such as hospitals. This is followed by a presentation of the two different techniques used in the MOBEL project and a discussion of the advantages and disadvantages of both techniques. Finally, we suggest how these methods may be used as a supplement to traditional requirements elicitation methods when developing complex sociotechnical systems.
Background
“Traditional” RE methods have previously focused on system functionality, based on the assumptions that the application domain is stable, that information is fully available and known, and that most work consists of formal, routine processes (Reddy et al., 2003). This view is about to change, as system designers become more aware of the importance of including social and organisational processes if they want their systems to be successfully adopted into complex organisations. Air traffic control, underground subway control systems, and financial systems are examples of areas where sociotechnical approaches to requirements analysis have been used successfully (Reddy et al., 2003).
Nonetheless, traditional requirements analysis is still predominant in the area of clinical healthcare. A great number of costly clinical systems and projects have failed (see, for example, Heath & Luff, 2000; Heath, Luff, & Svensson, 2003; Sicotte, Denis, & Lehoux, 1998), one of the most common reasons being the lack of sufficient requirements analysis. This implies the need for a more thorough requirements analysis and elicitation phase, taking into account both sociological and organisational aspects of clinical work. So far only a few researchers have reported using sociotechnical requirements analysis in this application area (Berg, 1999; Berg, Langenberg, v.d. Berg, & Kwakkernaat, 1998; Heath & Luff, 1996).
The Hospital as a Complex Organisation
Today’s hospitals are highly specialised and differentiated organisations. Dedicated departments and services have required an expansion of physical facilities, reallocation of workers, and the integration of new skilled personnel into a continuously changing division of labour. This has in turn led to the establishment of complex relationships among a multiplicity of hospital services and departments (Strauss, Fagerhaug, Suczek, & Wiener, 1997). This puts strong demands on coordination and collaboration between different specialist departments and also between the different professions in the hospital.
Scheduling, coordination, and communication in hospitals take place through a wide array of sources: electronic, paper-based, and oral. Even in “paperless” hospitals, the EPR is often supplemented by paper-based systems, and such a mixture of systems may cause several problems. A major problem with paper-based systems is that there is often only one copy of each document, and consequently it can only be used at one place and for one purpose at a time. Paper-based systems are not synchronised with the EPR, which might lead to errors and omissions. Furthermore, different groups of healthcare workers have their own documentation systems, which implies that important information is stored in different places. Providing this information to all groups of health personnel, by improving accessibility, is an important task.
Replacing paper-based systems with computer systems might solve the problem of unavailable and unsynchronised information and also enhance the quality of care by providing healthcare personnel more quickly with information they currently collect from different sources. However, designing such systems brings about at least two important challenges:
1. How to decide what kind of information health personnel need and consequently what to include in the system. Health personnel have multiple and diverse information needs, and to be able to design a functional system, it is vital to understand their information needs in relation to different tasks and contexts.
2. How to present the information. Health personnel are mobile workers, and they therefore need mobile information systems. Mobile devices such as handheld computers offer information access at the point of care, but the limited screen size and poor input facilities place strong demands on the presentation and navigation of the information. The lack of good user interfaces has also been identified by several other researchers as a major impediment to the acceptance and routine use of many types of computing systems in healthcare (see, for example, Patel & Kushniruk, 1997).
Human-Centred Design
To face the challenges of designing and developing user-friendly and efficient computer applications for healthcare organisations, it is necessary to know and understand the context of use. This is one of the main activities in human-centred design, an approach to interactive system development that focuses specifically on making systems usable (EN ISO 13407, 1999). Figure 1 shows the main components of the human-centred system development cycle.
Figure 1. Human-centred design activities (from EN ISO 13407, 1999): (1) identify need for human-centred design; (2) understand and specify the context of use; (3) specify the user and organisational requirements; (4) produce design solutions; (5) evaluate designs against requirements; (6) system satisfies specified user and organisational requirements.
One of the principles of human-centred design is “the active involvement of users and a clear understanding of user and task environments” (EN ISO 13407, 1999, p. 2). This desire to increase and improve user participation by making users more active through acting out everyday situations is the rationale for using drama improvisation as a part of the system development process. Through establishing a common ground, or a “third space”, for communication (Muller, 2002), we consider this approach useful for improving communication between system developers and prospective users of the system. This approach follows the tradition of several research projects in the Scandinavian countries, where role-play and games have been used to create common spaces between software developers and users, from the early efforts in participatory design (Ehn, 1988) to more recent years (Buur & Bagger, 1999). However, few of these methods have been deployed when developing systems for such complex organisations as hospitals. The drama improvisation method relates mainly to activities three, four, and five of the human-centred design approach (see Figure 1).
To be able to gain a thorough insight into and specify the context of use (activity 2), it may be crucial to perform ethnographic or observational studies. These studies are valuable for exploring the nature of a particular phenomenon and gaining detailed insight into an environment (Atkinson & Hammersley, 1994). Anthropologists and sociologists use these techniques extensively (Coiera & Tombs, 1998; Heath & Luff, 2000; Hughes, Randall, & Shapiro, 1993). There exist a great number of ethnographic studies of healthcare organisations in general and of information needs and communication behaviour among healthcare workers (see, for example, Berg, 1999; Berg et al., 1998; Forsythe, Buchanan, Osheroff, & Miller, 1992; Schneider & Wagner, 1993; Symon, Long, & Ellis, 1996). Nevertheless, a remaining challenge is how to utilise this sociological insight in informing design.
One technique for bridging some of the gap from ethnographic studies to design decisions is building narrow or rich scenario descriptions of current work practice situations in order to perform requirements analysis. This has been one of several roles of scenarios in the system development lifecycle (Carroll, 1995; see also Bødker & Christiansen, 1997). A scenario is a description of a process or a sequence of acts in narrative form (Kuutti, 1995). The next section gives an example of how to structure and characterise scenarios obtained from observational studies at a hospital ward.
Observational Studies: Creating a Framework for Structuring and Analysing Scenarios
To be able to produce requirements for a mobile, electronic patient record, our first challenge was to understand how both paper-based and electronic information systems were currently used on the hospital wards. Hence observational studies of physicians’ and nurses’ daily work in three wards at the University Hospital of Trondheim were conducted. One week was spent observing at two of the wards, while a more extensive observational study of four months was conducted in one ward. The observations were supplemented with informal interviews with the health personnel.
Figure 2. Elements of the framework development process: observations, example scenarios, and the characterisation framework, connected by analysis and iteration.
Framework Outline
One of the main purposes of conducting the field studies was to identify scenarios that would improve, change, or even become superfluous by introducing the mobile EPR. As preparation for the observational studies, a set of preliminary attributes that were considered important for structuring and formalising the observations was defined. We also defined a set of values corresponding to the attributes. Next, the observational studies were conducted, and based on the observations a number of example ward scenarios were extracted. Subsequently the example scenarios were characterised by applying the framework. The framework has been developed iteratively, as new observations, scenarios, and characterisations brought the need for changing attributes and outcome values. Figure 2 illustrates the framework development process.
The attributes were prearranged into three main parts: process attributes, input attributes, and outcomes. The process attributes were aimed at depicting the structure of the scenario, for example, whether the composition of the scenario was predetermined and whether the scenario was decomposable. Other process attributes involve the number of actors and roles in the scenario, dependencies and preconditions, formality level (that is, informal/semiformal/formal), information flow, location, and temporal nature of the scenario.
Process attributes: Below are some examples of process attributes with corresponding values and explanations:
Number of participants (1, 2-4, >=5): States the number of participants involved in the scenario. The value “2-4” typically represents a patient care team, while “>=5” represents the ward physicians, nurses, or the entire ward staff.
Number of roles (One, Two, Several): Number of roles represented in the scenario (for example, physician, nurse, enrolled nurse, and so forth).
Scenario nature (Informal, Semiformal, Formal): Denotes the formality level of the scenario.
Regularity (Shift, Daily, Occasionally): Indicates if the scenario takes place every shift, every (week)day, or sporadically.
Scheduling (On the spot, In advance, Well in advance): States to what degree the scenario is planned and scheduled in advance (“Well in advance” indicates more than one day in advance).
Input and Outcome Attributes
Attributes related to input information and outcome include type (for example, whether the information is constructive, for coordination, or motivation), variance, error, exceptions, medium/modality, time, and validity (for example, novelty, longevity, and delay tolerance).
Examples of input information attributes:
Recorded (Personal notes, Informal local practice, Forms, Patient record, Not): Denotes how/if the source(s) of the input information is recorded, such as in personal notes, forms, varying due to informal local practice, and/or in the patient records.
Longevity (None, Short term, Long term): Denotes the lifetime of the recorded input information used in the scenario. “None” relates to oral input information, “Short term” relates to personal notes or other informal practices, and “Long term” indicates permanent storage in the patient record.
Medium/mode (Speech, Text, Picture, Other): Denotes the form of the information brought into the scenario.
Example of an outcome attribute:
Type of produced information (Constructive, Cooperation, Coordination, Socialisation, Negotiation, Motivation): Constructive: the information is used as a decision basis or leads to some performed action; Cooperation: used as a basis for care team work; Coordination*: the practice of encouragement of working relationships between differentiable groups and/or individuals; Socialisation*: the introduction, reinforcement or modification of an organisation’s culture or sub-culture; Motivation*: the increasing of the expenditure of effort, energy, and enthusiasm by members of a group; Negotiation*: a collaboration between two or more parties representing particular interests in specific outcomes where the purpose is ostensibly to achieve these outcomes through a process of discussion and compromise. (*Values from Horrocks, Rahmati, & Robbins-Jones, 1998.)
All the attributes and their corresponding values are described in Sørby, Melby, and Nytrø (in press). As previously mentioned, work activities in hospital wards are characterised by a complex mixture of formal procedures and informal practices, cyclicity, and mobility, and the proposed framework tries to capture all these aspects. The selected attributes were inspired by and related to work in traditional requirements engineering, computer-supported collaborative work (CSCW), human-computer interaction, and sociology (for example, Horrocks et al., 1998; Sørensen, Wang, Le, Ramampiaro, Nygård, & Conradi, 2002).
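As a rough sketch of how this attribute-and-value vocabulary could be captured for consistency checking, the following code records a subset of the framework and validates one characterisation against it. The MOBEL framework is described in the text as a paper-based instrument, so the dictionary keys (including the “(input)” suffixes used to disambiguate attribute names) and the validation function are illustrative assumptions; the attribute values themselves are taken from the examples above and from Table 1.

```python
# Subset of the characterisation framework: attribute -> allowed values
FRAMEWORK = {
    "Number of participants": {"1", "2-4", ">=5"},
    "Number of roles": {"One", "Two", "Several"},
    "Scenario nature": {"Informal", "Semiformal", "Formal"},
    "Regularity": {"Shift", "Daily", "Occasionally"},
    "Scheduling": {"On the spot", "In advance", "Well in advance"},
    "Recorded (input)": {"Personal notes", "Informal local practice", "Forms",
                         "Patient record", "Not"},
    "Longevity (input)": {"None", "Short term", "Long term"},
    "Medium/mode (input)": {"Speech", "Text", "Picture", "Other"},
}

def check_characterisation(characterisation):
    """Report attribute values that fall outside the framework vocabulary."""
    problems = []
    for attribute, values in characterisation.items():
        allowed = FRAMEWORK.get(attribute)
        if allowed is None:
            problems.append(f"unknown attribute: {attribute}")
            continue
        for value in values:   # several valid values may apply to one attribute
            if value not in allowed:
                problems.append(f"{attribute}: '{value}' not in framework vocabulary")
    return problems

# Scenario S2 (ward rounds incident), values as characterised in Table 1
s2 = {
    "Number of participants": ["2-4"],
    "Scenario nature": ["Informal"],
    "Regularity": ["Occasionally"],
    "Scheduling": ["On the spot"],
    "Recorded (input)": ["Patient record"],
    "Longevity (input)": ["Long term"],
    "Medium/mode (input)": ["Text"],
}
print(check_characterisation(s2))  # -> [] (all values belong to the vocabulary)
```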
Characterising Scenarios by Means of the Framework
In Figures 3a, 3b, and 3c three example ward scenarios are presented. An instance of a scenario is here defined as a time-limited process (for an individual patient) in which the cast (people filling roles) does not change and that has an identifiable start, preconditions, end, and outcome.

Figure 3a. Example scenario 1
S1: Pre-rounds conference per patient
The pre-rounds conference is held every weekday prior to the ward rounds. One or more physicians and nurses (typically the head physician, one assistant physician, and the team leader nurse) from the care team discuss the care plans of the patients based on the patient chart, possible new test and/or examination results, and supplementary information from the nurse documentation or undocumented information from the participants of the conference. The nurse has a notebook called the “ward rounds book” in which he or she registers the tasks of the ward secretaries and the nurses; for instance, if there has been a change in the medications of a patient or if a patient is to be discharged or moved to another ward.

Figure 3b. Example scenario 2
S2: Ward rounds incident: seeking new test results
One of the patients wants to know his haemoglobin percentage. The nurse returns to the office to check the latest laboratory answers, but due to a mistake the test was not ordered in the morning. The consequence is that the patient has to take an additional blood sample, and the physician has to remember to check the result of the test when it becomes available.

Figure 3c. Example scenario 3
S3: Medication – per patient
One of the nurses in the patient care team uses information from the patient chart to put today’s medications for the ward patients onto a medicine tray. Later the nurse in charge inspects the medicine tray to ensure that the medicines correspond to the recorded information on the patient chart.

Based on observable scenario attributes and subjective participant statements, each scenario has been characterised by applying the framework presented earlier in this section. Table 1 shows the result of applying the framework to the example scenarios S1-S3. “N/A” (not applicable) indicates that the attribute is irrelevant to the scenario in general or as a consequence of the value(s) of previous attributes. For some of the scenarios, several valid values apply to a number of the attributes.
Table 1. Examples of applying the framework to ward scenarios

Attribute | S1 | S2 | S3

Process:
Number of participants | 2-4 | 2-4 | 2-4
Number of roles | Several | Several | Two
Number of role levels | Several | Several | Two
Composition | Predetermined | Ad-hoc | Predetermined
Decomposition | No | No | Yes
Scenario nature | Semi-formal | Informal | Formal
Regularity | Daily | Occasionally | Daily
Scheduling | Well in advance | On the spot | On the spot
Variance of required information | A lot | No | Somewhat
Location(s) | Predetermined, varying | Multiple | Predetermined, fixed
Spatiality | One place | Two places | One place
Temporality | Synchronous | Asynchronous | Asynchronous
Information exchange | Many-to-many | One-to-many | One-to-many
Initiation | On demand | On demand | On demand, Precondition
Delay tolerance of scenario start | None | None | None

Information input:
Novelty | To some | To all | To some
Recorded | Personal notes, Patient record, Forms | Patient record | Patient record
Longevity | Short & long term | Long term | Short term
Medium/mode | Speech & text | Text | Text
Scope | All | Some | All
Delay tolerance of input information | None | None | None
Explicit | Yes | Yes | Yes
Shared | Yes | Yes | Yes

Outcomes/produced output:
Novelty | To some | To all | To some
Recorded | All types | Patient record, Personal notes | Patient record
Longevity | Short & long term | Long term | Long term
Type of produced information | Constructive, Cooperative, Coordinating | Constructive | Cooperative, Constructive
Medium/mode | Speech & text | Speech & text | Text
Scope | Patient care team members | Patient care team members | Patient care team members
Delegation of responsibility | Predefined | On the spot | Predefined
Delegation of tasks | Predefined | On the spot | Predefined
Delay tolerance | Unknown | None | None
Outcome type known in advance | Yes | No | Yes

Findings: The Scenario Approach
The presented framework is mainly intended for structuring and sorting observations and scenarios from current work situations and establishing a vocabulary for characterising them.
To identify system requirements by means of scenarios, it is necessary to perform a thorough clustering analysis of the resulting characterisation of example scenarios. Various methods exist for this purpose. Contextual design is one example of an approach that adapts ethnographic methods of understanding human behaviour in context (for example, the workplace) and extends these methods to function within traditional software and usability engineering practices (Carroll, 1995).
In this study the manual analysis of applying the framework to a few scenarios indicates that a mobile EPR is beneficial in certain situations, for instance, when the documented decisions and plans are direct results of consulting formalised information from the EPR. Similarly, the mobile EPR seems superfluous in other situations, for example, when a process outcome is short-term operational knowledge. Other findings suggest that even if the overall information needs and communication patterns in the different wards are similar, the use of the patient record varies greatly depending on the individual user, that is, how experienced the user is, how long he or she has been working in the particular ward, and how well known the patients are. This confirmed our assumption that the mobile EPR has to be dynamic and adaptable to various situations and users.
When applying the framework to the example scenarios, we faced several challenges. Since a scenario is an abstraction of many underlying narratives, there is considerable variance from observation to observation, and it is therefore important to try to capture and describe this variance as part of a scenario narrative. Seemingly unfinished or inconclusive processes are common, as are deviations from plans or from normal scenarios. These aspects are important for the system design but difficult to capture in the proposed framework. Despite these challenges in modeling the framework, we believe that the final framework may serve as a constructive tool both before and during system design.
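To suggest what a simple first step of such a clustering analysis might look like, the sketch below computes a pairwise similarity between the characterised scenarios as the share of attributes on which they agree (a simple matching coefficient). This is an assumed illustration rather than the contextual design procedure cited above; the attribute values are taken from Table 1, and a real analysis would weight attributes and handle multi-valued cells more carefully.

```python
from itertools import combinations

# A few single-valued attribute values per scenario, taken from Table 1
characterisations = {
    "S1": {"Composition": "Predetermined", "Scenario nature": "Semi-formal",
           "Regularity": "Daily", "Scheduling": "Well in advance",
           "Temporality": "Synchronous", "Outcome type known in advance": "Yes"},
    "S2": {"Composition": "Ad-hoc", "Scenario nature": "Informal",
           "Regularity": "Occasionally", "Scheduling": "On the spot",
           "Temporality": "Asynchronous", "Outcome type known in advance": "No"},
    "S3": {"Composition": "Predetermined", "Scenario nature": "Formal",
           "Regularity": "Daily", "Scheduling": "On the spot",
           "Temporality": "Asynchronous", "Outcome type known in advance": "Yes"},
}

def matching_coefficient(a, b):
    """Share of common attributes on which two characterisations agree."""
    shared = [attr for attr in a if attr in b]
    if not shared:
        return 0.0
    agree = sum(1 for attr in shared if a[attr] == b[attr])
    return agree / len(shared)

for (name_a, char_a), (name_b, char_b) in combinations(characterisations.items(), 2):
    print(name_a, name_b, round(matching_coefficient(char_a, char_b), 2))
# S1 and S3 score highest here, suggesting they could be grouped when deriving requirements
```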
Creative System Development through Drama Improvisation
The following sections are based on three one-day drama workshops organised at NTNU (Svanæs & Seland, 2004). The main goal of the workshops was to develop a method that involves end-users actively in designing a mobile health information system through scenario building, drama improvisation, and low-fidelity prototyping. The method also enables system developers to gain a better understanding of the users’ domain by observing how healthcare workers stage and act out current and future use scenarios.
Workshop Structure and Contents
The workshops were held in a full-sized model of a future hospital ward. The model contained several partly furnished patient rooms where most of the acting took place. Two groups of three healthcare workers (physicians and nurses) participated in each workshop. Besides the organisers and a few observers, two graduate students in computer science taking roles as system developers also participated in the workshops. In addition, a drama teacher was hired as a facilitator in the first workshop. The system developers were neither involved in the scenario selection nor allowed to suggest design solutions, but during the rehearsal of the scenarios they briefly discussed the scenarios and design solutions with the healthcare workers.
After a brief introduction of the participants, the organisers gave a general introduction to system development processes, to user-centred methods specifically, and to the rationale behind using drama improvisation as a method. After the introduction the participants were introduced to simple warm-up exercises before they were split into two teams. Both teams performed a brainstorming session on communication- and information-rich situations from their hospital ward to identify scenarios to be dramatised later on. Example scenarios were written on Post-it notes and placed on a wall, clustering similar situations (Figure 4). After deciding which situation they would prefer to present, the teams decided upon details such as the exact number of participants and the time and location of the event. The teams rehearsed their scenarios before presenting them to the other participants. Each scenario was presented twice, first as the team had rehearsed it and next with interruptions from the other team. An example of an interruption is that the physician’s pager beeps, and he or she has to leave the room to check the message. The reason for introducing interruptions was to make the participants more used to improvising and changing their well-rehearsed scenario, in addition to obtaining realistic and more diverse situations.
After a short break the organisers presented a brief overview of various mobile technological solutions. The healthcare workers were handed low-fidelity prototypes (foam models) that could be used in the next variants of the scenarios. In the first workshop the participants discussed how they could incorporate this technology into their chosen situations, and they sketched “screen shots” on Post-it notes. In the second and third workshops the participants “developed” their systems as they acted: when seeing a need for some information, they stopped acting and sketched their solutions on Post-it notes attached to the prototypes. Again the teams acted out their scenarios in front of the other team, but this time with “technology” incorporated as a part of the scenario. Figure 5 shows two nurses improvising new work practices using the low-fidelity prototypes. As in the former performances, the groups acted their scenarios both with and without interruptions. At this stage of the workshop, the interruptions were introduced to test the reliability of the suggested solutions.
A plenary gathering where all participants discussed and summarised the workshop concluded the day. Topics that were discussed included the realism of the chosen scenarios, experiences from acting out the scenarios, and various considerations about the proposed technological solutions.
Figure 4. Nurses brainstorming work situations

Figure 5. Nurses acting out future scenario

Findings: Drama Improvisation as Input into the RE Process
To evaluate the drama improvisation method, questionnaires were completed by the participants at the end of each workshop. These questionnaires were supplemented with interviews and discussions with the system developers and the healthcare personnel. In addition, the system developers wrote preliminary reports from the workshops and subsequent requirements specifications. The following sections discuss some important findings from the evaluation of the drama improvisation method.
System Developers’ Understanding of the Domain and the Technological Needs of the Users
One of the most striking features of drama improvisation as a method is its ability to let system developers get a thorough insight into the domain without requiring their actual presence at the work site. The system developers in our workshops found it much easier to understand the domain through the health personnel’s acting than by simply questioning health personnel or otherwise reading or listening to descriptions of work situations. “Watching health personnel ‘working together’, even though fictitious, makes you think about things you previously haven’t considered” (interview with the system developers, 23 May, 2003).
The workshops helped in detecting health personnel’s information needs that the system developers were previously unaware of or thought were already being met. Likewise, the opposite was also the case: in situations where the system developers predicted a need for formal and written information, the health personnel solved their information needs informally, by asking each other. Another issue considered important was the significance of health personnel talking together while acting out the scenario: through their talking they explained and clarified for the system developers what was happening in the scenario.
The system developers were positive and quite impressed by the technological solutions suggested by the health personnel: “I feel that they came up with some pretty clever solutions. And what’s positive is that they came up with it themselves, and then it is more likely that they actually will use it” (interview with the system developers, 23 May, 2003). The users’ suggestions were perceived as a healthy corrective to the system developers’ visions. System developers sometimes tend to design a more sophisticated and advanced system than users really require and want, thus neglecting the users’ actual needs.
Communication between System Developers and Healthcare Workers
Good communication between system developers and future users is of great importance in user-centred design approaches. In our opinion, drama improvisation is a suitable method for facilitating communication and obtaining a common understanding of a system design project. It establishes a common ground, a third space, for both system developers and future users. Since the users are the domain experts and their knowledge and creativity are actively used in the design process, they may feel more conversant with the future system and therefore more willingly accept it.
When nurses and physicians work together with system developers, it is important to create a common language they all understand. “Telling by showing”, as is the case when work situations are dramatised, is a natural way to describe everyday work and is easy and intuitive to understand. Thus drama becomes a common language for the system developers and the healthcare personnel.
Another important point is that simply bringing system developers and future end users, in this case health personnel, together and providing them with time to talk and ask each other questions within an informal atmosphere proved helpful in the process of identifying requirements. Because the participants acted scenes out rather than merely analysing or describing them, a playful atmosphere was created. This resulted in discussions and a lot of interesting information being shared in the breaks between the formal schedules. As the acting sessions took place outside the hospital, the system developers were also able to “freeze” the situations and ask clarifying questions without fear of disturbing real patients.
Creating Requirements Specifications Based on the Workshops
Based on the last two workshops, two preliminary requirements specifications were created. These specifications demonstrate one of the main limitations of the method: some functional requirements were described in detail, while others were missing due to the specific focus in the workshops. This implies that the method has to be supplemented by other RE methods to explore the remaining functional and non-functional requirements of the system.
Another important issue for the outcome of the workshops is the participants’ personal opinions and technological skills. As some participants had strong opinions regarding solutions, they tried to take a leading position in defining the technological needs. The organisers therefore had to make sure that every participant’s opinion was heard. Likewise, the different participants did not always agree on what solution would be the best in their daily work. This led to healthy discussions about the advantages and disadvantages of the different solutions, but it also complicated the resulting requirements specification, as the various solutions had to be considered.
Discussion
The techniques presented in this chapter are both based on human-centred system development, but they contribute in different phases of the human-centred system development cycle. One main difference is that the drama improvisation method is more interactive and the users are directly involved in specifying the requirements of the system, even if this is not explicitly stated. The framework approach, on the other hand, puts strong demands on researchers to “translate” observations into examples of representative scenarios, characterise them via the framework, and then deploy the results in the system design.
In a real hospital setting a wide range of real-life situations can be observed, in contrast to drama improvisation, where a one-day workshop typically includes only two (fictional) situations. Furthermore, the outcome of the workshops depends very much on the individual participants. It is therefore crucial to try to find representative, “average” users. During observational studies all groups of employees can be watched in their daily work. This gives a more complete picture and a better understanding of the context of use. However, it is impossible to “freeze” situations when conducting field studies, and the observers might hold back questions in order to interrupt as little as possible in a busy environment. In the workshops, freezing situations and asking questions were perceived as very useful by the system developers.
We believe that both methods presented in this chapter are promising and complementary contributions to requirements elicitation and analysis, not only for hospital information systems but also for a wide variety of complex, sociotechnical systems. Observational studies are particularly useful for gaining knowledge of the domain, while drama workshops seem especially important in the introductory phase of a project, in order to create a common ground for the system developers and some of the end users of the system. The drama improvisation approach has also proven advantageous when system developers have little or no knowledge of the domain and when it is inconvenient to perform field studies, for instance, when a project involves a large group of system developers. When combining the methods, field studies can be used to identify situations of interest prior to the drama workshops and to validate situations that have been developed during the workshops, and as such they may reinforce each other’s potential.
Both techniques presented in this chapter focus on user requirements elicitation and are not sufficient for producing complete requirements specifications. As previously discussed, the methods are particularly useful for gaining knowledge of the domain in the introductory phase of a system development project, but they must be supplemented by other, traditional methods for requirements gathering and analysis (for example, questionnaires, surveys, interviews, analysis of existing documentation, prototyping, and so forth).
Acknowledgments
We would like to thank the staff at the University Hospital of Trondheim for their cooperation both during the observational studies and by participating in the workshops. Thanks also to Ann Rudinow Sætnan, Øystein Nytrø, and Dag Svanæs for valuable comments and contributions. This research was financed by NTNU Innovation Fund for Business and Industry and Allforsk Research.
References

Atkinson, P., & Hammersley, M. (1994). Ethnography and participant observation. In N.K. Denzin & Y.S. Lincoln (Eds.), Handbook of qualitative research (pp. 248-261). Thousand Oaks: SAGE.
Berg, M. (1999). Patient care information systems and health care work: A sociotechnical approach. International Journal of Medical Informatics, 55(2), 87-101.
Berg, M., Langenberg, C., v.d. Berg, I., & Kwakkernaat, J. (1998). Considerations for sociotechnical design: Experiences with an electronic patient record in a clinical context. International Journal of Medical Informatics, 52, 243-251.
Buur, J., & Bagger, K. (1999). Replacing usability testing with user dialogue. Communications of the ACM, 42, 63-66.
Bødker, S., & Christiansen, E. (1997). Scenarios as springboards in CSCW design. In G. Bowker, S.L. Star, B. Turner & L. Gasser (Eds.), Social science, technical systems, and cooperative work: Beyond the great divide (pp. 217-233). Lawrence Erlbaum Associates.
Carroll, J.M. (Ed.) (1995). Scenario-based design: Envisioning work and technology in system development. John Wiley & Sons, Inc.
Coiera, E. (2003). Guide to health informatics (2nd ed.). London: Arnold.
Coiera, E., & Tombs, V. (1998). Communication behaviours in a hospital setting: An observational study. BMJ, 316(7132), 673-676.
Dick, R.S., Steen, E.B., & Detmer, D.E. (Eds.) (1997). The computer-based patient record: An essential technology for health care (Revised ed.). Washington, D.C.: National Academy Press.
Ehn, P. (1988). Work oriented design of computer artifacts. Stockholm: Arbetslivscentrum (Swedish Center for Working Life).
EN ISO 13407:1999. London: British Standards Institution.
Forsythe, D.E., Buchanan, B.G., Osheroff, J.A., & Miller, R.A. (1992). Expanding the concept of medical information: An observational study of physicians’ information needs. Computers and Biomedical Research, 25, 181-200.
Heath, C., & Luff, P. (1996). Documents and professional practice: ‘Bad’ organisational reasons for ‘good’ clinical records. Proceedings of the ACM Conference on Computer-Supported Cooperative Work.
Heath, C., & Luff, P. (2000). Technology in action. Cambridge: Cambridge University Press.
Heath, C., Luff, P., & Svensson, M.S. (2003). Technology and medical practice [Special issue]. Sociology of Health & Illness, 25, 75-96.
Horrocks, S., Rahmati, N., & Robbins-Jones, T. (1998). The development and use of a framework for categorising acts of collaborative work. Proceedings of the 32nd Hawaii International Conference on System Sciences.
Hughes, J.A., Randall, D., & Shapiro, D. (1993). From ethnographic record to system design: Some experiences from the field. Computer Supported Cooperative Work (CSCW), 1(3), 123-141.
Kuutti, K. (1995). Work processes: Scenarios as a preliminary vocabulary. In J.M. Carroll (Ed.), Scenario-based design: Envisioning work and technology in system development (pp. 19-36). John Wiley & Sons, Inc.
Muller, M.J. (2002). Participatory design: The third space in HCI. In J.A. Jacko & A. Sears (Eds.), The human-computer interaction handbook: Fundamentals, evolving technologies and emerging applications. Lawrence Erlbaum Associates.
Patel, V.L., & Kushniruk, A.W. (1997). Human-computer interaction in health care. In J.H. v. Bemmel & M.A. Musen (Eds.), Handbook of medical informatics. Heidelberg: Springer-Verlag.
Reddy, M., Pratt, W., Dourish, P., & Shabot, M.M. (2003). Sociotechnical requirements analysis for clinical systems. Methods of Information in Medicine, 42(4), 437-444.
Schneider, K., & Wagner, I. (1993). Constructing the ‘dossier représentatif’: Computer-based information-sharing in French hospitals. Computer Supported Cooperative Work (CSCW), 1, 229-253.
Sicotte, C., Denis, J.L., & Lehoux, P. (1998). The computer based patient record: A strategic issue in process innovation. Journal of Medical Systems, 22(6), 431-443.
Strauss, A.L., Fagerhaug, S., Suczek, B., & Wiener, C. (1997). Social organization of medical work. New Brunswick: Transaction Publishers.
Svanæs, D., & Seland, G. (2004). Putting the users center stage: Role playing and low-fi prototyping enable end users to design mobile systems. Paper presented at the Conference on Human Factors in Computing Systems, Vienna, Austria.
Symon, G., Long, K., & Ellis, J. (1996). The coordination of work activities: Cooperation and conflict in a hospital context. Computer Supported Cooperative Work (CSCW), 5(1), 1-31.
Sørby, I.D., Melby, L., & Nytrø, Ø. (In press). Characterizing cooperation in the ward: A framework for producing requirements to mobile electronic healthcare records. International Journal of Health Care Technology and Management.
Sørensen, C.F., Wang, A.I., Le, H.N., Ramampiaro, H., Nygård, M., & Conradi, R. (2002). The MOWAHS characterisation framework for mobile work. Proceedings of the IASTED International Conference on Applied Informatics, Innsbruck, Austria.
Endnote 1
The project includes members from Department of Computer and Information Science, Department of Sociology and Political Science, Department of Telecommunications, Department of Linguistics, and the Faculty of Medicine at NTNU. MOBEL was initiated in 1999 and since 2003 has been part of the Norwegian Centre for Electronic Patient Records (NSEP) in Trondheim.
Chapter XVII
Elicitation and Documentation of Non-Functional Requirements for Sociotechnical Systems Daniel Kerkow, Fraunhofer Institute for Experimental Software Engineering, Germany Jörg Dörr, Fraunhofer Institute for Experimental Software Engineering, Germany Barbara Paech, University of Heidelberg, Germany Thomas Olsson, Fraunhofer Institute for Experimental Software Engineering, Germany Tom Koenig, Fraunhofer Institute for Experimental Software Engineering, Germany
Abstract

This chapter describes how non-functional requirements (NFR) can be elicited and documented in the context of sociotechnical systems. It presents an approach based on use cases and on quality models derived from ISO 9126, and discusses general problems and challenges that arise when working with NFR. Requirements in general, and NFR in particular, are subjective, involve many stakeholders, and are often conflicting. The approach presented includes processes for prioritizing the quality attributes that are important in a specific context, for eliciting NFR, and for identifying and analysing dependencies among the NFR.
The aim is to provide an experience-based approach that facilitates efficient and effective elicitation and documentation of NFR. This is achieved through a structured method that aims at producing measurable, traceable, and focused requirements rather than ad-hoc and ambiguous ones. The approach uses use cases as the main technique, though the general principle of a structured and experience-based process is applicable to other techniques as well.
Introduction

Technology, and the interfaces between technical devices and the persons using them, is becoming a natural part of our lives. The only time it is consciously thought about is when it fails to meet our expectations: for example, a person wants to send a multimedia message via a cell phone, but it takes too long and the connection times out. The time it takes to send the multimedia message confronts the user with efficiency issues. The number of selections required to find a function confronts users with usability issues, and the need to install updates to get a new dictionary confronts them with maintainability issues. In all these respects the system must live up to the users' goals and expectations. If these expectations are not fulfilled, the users will be dissatisfied with the product, which may make it useless or even dangerous for them. While some issues only affect the users' perception of comfort, others have a more severe impact on the users or on their environment. A financial transaction, for example, is very sensitive to security issues. The term "sociotechnical" refers, in the examples above, to the interaction of humans (the users) with a technical device during the usage of a system. This has, of course, an impact on the system development process as well as on the processes in which the software is used. During development there are many decisions to be made with respect to the environment of the software, the software itself, and the software development process. These decisions depend not only on the users' expectations but also on the interests of other stakeholders, such as developers or procurers. It is therefore very important for the requirements engineering activities that these expectations and interests are elicited thoroughly. In this chapter we discuss issues in the elicitation and documentation of so-called non-functional requirements (NFR), which essentially cover all constraints on how a system should achieve its functionality (Kitchenham & Pfleeger, 1996; Menasce, 2002). The ISO Standard 9126 (2001) proposes the following taxonomy:
• Efficiency: The capability of the software product to provide appropriate performance, relative to the amount of resources used, under stated conditions.
• Portability: The capability of the software product to be transferred from one environment to another.
• Maintainability: The capability of the software product to be modified. Modifications may include corrections, improvements, or adaptations of the software product to changes in environment and in functional specifications.
• Functionality¹: The capability of a software product to provide functions that meet stated and implied needs when the software is used under specified conditions.
• Usability: The capability of the software product to be understood, learned, used, and to be attractive to the user when used under specified conditions.
• Reliability: The capability of the software product to maintain a specified level of performance when used under specified conditions.
There exist several different definitions of NFR. Chung, Nixon, Yu, & Mylopoulos (2000) define an NFR as: "... in software system engineering, a software requirement that describes not what the software will do, but how the software will do it, for example, software performance requirements, software external interface requirements, software design constraints, and software quality attributes. NFR are difficult to test; therefore, they are usually evaluated subjectively." This definition is quite fuzzy: it mainly gives examples of types of NFR but fails to classify them. Loucopoulos & Karakostas (1995) present one possible classification of NFR. They distinguish between process, product, and external requirements. Product requirements specify the desired characteristics in terms of quality attributes such as performance and security. Process requirements are constraints on the development process. External requirements are requirements that arise from external sources, either within the company or outside it. While working with various kinds of NFR, we have found that this classification is not sufficient, in particular for product requirements. The ISO Standard 9126 (2001) gives a detailed classification of product requirements (see the section below on basic concepts). However, it does not give specific guidelines on how to specify or elicit the different NFR. The rest of the chapter is structured as follows: First, we introduce basic terminology and discuss related work with respect to NFR. The challenges of requirements engineering for NFR are discussed in the next section. Our method and its application in a case study are presented in the section on eliciting and documenting NFR. In the following section we present lessons learned from applying the techniques. We end with some ideas on future research and a conclusion.
Basic Concepts

Figure 1 shows a meta-model that describes all the concepts of a method that meets the challenges of elaborating NFR. Requirements can be functional, architectural, or non-functional. Functional requirements are concerned with tasks performed by the user or by the system. Architectural requirements constrain the system (or, more precisely, the architecture of the system). NFR concern the organization or tasks of two types: user tasks and system tasks. User tasks are tasks a certain user performs. They are supported by the system (for example, "monitoring of certain machines") and include user involvement. System tasks are tasks the system performs; in contrast to user tasks, they do not involve the user. Tasks can be refined into sub-tasks.
Figure 1. The meta-model. [Class diagram relating Requirement (specialised into Functional Requirement, Architectural Requirement, and Non-functional Requirement), Quality Attribute (specialised into Organization, User Task, System Task, and System Quality Attribute), Task (refined into User Task and System Task), Organization, System, Means, Metric, Value, and Rationale; the relations shown include "refined into", "describes", "constrains", "achieved by", "influences", "measured by", "determines", and "justifies".]
Organization refers to what Loucopoulos & Karakostas (1995) call external- and process-related NFR. Optimizing software engineering processes or adhering to external standards, for example, can improve the quality of a software product; this does not directly affect software artifacts, but it has to be elicited during the requirements engineering process. To support continuous and experience-based reuse, we distinguish quality attributes from NFR. A quality attribute (QA) is an abstract and reusable non-functional characteristic. The distinction between different types of quality attributes is important for our elicitation process, because each type of quality attribute is elicited differently. QA can be refined into further QA. In addition, they can have positive or negative influences on each other. An NFR describes a certain value (or value domain) for a QA that should be achieved in a specific project. To ensure measurable NFR, it is necessary to determine a value for a metric associated with the QA. For example, the NFR "The database of our new system shall handle 1,000 queries per second" constrains the QA "workload of database." The value is determined based on the associated metric "number of jobs per time unit." For each NFR, a rationale states the reasons for its existence. A means is a solution-oriented pattern to achieve a certain set of NFR. In many cases a means describes an architectural option for the system to achieve a certain QA (for example, "load balancing" is used to achieve a set of NFR concerning the QA "workload distribution"). However, a means can also be process-related (for example, "automatic test case generation" can be used to fulfil NFR with respect to reliability).
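To make these concepts concrete, the following sketch encodes the core of the meta-model as simple Python data classes and instantiates the database example from the text. The class and field names, and the example rationale, are our own illustrative assumptions; the chapter prescribes the concepts, not this particular encoding.

```python
# A minimal sketch of the meta-model's core concepts. Class and field names are
# ours; the chapter defines the concepts (QA, metric, NFR, value, rationale, means).
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Metric:
    name: str          # e.g., "number of jobs per time unit"
    unit: str          # e.g., "queries/second"


@dataclass
class QualityAttribute:
    name: str                                   # abstract, reusable characteristic
    kind: str                                   # "organization" | "user task" | "system task" | "system"
    refines: Optional["QualityAttribute"] = None
    influences: List[str] = field(default_factory=list)   # names of QA it positively/negatively affects


@dataclass
class NFR:
    quality_attribute: QualityAttribute
    metric: Metric
    value: str                                  # required value or value domain
    rationale: str                              # why this NFR exists
    means: List[str] = field(default_factory=list)         # solution-oriented patterns, e.g., "load balancing"


# The example from the text: a measurable NFR constraining the QA "workload of database".
workload = QualityAttribute(name="workload of database", kind="system")
queries_per_second = Metric(name="number of jobs per time unit", unit="queries/second")
db_nfr = NFR(
    quality_attribute=workload,
    metric=queries_per_second,
    value="1,000 queries per second",
    rationale="peak load expected from the monitoring clients",  # hypothetical rationale
)
print(db_nfr.quality_attribute.name, "->", db_nfr.value)
```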
Problems and Challenges

A typical challenge in dealing with NFR is that users perceive the quality of a product differently and have different expectations. These expectations are subjective and are often expressed through NFR. Besides the users of a system, there are further stakeholders, such as customers, suppliers, developers, and marketing. Requirements elicited from different sources are often in conflict with each other. For example, domain experts for various types of NFR frequently identify problems in the realization of requirements phrased by a naïve user. Gross & Yu (2001) particularly support the customer with their method for relating business goals to architecture. In the 1990s, Chung et al. (2000) developed the first comprehensive framework that describes relationships between NFR. The NFR framework, including the softgoal notation, provides detailed guidance on how to refine NFR (called soft-goals) and how to denote relationships between them. As the focus is on refinement, no detailed elicitation support or support for documenting NFR as part of a requirements document is given. The intertwined nature of NFR is what makes them special. Considering the impact of NFR on FR and architectural decisions (AD) as early as possible without forcing early design decisions, and finding the right point in time and a suitable way to treat all three of them together, is an important issue. Abowd, Bass, Clements, Kazman, Northrop, & Zaremski (1997); Barbacci, Klein, & Weinstock (1997); Bass, Clements, & Kazman (1998); and Shaw & Garlan (1996) discuss various approaches to coping with architectural issues in detail. Approaches exist that consider the dependencies between NFR and FR (Alexander, 2001; Firesmith, 2003; Petriu & Woodside, 2002; Sindre & Opdahl, 2000; Sutcliffe & Minocha, 1998) and between FR and architecture (Clements, Bass, Kazman, & Abowd, 1995; Egyed, Grünbacher, & Medvidovic, 2001; In, Kazman, & Olson, 2001; Liu & Yu, 2001), respectively. Cysneiros & Leite (2001) describe an approach that combines NFR and use cases (Cockburn, 2001). Use cases and NFR are elicited separately and are then combined to make sure that the use cases satisfy the NFR. Methods such as the Software Architecture Analysis Method (SAAM) (Bass et al., 1998; Clements et al., 1995) and the Architecture Trade-off Analysis Method (ATAM) (Kazman, Klein, & Clements, 1999) combine NFR with AD. Both are scenario-based methods for architecture analysis. The goal of SAAM is to identify whether a software architecture satisfies its modifiability requirements, expressed through scenarios. The outcome of ATAM is the risk that results from conflicting requirements and from requirements that have not been addressed in the architecture. Experiences with SAAM in case studies are presented in Kazman, Bass, Abowd, & Webb (1994) and Kazman, Abowd, Bass, & Clements (1996). Neither method helps to elicit measurable NFR in an early phase, but both are based on a set of scenarios. However, in practice, the elicitation of NFR, FR, and AD has to be intertwined (Moreira, Brito, & Araújo, 2002; Paech, Dutoit, Kerkow, & von Knethen, 2002; Paech, von Knethen, Doerr, Bayer, Kerkow, Kolb, Trendowicz, Punter, & Dutoit, 2003). NFR are not only related to AD and FR; they also have internal dependencies (for example, between performance and maintainability) that have to be detected and handled.
It is a challenging issue to find all these dependencies as well as solutions to overcome conflicts. NFR are very project-specific and hardly reusable as such. Thus, specific measures are needed to support experience transfer between different projects. This becomes even harder because NFR are often expressed vaguely and not in a quantitative way, which frequently gives rise to misunderstandings. For the importance of experience management, consult Basili & Rombach (1988) and Basili (1992). Klein & Kazman (1999) provide a framework for organizing the existing knowledge about quality attributes and about the effects of architectural design decisions on the quality attributes of a software system. Similar to the approaches above and to many other approaches (for example, Gross & Yu, 2001; Liu & Yu, 2001), we use goal graphs for dependencies and refinement. However, we use goal graphs to capture the relationships between categories of NFR, such as efficiency and maintainability, but not for the actual NFR. The actual NFR are captured as part of requirements documents, intertwined with FR and AD. This avoids the need to develop complicated dependencies anew in each project. Furthermore, it supports coherent documentation of NFR, FR, and AD. Another challenge that is underrepresented in the literature is the effort spent on NFR: typically, there are not enough resources to elicit all NFR equally well. In general, one can say that current approaches for dealing with NFR provide a framework for thinking about NFR, but they give little help with many practical problems emerging during software development. In the following we elaborate on these practical issues and show how to alleviate these problems through general principles such as checklists and templates or iterative development.
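As an illustration of such a category-level goal graph, the sketch below records signed influences between high-level QA and looks up potential conflicts among the QA selected for a project. The particular influences listed are illustrative assumptions, not the validated dependencies of the reference quality model described in this chapter.

```python
# A small sketch of a goal graph over NFR *categories* (not individual NFR).
# The signed influences below are illustrative assumptions only.

INFLUENCES = {
    ("efficiency", "maintainability"): "-",   # highly tuned code is often harder to change
    ("efficiency", "portability"): "-",       # platform-specific optimisations can hurt portability
    ("usability", "efficiency"): "-",         # rich feedback and undo can cost resources
    ("reliability", "usability"): "+",        # fewer failures usually improve the user experience
}

def potential_conflicts(selected):
    """Return pairs of selected QA categories with a documented negative influence."""
    selected = set(selected)
    return [
        pair for pair, sign in INFLUENCES.items()
        if sign == "-" and set(pair) <= selected
    ]

print(potential_conflicts(["efficiency", "maintainability", "reliability"]))
# -> [('efficiency', 'maintainability')]
```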
Eliciting and Documenting NFR

In this section we present methods and techniques to tackle the issues identified in the previous section. First we discuss how the concepts presented in the section on basic concepts help to deal with the challenges of the problems and challenges section. We also give an overview of the main artefacts. Then we discuss in detail the activities for eliciting and documenting NFR.
Addressing the Challenges

Based on the concepts of the meta-model presented in the basic concepts section, we can address the above-mentioned challenges:
• Quality is a subjective concept. To elicit and communicate these subjective perceptions, we integrate all stakeholders in elicitation workshops. Furthermore, our distinction into organization, user task, system task, and system QA helps to clarify different views (for example, users tend to think of user task requirements, while developers think of system task requirements). Artefacts such as checklists, templates, and experience-based quality models support an interactive and structured way of communication (IEEE Std. 830, 1998).
• To enable stakeholders to identify the most important NFR in a specific context, we use a standardized questionnaire to prioritise NFR.
• To consider the impact of NFR as early as possible, we apply our method to artefacts available in an early stage of the requirements engineering process.
• To manage NFR in a generic and reusable way, we create quality models based on QA and phrase instances of the QA (the NFR) within the requirements document.
• We capture general dependencies in an experience-based quality model and in the rationale of the NFR. This helps to systematically identify and avoid conflicting requirements (compare Dutoit & Paech, 2001).
• To cope with the challenge that NFR are intertwined with FR and AD, we propose to stick to common practices such as iterative development. We present a method that integrates several artefacts into a refinement process that iteratively adds more and more information and results in a complete collection of NFR.
• To prevent stakeholders from providing vague NFR, we require the use of metrics. The metrics to be used are also captured in the quality model.
Overview of the Artefacts

Figure 2 shows the main artefacts used and produced in our method. A prioritisation questionnaire is used to focus the elicitation and documentation process. This questionnaire asks questions about the goals and context of the current project and recommends the most important QA. Reference quality models, checklists, and templates are used as a starting point. These are then tailored to specific project contexts. The reference quality model represents the refinement of each of the QA mentioned in the introduction. It is based on the definitions from ISO 9126 and captures the semantic knowledge about the construct of a high-level QA. To create the tailored, project-specific artefacts (quality model, template, and checklist), the following steps are necessary:
1. Select QMs: QA should be prioritised; therefore, a prioritisation questionnaire is used (a small illustrative scoring sketch follows this list). The result is a prioritised selection of the most critical quality models for the project.
2. Tailor Quality Model: Starting with the most important QA, a common definition among the stakeholders has to be found. Therefore, the process prescribes a focus group workshop in which representatives from all stakeholder groups participate. As input for the discussion, the participants use the reference model. Every QA of the reference model has to be defined, and its relevance for the current product, project, or organization has to be discussed. The result is a set of tailored quality models. An example of a tailored quality model is given in Figure 3.
3. Identify Dependencies: Dependencies exist between QA. These dependencies have been elicited and collected through literature studies and project experience, and they are represented in the reference QM. If the participants of the workshop foresee further dependencies, especially for those QA added during the workshop, they add these to the QM. Dependencies between QA of different QM should also be captured. The result is a dependency list.
4. Derive Checklists/Templates: To facilitate further activities in the process, checklists and templates are created based on the information captured in the QM. QA always have to be refined with the project and the usage context in mind. Therefore, average and worst-case scenarios, as well as further assumptions (for example, about the system architecture), serve as input for the checklist.

Figure 2. Overview of the artefacts involved in the process. [Diagram: the experience-based artifacts (prioritisation questionnaire, reference model, reference checklists, and reference template) feed the activities Select QMs, Tailor Quality Model, Identify Dependencies, and Derive Checklists/Template, which produce the tailored, project-specific quality model, checklist, and template; these are in turn improved with project experience.]
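The following sketch illustrates how the prioritisation questionnaire of step 1 might be scored. The questions, weights, and scoring scheme are assumptions made for illustration; the chapter only states that the questionnaire asks about the goals and context of the project and recommends the most important QA.

```python
# A hypothetical scoring scheme for a prioritisation questionnaire.
# Question texts and weights are invented for illustration.

QUESTION_WEIGHTS = {
    # question id -> QA affected by a "yes" answer and the weight of that answer
    "runs_on_constrained_mobile_devices": {"efficiency": 3, "usability": 1},
    "long_expected_product_lifetime":     {"maintainability": 3},
    "failures_cause_production_downtime": {"reliability": 3, "efficiency": 1},
    "used_by_untrained_staff":            {"usability": 3},
}

def prioritise(answers):
    """Rank QA by the accumulated weight of the questions answered with 'yes'."""
    scores = {}
    for question, is_yes in answers.items():
        if not is_yes:
            continue
        for qa, weight in QUESTION_WEIGHTS.get(question, {}).items():
            scores[qa] = scores.get(qa, 0) + weight
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

answers = {
    "runs_on_constrained_mobile_devices": True,
    "long_expected_product_lifetime": False,
    "failures_cause_production_downtime": True,
    "used_by_untrained_staff": True,
}
print(prioritise(answers))
# -> [('efficiency', 4), ('usability', 4), ('reliability', 3)]
```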
The process described above makes it possible to build a common understanding of the required quality of a system and prepares the elicitation and documentation activities. The latter again involve a focus group workshop with the stakeholders of the future system, and are described in detail in the next section.
Figure 3. Example of a tailored quality model (efficiency). [Diagram: the QA Efficiency is refined into Time Behaviour, Resource Utilisation, and Efficiency Compliance; further attributes shown include Response Time, Boot/Start Time, Usage Time, Throughput (network), Workload, Workload Distribution, Capacity, Required Documentation, Locality, Parallelism, Type and Position of Devices, and Experience; associated metrics include Mbit/sec, % of resource consumption, cost/unit, and #jobs/time unit; Load Balancing appears as a means; a legend distinguishes organization, user task, system task, and system quality attributes from the customer and developer views.]
Elicitation and Documentation of NFR

In the following we describe the activities to be performed within the elicitation and documentation process. We use examples from a case study of a wireless framework for mobile services called CMWE. The application enables up to eight users to monitor production activities, manage physical resources, and access information within a manufacturing plant. The user can receive state data from the plant on his or her mobile device, send control data from the mobile device to the plant components, locate the maintenance engineer, and get guidance on fixing errors on machines. The elicitation process uses the available project documentation, which typically comprises:
• the system functionality (behaviour), preferably described by use cases (UC),
• the physical architecture and further implementation constraints (for example, constrained hardware resources or constraints derived from the operating systems),
• assumptions about the average and worst-case usage scenarios of the system, and
• a user model.

Figure 4. Activity "Elicit organizational NFR"
We distinguish between different elicitation activities: organizational NFR elicitation, user task NFR elicitation, system task NFR elicitation, and system NFR elicitation. A checklist derived from the quality model guides each activity. Figure 4 shows the activity that elicits NFR constraining QA of the organization. The checklist suggests thinking about domain experience, size, structure, or age of the supplier organization, as well as required standards (e.g., RUP), activities (for example, inspections), documents, or notations (for example, statecharts). In the case study, some of the requirements expressed were:
• "The supplier needs at least three years of experience in the domain of mobile devices."
• "The supplier has to create a specification document."
To avoid premature requirements, the stakeholders are instructed to scrutinize the NFR again, much as Socrates would repeatedly try to get to the bottom of a statement. This form of Socratic dialogue serves to uncover the rationale behind the NFR and prevents the customer from constraining the system unnecessarily. NFR are reformulated until they reflect the rationale. In the "elicit user task NFR" activity (see Figure 5), NFR for user task QA are documented for each use case included in the use case diagram, because each use case represents a user task. In order to support the specific needs of the users of the system, a user model is needed that describes the potential users and their characteristics. As shown in Figure 6, NFR are added to use cases with the help of notices. In our case study, the requirement "the use case shall be performed within 30 min." was attached to the use case "handle alarm." Again, a justification as described above is performed to prevent premature design decisions. The resulting rationale, "breakdown of plant longer than 30 min is too expensive," is documented in parentheses behind the NFR.
Figure 5. Activity “Elicit user task NFR”
Figure 6. Use Cases with attached user task NFR
The elicitation of NFR related to system task QA (see Figure 7) is based on the detailed interaction sequence (also called flow of events) documented in the use case (see Figure 8) and on specific characteristics of potential users, described in a user model. For this activity, maximum and average usage scenarios are needed. With these scenarios in mind, every step and every exception described in the use case description is checked. Figure 8 shows the textual description of the use case "handle alarm." It describes that the system indicates an alarm and the location where it was produced. In reaction to this, the user acknowledges the alarm, so other users know he or she is taking care of it. The NFR "at least in 5 sec." was attached to use case step two, "System shows alarm and where the alarm was produced," and the NFR "just one click" was attached to the user's reaction described in use case step three. Both requirements were documented in the NFR field within the textual description of the use case, after being justified by the customer in the Socratic dialogue.
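The sketch below shows one possible way to represent the "handle alarm" use case with its use-case-level and step-level NFR and their rationales. The field names, the wording of step 1, and the phrasing of the step rationales are assumptions; the NFR values (30 min, 5 sec, one click) are those reported for the case study.

```python
# A hypothetical record layout for a use case with attached NFR and rationales.
# Step 1 wording and the step rationales are assumed; the values come from the text.

handle_alarm = {
    "name": "handle alarm",
    "nfr": [
        {
            "text": "The use case shall be performed within 30 min.",
            "rationale": "breakdown of plant longer than 30 min is too expensive",
        }
    ],
    "steps": [
        {"no": 1, "text": "Alarm occurs at a machine.", "nfr": []},  # step wording assumed
        {
            "no": 2,
            "text": "System shows alarm and where the alarm was produced.",
            "nfr": [{"text": "at least in 5 sec.", "rationale": "estimated time, may be changed"}],
        },
        {
            "no": 3,
            "text": "User acknowledges the alarm.",
            "nfr": [{"text": "just one click", "rationale": "estimated effort, may be changed"}],
        },
    ],
}

for step in handle_alarm["steps"]:
    for nfr in step["nfr"]:
        print(f"Step {step['no']}: {nfr['text']} ({nfr['rationale']})")
```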
Figure 7. Activity “Elicit system task NFR”
Figure 8. UC steps with attached system task NFR
The rationale led to the statement that the NFR elicited were only estimated times and could be changed if necessary. As shown in Figure 8, the rationale was documented in parentheses.

In the "elicit system NFR" activity (see Figure 9), NFR are elicited that constrain QA of the system and its subsystems. In this activity, maximum and average usage scenarios are needed again. Additionally, a user model and the architecture of the physical subsystems are used, if available. The subsystems and architecture constraints of our case study are shown in Figure 10.
Figure 9. Activity “Elicit system NFR”
Figure 10. Constraints on system architecture
Constraints on the overall architecture:
• Windows CE OS at PDAs
• Standard PDAs (replaceable)
• Standard network components (replaceable)
• Throughput: WLAN 11 Mbit/sec
• Server: Windows 2000
• Secondary database -> PDAs: wireless network required
• Downloading and monitoring at the same time is not possible
As Figure 11 shows, the NFR field of the use case description is segmented into NFR related to each physical subsystem. In the use case "handle alarm," NFR for the QA "capacity" could only be phrased for the physical subsystem "PDA": the subsystem shall have a maximum capacity of 64 MB and shall be able to handle up to 50 alarms at the same time. The rationale for this NFR is the need to use standard components available on the consumer market; this rationale is documented as well. The QA "throughput" applies, by definition, only to the subsystem "Network." Our experience shows that some QA are related to only a subset of the subsystems; this relationship is documented in the quality model. The elicited NFR for single subsystems are documented within the textual use case description as well as in the section "use case overspanning textual description of NFR." This is done in order to be able to consolidate the requirements across several use cases. Finally, the NFR are analysed for conflicts. This activity includes two sub-activities. In the first, the NFR for one physical subsystem are analysed across all use cases. The checklist gives hints on how to identify conflicts and how to solve them. It has to be checked, for example, whether the NFR can be achieved if use cases are executed in parallel. In the second sub-activity, NFR that constrain different QA are validated, taking into consideration the dependencies documented within the quality model.
Figure 11. Excerpt of a UC with attached system NFR

Throughput requirements (network between secondary database and PDA):
• Shall be able to deal in the average case with 2 alarms every 10 minutes with 16 machines (assumed average number of alarms)
• Shall be able to deal with a maximum of 8 (1/PDA) * 60 alarms at the same time (assumed maximal number of alarms)
• Shall be able to deal with a maximum of 8 people that download 1 doc per person (size of 8 docs constrained to <55 Mbit) within 5-10 secs (assumed maximal number of downloads)

Capacity requirements (PDA):
• Shall have a maximum capacity of 64 MB (standard components shall be used to reduce costs)
• Shall be able to handle up to 60 alarms at the same time (assumed maximal number of alarms)

Workload requirements (PDA):
• Shall allow 5 programs to be opened at the same time (assumed maximal number of programs that will be opened by the user)
In the case study, this activity discovered an important conflict between the determined throughput requirements and the defined hardware constraints. As shown in Figure 11, one of the throughput requirements stated: "The network between secondary database and PDA shall be able to deal in the worst case with 8 people that download 1 doc per person (size of 8 docs constrained to <55 Mbit) within 5-10 seconds." The restriction of the total size of the 8 documents to 55 Mbit was added because the hardware constraints shown in Figure 10 limited the network to an 11 Mbit/sec WLAN. The additional requirement would not have been found without the consolidation activity.
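A back-of-the-envelope check makes the conflict visible. The calculation below uses the simplistic model transfer time = data volume / bandwidth, which ignores protocol overhead and contention and therefore gives an optimistic lower bound; the 120 Mbit figure is a hypothetical, unconstrained document size used only for contrast.

```python
# Feasibility check of the worst-case download NFR against the WLAN constraint.
# transfer time = data volume / bandwidth (optimistic; ignores overhead and contention).

WLAN_BANDWIDTH_MBIT_PER_S = 11        # hardware constraint from Figure 10
MAX_DOWNLOAD_WINDOW_S = 10            # upper end of the required 5-10 sec window

def min_transfer_time_s(total_mbit: float) -> float:
    return total_mbit / WLAN_BANDWIDTH_MBIT_PER_S

for total_mbit in (55, 120):          # 120 Mbit is a hypothetical, unconstrained total
    t = min_transfer_time_s(total_mbit)
    verdict = "feasible" if t <= MAX_DOWNLOAD_WINDOW_S else "conflict with WLAN constraint"
    print(f"{total_mbit:>4} Mbit over all 8 documents -> >= {t:.1f} s  ({verdict})")

# Output:
#   55 Mbit over all 8 documents -> >= 5.0 s  (feasible)
#  120 Mbit over all 8 documents -> >= 10.9 s  (conflict with WLAN constraint)
```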
Experiences

So far we have used this approach in a case study with a large German company in the domain of embedded systems and in an industrial workshop with 10 practitioners from various enterprises. In the case study, the domain expert (customer) filled out the prioritisation questionnaire to find out the three most important high-level quality attributes to spend the effort on. Then we spent half a day with the customer discussing and tailoring the default quality model to the case study project and half a day eliciting the NFR. In the industrial workshop we spent one hour explaining our method and then, within another two hours, we interactively went through the checklists and filled the template with NFR.
Prioritisation Questionnaire

The prioritisation questionnaire was used in the case study. Only the quality attributes maintainability, efficiency, reliability, and usability are currently supported by the questionnaire.
This was because these were the quality attributes of interest in the case study. The prioritisation questionnaire performed very well: the order of prioritisation of the high-level quality attributes conformed to the expert judgement of experienced developers of the manufacturing plant in our case study. Nevertheless, the validity of the prioritisation questionnaire has to be strengthened by more case studies. Other quality attributes (for example, portability) should be integrated for future use.
Workshop for Tailoring the Quality Models

The goal of the moderated workshop was to come up with quality models for the most important quality attributes. During the workshop, quality models for the quality attributes efficiency, maintainability, and reliability were developed based on the reference quality models, taking into account the specific characteristics of the project and the enterprise. As a first step, the reference quality models were evaluated with respect to their appropriateness for the given project context. Then they were iteratively refined and adapted. To direct the discussions, the architecture and the functional requirements (in the form of use cases) were used. The results of the workshop were consolidated offline and finally documented. The moderated workshop worked very well for developing project-specific quality models. The decision to start from an existing reference quality model proved to be more efficient than starting from scratch. However, the workshop contained phases where information retrieval took quite long or where too much information came at one time. The former was due to the fact that it is not easy for a domain expert to think of all important quality attributes, especially where ISO 9126 and the initial quality model were not sufficiently detailed. In the latter case, the domain expert has a good grasp of a particular quality attribute and offers a lot of information, which makes it difficult to capture all of it. In addition to the resulting quality models, the moderated workshop also created a common ground for the NFR elicitation, as the workshop participants (customer and workshop moderator) developed a common understanding of the quality attributes' meaning. The resulting quality models were of good quality. The hypothesis that there is no general-purpose quality model, but that the quality model depends on the actual project and its characteristics, was confirmed.
Workshop for Elicitation of NFR

The elicitation of NFR for the three most important quality attributes took place with the expert of the customer company. We used the different types of checklists derived from the quality models to guide him through the elicitation process. The elicitation of NFR in the workshop was successful. New NFR, which were not included in the original specification, were elicited. These new NFR were judged by the domain expert to be necessary for the development of the system. Furthermore, the domain expert acknowledged that the time spent on elicitation was worthwhile.
Capturing the rationale behind the NFR also proved to be useful for a better understanding of the NFR during later discussions. Unfortunately, rationales were only captured for a few NFR because of a lack of time in the workshops. Furthermore, some NFR were not expressed measurably. This was mostly not due to imprecise questions in the questionnaire but rather to inconsistent use of the questionnaire; for example, the moderator should have asked the customer for a more precise statement.
Overall Impression

Concerning the case study, the customer acknowledged that the time was very worthwhile, as he discovered many new NFR he had not been aware of before. It also helped him to specify them more precisely. In the industrial workshop the feedback was also very positive, as the participants acknowledged that this was the first systematic method they had seen for eliciting efficiency NFR. They particularly liked the idea of the quality model, checklists, and template to capture experience on NFR. In addition, they liked the use of use cases and the architecture to ensure completeness and facilitate traceability. They also pointed out the need for capturing the rationale and for a supporting tool environment.
Summary and Future Work

The presented approach employs an intertwined refinement of NFR and use cases. Use cases are a standard notation for specifying functionality, but there are also many enterprises that do not employ them. Therefore, it is worth investigating how the approach can be used with different functional specifications, such as plain and structured text or graphical notations such as sequence diagrams. Especially for enterprises using the approach in several projects, it would be inefficient to execute the complete preparation phase for each project. Therefore, the preparation phase should be designed to suit a multi-project context. It is a challenge to find the commonalities and variabilities among the various sociotechnical systems, as stated by the variety of involved stakeholders, in order to arrive at a valid quality model. Concerning the moderation of the workshops to tailor the quality models and to elicit the NFR, it would be worthwhile to analyse the suitability of tools to support an online discussion among distributed stakeholders or offline negotiation via groupware (compare In et al., 2001). The presented approach gives a comprehensive method for eliciting NFR on a measurable level. Interesting research lies in the field of detailing the experience-based reference quality models for the various quality attributes. Their maturity is a key factor for obtaining a set of NFR that is as complete as possible. Eliciting the expectations on a software system is a difficult task. This becomes even more difficult when NFR are taken into account. A method for eliciting and specifying NFR was presented in this chapter. It has several important characteristics:
Eliciting is a human-intensive activity – we take this into account by providing support for requirements engineers in the form of checklists and reference models. These artefacts help the requirements engineer to elicit a complete set of NFR. Of course, completeness and cost are always subject to a compromise – this compromise is dealt with through explicit prioritisation of QA. Another important issue we had to consider is that every situation is different – even though we provide reference models, tailoring them to specific situations and environments is indispensable. Experience is important but often forgotten. In our quality models, we capture the experience of the community at large. By tailoring the quality models, a company can capture its own experience in adaptations of the models. Experience capture is of special importance when software systems are large, complex, and contain many dependencies. By providing a comprehensive meta-model, we support documentation, traceability, and change management, that is, activities that become difficult when the system under development grows.
Acknowledgments

This project has been funded by the German BMBF in the context of the ITEA project Empress. We thank all our project partners and, in particular, the case study participants for valuable discussion and feedback.
References

Abowd, G., Bass, L., Clements, P., Kazman, R., Northrop, L., & Zaremski, A. (1997). Recommended best industrial practice for software architecture evaluation (CMU/SEI-96-TR-025). Pittsburgh, PA: Carnegie Mellon University, Software Engineering Institute.
Alexander, I. (2001). Misuse cases help to elicit non-functional requirements. IEE CCEJ.
Barbacci, M.R., Klein, M.H., & Weinstock, C.B. (1997). Principles for evaluating the quality attributes of a software architecture (CMU/SEI-96-TR-036). Pittsburgh, PA: Carnegie Mellon University, Software Engineering Institute.
Basili, V.R. (1992). Software modeling and measurement: The goal/question/metric paradigm. Computer Science Technical Report Series NR: CS-TR-2956 / NR: UMIACS-TR-92-96.
Basili, V.R., & Rombach, H.D. (1988). The TAME project: Towards improvement-oriented software environments. IEEE Transactions on Software Engineering, 14(6), 758-773.
Bass, L., Clements, P., & Kazman, R. (1998). Software architecture in practice. Addison-Wesley.
Chung, L., Nixon, B.A., Yu, E., & Mylopoulos, J. (2000). Non-functional requirements in software engineering. Kluwer Academic Publishers.
Clements, P., Bass, L., Kazman, R., & Abowd, G. (1995). Predicting software quality by architecture-level evaluation. Proceedings of the Fifth International Conference on Software Quality.
Cockburn, A. (2001). Writing effective use cases. Addison-Wesley.
Cysneiros, L.N., & Leite, J.C.S.P. (2001). Driving non-functional requirements to use cases and scenarios. Proceedings of XV Brazilian Symposium on Software Engineering.
Dutoit, A.H., & Paech, B. (2001). Rationale management in software engineering. In S.K. Chang (Ed.), Handbook of software engineering and knowledge engineering. World Scientific.
Egyed, A., Grünbacher, P., & Medvidovic, N. (2001). Refinement and evolution issues in bridging requirements and architecture – the CBSP approach. Proceedings of International Conference on Software Engineering-Workshop STRAW.
Firesmith, D. (2003). Security use cases. Journal of Object Technology, 2(3), 53-64.
Gross, F., & Yu, E. (2001). Evolving system architecture to meet changing business goals: An agent and goal-oriented approach. Proceedings of International Conference on Software Engineering-Workshop STRAW.
IEEE Recommended Practice for Software Requirements Specifications, IEEE Std. 830-1998.
In, H., Boehm, B.W., Rodgers, T., & Deutsch, W. (2001). Applying WinWin to quality requirements: A case study. International Conference on Software Engineering, 555-564.
In, H., Kazman, R., & Olson, D. (2001). From requirements negotiation to software architectural decisions. Proceedings of International Conference on Software Engineering-Workshop STRAW.
ISO/IEC 9126-1. (2001). Software Engineering - Product Quality - Part 1: Quality Model.
Kazman, R., Abowd, G., Bass, L., & Clements, P. (1996). Scenario-based analysis of software architecture. IEEE Software, 13(6), 47-55.
Kazman, R., Bass, L., Abowd, G., & Webb, M. (1994). SAAM: A method for analyzing the properties of software architectures. Proceedings of the 16th International Conference on Software Engineering, 81-90.
Kazman, R., Klein, M., & Clements, P. (1999). ATAM: Method for architecture evaluation (CMU/SEI-2000-TR-004). Pittsburgh, PA: Carnegie Mellon University, Software Engineering Institute.
Kitchenham, B., & Pfleeger, S.L. (1996). Software quality: The elusive target. IEEE Software, 12-21.
Klein, M., & Kazman, R. (1999). Attribute-based architectural styles (CMU/SEI-99-TR-022). Pittsburgh, PA: Carnegie Mellon University, Software Engineering Institute.
Liu, L., & Yu, E. (2001). From requirements to architectural design – Using goals and scenarios. Proceedings of International Conference on Software Engineering-Workshop STRAW.
Loucopoulos, P., & Karakostas, V. (1995). System requirements engineering. McGraw-Hill.
Menasce, D.A. (2002). Software, performance or engineering. Proceedings of Workshop on Software and Performance, 239-242.
Moreira, A., Brito, I., & Araújo, J. (2002). A requirements model for quality attributes, early aspects: Aspect-oriented requirements engineering and architecture design. Proceedings of International Conference on Aspect-Oriented Software Development, Enschede, Holland.
Paech, B., Dutoit, A., Kerkow, D., & von Knethen, A. (2002). Functional requirements, non-functional requirements and architecture specification cannot be separated – A position paper. Proceedings of International Workshop on Requirements Engineering for Software Quality.
Paech, B., von Knethen, A., Doerr, J., Bayer, J., Kerkow, D., Kolb, R., Trendowicz, A., Punter, T., & Dutoit, A. (2003). An experience-based approach for integrating architecture and requirements engineering. Proceedings of International Conference on Software Engineering-Workshop STRAW.
Petriu, D., & Woodside, M. (2002). Analysing software requirements specifications for performance. Proceedings of Workshop on Software and Performance, 1-9.
Shaw, M., & Garlan, D. (1996). Software architecture – Perspectives on an emerging discipline. Prentice Hall.
Sindre, G., & Opdahl, A. (2000). Eliciting security requirements by misuse cases. Proceedings of TOOLS Pacific 2000, 120-131.
Sutcliffe, A., & Minocha, S. (1998). Scenario-based analysis of non-functional requirements. Proceedings of Workshop on Requirements Engineering for Software Quality.
Endnote 1
Functionality is, in fact, the label of a set of NFR in the ISO standard. It is defined by the sub-quality aspects accuracy, compliance, interoperability, suitability, and security.
Chapter XVIII
Capture of Software Requirements and Rationale through Collaborative Software Development Raymond McCall, University of Colorado, USA Ivan Mistrik, Fraunhofer Institut für Integrierte Publikations - und Informationssysteme, Germany
Abstract

This chapter explains how natural language processing (NLP) and participatory design can aid in identifying system requirements. It argues that getting a complete list of requirements is often an iterative process in which some requirements are elicited only when users react to the system's design. The costs of iterative requirements identification can be reduced by discovering new requirements during the design process, before implementation begins. This is facilitated when users participate in design, reacting to features as they are proposed. As users evaluate proposals, they often mention requirements not previously documented. Transcripts of participatory design sessions thus provide a rich source of new requirements for developers. The chapter explains how semantic grammars can be used to simplify the extraction of requirements from such transcripts.
The authors hope that an understanding of the value of participatory design and NLP will aid in the creation of better tools for the support of software development.
Introduction One of the main reasons that contemporary software projects are so difficult is that, by themselves, software developers do not have all the knowledge they need to create effective systems. The so-called “thin spread of application knowledge,” that is, developers’ poor knowledge of application domains, continues to plague the software industry more than 15 years after it was first described by Curtis, Iscoe, & Krassner (1988). Much of the knowledge that developers need is in the heads of the users of the proposed system (Rittel, 1972). Users have expert knowledge of use situations. Unfortunately their expertise is often in the form of tacit knowledge rather than explicit knowledge, and they only have access to much of it in the context of system use (Schon, 1983). As a consequence users typically cannot fully identify all their requirements before design of a system begins. Getting a full and accurate list of software requirements is thus not a simple, one-shot process. We argue that it is an iterative process in which some crucial requirements are not elicited until users can react to decisions about the system’s design. Developers’ need to elicit requirements from users implies that successful projects depend on extensive collaboration with users. If, as we claim, some requirements can only be elicited when users react to design decisions, the question is then how to discover these requirements as soon as possible, so as to minimize the amount of work that needs to be redone. Above all, we want to identify new requirements before the system is in operation — when implementing new requirements is most expensive by far (Grady, 1999). Our conclusion is that the best approach is to get users’ reactions to the system as it is being designed. During this sort of collaboration – which is called “participatory design” (Schuler & Namioka, 1993) – users typically make comments that imply new requirements. We have devised tools that help developers identify and extract such requirements from user commentary.
Iteration in the Elicitation of Requirements

Iterative Software Development

In recent years there has been a substantial movement toward iterative approaches to software development in general and requirements identification in particular. The most full-blown set of approaches is known as "agile software development" (Highsmith, 2004; Martin, 2002), though variations on this are known as "evolutionary" and "adaptive" development (Highsmith & Orr, 2000; Larman, 2004).
We will use the term iterative development as an "umbrella term" under which to group all such approaches that reject the waterfall model in favor of an iterative development process (except for well-defined problem domains and strict contractual procedures). Iterative development emerged primarily from the experiences of developers working on "bleeding edge" real-world projects – for example, large-scale Internet-related applications. This movement has, however, gotten backing from academic studies that indicate that non-iterative, waterfall-based approaches produce higher rates of project failure and user dissatisfaction (Johnson, 2002; Taylor, 2000). One common feature of iterative development is evolutionary requirements analysis, meaning that instead of restricting requirements definition to a single initial phase, requirements can be discovered and changed at later stages of development as well (Larman, 2004). We will not attempt here to restate the arguments for iterative development and evolutionary requirements analysis. Instead we will explain (1) how iterative requirements elicitation happens, (2) how user participation promotes requirements elicitation, and (3) how natural language processing can support this process. To accomplish these things we will first present a model of software development as a multi-level process.
A Multi-Level Model of Software Development A Simple, Multi-Level Model We can define the term software specification as an explicitly stated set of goals, each of which designates some desired program behavior. A software implementation we can then define as that which results in achievement of the specification. The top-level specification is the set of software requirements and includes both desired functionality and software quality attributes. Implementing any given goal in a specification may be either a one-step or two-step process. In a one-step process code is directly written that achieves the specified behavior. In a two-step process the first step is to determine what needs to be done to implement the goal. In other words the first step is to make a more detailed software specification. The second step is then to implement this new specification. This implementation will in turn be either a one-step or two-step process. This recursion results in a structure containing multiple, and increasingly specific, levels of specification and implementation. The structure terminates in the writing of code. So far there is nothing new here. We have merely described a standard means-end hierarchy generated by a process sometimes called “problem reduction.” But this is by no means the end of the story. This structure, by itself, fails to account for the elicitation of new requirements. We will therefore extend this structure to show how such elicitation occurs.
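The recursive structure just described can be sketched as a small data type: a goal is either implemented directly by code (one step) or by a more detailed specification, that is, a list of sub-goals (two steps). The names and the example goals below are our own illustrative assumptions, not the authors' notation.

```python
# A minimal sketch of the multi-level specification/implementation structure:
# a goal either carries code directly or is refined into sub-goals, recursively.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Goal:
    statement: str
    subgoals: List["Goal"] = field(default_factory=list)   # the next, more detailed specification
    code: Optional[str] = None                              # direct implementation, if any


def leaf_implementations(goal: Goal) -> List[str]:
    """Walk the means-end hierarchy and collect the code units that terminate it."""
    if not goal.subgoals:
        return [goal.code] if goal.code else []
    units = []
    for sub in goal.subgoals:
        units.extend(leaf_implementations(sub))
    return units


# Hypothetical top-level requirement, loosely inspired by the collaborative-design
# example discussed later in this chapter.
requirements = Goal(
    statement="Support collaborative drawing with voice communication",
    subgoals=[
        Goal(statement="Share a drawing canvas", code="canvas_sync.py"),
        Goal(
            statement="Provide voice channels",
            subgoals=[Goal(statement="Encode and stream audio", code="audio_stream.py")],
        ),
    ],
)
print(leaf_implementations(requirements))   # ['canvas_sync.py', 'audio_stream.py']
```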
Extending the Simple, Multi-Level Model The model described above is simplistic in that it does not take into account the larger context of goals of the organization to which users belong. In fact, software requirements constitute only a small subset of the total set of goals of an organization. Some of these goals are actually the reasons for the stated requirements – that is, the larger goals that the software requirements are meant to help achieve. Other organizational goals are external — or “exogenous” — to the stated set of software requirements. The larger goal context of the organization is the source both of new requirements and the iterative process of eliciting them. To understand why, there are two crucial points to consider. One is that the completeness of software requirements depends on whether they are sufficient to achieve the higher-level goals of the organization. The other crucial point is that it is not safe to assume that the implementation of software requirements has no impact on the exogenous goals in the organization. In fact such impacts can result in the incorporation of previously exogenous goals as new requirements for the software. We will explain and illustrate these points with examples from our empirical studies of user reactions to developers’ design decisions. In particular we will use examples of interactions between developers and users in a project aimed at the creation of software to support collaborative design by architects (building designers) and their clients.
Completeness of Requirements New requirements can get elicited when it turns out that the original set of requirements is incomplete for accomplishing a work task. In designing the collaborative design software for architects the developers had originally identified a set of requirements for a group of people in different locations to draw collaboratively while communicating verbally. These requirements were correct but incomplete in several crucial ways. For example, in reacting to a proposed system design, a user informed the developers that their software would not adequately support productive design collaboration. To do that, he said, the system would also need to support the sequential presentation of the verbal and graphical information that constituted a design proposal. In other words what was required was something like the slide-show capability of PowerPoint. According to the user this was needed because collaborative design discussions typically begin with someone presenting their work in a pre-prepared sequence of annotated graphics. Despite the fundamental nature of this requirement, the user was only able to articulate it once he had imagined trying using the proposed system in the sort of design meetings he was familiar with. We might be tempted to dismiss this late discovery of such a fundamental requirement as due to negligence by the developers in the initial requirements elicitation phase, but such omissions are not uncommon in software development. Requirements get discovered late because the only definitive test of the completeness of requirements is system use.
Impacts of Design Decisions on Exogenous Goals New requirements are often elicited when design decisions have significant impacts on exogenous goals of the organization. There are at least three ways in which this happens: through negative side effects, positive side effects, and what has been called affordances.
Negative Side Effects We often find that the means for implementing a requirement have important — though unintended — consequences for goals not originally included in the requirements. One way this can happen is that the means for implementing a given requirement can have unintended side effects that interfere with the satisfaction of certain exogenous goals. Consider the following example from the project to develop software for collaborative architectural design. The developers originally proposed that the system facilitate collaboration between architects and clients by using voice communication. A user (an architectural illustrator) working with the developers pointed out that this would be problematic because architects’ offices in the U.S.A. typically have “open plan” layouts with no acoustic shielding between the people in the office. Having people in the office engaged in lengthy voice chat sessions during the workday would be unacceptably disruptive. As a consequence of this user feedback a new requirement was added, namely that collaborative communication should not disrupt other people’s work in the office.
Positive Side Effects
Not all unintended side effects are negative. Some help to achieve goals not originally listed as requirements. For example, one feature of the proposed collaborative software described above was that it provided an exact record of what had been agreed to between client and designer – something not provided by the users’ present, non-computer-based process. One user realized that this would be extremely useful for settling contractual disputes with clients. This legal consideration then became an important requirement and resulted in several new system features.
Affordances The situation with affordances is similar to that for positive side effects. Affordances are potential new types of useful functionality that existing features of a design might make easier to implement. The affordances that elicit new requirements are ones that facilitate the achievement of exogenous goals. For example, in the collaborative architecture project it was originally decided not to include any requirements for designing buildings from scratch. Instead the system only had requirements for supporting design review sessions, that is, sessions in which
existing designs are critiqued and revised. But after the set of needed drawing tools for such review sessions had been decided on, one user realized that with very little extra functionality the system would be minimally adequate for designing things from scratch. It was then decided to include the requirement for designing from scratch – at least in this minimal form.
Participatory Design Using Participatory Design to Identify New Requirements The best user feedback about the completeness and correctness of requirements naturally comes from actual use of a fully designed and implemented system. Unfortunately this feedback comes at precisely the time when it is most costly to modify the system so as to satisfy newly discovered requirements. To minimize the cost of creating a high-quality system, developers need to make the discovery of new requirements happen as early as possible in the development process – ideally during design of the system. We suggest that the best way to do this is to have users work collaboratively with developers during design of the system, reacting to proposals for design features as soon as possible after these proposals are made. Such a participatory design strategy would seem to offer the promise of greatly reducing the cost of implementing newly discovered requirements. In fact the examples given above for the elicitation of new requirements were taken from exactly such a session of participatory design in which a representative of the user community for a collaborative design tool for architects worked with a software developer during design of the system. Our studies of such sessions have shown that they almost invariably lead to the elicitation of new requirements before any implementation of the system has begun. While there is some cost for re-design, there is no cost of reimplementation. The best way to minimize the costs of redesign seems to be to have users work with developers while they are designing the system. This enables users to react to design proposals as they arise. New requirements are typically elicited exactly when the evaluation of design proposals begins. More specifically, new requirements appear in evaluative comments that users make about the pros and cons of proposed system features. This makes it possible for software designers to discover new requirements as they arise and to adjust the design accordingly.
Obtaining Transcripts of Participatory Design Sessions
It is not enough for a software developer to learn about new requirements as they arise. A complex software development project is almost invariably done by a team, and it is
crucial that all members of the team be able to learn about the new requirements. And it is crucial to have a list of all known requirements whenever a system or part of the system is redesigned. Furthermore it is essential to keep a record of the rationale for all requirements. In other words new requirements and their rationale need to be systematically identified and documented as they arise and then made available to all relevant participants in the development project. A crucial difficulty confronts us here, because extracting the new requirements and their rationale from the participatory design sessions requires solution of a problem that has long appeared to be intractable: the problem of design rationale capture. Requirements and their rationale constitute a subset of the total design rationale (DR) for a project. More than 30 years of efforts at developing DR capture systems have produced a flurry of papers and a variety of clever software concepts (Moran & Carroll, 1996). Unfortunately these efforts have not so far resulted in systems that software developers are willing to use in practice (Fischer, Lemke, McCall, & Morch, 1991). We maintain, however, that there is good reason to believe that the collaboration found in the development of complex systems provides the opportunity for a breakthrough in DR capture. We argue that there has been a two-fold obstacle to effective design rationale capture. The first part of this obstacle has been the insistence in every major DR method that system designers write up the rationale for what they do. The second part of the obstacle has been the insistence that this write-up be in a semi-formal structure according to some pre-specified schema. For example, in the IBIS (Issue-Based Information Systems) (Kunz & Rittel, 1970) method for capturing design rationale, designers are to write up their rationale in the form of questions – called issues – together with proposed answers to the questions, and arguments for and against the proposed answers and/or the other arguments. The various issue-based discussions are then to be linked together using pre-specified relationships. Each of the DR methods proposed has a different schema, but all require that DR be captured in a schema. The problem is that writing up rationale is typically a great deal of work over and above all the design work itself – especially since all approaches to DR capture have required a great deal of editorial work to put the rationale in a specific schema. This work is tedious, requires a great deal of skill, and seldom has any immediate payoff for the person who does it. While many pilot projects claim to have shown the benefits of different approaches, in practice designers have almost invariably refused to do this extra work. Thus apparently promising software systems developed to facilitate DR capture have invariably fallen into disuse. The breakthrough for DR capture in collaborative design comes from the fact that collaboration in design is made possible by communication in which the rationale for decisions is discussed. Simply recording this communication in a machine-readable form produces an extensive – if not complete – record of project rationale. If this communication can be put into textual form, the result is a searchable record of project rationale – that is, one from which new requirements can be located and extracted.
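As a purely illustrative sketch (the IBIS method prescribes a schema, not an implementation, and all class and field names here are our own), the issue/answer/argument structure just described might be represented as follows.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    text: str
    supports: bool  # True = argument for a position, False = argument against

@dataclass
class Position:
    """A proposed answer to an issue."""
    text: str
    arguments: List[Argument] = field(default_factory=list)

@dataclass
class Issue:
    """A question raised during design, in the IBIS sense; issues may also be linked to
    other issues via pre-specified relationships (not modelled here)."""
    question: str
    positions: List[Position] = field(default_factory=list)

# Example drawn from the collaborative architectural design project:
issue = Issue(
    "How should architects and clients communicate?",
    positions=[
        Position(
            "Use voice communication",
            arguments=[
                Argument("Supports natural design discussion", supports=True),
                Argument("Disrupts work in open-plan offices", supports=False),
            ],
        )
    ],
)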
Over the past year we have gotten excellent transcripts of collaborative design sessions by having university students use off-the-shelf collaboration tools, such as the Groove peer-to-peer software for collaboration. This technique, however, currently seems
restricted to users under the age of 25 – the so-called “digital natives,” that is, people for whom digital communication is part of their “native language.” For users in the over-25 crowd of “digital immigrants” this technique has, unfortunately, turned out to be not at all promising. We did a year’s worth of generation and analysis of transcripts by student designers who communicated with each other by text-based online “chat” sessions. These studies were useful to start with, but did not tell us enough about real-world projects. We wanted to look at what professionals would do: both professional software developers and users who are professionals in their respective problem domains. We wanted to study the design of actual software products rather than mere academic prototypes. The professional users we worked with were significantly older than the students (the youngest professional user was 35 years old) and were not at all comfortable with text-based chat as a means of collaboration. We therefore used face-to-face meetings, which we videotaped. The audio from these tapes was then transcribed by hand. This technique is not practical for real-world development, but it enabled us to see what might be practical in the not-too-distant future, when highly accurate transcripts of design sessions can be generated automatically using speech recognition – something that is not really feasible with today’s commercial speech recognition software. One of these transcribed sessions is the source of the examples in this article.
Extracting New Requirements from Participatory Design Transcripts
Transcripts of participatory design sessions can provide a rich source of information about new requirements and their rationale. Unfortunately this information is not structured in any of the various edited and schematized forms that have been proposed for capturing and indexing DR. Instead it is merely in the form of an unstructured conversation that typically goes on for several hours. As a consequence, locating new requirements in these transcripts is labor-intensive, tedious, and error-prone. Conventional information retrieval techniques are unlikely to be adequate for the needs of future designers looking for requirements and their rationale in transcripts. The reason is that almost all information retrieval strategies rely on what is known as content-based retrieval. In this strategy information is located using descriptions of its content – for example, keywords or free-text terms. The problem, however, in searching for requirements is that we almost invariably have no idea what the content of new requirements statements is. If we did know the content, we would not need to look for them. Our task is actually to discover their content by first discovering the requirements statements themselves and then analyzing them. By definition content-based retrieval cannot help us to do this. What is needed is some way of discovering requirements statements without first having to know their content. We have recently devised a method for doing this using a simple type of natural language analysis based on semantic grammars – that is, grammars in which the rules of replacement are based on semantic rather than syntactic relationships (Jurafsky & Martin, 2000). We use this technique to identify requirements-related
rationale and structure it in a form closely related to the IBIS schema for DR. While our technique is still in its early stages of testing, it offers the possibility of making DR capture practical by having the computer rather than humans do the lion’s share of the work of structuring transcripts. And since producing the transcripts of the rationale requires no work beyond that needed for the communication that occurs naturally in a collaborative project, designers can with little or no effort produce searchable records of requirements-related rationale for projects.
Using Semantic Grammars to Capture Requirements and their Rationale
The scope of this paper does not allow us to provide a full description of our technique for identifying requirements and their rationale, but in the following paragraphs we give an overview of our approach.
Our Approach to Use of Natural Language Processing To date NLP has had only limited success in producing machine understanding of ordinary human discourse, so it might seem odd that we are looking to a very simple NLP technique – namely, semantic grammars – to find requirements and their rationale in the discourse of participatory design. But our approach is made practical in part by the limited nature of what we are attempting to do. First and most basically we are not attempting to have the computer “understand” requirements. We are instead attempting to help humans find requirements within transcripts of design sessions in which very little of the discourse deals with requirements. Thus it is sufficient if we can significantly reduce the number of utterances that a human analyst needs to look at in a transcript to find new requirements. Another factor that suggests the practicality of our approach is the particular circumstances under which new requirements are elicited. They are almost invariably found in users’ responses to proposed features of a system. More specifically requirements are typically found in arguments – to use the IBIS term – about the pros or cons of proposed features. This means that we can narrow the search for requirements by identifying those places in a participatory design transcript where system features are proposed. As it turns out user arguments on such proposals are found in the texts immediately following the statement of the proposal. In addition there is a friendly principle at work here, for only a very small subset of English is employed in stating proposals. And our studies suggest that this subset can be described using a relatively small semantic grammar.
Design Proposals
Design proposals generally fit into a few basic types. Some proposals suggest that something is a good means for accomplishing some desired functionality. Some suggest that one or more properties should be given to an object. Others suggest that an object with certain properties be created. Still others suggest that one object be put in a specific relationship with another object. The objects, properties, and relationships used in such proposals tend to be drawn from a list that characterizes a particular domain – such as the design of human/computer interaction for the architectural design domain. In addition most of the proposals made in design projects are preceded by introductory wording that is only used in making proposals. Some examples of such wording include the following: “We were thinking that we could …,” “What if we …,” “Why don’t we …,” “The best solution would probably be to …” The list of such introductory word strings is large, but we have found that a semantic grammar can be constructed to detect them in design utterances. An example of a proposal in the project for design of the collaborative architectural software would be, “We were thinking that it would be best to use Voice over IP to let architects and clients communicate.” In this statement the introductory wording, “We were thinking that it would be best to use…” is the first evidence that we are dealing with a proposal. The terms “Voice over IP,” “communication,” “architects,” and “clients” are characteristic of the computer-support design domain. The phrase “Voice over IP” describes a means to accomplish the goal described by the phrase “to let architects and clients communicate.” The words “would be best” imply that the means described is the best of the ways available to accomplish the stated goal. This structure is typical of one common type of proposal, and we have created a semantic grammar that can reliably detect such structures. Figure 1 shows in “outline” form the parse tree produced by our semantic grammar for this proposal statement.
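The grammar itself is not reproduced in this chapter, so the following toy sketch is only our illustration of the underlying idea: match proposal-introducing wording, then pull out the proposed means and the stated goal. The patterns, function name, and output format are hypothetical stand-ins for the real semantic grammar.

import re

# Hypothetical, highly simplified stand-in for a semantic grammar rule:
# PROPOSAL-STATEMENT -> PROPOSAL-INTRO [EMPLOY] MEANS TO-ACHIEVE GOAL
PROPOSAL_INTROS = [
    r"we were thinking that it would be best to",
    r"what if we",
    r"why don't we",
    r"the best solution would probably be to",
]
PROPOSAL_RULE = re.compile(
    r"(?P<intro>" + "|".join(PROPOSAL_INTROS) + r")\s+"
    r"(?:use\s+)?(?P<means>.+?)\s+"
    r"to\s+(?P<goal>.+)",
    re.IGNORECASE,
)

def detect_proposal(utterance: str):
    """Return (means, goal) if the utterance looks like a design proposal, else None."""
    match = PROPOSAL_RULE.search(utterance)
    if match is None:
        return None
    return match.group("means").strip(), match.group("goal").rstrip(". ").strip()

print(detect_proposal(
    "We were thinking that it would be best to use Voice over IP "
    "to let architects and clients communicate."))
# ('Voice over IP', 'let architects and clients communicate')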
Requirements in Arguments on Proposals Once proposals have been identified the next task is to find where users state arguments in reaction to them. First of all we can restrict our search to the users’ utterances, thus eliminating roughly half of the utterances that follow the proposal in the transcript. Arguments about proposals often have introductory wording only found in arguments. The simplest examples are “Yes, because …” and “No, because …” In fact some arguments have introductory wording found only in arguments about proposals. Examples include the following: “That’s not going to work, because …,” “Yeah, I think that’s good idea, because …,” “I like that idea. It will …,” and “The problem with that is …” A crucial point here is that the negative evaluation of voice-based communication uses a criterion variable not found in any of the previously listed requirements: degree of noise. This is a crucial clue that a new requirement has surfaced. As it turns out, this is confirmed by the very next utterance by the user: “OK, so we have an issue: talking. OK?
Figure 1. The parse tree for the sentence, “We were thinking that it would be best to use Voice Over IP to let architects and clients communicate.”
PROPOSAL-STATEMENT
  SUBJECT-THOUGHT
    SUBJECT we
    THINK were thinking
  INTENTIONAL-CONJUNCT that
  GOOD-IF it would be best to
  THING-PROPOSED
    EMPLOY use
    MEANS Voice over IP
    TO-ACHIEVE to let
    GOAL
      USER architects
      CONJUNCT and
      USER clients
      BEHAVIORAL-VP communicate
Figure 2. The parse tree for the sentence, “The problem with that is that you are going to have all these people in the office talking simultaneously and it becomes a cacophony.”
ARGUMENT
  NEGATIVE-ARGUMENT-ON-PROPOSAL
    ARGUMENT-INTRO
      NEGATIVE-ARGUMENT-ON-PROPOSAL-INTRO the problem with that is
    INTENTIONAL-CONJUNCT that
    NEGATIVE-EFFECT-OF-PROPOSAL
      PREDICTION-INTRO you are going to have
      NEGATIVE-PREDICTION
        USER-BEHAVIOR
          USER all these people in the office
          BEHAVIOR talking simultaneously
        CONJUNCT and
        NEGATIVE-CONSEQUENCE
          EXISTS it becomes
          NEGATIVE-PERFORMANCE a cacophony
Noise pollution.” What the user is calling an “issue” here implies a new requirement for the system. Semantic grammars can be used to detect all of these facts.
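A rough, hypothetical sketch of this second step might look as follows: detect argument-introducing wording in a user utterance and flag criterion-like terms that do not appear in the vocabulary of the currently listed requirements. The word lists are invented for the example; the real approach relies on the semantic grammar rather than keyword sets.

import re

# Hypothetical argument-introducing wording (illustrative, not the actual grammar).
ARGUMENT_INTROS = re.compile(
    r"^(yes, because|no, because|that's not going to work, because|"
    r"i like that idea|the problem with that is)")

# Criterion vocabulary drawn from the currently listed requirements (assumed known).
KNOWN_CRITERIA = {"collaboration", "drawing", "communication"}

def candidate_new_requirement(user_utterance: str) -> set:
    """Return criterion-like words in an argument that are absent from the known
    requirements vocabulary; a non-empty result is a clue that a new requirement
    may be surfacing."""
    text = user_utterance.lower()
    if not ARGUMENT_INTROS.search(text):
        return set()
    words = set(re.findall(r"[a-z]+", text))
    # In the real approach the grammar identifies criterion variables; here we
    # crudely approximate with a hand-picked list of candidate criterion terms.
    candidate_terms = {"noise", "cacophony", "disruption"}
    return (words & candidate_terms) - KNOWN_CRITERIA

print(candidate_new_requirement(
    "The problem with that is that you are going to have all these people "
    "in the office talking simultaneously and it becomes a cacophony."))
# {'cacophony'}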
Related Work
We are by no means the first to explore the use of formal NLP techniques in requirements engineering (RE). In fact the field is more than 15 years old. In 1988 Maarek and Berry proposed the use of NLP techniques for requirements extraction (Maarek & Berry, 1988). In 1992 Rolland and Proix argued for the development of CASE tools for RE based on NLP (Rolland & Proix, 1992). By the early 1990s there was enough optimism about the prospects for use of NLP in RE that Ryan felt it necessary to issue a cautionary note (Ryan, 1993) about the limits on using NLP for automating RE. We, like most others, accept the notion that the proper goal for NLP is not the automation of RE processes but rather easing the burden on human requirements engineers. But some researchers, most notably those involved with the Circe project (for example, Gervasi & Nuseibeh, 2002), remain optimistic about the possibility and value of automation. Roughly speaking there seem to be three major NLP approaches being used in RE. One is template matching, another is rule-based parsing, and yet another is probabilistic analysis. These approaches are sometimes used in combination, and sometimes the border between template matching and rule-based parsing is blurred. Thus the REVERE project (Rayson, Garside, & Sawyer, 2000; Sawyer, Rayson, & Garside, 2002) is primarily oriented towards probabilistic analysis, yet it employs some rule-based parsing as well, at least in the sense of identifying parts of speech. The Circe project (Ambriola & Gervasi, 1997) is based on template matching but uses rules of replacement in a manner reminiscent of a grammar. The semantic grammar approach that we have described is strictly a rule-based parsing approach, but one that differs from syntactic parsing in the inclusion of semantic information in the parse rules. This has the distinct advantage of avoiding having to deal with many of the apparent ambiguities that do not turn out to represent real ambiguities in meaning. There is, however, an important argument raised by the participants in the REVERE project that presents a challenge to our purely rule-based approach. These authors point out that rule-based systems are inherently brittle and limited to a subset of natural language. One exception to the rules is all it takes for a rule-based system to break down. The REVERE researchers use this as an argument for a probabilistic approach to NLP. (It is interesting to note that the Circe project mitigates this difficulty by using fuzzy criteria for template matching.) We acknowledge the legitimacy of these arguments and the fact that probabilistic approaches to NLP have proved to be highly successful. There is, nevertheless, reason to pursue the use of rule-based approaches in addition to and as a complement to probabilistic approaches. The reason is that rule-based approaches that include semantics do a better job at identifying the full meaning of many sentences than probabilistic techniques can. What we recommend is that semantically informed, rule-based techniques – such as semantic grammars – be used to extract as much information
as possible from natural language transcripts and other documents. Where such techniques fail, probabilistic techniques should be brought in as a backup. This will provide a graceful degradation strategy that will enable the extraction of the most information from such documents.
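Organizationally, such a graceful degradation strategy amounts to a simple cascade, sketched below with placeholder components; neither function reflects an actual implementation, and the scoring rule is invented purely for illustration.

from typing import Optional

def rule_based_parse(utterance: str) -> Optional[dict]:
    """Semantic-grammar parse; returns a structured result or None when no rule applies."""
    if utterance.lower().startswith("we were thinking"):
        return {"type": "proposal", "text": utterance}
    return None  # rule-based systems are brittle: one unmatched phrasing and they give up

def probabilistic_classify(utterance: str) -> dict:
    """Placeholder for a statistical classifier used only as a backup."""
    score = 0.6 if "because" in utterance.lower() else 0.1  # illustrative scoring only
    return {"type": "argument" if score > 0.5 else "other", "confidence": score}

def analyse(utterance: str) -> dict:
    """Graceful degradation: extract as much as possible with rules, then fall back."""
    parsed = rule_based_parse(utterance)
    return parsed if parsed is not None else probabilistic_classify(utterance)

print(analyse("What if we added a shared whiteboard?"))
# {'type': 'other', 'confidence': 0.1}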
Conclusion and Future Work
Future Work
The examples we have given of our semantic grammar approach only suggest its plausibility. We are still engaged in testing our approach. Preliminary results with a half-dozen design transcripts appear encouraging, but testing is needed with more developers and users in a wider range of projects to confirm these results. Semantic grammars may not be the only viable approach for capturing requirements. Fundamentally different approaches might work as well or better. This topic needs much research. In addition there may be room for a wide range of tools supporting various aspects of iterative elicitation of requirements throughout the software life cycle. These include tools for eliciting, modeling, analyzing, specifying, validating, evolving, and integrating requirements. One thing we have not really dealt with here is the fact that user participation in design will inevitably produce many new problems for managing the development process. These managerial problems are closely intertwined with the process aspects, and adequate tools will be needed to deal with them. In fact a number of large software firms – including Microsoft and Macromedia – are investing significant resources in developing such tools. In addition research-oriented communities in academia and industry are also working on tools for managing collaboration – for example, SnipSnap, a content management system based on Wiki and Weblog technologies (www.snipsnap.org).
Conclusion In this paper we have attempted to show the importance of collaboration between developers and future users of software systems for determining system requirements. While this interaction complicates the development process, it also offers the possibility of increasing software quality. By encouraging this collaboration as early as possible in design, the benefits can be increased while the costs are minimized. While we have focused on collaboration between developers and users, many other types of collaboration play a crucial role in software development, such as collaboration within development teams and collaboration amongst users. These types of collaboration also constitute a potential source of much of the information needed for successful software projects. The techniques we have described here might also be usefully extended to support these other kinds of collaboration. There is much that remains to be
done to understand and deal with collaboration in software development, but there are good reasons for optimism about the potential benefits of research in this field.
References
Ambriola, V., & Gervasi, V. (1997). An environment for cooperative construction of natural-language requirement bases. Proceedings of the Eighth Conference on Software Engineering Environments, 124-130.
Curtis, B., Iscoe, N., & Krasner, H. (1988). A field study of the software design process for large systems. Communications of the ACM, 31(11), 1268-1287.
Fischer, G., Lemke, A. C., McCall, R., & Morch, A. I. (1991). Making argumentation serve design. Human Computer Interaction, 6(3), 393-419.
Gervasi, V., & Nuseibeh, B. (2002). Lightweight validation of natural language requirements. Software – Practice and Experience, 32(2), 113-133.
Grady, R. B. (1999). An economic release decision model: Insights into software project management. Proceedings of the Applications of Software Measurement Conference, Orange Park, FL.
Highsmith, J. (2004). Agile software development ecosystems. New York: Addison-Wesley.
Highsmith, J. A., & Orr, K. (2000). Adaptive software development: A collaborative approach to managing complex systems. New York: Dorset House.
Johnson, J. (2002). Keynote speech, XP 2002 conference, Sardinia, Italy.
Jurafsky, D., & Martin, J. H. (2000). Speech and language processing: An introduction to natural language processing, computational linguistics and speech recognition. Englewood Cliffs, NJ: Prentice Hall.
Kunz, W., & Rittel, H. W. J. (1970). Issues as elements of information systems (Working Paper No. 131). Stuttgart, Germany: Institut für Grundlagen der Planung, Universität Stuttgart.
Larman, C. (2004). Agile and iterative development: A manager’s guide. New York: Addison-Wesley.
Maarek, Y., & Berry, D. M. (1988). The use of lexical affinities in requirements extraction (Technical Rep.). Haifa, Israel: Technion, Faculty of Computer Science.
Martin, R. C. (2002). Agile software development, principles, patterns, and practices. Englewood Cliffs, NJ: Prentice Hall.
Moran, T., & Carroll, J. (Eds.) (1996). Design rationale: Concepts, techniques and use. Mahwah, NJ: Lawrence Erlbaum Associates.
Rayson, P., Garside, R., & Sawyer, P. (2000, April 12-14). Assisting requirements engineering with semantic document analysis. Proceedings of RIAO 2000 (Recherche d’Informations Assistée par Ordinateur, Computer-Assisted Information Retrieval) International Conference, Paris.
Rittel, H. W. J. (1972). On the planning crisis: Systems analysis of the first and second generations. Bedriftsøkonomen, 8, 390-396.
Rolland, C., & Proix, C. (1992). A natural language approach for requirements engineering. Lecture Notes in Computer Science, Vol. 593.
Ryan, K. (1993). The role of natural language in requirements engineering. Proceedings of IEEE International Symposium on Requirements Engineering, 240-242.
Sawyer, P., Rayson, P., & Garside, R. (2002). REVERE: Support for requirements synthesis from documents. Information Systems Frontiers, 4(3), 343-353.
Schon, D. A. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.
Schuler, D., & Namioka, A. (Eds.). (1993). Participatory design: Principles and practices. Mahwah, NJ: Lawrence Erlbaum Associates.
Taylor, A. (2000). IT projects: Sink or swim? The Computer Bulletin, January 2000, British Computer Society.
Chapter XIX
Problem Frames for Sociotechnical Systems
Jon G. Hall, The Open University, UK
Lucia Rapanotti, The Open University, UK
Abstract This chapter introduces Problem Frames as a framework for the analysis of sociotechnical problems. It summarizes the Problem Frames approach, its techniques and foundations, and demonstrates, through theory and examples, how it can be applied to simple sociotechnical systems. The chapter continues with the description of an extended Problem Frame framework that allows the treatment of more general sociotechnical problems. This extension covers social components of a system — individuals, groups or organisations — bringing them within the remit of the design activity. The aim of the chapter is to make the Problem Frames framework more accessible to the software practitioner, especially those involved in the analysis of sociotechnical problems, as these problems have so far received only scant coverage in the Problem Frames literature.
Introduction
By sociotechnical system we mean a collection of interacting components in which some of the components are people and some are technological. In this chapter we focus on the requirements analysis of sociotechnical systems in which some of the technological subsystems are computer-based, these systems forming the largest part of modern software design problems. More precisely, there are two (not necessarily disjoint) sub-classes of sociotechnical systems that we will treat in this chapter. The first sub-class contains those systems in which existing components or sub-systems (that is, domains) are to be allowed, through software, to interact. An example from this first class might be the problem of designing software for the operator of heavy machinery. The larger second class contains those systems for which software, a user interface, and user instruction are to be designed to enable a new process or service. An example of this second class might be the development of a new customer call centre. The use of Problem Frames (PFs) underpins our requirements analysis process. As described in Jackson (1998), PFs are a concretization of the ideas of Michael Jackson and others in the separation of descriptions of the machine and its environment. This separation is generally accepted as being a useful principle for requirements analysis. We will have cause, later in the chapter, in dealing with a more general class of sociotechnical problems, to further detail this separation, but nothing we do compromises its fundamental status. The usual representation of the separation of machine and environment descriptions is the “two ellipse” model, illustrated in Figure 1.
Figure 1. The requirements analysis model
In that figure world knowledge W is a description of the relevant environment; R is the statement of requirements; S is the specification that mediates between environment and machine; M is the description of the machine; and P is the program that, on machine M, implements the specification S. The role of W is to bridge the gap between specification S and requirements R. More formally (Gunter, Gunter, Jackson, & Zave, 2000; Hall & Rapanotti, 2003; Zave & Jackson, 1997), W, S ⊢ R. One of the aims of the PF framework is to identify basic classes of problems that recur throughout software development. Each such class should be captured by a problem frame that provides a characterization for the problem class. Sociotechnical systems are
an important class of problems and so should be representable within the PF framework, possibly with their own (collection of) problem frame(s). In a fundamental sense, of course, the PF framework already deals with sociotechnical systems: problem frames are an attempt to allow customer and developer to come together to match real-world problems and technological solutions. There are many examples in Jackson (2001) as to how this relationship can be facilitated using problem frames. We observe, however, that the application of problem frames to particular sociotechnical problems remains under-explored. Currently some discussion of HCI appears in Jackson (2001), and some analysis appears in Jackson (1998), but otherwise there is little in-depth coverage of how to apply problem frames in this context. In this chapter we show, in some detail, how problem frames can be applied to sociotechnical systems. Our development is threefold. We first show how the problem of representing interaction with (and not just control of) technology can be represented within the PF framework. To do this we introduce two new basic problem frames, the User Interaction Frame and the User Commanded Behaviour Frame, each dealing with the class of user-interaction problems. Secondly we show how architectural artifacts can be used to guide the analysis of sociotechnical problems. To do this we discuss the notion of an Architectural Frame (AFrame for short), a new PF artifact that can be used to guide problem decomposition in the light of particular solution expertise as might, for instance, exist in a software development company. As an exemplar of AFrames and their use, we define and apply an AFrame corresponding to the Model View Controller (MVC) architectural style (Bass, Clements, & Kazman, 1998). Lastly we adapt the PF framework to meet the needs of representing the problems of more complex sociotechnical systems, including those in which, as well as user-machine interaction, user training needs to be addressed. This causes us to consider a reification of the two-ellipse model into three ellipses to represent machine, environment, and user descriptions. Consequently, by interpreting this third ellipse in the PF framework, we discover a new type of PF domain – the knowledge domain – to represent a user’s knowledge and their “instruction” needs, and argue that this leads to a more general PF framework for sociotechnical systems.
Chapter Overview In the major part of this chapter we will use a well-known chemical reactor problem (Dieste & Silva, 2000; Leveson, 1986) as an example for illustration of techniques. Later we briefly describe the design of a “cold-calling” system. The chemical reactor is a sociotechnical system and is representative of the class of operator-controlled safety- (and mission-) critical systems. A schematic for the chemical reactor hardware appears in Figure 2. A statement of the problem is as follows: A computer system is required to control the safe and efficient operation of the catalyst unit and cooling system of a chemical reactor. The system should allow an operator to
issue commands for activating or deactivating the catalyst unit, and to monitor outputs. Based on the operator’s commands, the system should instruct the unit accordingly and regulate the flow of cooling water. Attached to the system is a gearbox: whenever the oil level in the gearbox is low, the system should alert the operator and halt execution.
Figure 2. The chemical reactor schematic (adapted from Dieste & Silva, 2000)
The chapter is organized as follows. The next section develops the problem frame representation of the chemical reactor problem and uses this to recall the basis of problem representation in the PF framework. The section on problem classification provides a small taxonomy of problem classes, including those of relevance to sociotechnical systems. The next section addresses problem decomposition, both in the classical PF framework and through AFrames. The section on requirements analysis models for sociotechnical systems details our separation into three of the various domain descriptions and uses this to motivate a new type of PF domain, the knowledge domain. The final section concludes the chapter.
Problem Representation
We first consider the PF representation of the chemical reactor problem. Within the PF framework problems are represented through problem diagrams. A problem diagram defines the shape of a problem: it records the characteristic descriptions and interconnections of the parts (or domains) of the world the problem affects; it places the requirements in proper relationship to the problem components; it allows a record of concerns and difficulties that may arise in finding its solution. For the chemical reactor, there are a number of domains, including those that appear in the schematic of Figure 2. Also the operator will play an important role in issuing commands to control the catalyst and cooling systems. Placing all of these domains in their correct relationship to each other leads to the problem diagram shown in Figure 3.
Figure 3. The chemical reactor problem diagram
a: {open_catalyst, close_catalyst} b: {catalyst_status, water_level} c: {open_catalyst, close_catalyst} d: {is_open, is_closed} e: {increase_water, decrease_water} f: {water_level} g: {oil_level}
The components are:
• Operation machine: The machine domain, that is, the software system and its underlying hardware. The focus of the problem is to build the Operation machine.
• Other boxes (Cooling System, Catalyst, and so forth): Given domains representing parts of the world that are relevant to the problem.
• Shared phenomena: The ways that domains communicate. These can include events, entities, operations, and state information. In Figure 3, for example, the connection between the Operation machine and the Cooling System is annotated by a set e, containing the events increase_water and decrease_water, and a set f, containing the phenomenon water_level. Phenomena in e are controlled by the Operation machine; this is indicated by an abbreviation of the domain name followed by !, that is, OM!. Similarly the phenomenon in f is controlled by the Cooling System, indicated by the CS!.
• The dotted oval Safe and efficient operation: The requirement, that is, what has to be true of the world for the (operation) machine to be a solution to the problem.
• In the connections between the requirement and the domains, a dotted line indicates that the phenomena are referenced by (that is, an object of) the requirement, while a dotted arrow indicates that the phenomena are constrained (that is, a subject for the requirement). In Figure 3, for instance, the oil level in the gear box is referenced while the cooling system’s water level is constrained.
• Phenomena at the requirement interface (for example, those of sets f or d) can be, and usually are, distinct from those at the machine domain interface (for example,
those of sets c and e). The former are called requirement phenomena; the latter, specification phenomena. The intuition behind this distinction is that the requirement is expressed in terms of elements of the problem, while the specification (that is, what describes a machine domain) is expressed in terms of elements of the solution. Other artifacts that are not represented on the problem diagram but are related to it are domain and requirement descriptions. Such descriptions are essential to the analysis of a problem and address relevant characteristics and behaviours of all given domains, the machine domain, and the requirement. An important distinction in the PF framework is that of separating two types of descriptions: indicative and optative. Indicative descriptions are those that describe how things are; optative descriptions describe how things should be. In this sense, in a problem diagram, given domain descriptions are indicative, while requirement and machine descriptions are optative. In other words things that have to do with the problem domain are given, while things that have to do with the solution domain can be chosen. For instance, in the chemical reaction problem indicative descriptions of the Catalyst, Cooling System, and Gear Box should include characteristics of the domains that are of interest in the specification of the machine, say, the mechanics of opening the catalyst, or of changing the water level in the cooling system, or the oil level in the gear box. On the other hand the requirement should express some constraints on the status and operations of those domains that, when satisfied, result in their safe and efficient operation. Finally the machine specification should describe how we would like the control system to behave and interact with the given domains so that the requirements are met. Indeed not all domains share the same characteristics. There is a clear difference between a cooling system and an operator. In a cooling system there exist some predictable causal relationships among its phenomena. For instance, it could be described as a state machine, with a set of clearly identified states and predictable transitions between them. A domain with such characteristics is known as a causal domain. On the other hand, an operator’s phenomena lack such predictable causal relationships. We can describe the actions an operator should be allowed to do but cannot guarantee that they will be executed in any particular order, or not at all, or that some other (unpredicted) actions will not be executed instead. A domain with such characteristics is known as biddable. The distinction between causal and biddable domains is an important one, as it has ramifications for the type of descriptions we can provide and the assumptions we can make in discharging proof obligations during the analysis of a problem, as we will see in the following sections. Of course there exist other types of domains, each with its own characteristics. An exhaustive domain classification is beyond the scope of this chapter and can be found in, for example, Jackson (1995), Jackson (1998) and Jackson (2001).
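As an informal illustration only (the PF framework defines diagrams, not code), the ingredients of a problem diagram for the chemical reactor (domains tagged as machine, causal, or biddable, together with shared phenomena annotated with their controlling domain) could be recorded along the following lines; the type names and field names are ours.

from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Domain:
    name: str
    kind: str  # "machine", "causal", or "biddable"

@dataclass(frozen=True)
class SharedPhenomena:
    controlled_by: Domain          # the domain whose abbreviation carries the "!" annotation
    observed_by: Domain
    phenomena: FrozenSet[str]

operation_machine = Domain("Operation machine", "machine")
cooling_system = Domain("Cooling System", "causal")     # predictable causal behaviour
operator = Domain("Operator", "biddable")               # may act in any order, or not at all

# Set e (OM!): commands controlled by the machine and observed by the cooling system.
e = SharedPhenomena(operation_machine, cooling_system,
                    frozenset({"increase_water", "decrease_water"}))
# Set f (CS!): state information controlled by the cooling system.
f = SharedPhenomena(cooling_system, operation_machine, frozenset({"water_level"}))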
Figure 4. Simple problem frame taxonomy: adding domain types
Problem Classification One of the intentions of the PF framework is to classify problems. An initial problem classification is given in Jackson (2001). Therein are identified five basic problem frames. They are basic because they represent relatively simple, recurrent problems in software development. In describing the basis of problem frames, the intention is not to be exhaustive in its classification of problems. Indeed there are other frames by Jackson (see, for example, Jackson, 1998) that do not make it into that classification. In this section we present a simple taxonomic development, beginning with the simplest of all problem frames and adding domain types (modulo topology). As each type has different characteristics, the resulting problem frames represent different classes of problems. In doing so we introduce a new basic problem frame, the User Interaction Frame, which is novel in the problem class it represents within the PF framework (but not, of course, within software engineering). The taxonomy is summarized in Figure 4.
Programs The simplest form of problem representable in problem frames is that of producing a program from a given specification. This is illustrated in Figure 5. Although this is a subproblem of all software engineering problems, it is not a very interesting problem class to be analyzed using problem frames: nothing exists outside the machine. Other techniques, such as JSD (Jackson, 1997) or design patterns (Gamma, Helm, Johnson, & Vlissides, 1995), are probably more appropriate to tackle this problem class. Indeed the PF framework does not include a problem frame for this class of problems. Figure 5. Writing programs
Figure 6. Required Behaviour Frame
Embedded Controllers
For a problem class to be purposefully analyzed in PFs, some given domain(s) of interest must exist outside the machine. An interesting problem class is identified in Jackson (2001) by introducing a single causal domain. The problem frame is known as the Required Behaviour Frame, and its characterizing problem is that of building a machine that controls the behaviour of some part of the physical world so that it satisfies certain conditions. Some software engineers may find it easy to identify this problem with that of building an embedded controller (although it does apply also to more general software problems). The Required Behaviour Frame is illustrated in Figure 6. The frame has a topology that is captured by a frame diagram (that of the illustration). The frame diagram resembles a problem diagram, but it also includes some further annotation. This provides an indication of the characteristics of the domains and phenomena involved in problems of the class. For the Required Behaviour Frame:
• The Controlled domain is causal (the C annotation in the figure). Its phenomena are also causal (indicated by C on the arcs) — they are directly caused or controlled by a domain and may cause other phenomena in turn.
• The Control machine has access to causal phenomena of the Controlled domain (in C2) and controls another set of phenomena, which are also shared with the Controlled domain (in C1). Intuitively phenomena in C1 are used by the machine to control the domain, while phenomena in C2 are used to obtain information and feedback on the functioning and state of the domain.
• The requirement, Required behaviour, is expressed in terms of a set (C3) of causal phenomena of the Controlled domain.
When a problem of a particular class is identified, it can be analyzed through the instantiation of the corresponding frame diagram. The instantiation is a process of matching the problem’s domains and their types, as well as its phenomena types, to those of the frame. The result of the instantiation is a problem diagram, which has the same topology as the frame diagram but with domains and phenomena grounded in the particular problem. Let us return to the chemical reactor problem and consider the problem of regulating the water level in isolation. This can be regarded as a required behaviour problem (Figure 7) with some safety requirement on the water level.
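Ignoring diagram topology, which real instantiation must also respect, the matching of domain and phenomena types can be illustrated with the following toy check; the dictionaries and the mapping of sets e and f onto C1 and C2 are our own illustrative assumptions, not part of the PF framework.

# Frame description: the domain types and phenomena types it expects (illustrative).
REQUIRED_BEHAVIOUR_FRAME = {
    "domains": {"Control machine": "machine", "Controlled domain": "causal"},
    "phenomena": {"C1": "causal", "C2": "causal", "C3": "causal"},
}

# The water-regulation sub-problem of the chemical reactor.
WATER_SUBPROBLEM = {
    "domains": {"Operation machine": "machine", "Cooling System": "causal"},
    "phenomena": {"e": "causal", "f": "causal", "water level requirement": "causal"},
}

def instantiates(problem: dict, frame: dict) -> bool:
    """Match domain types and phenomena types of the problem against the frame."""
    return (sorted(problem["domains"].values()) == sorted(frame["domains"].values())
            and sorted(problem["phenomena"].values()) == sorted(frame["phenomena"].values()))

print(instantiates(WATER_SUBPROBLEM, REQUIRED_BEHAVIOUR_FRAME))  # True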
Figure 7. Regulating the water level in the cooling system as a required behaviour problem
e: {increase_water, decrease_water} f: {water_level}
Figure 8. Frame concern for the Required Behaviour Frame
1. We will build a machine that behaves like this, so that...
2. knowing that the controlled domain works like this...
3. we can be sure that its behaviour will be this.
For a problem to be fully analyzed, the instantiation of a problem frame is only the first step of the process. Suitable domain and requirement descriptions (see the section on problem representation) need to be provided and the frame concern needs to be addressed. The frame concern is an overall correctness argument, common to all the problems of the class. It is the argument that must convince you, and your customer, that the specified machine will produce the required behaviour once combined with the properties of the given domains. Each problem frame comes with a particular concern whose structure depends on the nature of the class problem. For the Required Behaviour Frame, the argument is outlined in Figure 8.
User Interaction Another interesting class of problems can be obtained by including a single biddable domain outside the machine, which represents the user of the system. We call the resulting problem frame the User Interaction Frame, and its characterizing problem is that of building a machine that enforces some rule-based interaction with the user. The frame diagram for the User Interaction Frame is given in Figure 9.
Figure 9. User Interaction Frame
Figure 10. Operator/system interaction as an instance of the User Interaction Frame
a: {open_catalyst, close_catalyst}
b: {catalyst_status, water_level}
Figure 11. Frame concern for the User Interaction Frame
1. Given this set of machine phenomena, when the user causes these phenomena (it may or may not be sensible or viable)...
2. if sensible or viable the machine will accept it...
3. resulting in this set of machine phenomena...
4. thus achieving the required interaction in every case.
The Interaction machine is the machine to be built. The User is a biddable domain representing the user who wants to interact with the machine. The requirement gives the rules that establish legal user/machine interactions. The manifestation of the user/machine interaction is through exchanges of causal phenomena (in C1) controlled by the user and symbolic phenomena (in Y1) controlled by the machine. Intuitively the user issues commands in C1 and the machine provides feedback through Y1. The interaction rules specify the legal correspondence of user and machine phenomena.
In the chemical reactor problem we can isolate the operator/machine interaction as a user interaction problem as illustrated in Figure 10, where the requirement establishes some rules on the relationship between operator’s commands and system feedback, say, that an open_catalyst command cannot be issued when the water_level value is below a set threshold. Note that this would be the perspective of a User Interface (UI) designer, whose main concern is the user interacting with a black box system. Indeed, in the wider problem analysis of the chemical reactor problem, machine responses to user commands depend on a faithful representation of the internal state of, say, the catalyst or the cooling system. Figure 11 illustrates the frame concern for the User Interaction Frame.
User Commanded Behaviour
The Required Behaviour and the User Interaction Frames are representative of relatively simple problems, albeit often recurring in software development. It is possible, and indeed likely, that other interesting problem classes could be identified by considering single given domains of some other type (see discussion at the end of the section on problem representation). However we do not pursue this any further here. We look, instead, at what happens when there are two given domains outside the machine. As for the single domain case, other interesting classes of problems emerge. In fact the remaining four basic problem frames introduced in Jackson (2001) are all of this form. If we add a biddable domain to the Required Behaviour Frame, we obtain a User Commanded Behaviour Frame, which is illustrated in Figure 12. Its characterizing problem is that of building a machine that will accept the user’s commands, impose control on some part of the physical world accordingly, and provide suitable feedback to the user. Jackson (2001) introduces a subclass of this frame, the Commanded Behaviour Frame, which does not require the user to receive any feedback. In the chemical reactor problem, we can apply the User Commanded Behaviour Frame to analyze how the catalyst is controlled by the operator. The corresponding problem diagram is given in Figure 13. A possible description of the interaction rules could be as follows. The machine shall allow the user to control the catalyst under the following constraints:
Figure 12. User Commanded Behaviour Frame
Figure 13. Controlling the catalyst as an instance of a user commanded behaviour problem
a: {open_catalyst, close_catalyst} b: {catalyst_status} c: {open_catalyst, close_catalyst} d: {is_open, is_closed}
1. catalyst_status is a faithful representation of the state of the catalyst
2. the initial state of the catalyst is catalyst_closed
3. possible user commands are open_catalyst or close_catalyst
4. state transitions are represented in Figure 14.
Figure 14. State machine model for the catalyst
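A minimal sketch of this state machine, written by us for illustration, follows; it also folds in the interaction rule mentioned earlier that open_catalyst should be refused while the water level is below a set threshold, with the threshold value itself being a hypothetical figure.

class CatalystControl:
    """Minimal sketch of the catalyst state machine (constraints 1-4 above)."""

    WATER_THRESHOLD = 50  # hypothetical value; the chapter only says "a set threshold"

    def __init__(self):
        self.catalyst_status = "catalyst_closed"  # constraint 2: initial state

    def command(self, cmd: str, water_level: int) -> str:
        # Constraint 3: the only user commands are open_catalyst and close_catalyst.
        if cmd == "open_catalyst" and water_level >= self.WATER_THRESHOLD:
            self.catalyst_status = "catalyst_open"
        elif cmd == "close_catalyst":
            self.catalyst_status = "catalyst_closed"
        # Constraint 1: catalyst_status faithfully reports the state as feedback.
        return self.catalyst_status

controller = CatalystControl()
print(controller.command("open_catalyst", water_level=30))  # catalyst_closed (below threshold)
print(controller.command("open_catalyst", water_level=80))  # catalyst_open

Commands that are not sensible or viable simply leave the state unchanged, reflecting the point that a biddable operator cannot be relied on to issue commands in any particular order.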
The frame concern for the User Commanded Behaviour Frame is given in Figure 15. From the figure you will notice that the argument has two parts: satisfying the required behaviour of the domain (from 1 to 4) and providing suitable feedback to the user (5 and 6).
Figure 15. The frame concern for the User Commanded Behaviour Frame
1. Given a choice of commands in the current state, when the user issues this command (it may or may not be sensible).
2. if sensible or viable, the machine will cause these events...
3. resulting in this state or behaviour...
4. which satisfies the requirement...
5. and which the machine will relate to the user...
6. thus satisfying the requirement in every case.
Problem Decomposition Most real problems are too complex to fit basic problem frames. They require, rather, the structuring of the problem as a collection of (interacting) sub-problems. In this section we discuss two ways of decomposing problems within the PF framework. The first, classical decomposition, proceeds through sub-problem identification and problem frame instantiation. The second, our novel approach, combines sub-problem identification with guided architectural decomposition using AFrames.
Classical Decomposition

In classical PF decomposition a problem is decomposed into simpler constituent sub-problems that can then be analysed separately. If necessary, each sub-problem can be decomposed further, and so forth, until only very simple sub-problems remain. Decomposition proceeds through the identification of sub-problems that fit a recognised problem class and the instantiation of the corresponding problem frame. We illustrate the process on the chemical reactor problem.
Figure 16. Raising the alarm as an instance of the information display problem
h:{ring_bell} i:{bell_ringing} g:{oil_level}
Figure 17. Further decomposition of the User Commanded Behaviour sub-problem
a: {open_catalyst, close_catalyst} c: {open_catalyst, close_catalyst}
b: {catalyst_status} d: {is_open, is_closed}
There are three sub-problems:
1. a user-commanded behaviour problem, for the operator to control the catalyst;
2. a required behaviour problem, for regulating the water flow in the cooling system; and
3. a sub-problem to issue a warning (and halt the system) when there is an oil leak in the gearbox.
Addressing sub-problems 1 and 2 means instantiating the corresponding problem frames to derive problem diagrams for each sub-problem. These were depicted in Figures 13 and 7, respectively. The third sub-problem has no standard problem frame to represent it. The closest fit would be the Information Display Frame (Jackson, 2001), but this requires a decision on
how the alarm will be raised. Here we have made an arbitrary choice of introducing a Bell domain, and assume that it will ring when the oil level in the gearbox is below a certain threshold. The resulting sub-problem diagram is shown in Figure 16. Finally we already know from the simple taxonomy in the third section that sub-problem one can be decomposed further, resulting in a required behaviour and a user interaction sub-problem. These are shown in Figure 17.
AFrames

AFrame decomposition complements classical decomposition by providing guidance together with decomposition and recomposition rules. The rationale behind AFrames is the recognition that solution structures can be usefully employed to inform problem analysis. AFrames characterise the combination of a problem class and an architectural class. An AFrame should be regarded as a problem frame for which a "standard" sub-problem decomposition (that implied by an architecture or architectural style) exists. AFrames are a practical tool for sub-problem decomposition that allows the PF practitioner to separate and address, in a systematic fashion, the concerns arising from the intertwining of problems and solutions, as has been observed to take place in industrial software development (Nuseibeh, 2001). Further motivation for, and other examples of, AFrames can be found in Rapanotti, Hall, Jackson, & Nuseibeh (2004). MVC (short for Model-View-Controller; see, for example, Bass et al., 1998) is a way of structuring a software solution into three parts (a model, a view, and a controller) to separate and handle concerns related, respectively, to the modeling of a domain of interest, the visual feedback to the user, and the user input. The controller interprets user inputs and maps them into commands to the model to effect the appropriate change. The model manages one or more data elements, responds to queries about its state, and responds to instructions to change state. The view is responsible for feedback on the model's state to the user. Standard communication patterns (for example, the Observer pattern (Gamma et al., 1995)) apply between the MVC parts. Here we introduce the MVC AFrame as applied to the User Commanded Behaviour Frame. This represents the class of user commanded behaviour problems for which an MVC solution is to be provided.

Figure 18. MVC annotation of the User Commanded Behaviour Frame
Figure 19. Decomposition templates for the MVC AFrame
The intention of using the MVC in the solution space is recorded through an annotation of the machine, as illustrated in Figure 18. Guidance on decomposition is in the form of decomposition templates, which are applied to obtain sub-problem diagrams. The decomposition templates for the MVC AFrame are given in Figure 19. It can be seen from the figure that the original problem is decomposable into two sub-problems, whose machine domains are the View and Controller machines (in the MVC sense). Also, a Model domain is introduced that represents an abstraction of the real-world domain to be controlled. This is a designed domain (Jackson, 2001), that is, one that we have the freedom to design, as it will reside inside the solution machine. The resulting sub-problems are then: that of building a View machine to display the Model's representation of the state of the controlled domain, and that of building a Controller machine that acts on the Model, which will pass on the commands to the controlled domain. In PF terms the Model acts as a connection domain between the real-world domain and the presentation and control subsystems. The application of the MVC AFrame to the sub-problem of controlling the catalyst (see Figure 13) results in the decomposition of Figure 20. We see at least two strengths of AFrames. The first is that they suggest how a problem would need to be restructured for a particular solution form: for instance, in the MVC case, that an abstract model of the catalyst needs to be produced (or, for that matter, that a connection domain between Operator and Gearbox, a Bell, would be needed). The second is that they help the recomposition of sub-problem solutions into the original problem. Recomposition is facilitated by the fact that AFrame decomposition is regularized through the application of the AFrame templates.
Figure 20. MVC decomposition of the user commanded behaviour sub-problem
a: {open_catalyst, close_catalyst} b: {catalyst_status} c: {open_catalyst, close_catalyst} d: {is_open, is_closed} e: {open, closed}
Figure 21. MVC recomposition
For the MVC this is through the identification of the links among its architectural elements. The recomposition diagram for the MVC AFrame is illustrated in Figure 21 and its frame concern in Figure 22. In Figure 21 the recomposition of the model, view, and controller domains follows the MVC architectural rules.
Figure 22. Discharging the correctness argument in MVC recomposition
1. Given a choice of commands in the current state, when the user issues this command (it may or may not be sensible)...
2. if sensible or viable, the machine will cause these events...
3. resulting in this state or behaviour...
4. which satisfies the requirement...
5. and which the machine will relate to the user...
6. thus satisfying the requirement in every case.
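To make the MVC decomposition of Figures 18-21 a little more concrete, the sketch below separates a Model that abstracts the catalyst, a Controller machine that interprets the operator's commands, and a View machine that relays the Model's state back to the operator, with an Observer-style link between Model and View. This is our illustration only; the class and method names are hypothetical, and the connection to the real catalyst domain is left as a stub.

```python
# Illustrative MVC split for the "control the catalyst" sub-problem (Figure 20).

class CatalystModel:
    """Designed domain: an abstraction of the real-world catalyst state."""

    def __init__(self) -> None:
        self._state = "is_closed"   # initial state, as in the earlier analysis
        self._observers = []        # Observer-style links to View machines

    def attach(self, view) -> None:
        self._observers.append(view)

    def apply(self, command: str) -> None:
        # In a full solution this would also drive the real catalyst domain.
        if command == "open_catalyst":
            self._state = "is_open"
        elif command == "close_catalyst":
            self._state = "is_closed"
        for view in self._observers:
            view.update(self._state)


class CatalystController:
    """Controller machine: maps operator commands onto the Model."""

    def __init__(self, model: CatalystModel) -> None:
        self._model = model

    def handle(self, command: str) -> None:
        if command in ("open_catalyst", "close_catalyst"):
            self._model.apply(command)


class CatalystView:
    """View machine: feeds the Model's representation of the state back to the operator."""

    def update(self, state: str) -> None:
        print(f"catalyst_status: {state}")


if __name__ == "__main__":
    model = CatalystModel()
    model.attach(CatalystView())
    CatalystController(model).handle("open_catalyst")   # catalyst_status: is_open
```

In PF terms, CatalystModel plays the role of the designed connection domain, while the Controller and View classes correspond to the two sub-problem machines produced by the decomposition templates.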
A Requirements Analysis Model for Sociotechnical Systems

The consideration of more sophisticated human-machine relationships is our next concern. To be specific, we now wish to look at users' behaviour as being the subject of requirements statements, admitting that users are the source of much of the flexibility in sociotechnical systems. In short, we wish to allow the design of human instruction to be the subject of the requirements engineering process addressed through problem frames, alongside that of the program. Paraphrasing this, we might say that the human, as well as the machine, is to be the subject of optative descriptions. Foundationally this means the separation of the description of the world from that of the human who is the subject of the design.
Figure 23. The extended requirements analysis model
This leads to the reification of the original ellipse model shown in Figure 23. In it we have three ellipses: that for the Human H with knowledge K, that for the Machine M with program P, and that for the remaining Environment W with requirements R. Of course, just as machines outside the design process have indicative descriptions in W, so do humans. With the introduction of the human H, we identify and separate two new areas of interest, which now form explicit foci for design:
• the specification UI, anonymous in the S region in the original model, which determines the Human-Machine interface; and
• the specification I, missing from the original model, which determines the knowledge and behaviour that is expected of the human as a component of the sociotechnical system.
Just as in the original model the description W has the role of bridging the gap between the requirement R and the specification S, in our extension W has the role of bridging the gap between the requirement R and the instruction I, the human-machine interface UI, and the specification S taken together. More precisely, we assert that S, I, UI, and W must be sufficient to guarantee that the requirements of the sociotechnical system are satisfied. More formally: W, S, I, UI |- R
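For comparison, the adequacy condition of the original two-ellipse model is commonly written with only the world description and the specification to the left of the turnstile; in that notation our extension reads as follows (our rendering, not a formula quoted from elsewhere in the chapter):

```latex
W, S \vdash R          % original two-ellipse model
W, S, I, UI \vdash R   % extended three-ellipse model: instruction I and interface UI added
```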
A Problem Frame Interpretation

In the PF framework the machine domain represents the machine for which the specification S must be designed. By analogy a new domain type will be required to represent the human for which the instruction I has to be designed. To this end we introduce into the PF framework the notion of a knowledge domain to represent that domain. In a problem diagram a knowledge domain should be represented as a domain box with a double bar on the right-hand side (to distinguish it from the machine domain).
Figure 24. The general “sociotechnical problem” diagram
Figure 25. Outline problem diagram for the cold-calling system
The most general form of a sociotechnical problem, as a problem diagram, is shown in Figure 24. In the figure both Knowledge and Machine domains are subjects of design, as are their shared user interface phenomena. An example of how a real-world sociotechnical problem could be treated in the new model is that of designing a “cold-calling” system to allow an interviewee to be telephoned and for their responses to certain questions, asked by an interviewer, to be recorded in a database for future analysis. The problem is to design both the technical subsystem (the machine domain) and the instructions that guide the interaction of the interviewer (the knowledge domain) with the interviewee. The interviewee sits firmly in the environment (as a biddable, indicative domain). The interaction with the database is a machine task. The outcome of the design process, in addition to the specification for the technical subsystem, might be a script for the interviewer and the human-machine interface as used by the interviewer. The problem diagram for this example is outlined in Figure 25.
Conclusion

In their classical form problem frames happily represent interactions between a user and a machine, as might be characteristic of simple sociotechnical systems. In this chapter we have presented an enrichment of the PF framework to allow the representation and analysis of more complex sociotechnical systems. To do this we have introduced two new problem frames, the User Interaction and User Commanded Behaviour Frames. Although not exhaustive in their treatment of sociotechnical interaction problems, they will, we hope, provide a sound basis for a richer taxonomy of user interaction within the PF framework. One of the as-yet under-developed areas within the PF framework is the treatment of problem decomposition, in particular from the perspective of how to do it in practice. We are currently exploring the development and use of AFrames. An AFrame offers guidance in problem decomposition on the basis of solution space structures. In this chapter we have shown how basic sociotechnical interaction problems can be decomposed when the target architecture is to be the MVC style. Although both these enrichments are new in the PF framework, they do not move outside of its original conceptual basis in the two-ellipse model of requirements analysis. In contrast, we have seen in this chapter that the development of general sociotechnical systems raises challenges for the PF framework. We have suggested solutions to these challenges in the reification of the two-ellipse model to a three-ellipse version, in which social sub-systems (individuals, groups, and organisations) can also be considered as the focus of the design process. With the introduction of the knowledge domain, the manifestation of this extension in problem frames, we aim to bring sociotechnical system problems under the design remit of the PF framework.
Acknowledgments

We acknowledge the kind support of our colleagues, especially Michael Jackson and Bashar Nuseibeh in the Department of Computing at the Open University.
References

Bass, L., Clements, P., & Kazman, R. (1998). Software architecture in practice. SEI Series in Software Engineering. Addison-Wesley.
Dieste, O., & Silva, A. (2000). Requirements: Closing the gap between domain and computing knowledge. Proceedings of SCI2000, II.
Gamma, E., Helm, R., Johnson, R., & Vlissides, J. (1995). Design patterns. Addison-Wesley.
Gunter, C., Gunter, E., Jackson, M., & Zave, P. (2000). A reference model for requirements and specifications. IEEE Software, 3(17), 37-43.
Hall, J. G., & Rapanotti, L. (2003). A reference model for requirements engineering. Proceedings of the 11th International Conference of Requirements Engineering, 181-187.
Jackson, M. (1995). Software requirements & specifications: A lexicon of practice, principles, and prejudices. Addison-Wesley.
Jackson, M. (1997). Principles of program design. Academic Press.
Jackson, M. (2001). Problem frames. Addison-Wesley.
Jackson, M. A. (1998). Problem analysis using small problem frames [Special issue]. South African Computer Journal, 22, 47-60.
Leveson, N. (1986). Software safety: What, why and how. ACM Computing Surveys, 18(2), 125-163.
Nuseibeh, B. (2001). Weaving together requirements and architectures. IEEE Computer, 34(3), 115-117.
Rapanotti, L., Hall, J. G., Jackson, M., & Nuseibeh, B. (2004). Architecture-driven problem decomposition. Proceedings of the 12th International Conference of Requirements Engineering.
Zave, P., & Jackson, M. (1997). Four dark corners of requirements engineering. ACM Transactions on Software Engineering and Methodology, 6(1), 1-30.
Chapter XX
Communication Analysis as Perspective and Method for Requirements Engineering

Stefan Cronholm, Linköping University, Sweden
Göran Goldkuhl, Linköping University, Sweden
Abstract

In this chapter we challenge the view of information systems as systems for storing, retrieving, and organizing large amounts of data. We claim that the main purpose of information systems is to support the communication that takes place between different actors in a work practice. We describe a communication perspective on information systems and its consequences for performing requirements engineering. In this perspective business documents play a prominent role. The perspective is operationalized into a method, and an example from a case study is used in order to describe the method.
Introduction

Traditionally information systems (IS) are viewed as systems for storing, retrieving, and organizing large amounts of data. The aim of implementing IS has often been to achieve increased efficiency in business administration processes. This old view of IS can be challenged since IS nowadays are an integrated part of the daily work when performing different work activities. We believe that IS are more than a support for storing and retrieving data and claim that one of the main purposes of an IS is to support the communication that takes place between different actors in a work practice. The purpose of this chapter is to describe a communication perspective of IS and its consequences for performing requirements engineering (RE). The communication perspective is operationalized into parts of an RE method. The proposed method is not a comprehensive method for RE. The aim is to cover some aspects of the RE process that unfortunately are too often disregarded. A case study is used to illustrate the perspective and method. The case study concerns development of an IS to support home care service for elderly people. The conclusions are based on both existing theory and empirical findings. The remainder of the chapter is structured as follows: the next section discusses the communication perspective; the section after that defines the concept of a document; we then present communication analysis (CA) as a strategy for RE and describe how to perform CA; the final section presents our conclusions.
The Communication Perspective

We will argue for viewing IS as work practice communication. The theoretical bases for this view are social action theory (for example, Weber, 1978), language action theory (Goldkuhl & Lyytinen, 1982; Habermas, 1984; Searle, 1969; Winograd & Flores, 1986), and conversation analysis (Goldkuhl, 2003; Linell, 1998; Sacks, 1992). One of the main points in Weber's theory of social action is that such action is intentional and performed taking into account the behaviour of other persons. Social actions are performed with social grounds and with social purposes. Using a social action perspective means that it is not acceptable to view IS as a black box with some social and organizational consequences (Dietz, 2001). IS should be perceived as systems for action. Language action theory conceives communication as one type of action. Communication is not restricted to a mere transfer of information. To communicate is to establish interpersonal relationships between the sender and the receiver. Language action theories (for example, Searle, 1969) often distinguish between the propositional content (what is talked about) and the communicative function (what kind of interpersonal relationship is established) of a message. Such communicative relationships involve expectations and commitments between the communicators. In this language action perspective, IS are viewed as sociotechnical systems for action. This view differs from strict representational views of information. A representational
view of information means that designers try to create an “image” of the reality in order to have the analysed piece of reality properly represented in the system’s database. This strict representational view can be challenged from a language action perspective (for example, Goldkuhl & Lyytinen, 1982; Winograd & Flores, 1986). In a language action perspective, IS are not considered as “containers of facts” or “instruments for information transmission” (Goldkuhl & Ågerfalk, 2001). This perspective emphasises what users do while communicating through an IS (Goldkuhl & Ågerfalk, 2001). IS are sociotechnical systems for action in work practices, and such actions are the means by which work practice relations are created. Conversation analysis is an approach for studying the organisation of communication, that is, how utterances are related to each other. There are utterances that are closely linked to each other. Adjacency pairs (Sacks, 1992) consist of a first utterance followed by a second. The first functions as an initiative for the second, which functions as a response (Linell, 1998). Such a second utterance may be dialogically multifunctional. It may not only be a response; it may also function as an initiative for following utterances. From these theoretical bases we propose a communication perspective. People in organisations communicate with each other through oral and written messages. Communication in organisations is performed in order to transfer knowledge and establish commitments and other relations between people (Cronholm & Goldkuhl, 2002a; Goldkuhl & Röstlinger, 2002). This need for communication can be supported by IS. A communication perspective means that information systems are regarded as systems for technology-mediated work practice communication. From this perspective IS are seen as vehicles for people to communicate with each other. An IS is a social and a technical system. It is a social system since it is a system for communication between organisational actors. At the same time it is a technical system using computer hardware and software. This sociotechnical perspective is founded on the concept of IS actability. Actability is defined as an IS’s ability to perform actions and to permit, promote, and facilitate users to perform their actions both through the system and based on messages from the system, in some work practice context (for example, Cronholm & Goldkuhl, 2002b; Goldkuhl & Ågerfalk, 2001; Ågerfalk, 2003).
Documents as Communication Instruments

Documents are information sources and are, for example, used for commitments, agreements, planning, and as reminders. A document could be a sheet of a more formal and structured character. It could also be an informal piece of paper containing hand-written information. Documents can also be computerised. Any form on a user interface can be seen as a document. Documents play an important role in work practices. Documents are something that is produced, used, and changed. To create a document (or parts of it) is to be seen as performing a communicative act. An actor in the work practice not only represents something in a document. He or she also creates certain relationships (commitments,
expectations, and so forth) with other persons when writing something in a document. This is in accordance with language action theory and how it has been adopted in the IS area (see the section on communication perspective). Documents have important characteristics that distinguish them from oral utterances. Documents have persistence. People can read them and save them and read them again. Documents can also be changed. A person can add something to an existing document or change something in it. This means that documents can be results of several communicative acts, which also can be performed by different people. Documents (forms and so forth) in a work practice play important roles when used as communication instruments for performing the work practice. Such documents represent important acts in the work practice, and they are also used to externalise work knowledge, thus being a collective external memory for the work practice. The documents are also carriers of the work language of the practice. Being representations of actions, knowledge, and language in the work practice, documents are not only instruments for performing the work practice, they are also instruments for constituting the practice. Because of these fundamental characteristics, we assert that work practice documents are important means for studying and understanding a work practice (Goldkuhl & Röstlinger, 2002).
Requirements Engineering as Meta Communication

The aim of this section is to describe CA as an RE strategy. In the first sub-section we give an overview and motivate the importance of analysing documents. In the second sub-section a set of questions is proposed. The aim of these questions is to support the analysis of the documents. The questions are structured into a communication model consisting of three related communicative categories: conditions, actions, and consequences.
RE Strategy

RE is a process of creating, eliciting, and stating requirements for future IS. Following the perspective of IS as a sociotechnical system for work practice communication, this implies that RE can be seen as meta communication. RE is communication (through analysis and design) about current and future work practice communication. In order to capture and analyse work practice communication we propose an examination of existing documents. There are many reasons for choosing existing documents as a starting point in an RE process:
• work practices usually have a large number of documents
• a major part of the language/concepts used in the organisation may be represented in the documents
• the documents function as results of action and as conditions for subsequent actions
• the language/concepts in the work practice documents are familiar to the users
The last point is important in relation to the strategy for RE. In order to establish a participative RE process with high involvement of the users there is a need to create appropriate conditions for such participation. We start working with what is known and familiar to the users. The documents used in the work practice are usually well known to the users. A study of such documents will reveal the action and communication logic of the work practice. This will be done in ways that are comprehensible to both users and analysts. Interviews are recommended as a complementary eliciting technique. Interviews may have several purposes. One purpose is to gather initial information about the work practice studied. Such an introductory understanding and overview of the work practice support the analysis of existing documents. Interviews can also be used as a support for in-depth studies of documents. Such interviews can be used to explicate what seems unclear or ambiguous from studying the documents. From such an understanding of current practice it is possible to move on to articulating the requirements for a new system. A new system will probably mean a development of the work practice documents, and this may also entail a shift of media. Such a change of media will imply the introduction of IT-based (information technology-based) screen documents (user-interfaces) in the new system. The design of future documents will, of course, be more abstract to the users than examining the current documents. It is important that this document design does not become too abstract and incomprehensible for the users. An active use of prototypes (Ehn, 1988) can assist in making the future design more concrete. A retroactive link back to the existing documents (and their contents and functions) may support the accountability and comprehensibility of the new documents to all actors involved in the RE process (Cronholm & Goldkuhl, 2002a). The RE process that we propose is divided into three iterative phases: describe – examine – redesign.
Communication Analysis of Documents

Important parts of the RE process will consist of a communicative analysis of documents within the work practice. This means analysing existing documents as well as future IT-based documents. This analysis of documents and concepts is supported by a set of questions and some modeling techniques. Due to limited space we point out references to where the modeling techniques used are thoroughly explained. The formulations of the questions are derived from the theory discussed in the section on the communication perspective and also inspired by qualitative analysis (Strauss & Corbin, 1998).
Figure 1. Communication model (communicative conditions: creation, sender; communicative actions: content, communicative functions, media; communicative consequences: consequential actions, receiver)
As can be seen below, some of the questions overlap. This is a kind of triangulation, and therefore we view it as a strength (Denzin, 1978). The purpose of asking these questions is to improve the communication in the work practice. We have structured the questions in a communication model consisting of three related communicative categories: conditions, actions, and consequences (see Figure 1). Each of these categories is divided into sub-categories, and each sub-category consists of several questions to be asked. Below we present the proposed questions for each (sub-)category. The questions are generic in the sense that they should be asked during the analysis of existing documents as well as of future IT-based documents.
Communicative Conditions

Creation:
• When is the information created?
• What are the circumstances for creating the information?
• Why is the information communicated?
• What is needed in order to create information in the document?
• What kinds of work practice rules govern the creation and use of the document?

Sender:
• Who is the communicator?
• Is there anyone mediating (transferring) the information/document?
Communicative Actions

Content:
• What is communicated? What is the content of the document?
• What are the meanings of different concepts?
• Is the terminology comprehensible and well known?

Communicative functions:
• What communicative functions does the document carry?
• What kind of communicative relations are created between sender and receiver?
• Are communicative functions expressed explicitly?
• Is the document a response to preceding actions (initiatives) within the work practice?

Media:
• What kind of media is used for the document?
• How is the document stored, accessed, retrieved, used, and changed?

Communicative Consequences

Consequential actions:
• What actions are taken based on the information in the document?
• Is there a clear initiative-response relation between the document and its consequential actions?
• For what purposes is this information used?
• Might there be any possible side effects of the document utilisation?

Receiver:
• To whom is the information in the document addressed?
• Are there actors updating (changing) the document?
• What kind of knowledge of the receiver is presupposed in the communication?
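For analysts who prefer to work through the checklist systematically, for example in a small support tool or a spreadsheet export, the categories and questions can be captured as a plain data structure. The sketch below is ours and purely illustrative; only a few of the questions are reproduced, and names such as blank_answer_sheet are hypothetical.

```python
# Sketch: the CA checklist as a data structure an analyst could fill in per document
# and per communicative situation. Only a few of the questions from the lists above
# are reproduced here; the full checklist would simply extend these lists.

CA_QUESTIONS = {
    "Communicative conditions": {
        "Creation": [
            "When is the information created?",
            "Why is the information communicated?",
            # ... remaining Creation questions from the list above
        ],
        "Sender": ["Who is the communicator?"],
    },
    "Communicative actions": {
        "Content": ["What is communicated? What is the content of the document?"],
        "Communicative functions": ["What communicative functions does the document carry?"],
        "Media": ["What kind of media is used for the document?"],
    },
    "Communicative consequences": {
        "Consequential actions": ["What actions are taken based on the information in the document?"],
        "Receiver": ["To whom is the information in the document addressed?"],
    },
}


def blank_answer_sheet(document: str, situation: str) -> dict:
    """One empty entry per question, to be filled in during the examination."""
    return {
        (document, situation, question): ""
        for subcategories in CA_QUESTIONS.values()
        for questions in subcategories.values()
        for question in questions
    }


sheet = blank_answer_sheet("Morning tasks", "Choosing a task")
```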
How to Perform Communication Analysis

The aim of this section is to illustrate CA. The illustrations used are from a case about home care service. First we briefly present the case study. Second we describe one representative document from the home care service and the communicative situations where this document is used. Third we present how we have examined this document with CA. Fourth we give an example of how the document could be redesigned in a future IT-based solution. Finally we reflect upon some of the experiences achieved from using CA.
Briefly About the Case

The studied case is a home care unit. The home care unit helps elderly people in their daily life. The major tasks of the home care are to help the elderly with daily hygiene, minor medical tasks, cleaning, doing laundry, and so forth. In order to communicate information and structure their work, the home care assistants use a number of self-made as well as pre-printed documents (for example, journals, diaries, note pads, schedules, and so forth). These self-made documents are used for planning and carrying out tasks. The documents are also used for transferring knowledge between the assistants. Each client has individual needs, and therefore the tasks that will be performed vary. A work practice goal is that a maximal individualisation of care should be given. The personnel consist of two directors and a number of assistants. The case study was performed in a medium-sized Swedish local authority. Researchers, assistants, and directors participated actively in the case study. Clearly there is a lot of communication going on in the home care unit. There is communication between different roles, such as between planners and performers, between directors and assistants, and between assistants and clients. There is also communication between actors of the same role (assistant to assistant, director to director). For example, the assistants work in shifts. This means that the day shift has to hand over information about clients to the night shift and vice versa. The goal that a maximal individualisation of care should be given for each client demands a high communication quality. We consider this work practice as a complex sociotechnical system. The personnel can be classified as computer novices. They have little experience of interacting with computers. The participants in the ISD process from the home care unit can be classified as novices. It was their first participation in an RE project.
Describing the Document "Morning Tasks" and Its Communicative Situations

The document that we have chosen for illustration is labelled "Morning tasks" (see Figure 2). The reason for choosing this document is that it is frequently used in the assistants' daily work with clients.
Figure 2. Morning tasks

Apartment | Time  | Name      | Task
1414      | 07:15 | Roy       | Visiting the bathroom (Monday - Wednesday - Friday)
1406      | 07:30 | Fanny     | Dressing, visiting the bathroom (Tuesday - Thursday)
203       | 07:30 | Douglas   | Make the bed, medicine
108       | 07:30 | Elisabeth | Hygiene, dressing
204       | 07:45 | Ada       | Eye drops, change socks
...       | ...   | ...       | ...
The document consists of four columns: 1) the apartment number, which informs the assistant about the client's address; 2) a time, which informs them when to visit the client; 3) a name, which informs them who to visit; and 4) a task description, which informs them what to do. The tasks can be viewed as agreed commitments to the clients. First, we have identified three communicative situations where the document is used (see Figure 3):
• Planning tasks
• Choosing tasks
• Resetting the document
In order to understand each situation, we have used a diagram. In the diagram we have for each situation modeled how the sender, the receiver, and the actions are related to the document. The diagram also shows how the three situations are related to each other. In the first communicative situation a scheduler plans the tasks. New or changed tasks are manually added (hand-written) to the document. The scheduler informs the performers when there are new or changed tasks by manually adding them to the document. The document is placed in a plastic folder. The procedure for choosing a task is the following. The home care assistant reads the document in order to locate the next appropriate task. Before the home care assistant visits the client she simply draws a line with a marker pen on the plastic folder over the actual task. The aim of drawing this mark is to inform other assistants that this task is being performed. After the marking the home care assistant performs the task. When the home care assistant returns from the visit, she reads the document in order to locate the next appropriate task. The next day the administrator rubs out all the markings. The administrator is "resetting" the document so it can be re-used from one day to another. As you can see in Figure 3, there are different communicative situations consisting of different roles communicating with each other. The document also has different communicative functions depending on the situation. In the "planning-situation" the communicative function is to inform performers about the tasks that should be carried out.
Figure 3. Document Activity Diagram: current work practice communication (the Scheduler, the Performers, and the Administrator act on the document "Morning tasks", kept on paper in a plastic folder, in the planning, choosing, and resetting situations)
In the "choosing task-situation" the communicative function is to inform performers of which tasks are performed and which are not. In the "resetting-situation" the communicative function is to inform performers about the tasks for the next day.
Examine the Situation of Choosing a Task

In the following examination we have chosen the communicative situation "choosing a task." The reason for choosing this situation is that it is a limited but rich example for illustrating how we have used CA. In the examination we follow the communication model presented above (see the section on communication analysis of documents). The section ends with a summing up of identified problems.
Examination

Communicative Conditions

Creation:
What is needed in order to create information in the document? Needed information for choosing a task is planned tasks and information about which tasks already have been performed. Based on this information the assistants create new information. The assistants choose an unperformed task and mark the task as performed.
When is the information created? The marking of a chosen task is done just before the assistant leaves the office to visit the client. However this marking is not dated. The reason for not using a date is that the same document is used every day. One problem identified with this routine is that there is no way to track which tasks actually have been carried out. This means that there is a problem of quality assurance. There is no possibility to trace what actually has been done or not for the clients.
Why is the information communicated? The information (the marking) is communicated in order to inform other performers about which tasks have been carried out and should therefore not be chosen by other assistants.
What are the circumstances for creating the information? The marking takes place when the assistants are on their way to a client, not when the task is carried out. Sometimes the assistants are interrupted by other duties, and this interruption means a risk for an inconsistency between the aim of the marking and tasks that actually have been carried out. This problem means that there is a need for identifying the status of each task. Possible statuses are not performed, started, and performed.
What kinds of work practice rules are governing the creation and usage of the document? Each client has individual needs, and therefore the tasks that will be performed for each client vary. A work practice goal, derived from the Social Welfare Law, is that a maximal individualisation of care could be given.
Sender/Communicator:
Who is the communicator? Who the communicator is could not be read from the document. In this case the communicator is the performer of a task. The performer is not visible in the document, and the fact that all the markings are rubbed out from one day to another makes it hard to follow up who was responsible for performing the task. In other words, the possibility to trace “who has said what” could not be read from the document. In order to vary tasks over time among the assistants, they plan for rotation. However this rotation is not always followed. Sometimes the planned performer is the actual performer and sometimes not. This makes it even harder to trace the actual performer. One reason that the communicator should be written in the document is if there are any complaints from the clients or from the relatives of the clients. Omitting the communicator means that the home care unit has a problem with identifying who has done what.
Is there anyone mediating (transferring) the information/document? No.
Communicative Actions

Content:
What is communicated? What is the content of the document? The information that is communicated consists of apartment number, time, first name, and the task. However some concepts are missing. Sometimes the tasks are not carried out, since the client could be in hospital or visiting relatives. This means that there is need for a column that informs about why a task need not be performed. There is also a need for identifying the status of a task (see question “What are the circumstances for creating the information?”). The concept of status is needed. Other missing concepts are performer and planned performer (see question “Sender”).
What are the meanings of different concepts? The apartment number is present in order to be able to locate the client. The time informs the home care assistant when to visit the client. The task column informs the home care assistant what to do. When studying the document closer, you could see that there also are weekdays noted in the task column. This extra information informs the assistants about which weekdays the actual task should be carried out. In order to understand the meaning of occurring concepts and how they relate to each other, we have modeled the concepts in the class diagram (for example, Kruchten, 1999). This model is, however, not illustrated in the chapter.
Is the terminology comprehensible and well known? The terminology for the document is consistent and comprehensible apart from that weekdays occur in the task column. This need of writing weekdays in the task column is a sign of a weak conceptual modeling. Since the tasks vary among weekdays, it is not adequate to inform about hours and minutes in the time column. There are specific efforts that vary among the weekdays. These specific and varying efforts are hard to manage, since there is not enough space in the document.
Communicative functions:
What communicative functions does the document carry? The document has several functions. One function is a planning (directive) function. This means that the document should inform the assistants about which tasks should be performed. Another function is a reporting function. This means that the assistants report which tasks have been carried out.
Are communicative functions expressed explicitly? The planning function could be expressed more explicitly (see question “Is the terminology comprehensible and well known?”). The reporting function is not clear since the task is marked when the assistant is on her way to a client. The receiver interprets the marking as if the task is performed.
What kind of communicative relations are created between sender and receiver? When the home care assistant draws a line with a marker pen, she commits herself to the performance of the task. At the same time she informs other assistants that they should not perform this task. This means that there is a relation created between a sender and a receiver. But the sender is not made explicit (you cannot read the sender from the document) and therefore not visible. Sometimes there is a need to be more informed about a specific client. This information can be obtained by asking colleagues. The omission of the sender makes it harder to identify the colleague who has the specific information about a specific client. In Figure 2 we have modeled the actual communicative situation. That means that the relation between the sender and the receiver is made explicit (see Figure 2).
Is the document a response to preceding actions within the work practice? The document is a response to the preceding action of planning tasks. In order to identify the order between different communicative situations and actions, we have used action diagrams (Goldkuhl & Röstlinger, 2003) that are not included in this chapter.
Media:
What kind of media is used for the document? A paper document in a plastic folder is used. Changes in the document are written by hand. The problem with hand-written changes is that they could be hard to read. They could also be hard to understand due to lack of formality.
How is the document stored, accessed, retrieved, used, and changed? The document is placed on the assistants’ desk. Data is accessed and used by reading the document. Data is changed and added by hand. The data about performed tasks is not stored.
Communicative Consequences

Consequential actions:
What actions are taken based on the information in the document? The receiver identifies which tasks are not performed, chooses one unmarked task, and performs the chosen task.
Is there a clear initiative-response relation between the document and its consequential actions? Yes, the assistant responded by performing the task.
For what purposes is this information used? In order to identify which task has been carried out and which has not.
Might there be any possible side effects of the document utilisation? If something unusual has happened to the client, a note is made in the client's journal.
Receiver:
To whom is the information in the document addressed? The information is addressed to other assistants.
Are there actors updating (changing) the document? It is important to understand that the assistant alternates between the roles of sender and receiver. In the current situation the assistant in the role of receiver reads information from the document, and in the role of sender the assistant updates the document (marking a task).
What kind of knowledge of the receiver is presupposed in the communication? The receiver must understand the meaning of the marking. As discussed above the marking means only an intention to visit a client. Sometimes the home care assistant is interrupted by other duties on her way to a client, and she may not fulfil her intention, at least not immediately.
Summing up the Identified Problems

In order to present a summary of the identified problems, we use a problem list. When many problems occur, a problem diagram can be used that shows how the problems are related to each other (Goldkuhl & Röstlinger, 2003).

P1. The information created in the document is not dated (hard to follow up).
P2. There is a risk of inconsistency between the markings and the tasks that have actually been carried out.
P3. The communicator (the scheduler) is not visible.
P4. The receiver (a possible performer) cannot read the sender (an actual performer) from the document.
P5. The performer of a task is not visible.
P6. Changes in the document are written by hand (hard to read, lack of formality, hard to manage tasks that are occasional).
P7. Reasons for not performing a task are omitted.
P8. Weekdays occur in the task column.
P9. The communicative function is not clear since the task is marked when the home care assistant is on her way to a client. The receiver interprets the marking as if the task has been performed.
Redesign: A Future IT-Based Document "Choice of Tasks"

Based on the problems identified from the examination of the document "Morning tasks" we have proposed a redesign. The original paper-based document is divided into three IT-based documents that correspond to the communicative situations (see Figure 4):
• Plan tasks
• Plan performer
• Choose and report task
In this chapter we illustrate the situation of choosing and reporting tasks (see Figure 5). Since the aim of this chapter is to present a communication perspective on IS, we will just briefly point out the improvements made to the document examined.
• A date stamp is added in the upper left corner that relates to each task (P1).
• Three different statuses are added for a task: not done, started, and done. The assistants register the status for each task by clicking on the button "Register task as…" (P2, P9).
• The communicator has been made easily accessible. In the lower left corner there is a button labelled "Open plan tasks." A click on this button opens a new document from where the scheduler of a task can be read (P3).
Figure 4. Document Activity Diagram: future work practice communication (the Scheduler and the Assistants interact through three IT-based forms, "Planning tasks", "Task schedule", and "Planning performer", in the planning tasks, planning performers, choosing, and reporting situations)
Figure 5. Proposed IT-based solution
• The columns planned performer and (actual) performer are added. The default value for the actual performer is the planned performer. This actual performer can be changed (P4, P5). This means that the performers are visible and that all performed tasks are related to a specific performer.
• An IT-based solution eliminates the problem of hand-written information. It also reduces the problem of limited space. This means that information that supports maximal individualisation of care can be added. It also supports access to stored data. The upper part of the document consists of possibilities to access different subsets of tasks. For example, changing the date means that another selection of tasks will be presented. Changing the time from morning to evening will also show another subset of tasks (P6).
• The column "Task is not done, reason" is added to the document (P7).
• A date and a time stamp are used. Further, the content of a task is just outlined. Detailed information about a task can be obtained either by double-clicking on the task or by clicking on the button "Show task details" (P8).
A data-structure sketch of the task record behind this redesigned form is given below.
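As a rough indication of the information the redesigned form would have to carry, a task record covering the improvements listed above might be structured as follows. This sketch is ours rather than part of the chapter; all class and field names are hypothetical.

```python
# Sketch of a task record behind the proposed "Choice of tasks" form.
# Field names are hypothetical; they mirror the improvements P1-P9 listed above.

from dataclasses import dataclass
from datetime import date, time, datetime
from enum import Enum
from typing import Optional


class TaskStatus(Enum):          # P2, P9: explicit task status
    NOT_DONE = "not done"
    STARTED = "started"
    DONE = "done"


@dataclass
class HomeCareTask:
    task_date: date              # P1: every task occurrence is dated
    planned_time: time
    apartment: str
    client_name: str
    task_outline: str            # P8: outline only; details shown on demand
    scheduler: str               # P3: the communicator (scheduler) is visible
    planned_performer: str       # P4, P5: planned performer...
    actual_performer: Optional[str] = None    # ...defaults to the planned performer
    status: TaskStatus = TaskStatus.NOT_DONE
    reason_not_done: Optional[str] = None     # P7: reason when a task is not performed
    last_updated: Optional[datetime] = None   # P6: changes are recorded, not hand-written

    def register_status(self, performer: str, status: TaskStatus,
                        reason: Optional[str] = None) -> None:
        """Corresponds to the 'Register task as ...' button in Figure 5."""
        self.actual_performer = performer
        self.status = status
        self.reason_not_done = reason
        self.last_updated = datetime.now()
```

Keeping one such record per task occurrence and per date would directly address the traceability problems (P1-P5), since every status change is stamped with a performer and a time.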
Conclusion

Using CA and the proposed questions as driving forces for RE has revealed several problems. For example, the simple questions "When is this document created?" and
"Who is the communicator?" have revealed a major problem of quality assurance. The first question identified a problem of traceability of what has been done for a client and when it has been done. The second question identified that the communicator is omitted in the document. This means that it is hard to trace the responsibility for a performed task. According to the theory of actability the communicator should be visible (for example, Ågerfalk, 2003). All of the questions proposed have in one way or another supported the discovery of problems. The theoretical basis used has supported the RE process. Language action theory and conversation analysis have contributed to viewing IS as communication systems. This means that there has been support for establishing explicit relationships between different actors and between different roles in the business (between senders and receivers of information) and support for identifying what should be communicated and when. This approach recommends starting with what is known and an analysis of existing documents. In a familiar situation the participating users would not be novices; they would be experts. A possible problem with this recommendation might be that there is a risk of focusing too much on existing practices and not being open for new, innovative ideas. This approach does not exclude the possibility of incorporating innovative solutions. A professional designer should have the competence and responsibility for being sensitive and suggestive of new alternative solutions when there is a risk of a too-limited focus. Starting with what is known can be seen as an objection against the Business Process Reengineering (BPR) argumentation about starting development with a "clean slate" (Hammer & Champy, 1993). In order to create radically new business processes, one should, according to classical BPR, think away the current processes. We doubt that BPR is a proper approach when involving novice users. There are major differences between CA and popular object-oriented approaches. We emphasise IS as communication systems. Object-oriented approaches (for example, Rational Unified Process (Kruchten, 1999) and OOA&D (Mathiassen, Munk-Madsen, Nielsen, & Stage, 2000)) seem to focus too early in the RE process on concepts like classes and objects. Identifying classes and objects is important, but more fundamental is to analyse the work practice communication. The handling of objects and classes in an IS is a consequence of the desired communication. Our perspective and method can be used together with other RE methods, object-oriented as well as other methods. The communication perspective guides analysts and users to focus on communication issues such as documents, work practice language, communicative actions, and actors. Our method for CA is not a comprehensive method for RE. It only covers some aspects of the RE process; however, they are very important and unfortunately too often disregarded. CA, as described here as an RE method component, can and should be used together with other RE methods. We think, for example, that several of the notations suggested in object-oriented approaches could be used together with CA. We think that use cases (Jacobson, Christerson, Jonsson, & Overgaard, 1992) can be used in order to identify different communicative situations. Another notation that can be used is the State Transition Diagram. Such diagrams seem promising in order to identify which possible communicative actions can be performed in a certain state.
A third object-oriented notation is Class
Diagram (for example, Kruchten, 1999). Class diagrams have been used in this case study (however, not exemplified) in order to understand the meaning of occurring concepts and how they relate to each other.
References
Cronholm, S., & Goldkuhl, G. (2002a, April 10-12). Document-driven systems development – An approach involving novice users. Proceedings of the 7th Annual Conference of the United Kingdom Academy for Information Systems (UKAIS), Leeds, UK.
Cronholm, S., & Goldkuhl, G. (2002b, September 12-14). Actable information systems – Quality ideals put into practice. Proceedings of the 11th Conference on Information Systems Development, Riga, Latvia.
Denzin, N. K. (1978). The research act. New York: McGraw-Hill.
Dietz, J. (2001). DEMO: Towards a discipline of organization engineering. European Journal of Operational Research, 128(2).
Ehn, P. (1988). Work-oriented design of computer artifacts. Stockholm: Arbetslivscentrum.
Goldkuhl, G. (2003). Conversational analysis as a theoretical foundation for language action approaches? Proceedings of the 8th International Working Conference on the Language Action Perspective (LAP2003), Tilburg.
Goldkuhl, G., & Lyytinen, K. (1982). A language action view of information systems. Proceedings of the Third International Conference on Information Systems, Ann Arbor, MI.
Goldkuhl, G., & Röstlinger, A. (2002). The practices of knowledge – Investigating functions and sources. Proceedings of the 3rd European Conference on Knowledge Management (3ECKM), Dublin.
Goldkuhl, G., & Röstlinger, A. (2003). The significance of workpractice diagnosis: Socio-pragmatic ontology and epistemology of change analysis. Proceedings of the International Workshop on Action in Language, Organisations and Information Systems (ALOIS-2003).
Goldkuhl, G., & Ågerfalk, P. (2001). Actability: A way to understand information systems pragmatics, in coordination and communication. In K. Liu, et al. (Eds.), Using signs: Studies in organisational semiotics – 2. Boston: Kluwer Academic Publishers.
Habermas, J. (1984). The theory of communicative action 1: Reason and the rationalization of society. Cambridge: Polity Press.
Hammer, M., & Champy, J. (1993). Reengineering the corporation: A manifesto for business revolution. London: Nicholas Brealey.
Jacobson, I., Christerson, M., Jonsson, P., & Overgaard, G. (1992). Object-oriented software engineering: A use-case driven approach. Addison-Wesley.
Kruchten, P. (1999). The Rational Unified Process: An introduction. Reading, MA: Addison-Wesley.
Linell, P. (1998). Approaching dialogue: Talk, interaction and contexts in dialogical perspectives. Amsterdam: John Benjamins.
Mathiassen, L., Munk-Madsen, A., Nielsen, P. A., & Stage, J. (2000). Object-oriented analysis & design. Marko Publishing House.
Sacks, H. (1992). Lectures on conversation. Blackwell.
Searle, J. R. (1969). Speech acts: An essay in the philosophy of language. London: Cambridge University Press.
Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory. Beverly Hills, CA: Sage Publications.
Weber, M. (1978). Economy and society. Berkeley, CA: University of California Press.
Winograd, T., & Flores, F. (1986). Understanding computers and cognition: A new foundation for design. NJ: Ablex Publishing Corporation.
Ågerfalk, P. J. (2003). Information systems actability: Understanding information technology as a tool for business action and communication. Doctoral dissertation, Linköping University, Linköping, Sweden.
About the Editors
José Luis Maté is a full professor of computer science at the Universidad Politécnica de Madrid (UPM), Spain. He has served as vice-provost of the university and as dean of the School of Computer Science. He received his telecommunications engineering degree and PhD from UPM. His work on information system design is one of his most prominent activities; in this area he advises several Spanish and international institutions, including the upper and lower houses of the Spanish Parliament. He was designated by the Spanish government as a member of the Spanish Y2K Commission. He is the author of several books and papers on the integration of software and knowledge engineering and on learning environments.
Andrés Silva is a lecturer in software engineering at the Universidad Politécnica de Madrid (UPM), Spain, where he is responsible for teaching requirements engineering. He has three years of industrial experience in software development and has worked at the Joint Research Centre (JRC) of the European Commission. In 2001 he received a PhD in computer science from UPM. His research interests are requirements engineering and knowledge management. He is the author of several conference and journal papers in requirements engineering.
About the Authors
Juan Pablo Carvallo is a doctoral candidate at the Universitat Politècnica de Catalunya (UPC), Spain. He received his degree in computer science from the Universidad de Cuenca, Ecuador. His interests are COTS-based systems development and COTS selection, evaluation, and certification. He is currently a member of the GESSI group at the UPC. His research interests are selection of COTS components, quality model construction, and component-based software development. He has been nominated for the best paper award at ICCBSS’04.
Juan Ares Casal is director of the Department of Information and Communications Technologies of the University of A Coruña (Spain). He is an associate professor and co-director of the Software Engineering Laboratory at the university. His main research interests in computer science include conceptual modeling, knowledge management, and software process assessment. He has worked as a director and consultant in several organisations, including Norcontrol Soluziona and Arthur Andersen. He has a BS and PhD in computer science. He is the editor of several books and the author of numerous chapters and refereed publications in software engineering.
Chad Coulin is a PhD candidate and member of the Requirements Engineering Research Laboratory in the Faculty of Information Technology at the University of Technology, Sydney (Australia). He holds a BEng (honors) in microelectronics and is currently working on the development of new and innovative methods and tools to support requirements elicitation for software-based systems. He has worked for a number of years in both the U.S. and Europe as a product and technical manager for leading international software development companies.
Stefan Cronholm, assistant professor of information systems development at Linköping University (Sweden), earned his PhD from Linköping University (1998). He is co-director of the Swedish research network VITS, consisting of nearly 40 researchers at six Swedish universities. Areas of interest other than requirements engineering are information systems evaluation, communication perspectives, and interpretive and qualitative research methods. He is currently developing a theory of evaluation of information systems. Recent works on qualitative research methods are an analysis of Grounded Theory in use and a contribution to the development of Multi-Grounded Theory (a combination of induction and deduction). For more information, contact [email protected] or visit www.ida.liu.se/~stecr.
Angélica de Antonio has been a faculty member since 1990 in the Languages, Systems, and Software Engineering Department (of which she is currently sub-director) at the Technical University of Madrid (UPM) (Spain), where she has also co-ordinated the doctoral program since 2000. De Antonio has been director of the Decoroso Crespo Laboratory at UPM since 1995, where she has led several R&D projects in the areas of intelligent tutoring systems, e-learning, virtual environments, and intelligent agents. De Antonio was a resident affiliate at the SEI (Carnegie Mellon University) during 1995. From 1991 to 1995 she was a researcher at the Artificial Intelligence Laboratory (UPM) and assistant director of the SETIAM section of CETTICO (Center of Technology Transfer in Computer Engineering), specializing in the transfer of computer technologies to assist the disabled.
Christian Denger studied computer science at the University of Kaiserslautern with a minor in economics. He received his master’s in computer science in 2002. Since then he has worked as a scientist at the Fraunhofer Institute for Experimental Software Engineering in Germany. His research interests are software inspections in the context of defect cost reduction approaches in early development phases and the combination of quality assurance techniques in the context of embedded systems. Currently he is involved in several German and international projects as a team member and project leader and is pursuing a PhD from the University of Kaiserslautern.
Stefan Dietze studied business information systems at the University of Cooperative Education in Heidenheim, Germany (1998-2001). As part of his studies he worked for SoftM Stuttgart GmbH, a developer of integrated business management software, and for ID-Media in Berlin and Zurich, a multimedia service provider. After graduating with a German diploma in business information systems and with a BA (honors) in business administration, he started working as a research associate at the Fraunhofer Institute for Software and Systems Engineering in Berlin in 2001. There he is engaged in different research activities as well as in consultation projects for industry. His main research focuses are in the fields of software engineering processes and process modeling in general. He is currently writing his doctoral thesis on collaborative open source software development processes.
Jörg Dörr received his master of science in computer science from the University of Kaiserslautern, Germany. He is working as a scientist, consultant, and business area manager for the domain of secure software for infrastructure facilities and services at the
Fraunhofer Institute for Experimental Software Engineering, Germany. His work areas include requirements engineering and agile software development, more specifically non-functional requirements and requirements engineering for ad-hoc systems such as ambient intelligence systems. He is currently involved in several international and national research and transfer projects.
Xavier Franch is an associate professor at the Universitat Politècnica de Catalunya (UPC), Spain. He received his PhD in informatics from UPC (1996). He has been a principal and co-investigator of several funded research projects. He is currently the leading investigator of the GESSI (Software Engineering Group for Information Systems) group at the UPC, composed of more than 10 full-time researchers. His lines of research include requirements engineering, COTS selection, and quality model construction. He has participated in several industrial collaborations on COTS selection in the fields of ERP systems, document management tools, health-care solutions, and others. He has published more than 40 papers in refereed international journals, at conferences, and at other events. He has been nominated for the best-paper award at ICCBSS’03 and ICCBSS’04.
Javier Andrade Garda is an assistant professor with the Department of Information and Communications Technologies of the University of A Coruña (Spain). His main research interests in computer science include conceptual modeling, knowledge management, and natural language processing. He was a software engineering and technological solutions consultant at IAL Software Engineering and Norcontrol Soluziona (Quality and Environment Department). He has a BS and PhD in computer science. He is the author of several book chapters and refereed publications in software engineering.
J. L. Garrido is an associate professor in the computer science department at the University of Granada (Spain). He obtained his PhD in computer science from the same university. His articles and research interests focus on requirements and software engineering, coordination models and languages, development of groupware systems, and notations for specification and modeling, particularly applied to interactive, cooperative, and distributed systems.
M. Gea is a researcher in the computer science department at the University of Granada (Spain). He received his PhD in formal methods applied to interactive systems from Granada University. His research focuses on human-computer interaction. The work is based on formal methods and techniques for specification, design, and verification of interactive systems. He is a co-founder of AIPO, the Spanish Association of Human-Computer Interaction, and he has been a member of several program committees of related workshops and conferences. Currently he is working on CSCW and extensions to UML, usability methodologies, and ubiquitous computing.
Göran Goldkuhl, Professor of Information Systems Development at Linköping University (Sweden) and Professor of Informatics at Jönköping International Business School, earned his PhD from Stockholm University (1980). He is the director of the Swedish
research network VITS, consisting of nearly 40 researchers at six Swedish universities. He is currently developing a family of theories, all of which are founded on socio-instrumental pragmatism: Workpractice Theory, Business Action Theory, and Information Systems Actability Theory. He has a great interest in interpretive and qualitative research methods and has contributed to the development of Multi-Grounded Theory (a modified version of Grounded Theory). He is active in such international research communities as Language Action Perspective and Organisational Semiotics. For more information, contact [email protected] or visit www.ida.liu.se/~gorgo/ggeng.html.
Jaap Gordijn is an assistant professor in e-business at the Free University Amsterdam (The Netherlands), faculty of exact sciences. His research interest is innovative e-business applications. He is a key developer of, and has published internationally on, an e-business modeling methodology (called e3-value) addressing the integration of strategic e-business decision making with ICT requirements and systems engineering. He is also a frequent speaker at international conferences. Recently he was a member of Cisco’s Internet Business Solution Group, and he was employed by Deloitte & Touche as a senior manager in D&T’s e-business group. As such he was especially involved in rolling out e-business applications in the banking and insurance domains and in the digital content industry. For more information, visit http://www.cs.vu.nl/~gordijn.
D. Greer is a lecturer in the school of computer science at Queen’s University Belfast, UK. He has been lecturing to undergraduates and postgraduates in software engineering-related topics for the past 12 years and carrying out research into risk management, incremental software processes, requirements engineering, and adaptive software design. Prior to this he was an analyst/programmer with Bombardier, working on business support systems. His DPhil is from the University of Ulster, and he is a member of the Institute of Electrical and Electronics Engineers. Much of his current and past research is industry related, including collaboration with NEC Corp., the U.K. Civil Service, and British Telecom.
Ines Grützner received a diploma in computer science and economics from the Dresden University of Technology. She works as a scientist and project manager in the field of document engineering at the Fraunhofer Institute for Experimental Software Engineering (IESE) in Kaiserslautern, Germany. Her research is focused on technology-enabled learning, especially the systematic, engineering-style development of online courses. She has led several projects targeted at the development of online courses on software engineering topics.
Jon G. Hall is a lecturer in the computing department at The Open University, UK. His research interests include requirements engineering and software architectures. His work covers real-time, safety-critical, hybrid, and educational systems. He was co-editor of and contributor to Software Architectures: Advances and Applications (Springer-Verlag, 2000) and was a contributing member of the Z Standards committee. He holds research grants to investigate the correctness of safety-critical systems and the foundations of
problem frames. Hall holds a BSc (Cantuar) and an MSc (Manchester) in pure mathematics, an MSc (Newcastle) and PhD (Newcastle) in computing, and an MBA (Open). For more information, visit http://mcs.open.ac.uk/jgh23/.
Ricardo Imbert is an assistant professor in the Department of Languages, Systems, and Software Engineering in the School of Computer Science at the Universidad Politécnica de Madrid (Spain). He is currently finishing his PhD on cognitive architectures for agents with behaviours influenced by personality and emotion at the Universidad Politécnica de Madrid. Imbert has been a member and project leader at the Decoroso Crespo Laboratory at UPM since 1996, a research group of computer scientists blending technologies such as virtual reality, software agents, and intelligent tutoring systems to create innovative computer learning environments.
Sara Jones is a research assistant with the Centre for Human-Computer Interface Design at City University (UK) and is currently working with Neil Maiden on the application of the Requirements Engineering with Scenarios for a User-centred Environment (RESCUE) process to various projects within Eurocontrol, the European Organisation for the Safety of Air Navigation. She has been working in the fields of human-computer interaction and requirements engineering since 1987 and has published more than 50 academic papers in conferences and journals in these areas. Sara has taught various courses in related areas and has supervised four PhDs, as well as sitting on the program committees of various conferences in the field of requirements engineering. For more information, visit http://www-hcid.soi.city.ac.uk/pSarajones.html.
Daniel Kerkow holds a master’s in psychology from the University of Saarland, Germany. He works as a scientist and consultant in the field of requirements and usability engineering at the Fraunhofer Institute for Experimental Software Engineering (IESE) (Germany), Europe’s largest organisation for applied software engineering research. He has led several projects and was co-founder of the usability group at IESE. He focuses on non-functional requirements, especially on the perception of usability quality aspects and on usability engineering for processes and methods in software developing organisations.
Tom Koenig received his master of science in computer science from the University of Kaiserslautern, Germany. He has worked as a scientist at the Fraunhofer Institute for Experimental Software Engineering (IESE) (Germany) since 2003, in the Department of Requirements and Usability Engineering (RUE). His work areas include e-government and requirements engineering, more specifically elicitation and specification of requirements as well as business process modeling. He is currently involved in several research and transfer projects in these areas.
Marco Lormans is a PhD student working in the software engineering group of the Delft University of Technology, The Netherlands. His research interests are the specification, management, and evolution of requirements, with special interest in the non-functional
properties of embedded systems. He holds a Master of Science in computer science from Delft University of Technology. Contact him at [email protected].
Neil Maiden is professor of systems engineering and head of the Centre for Human-Computer Interface Design, an independent research department in City University’s School of Informatics (UK). He received a PhD in computer science from City University (1992). He is and has been a principal and co-investigator of several EPSRC- and European Union (EU)-funded research projects, including SIMP, CREWS, and BANKSEC. His research interests include frameworks for requirements acquisition and negotiation, scenario-based systems development, component-based software engineering, ERP packages, requirements reuse, and more effective transfer of academic research results into software engineering practice. Neil has published more than 100 papers in academic journals and conference and workshop proceedings. Centre details are available from www-hcid.soi.city.ac.uk. For more information, visit http://www-hcid.soi.city.ac.uk/pNeilmaiden.html.
Raymond McCall, PhD, is a member of the Institute of Cognitive Science and an associate professor of planning and design at the University of Colorado (USA). His research has dealt almost exclusively with the design of software for the capture and delivery of design rationale, and he created the first hypertext systems designed for this purpose. His current interests are in the use of natural language processing for rationale capture. Dr. McCall is also executive director of Nexterra, Inc., a non-profit corporation that makes Web-based software for education. Nexterra is the creator of the ExploreMarsNow.org Web site, which won a Webby and a Scientific American Sci-Tech Web Award. His graduate studies were in architecture and product design.
Line Melby is a PhD student in sociology at the Norwegian University of Science and Technology (NTNU). Her research interests are medical sociology, science and technology studies, and methods and techniques for interdisciplinary system development. Melby is especially interested in ethnographic methods, and she performed a four-month ethnographic study at a hospital department as part of her dissertation work. Melby received her master’s degree in sociology from NTNU (1999). In 2000 she joined the MOBEL (Mobile Electronic Patient Record) project at NTNU. Since 2003 MOBEL has been part of the Norwegian Centre for Electronic Health Records Research in Trondheim.
Ivan Mistrik is currently a researcher at the Fraunhofer Institute (Germany), a major research and development agency in the field of information and communication. He is a computer scientist interested in software engineering and software architecture, particularly requirements engineering and networked multimedia computing. He has more than 30 years’ experience in the field of computer systems engineering and has done consulting on a variety of large international projects sponsored by ESA, the EU, the National Aeronautics and Space Administration (NASA), the North Atlantic Treaty Organisation (NATO), and UN/UNESCO/UNDP. He has also taught university-level computer science courses in software engineering, software architecture, distributed information
systems, and human-computer interaction. His graduate studies were in the fields of electrical engineering, international economic relations, and computer science.
Eman Nasr is a research member of the Department of Mathematics and Computational Sciences at the Faculty of Science, Cairo University, Egypt. Her research interests include object-oriented requirements engineering approaches for embedded software systems. She conducted research at the University of York, UK, while working on the PRECEPT project as a research associate on the use of use cases for the requirements engineering of large, complex embedded software. Nasr received a BSc with highest honors in computer science from the American University in Cairo and an MSc in computational methods from Cairo University, Egypt. She has more than 15 years of industrial software work experience.
Thomas Olsson works as a scientist and consultant at the Fraunhofer Institute for Experimental Software Engineering, Germany. He received a licentiate of engineering in software engineering in 2002 and a master of science in computer science and engineering in 1999, both from Lund University, Sweden. His research interests are in heterogeneous information and documentation models, especially in the context of requirements. Currently he is involved in two European- and one German-funded project, is the project leader for two of them, and is at the same time pursuing a PhD degree at Lund University.
Barbara Paech holds the chair of software engineering at the University of Heidelberg, Germany. Since October 2003 she has been a department head at the Fraunhofer Institute for Experimental Software Engineering. Her teaching and research focus on methods and processes to ensure quality of software with adequate effort. For many years she has been particularly active in the area of requirements and usability engineering. She has headed several industrial, national, and international research and transfer projects. She is spokeswoman for the special interest group on requirements engineering in the German computer science society and a member of the accreditation board for the ASQF-Certified Requirements Engineer.
Päivi Parviainen (MSc) works as a senior research scientist at VTT Technical Research Centre of Finland (Oulu), Software Engineering Group, where she has worked since 1995. She graduated in 1996 from the University of Oulu’s Department of Information Processing Science. Her research activities include embedded systems and software process improvement, requirements engineering, software reuse, and measurement. She has managed several industrial projects at the national level and has participated in many national and international research projects as well. She has published several papers in international journals and conferences.
Panayiotis Periorellis received his PhD from the University of Newcastle upon Tyne (UK) in May 2000 for his thesis on modeling enterprise dynamics. He holds a master’s
degree and a bachelor’s degree in information systems analysis and design. He has published more than 25 articles, papers, and reports in journals and international conferences. He was part of the Dependable Systems of Systems consortium from June 2000 until June 2003, addressing some of the technical but, most importantly, the organisational aspects of Systems of Systems. He currently works as a senior research associate at Newcastle University. His research interests include distributed systems, dependability, organisational aspects of systems, and modeling.
Carme Quer is an associate professor in the software department (LSI) at the Universitat Politècnica de Catalunya (UPC), Spain. She received her PhD in software from the UPC. She is currently a member of the GESSI group at the LSI department. Her research interests are selection of COTS components, quality model construction, and component-based software development. She has been nominated for the best-paper award at ICCBSS’04.
Lucia Rapanotti is a lecturer in the computing department at The Open University, UK. Previously she held research and software development positions. Her main research work is in requirements engineering and software architectures. Among her recent publications are works on problem frames foundations and extensions. Rapanotti holds a number of grants for investigations into the foundations of problem frames, the evaluation of groupware in e-learning, and robotic adaptive controllers. Rapanotti holds a Laurea Cum Laude in computing science from the University of Milan and a PhD in computing from the University of Newcastle upon Tyne. For more information, visit http://mcs.open.ac.uk/lr38/.
M. L. Rodríguez is an associate professor in the computer science department at the University of Granada (Spain). She received her PhD from the University of Granada. Her research interests and publications are in the area of system modeling and software development, the UML language, formal methods for specification, object-oriented methods, human-computer interaction, usability, and cooperative systems.
Pete Sawyer holds a BSc and PhD from Lancaster University (UK), where he is a senior lecturer in the Computing Department. His research interests are in the general area of software and systems engineering. In particular he is interested in requirements engineering, software process improvement, and applications of natural language processing to the systems engineering process. He is a member of the British Computer Society (BCS), an affiliate member of the Institute of Electrical and Electronics Engineers (IEEE), and a professional member of the Association for Computing Machinery (ACM). He is also a member of the executive committee of the BCS Requirements Engineering Specialist Group (RESG). He has more than 50 peer-reviewed publications and is the co-author (with Ian Sommerville) of Requirements Engineering – A Good Practice Guide (John Wiley, 1997).
Gry Seland is a PhD student with the Department of Computer and Information Science at the Norwegian University of Science and Technology (NTNU). She earned her
master’s degree from the Department of Psychology at NTNU. During her PhD studies Seland spent eight months at the University of Victoria, Canada, where she attended courses at the School of Health Information Science. Seland’s main research focus is the development and evaluation of methods for user-centred development of information technology systems in health care organisations. She has concentrated on role-play with low-fidelity prototypes, giving end-users a tool for communicating their needs and ideas for technology at the workplace.
Inger Dybdahl Sørby is a PhD student in computer science at the Norwegian University of Science and Technology (NTNU). Her research interests include medical informatics and human-computer interaction. In particular she focuses on utilising user, context, and process knowledge in intelligent user interfaces to mobile electronic patient records. Sørby received her master’s degree in computer science at the Norwegian Institute of Technology (now NTNU) (1993). In 2001 she joined the MOBEL (Mobile Electronic Patient Record) project at NTNU. Since 2003 MOBEL has been part of the Norwegian Centre for Electronic Health Records Research in Trondheim.
Maarit Tihinen (MSc) is a research scientist at VTT Technical Research Centre of Finland (Oulu), where she has worked since 2000. She graduated in 1991 from the University of Oulu’s Department of Mathematical Sciences. She worked as a mathematics and information technology teacher at Kemi-Tornio Polytechnic during the ’90s. After coming to VTT she did her secondary subject thesis for the Department of Information Processing Science (University of Oulu) (2001). Her current research interests involve requirements engineering during distributed software development as well as software process improvement and measurement activities within the improvement process.
Rini van Solingen (PhD, MSc) is a principal consultant at LogicaCMG and a professor at Drenthe University, The Netherlands. At LogicaCMG he specializes in industrial software product and process improvement. At Drenthe University he heads a chair in quality management and quality engineering. Van Solingen has been a senior quality engineer at Schlumberger/Tokheim RPS and head of the quality and process engineering department at the Fraunhofer Institute for Experimental Software Engineering in Kaiserslautern, Germany. Rini has published more than 100 titles in journals, conference proceedings, and books. He is a member of the IEEE-CS, NGI, KiVi, and SPIder.
Rafael García Vázquez is director of the Computer Training Unit of the University of A Coruña (Spain). He is an associate professor and co-director of the Software Engineering Laboratory at the Department of Information and Communications Technologies of the University of A Coruña. His main research interests in computer science include conceptual modeling, knowledge management, and project management. He has been a project leader in several organisations, including Quibus Computers and Sistema Base. He has a BS and PhD in computer science. He is the editor of several books and the author of numerous chapters and refereed publications in software engineering.
Antje von Knethen received her diploma degree in computer science from the University of Bremen (1996). Thereafter she was a member of the research group for software engineering in the Department of Computer Science at the University of Kaiserslautern, where she received a PhD in computer science (2001). In the same year she started working in the Requirements Engineering competence team at the Fraunhofer Institute for Experimental Software Engineering (IESE) in Kaiserslautern (Germany). In 2002 she took over the leadership of this competence team. Her research interests include requirements engineering, in particular requirements management, object-oriented analysis, and empirical software engineering.
Santiago Rodríguez Yáñez is an assistant professor with the Department of Information and Communications Technologies of the University of A Coruña (Spain). His main research interests in computer science include conceptual modeling, knowledge management, and distributed systems development techniques and methodologies. He was a project leader in several Spanish organisations. He has a BS and PhD in computer science. He is the author of several book chapters and refereed publications in software engineering.
Didar Zowghi is associate professor of software engineering and director of the Requirements Engineering Research Laboratory in the Faculty of Information Technology at the University of Technology, Sydney (Australia). She holds a BSc (Honors) and MSc in computer science and a PhD in software engineering. She serves on the program committees of many national and international conferences, in particular the IEEE International Conference on Requirements Engineering since 1998. She is the regional editor (and the editor of the viewpoints column) of the International Requirements Engineering Journal and associate editor of the Journal of Research and Practice in Information Technology (JRPIT). She has published extensively on many aspects of requirements engineering.
Index
A
acting out 267; actions 251; actors 251; adaptability 190; AFrame 320; agent-oriented requirements engineering 69; agent-oriented software engineering 69; agents 69; air traffic control 245; architectural frame 320; artifact lifecycles 204; artifact management 203; artifact metadata 203
B
bazaar 192; boundaries 252; bug reporting guideline 198; bug reports 196; bug tracking system 194; bug triage 199; Bugzilla 198
C
capability maturity model for software 85; Cartesian method 55; cathedral 192; central source code repository (CVS) 194; change management 3, 160; change requests 195; clinical information systems 267; cognitive task analysis 251; collaborative RE 190; collaborative requirements definition 191; collaborative requirements review 199; communication perspective 341; complexity of the organisation 267; composition 139; computational aspects 58; computational model 63; computer domain 56; computer-oriented 56; conceptual model 228; conceptual modeling 54; conceptual models 54; conceptualisation 54; conceptualisation process 55; conceptualisation techniques and methods 54; content-based retrieval 310; cooperative systems 226; core developers 194; courseware 171; courseware architecture 185; courseware development 171; courseware requirements specification 175; creative design 245; creative system development 267; CSCW 227
D
data gathering 251; definition of a conceptualisation 63; dependability 139; descriptive process model 190; discrepancies 60; distributed control 140, 141; distributed environments 191; distributed processing 140; drama improvisation 267
E
educational profile 178; efficiency 285; electronic patient record (EPR) 266; elicitation 38; embedded software system 22; enhancement requests 196; ethnography 228; evolution 139; explicit knowledge 304; eXtreme Programming 88
F
functional 3
G
goal-oriented requirements engineering 69; goals 251; gradual process of software improvement 193; groupware 227
H
handheld computers 267; hospital information systems 267; human activity modeling 245; human/computer interaction 312; hypotheses of conceptualisation 55
I
i* notation 245; implementation domain 58; Impression 95; incremental approach 101; information intensity 267; information retrieval 310; Information Systems 139; information systems 340; inspection 160; instructional requirements 184; integration of autonomous systems 140; interaction specification 184
L
language action perspective 341; learning task 171
M
maintainability 286; maturity levels 85; medical knowledge 267; methodology 226; model view controller 320
N
natural language processing (NLP) 303; non-functional 3; non-functional requirements (NFR) 285
O
observational studies 266; open source definition (OSD) 191; open source initiative (OSI) 191; open source software (OSS) 190; organizational 141; organizational boundaries 139; organizational failure 141; OSS development (OSSD) 190; OSS development methodologies 190; OSSD process model 192
P
participatory design 228, 267, 303; patch 195; patch development cycle 195; portability 285; prioritisation 101; problem diagrams 321; problem domain 56; problem frames 318, 319; problem-oriented 56; problem-sensitive 57; problem-sensitivity principle 58
Q
quality 155; qure 95
R
raw material principle 58; RE approaches 23; RE challenges 23; REAIMS 85; regulation 139; reliability 286; representations 262; request tracking method 202; request tracking system 202; requirement development 3; requirement management 3; requirements 38, 159; requirements allocation and flow-down 4; requirements analysis 4, 54, 120, 319; requirements definition 190; requirements elicitation 267; requirements engineering (RE) 22, 69, 226, 285; requirements engineering good practice guide 84; requirements engineering process 4; requirements engineering process maturity model 84; requirements gathering 4; requirements management 245; requirements management tools 15; requirements specification 5; requirements validation 3
S
scenario 245; selection 121; Skater’s principle 58; social action 341; social sciences 267; sociotechnical system 267, 319; software development 285, 303; software engineering (SE) 190; software evaluation 120; software evaluation criteria 120; software infrastructure 190; software process improvement 84; software quality 122; software requirements development 4; software requirements specification 87; software support 206; solution-sensitive 57; special properties 23; specification 159; stakeholders 155; standards 87; status attribute 199; system design process 267; system development 38, 285; system goal modeling 245; system of systems 141; system requirements development 4; system requirements development methods 7; system use 306
T
tacit knowledge 304; task analysis 246; technical products 154; techniques 38; traceability 3, 160; two ellipse model 319
U
UML 228; University of Hertfordshire model 84; usability 286; use case models 247; use cases 246; user commanded behaviour 320; user commanded behaviour frame 320; user interaction frame 320
V
viewpoint 60
W
working task 171; workshops 267