Lecture Notes in Control and Information Sciences Editors: M. Thoma · M. Morari
306
Springer Berlin Heidelberg New York Hong Kong London Milan Paris Tokyo
Z. Zenn Bien Dimitar Stefanov (Eds.)
Advances in Rehabilitation Robotics
Human-friendly Technologies on Movement Assistance and Restoration for People with Disabilities
With 249 Figures
Series Advisory Board A. Bensoussan · P. Fleming · M.J. Grimble · P. Kokotovic · A.B. Kurzhanski · H. Kwakernaak · J.N. Tsitsiklis
Editors

Prof. Z. Zenn Bien
Human-Friendly Welfare Robot System Research Center, Department of Electrical Engineering and Computer Science, Korea Advanced Institute of Science and Technology, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Korea

Dimitar Stefanov, PhD, SRCS
Rehabilitation Engineering Unit, Cardiff & Vale NHS Trust, Cardiff CF5 2YN, UK, and Institute of Mechanics, Bulgarian Academy of Sciences, Acad. G. Bonchev Street, Block 4, 1113 Sofia, Bulgaria
ISSN 0170-8643
ISBN 3-540-21986-2
Springer-Verlag Berlin Heidelberg New York
Library of Congress Control Number: 2004106092

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in other ways, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under German Copyright Law.

Springer-Verlag is a part of Springer Science+Business Media
springeronline.com

© Springer-Verlag Berlin Heidelberg 2004
Printed in Germany

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: Data conversion by the author. Final processing by PTP-Berlin Protago-TeX-Production GmbH, Berlin
Cover design: design & production GmbH, Heidelberg
Printed on acid-free paper
Preface
It is now evident that one of the major application targets of service robots is their use as assistive devices for the rehabilitation of physically disabled and elderly people. Rehabilitation robotics (RR) is a relatively young but dynamically developing area of research. Some rehabilitation robots have already moved out of the research laboratories and have become an important part of the everyday lives of a growing number of users in many developed countries. It is expected that, in the near future, rehabilitation robots will become a significant component of welfare service systems around the world. Initially limited to a small number of relatively simple movement tasks, such as object placement and eating, the application areas of rehabilitation robotics, along with various intelligent technologies for the movement assistance of people with disabilities, are continuously expanding into new dimensions that aim at improved assistance in different kinds of activities, as well as entertainment, for people with disabilities and elderly people. The intensive development of novel human-machine interfaces, intelligent control algorithms, new materials, and efficient actuators has made it possible to invent and test various advanced design ideas. The common understanding and main tendency in rehabilitation robotics design is that the robots should be human-friendly, in the sense that the robotic machine and its peripheral devices must be designed so that the user feels comfortable and safe. Recent intelligent robotic devices for movement assistance are often equipped with control strategies that do not impose a high cognitive load on users with severe movement disorders. The idea of organizing this volume was inspired by the 8th International Conference on Rehabilitation Robotics (ICORR 2003), held during April 22-25, 2003 at the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea. The papers presented at the conference naturally represent the most recent tendencies of R&D in rehabilitation robotics and intelligent assistive technology. With confidence, however, we would like to state that this book is not just a variant of the conference proceedings. Unlike the papers presented at ICORR 2003, where specific problems and solutions of various subjects were discussed, the chapters of this book contain original, reworked, and generalized material that matches
the style and objectives of the book. The book contains not only review articles on advanced theoretical ideas in rehabilitation robotics and results from some of the latest projects under development, but also details of new advanced rehabilitation devices that have recently been transferred to industry. A significant part of the book is devoted to the assessment of new rehabilitation technologies and the evaluation of prototype devices with end-users. The safety of rehabilitation robots, historical remarks, and the perspectives of rehabilitation robotics are also discussed. Moreover, unlike many other books on rehabilitation engineering, the present volume includes a long chapter on robot-assisted neuro-rehabilitation, which is considered one of the latest trends in the area. One of the principal aims of this book is to promote the dissemination of information on the current status of rehabilitation robotics. Our intention was to arrange the book in such a way that it is not just a collection of papers of interest to specialists in a particular area, but rather a book that covers the basics of rehabilitation research and can help beginners, such as students and young researchers, to start their work in the area, or help lecturers who want to introduce their students to the basics of modern rehabilitation technology. To achieve this objective, most of the articles contain a detailed introduction to the problem under discussion and an extended overview of the particular subject matter. The chapters in this book are authored by leading researchers in the field of rehabilitation robotics and represent a large part of the international research community. The book contains 27 chapters, which are grouped into 7 parts. The book begins with an introductory Part 1 devoted to the role of rehabilitation robotics and some important issues of its development. The same part also presents some important milestones in the development of rehabilitation robotics. The chapters in Part 2 cover three important issues in rehabilitation robotics for the assistance of human movements: conceptions and experimental design, safety issues of rehabilitation robots, and rehabilitation-robot evaluation. Some recent issues in prosthetics and orthotics design are discussed in Part 3. Part 4 is concerned with intelligent wheelchairs, which can be considered special mobile robots designed for indoor user transportation. A recent trend in the design of assistive devices for mobility is mechatronic devices for walking assistance that sense the user's movement intentions and provide gentle gait support, giving the user independence and safety. Examples of such devices are given in Part 5. Part 6 is dedicated to robot-assisted neurorehabilitation; examples of both upper-limb and lower-limb robot-mediated therapy are discussed in that part of the book. The final part of the book (Part 7) discusses the perspectives and trends of rehabilitation robotics.
We trust that the book will provide a comprehensive overview of the field of rehabilitation robotics and will satisfy a large group of readers, including researchers in the field, graduate and postgraduate students, and designers who use RR technology. We believe that the book will become a representative selection of the latest trends and achievements in the rehabilitation robotics area, and we would be extremely happy if this important goal were achieved. Finally, as the editors of the present volume, we would like to take this opportunity to express our heartiest appreciation to all the authors, who have prepared their chapters with dedication and integrity and contributed to the high standard of this volume. We would also like to thank Dr. Thomas Ditzinger and Ms. Heather King from Springer-Verlag for encouraging us to edit the present book and for their help in arranging this volume.
Daejeon, March 2004
Z. Zenn Bien Dimitar Stefanov
Contents

List of Contributors
List of Abbreviations

Part I Introduction

1 Advances in Human-Friendly Robotic Technologies for Movement Assistance/Movement Restoration for People with Disabilities
Dimitar Stefanov, Z. Zenn Bien
1.1 Introduction
1.2 Areas of the RR Application
1.2.1 Robotic Systems for Movement Assistance
1.2.2 Robots for Physical Support and Indoor Navigation
1.2.3 Robots for Physical Rehabilitation
1.2.4 Vocational RR
1.2.5 Emotional Interactive Entertainment Robots
1.3 Specialized Human-Machine Interface
1.4 Rehabilitation Robots in the Smart House Design
1.5 Functional Integration of the Robotic Environment
1.6 Commercialization of RR
1.7 Some Issues for Futuristic Intelligent Robotic House Model
1.8 Concluding Remarks

2 Rehabilitation Robotics from Past to Present – A Historical Perspective
Michael Hillman
2.1 Introduction
2.2 Earliest Work
2.3 Assistive Robotics
2.3.1 Fixed Site
2.3.2 Mobile Robots
2.3.3 Wheelchair Mounted Manipulators
2.3.4 Human Machine Interface
2.4 Mobility
2.5 Prosthetics and Orthotics
2.6 Robot Mediated Therapy
2.7 Robotics in Special Needs Education
2.8 Robotics in Communications
2.9 Historical Perspective
2.10 Commercialisation
2.11 Alternatives to Robotics in Rehabilitation
2.12 Conclusions

Part II Rehabilitation Robots for Assistance of Human Movements

II.1 Conceptions and Experimental Design

3 Toward a Human-Friendly User Interface to Control an Assistive Robot in the Context of Smart Homes
Mounir Mokhtari, Mohamed Ali Feki, Bessam Abdulrazak, Bernard Grandjean
3.1 Introduction
3.2 MANUS Assistive Robot
3.3 Networking Technologies and Developments
3.4 General Software Architecture
3.5 User Interface Adaptation
3.6 Implementation of a Path Planner
3.6.1 Gesture Library
3.6.2 Obstacles Avoidance
3.7 Towards the Co-autonomy Concept
3.8 Conclusion

4 Welfare-Oriented Service Robotic Systems: Intelligent Sweet Home & KARES II
Z. Zenn Bien, Kwang-Hyun Park, Dae-Jin Kim, Jin-Woo Jung
4.1 Introduction
4.2 Intelligent Sweet Home
4.2.1 Questionnaire Survey
4.2.2 Assistive Systems
4.2.3 Intelligent Man-Machine Interfaces
4.3 KARES II System
4.3.1 Questionnaire Survey
4.3.2 Overall Structure
4.3.3 Soft Robotic Arm with Visual Servoing
4.3.4 Intelligent Human-Robot Interfaces
4.3.5 User Trials
4.4 Concluding Remarks

5 "FRIEND" – An Intelligent Assistant in Daily Life
O. Kouzmitcheva, C. Martens, A. Pape, H. She, I. Volosyak, A. Gräser
5.1 Basic Concepts and Hardware
5.1.1 The FRIEND Project
5.1.2 Hardware Structure of FRIEND
5.1.3 Multi-layered Control Architecture of FRIEND
5.2 Application and Control
5.2.1 The "Beverage Serving" Task
5.2.2 Obstacle Avoidance
5.2.3 Task Planning
5.2.4 Demonstration-Based Programming
5.3 Summary

6 GIVING-A-HAND System: The Development of a Task-Specific Robot Appliance
M.J. Johnson, E. Guglielmelli, G.A. Di Lauro, C. Laschi, M.C. Carrozza, P. Dario
6.1 Introduction
6.2 Background
6.2.1 Domotic-Robotic Integrated System
6.2.2 Localized System of Appliances
6.3 Design Concept for the Giving-A-Hand System
6.4 Domotic/Telematic and Robotic Assistance
6.5 The Fetch and Carry Robot Appliance Development
6.6 User-Centered Development
6.7 Prototype of a Local Network with the Robot Appliance
6.8 Summary and Conclusions

7 Cooperative Welfare Robot System Using Hand Gesture Instructions
Noriyuki Kawarazaki, Ichiro Hoya, Kazue Nishihara, Tadashi Yoshidome
7.1 Introduction
7.2 Cooperative Robot System
7.3 Measurement of Distance Using Stereo Images
7.4 Detection of the Hand and the Target Object
7.4.1 Detection of the Hand Area Using Color Image
7.4.2 Tracking of the Hand Using CP
7.4.3 Detection of the Object Using Gesture Instruction
7.5 Recognition of the Hand Gesture
7.6 Experimental Results
7.7 Conclusions

8 Selectable Operating Interfaces of the Meal-Assistance Device "My Spoon"
Ryoji Soyama, Sumio Ishii, Azuma Fukase
8.1 Introduction
8.2 Meal-Assistance Device "My Spoon"
8.3 Operating Interface
8.4 Basic Operation
8.4.1 Setup
8.4.2 Compartment Selection Command Set
8.4.3 Position Adjustment Command Set
8.5 Control Modes
8.5.1 Manual Mode
8.5.2 Semi-automatic Mode
8.5.3 Automatic Mode
8.6 Future Tasks
8.6.1 Food Recognition by Using Color Image Processing
8.6.2 Improvements in Operation
8.7 Conclusion

9 Enhancing the Usability of the MANUS Manipulator by Using Visual Servoing
A.H.G. Versluis, B.J.F. Driessen, J.A. van Woerden
9.1 Introduction
9.2 Visual Servoing
9.2.1 Vision Aspects of the Visual Servoing
9.3 Control Architecture
9.4 Vision System
9.4.1 Theory
9.4.2 Implementation
9.5 Stability
9.6 Experiments
9.7 Conclusions and Future Work

II.2 Safety Issues of the Rehabilitation Robots

10 A Safety Strategy for Rehabilitation Robots
Makoto Nokata, Noriyuki Tejima
10.1 Introduction
10.2 Principles of Safety Standards for Robots
10.2.1 Framework of New Safety Standards for Robots
10.2.2 Safety Standard for Machinery
10.2.3 Risk Assessment Process and Risk Reduction
10.2.4 Tolerable Risks for Robots
10.3 Case Study on Safety of Rehabilitation Robots
10.3.1 Risk Estimation
10.3.2 Safety Measures of Risk Reduction
10.3.3 Benefit Estimation
10.4 Proposal of Risk Assessment Guideline for Rehabilitation Robots
10.5 Conclusion

11 Safety Evaluation Method of Rehabilitation Robots
Makoto Nokata, Koji Ikuta, Hideki Ishii
11.1 Introduction
11.2 Safety Strategy for Human-Care Robots
11.2.1 Injury to Humans from Human-Care Robots
11.2.2 Classification of Safety Strategies
11.3 Proposing Evaluation Measures of Safety
11.3.1 Necessity of Safe Quantitative Evaluation
11.3.2 Selection of Evaluation Measures
11.4 General Evaluation Method Using Evaluation Measures
11.5 Deriving Danger-Indexes of Safety Strategy
11.5.1 Safety Design Strategy
11.5.2 Safety Control Strategy
11.6 Proposal of Design Optimization and Practical Examples
11.6.1 Formulating the Design Optimization Method
11.6.2 Maximizing Safety Under Fixed Cost
11.6.3 A New Method to Calculate a Safe Approach Motion
11.7 Conclusions

12 Risk Reduction Mechanisms for Safe Rehabilitation Robots
Noriyuki Tejima
12.1 Introduction
12.2 Tolerable Risk and Surface Injury
12.3 Force Limitation Methods
12.4 A Straight Movement-Type Force Limitation Mechanism
12.5 A Three-Dimensional Force Limitation Mechanism
12.6 Reflex Mechanism
12.7 Conclusions

II.3 Rehabilitation-Robot Evaluation

13 Usability of an Assistive Robot Manipulator: Toward a Quantitative User Evaluation
Bessam Abdulrazak, Mounir Mokhtari, Bernard Grandjean
13.1 Introduction
13.2 User Needs Analysis
13.3 Hardware and Software Organization
13.3.1 Hardware Architecture
13.3.2 Software Command Architecture
13.4 Quantitative Evaluation Method
13.5 Preliminary Results
13.5.1 Modes and Time of Use
13.5.2 Actions Number
13.6 Discussion
13.7 Conclusion

14 Processes for Obtaining a "Manus" (ARM) Robot within The Netherlands
Gert Willem Römer, Harry Stuyt, Geer Peters, Koos van Woerden
14.1 Introduction
14.2 Wheelchair Mounted Service Manipulator ARM
14.3 The Current Process of Providing an ARM to a User
14.3.1 Informing Users about the Benefits of the ARM
14.3.2 Indication Criteria
14.3.3 Stand-Alone Test
14.3.4 Formal Application and Funding of an ARM
14.3.5 Mounting the ARM on the Wheelchair
14.3.6 Training
14.3.7 Service and Maintenance
14.4 The Future Process of Prescribing the ARM
14.5 Summary of Two Recent Dutch ARM-User Evaluations
14.5.1 User Study Conducted by iRV
14.5.2 User Study Conducted by hetDorp

Part III Prostheses and Orthoses

15 Experimental Analysis of the Proprioceptive and Exteroceptive Sensors of an Underactuated Prosthetic Hand
M. Zecca, G. Cappiello, F. Sebastiani, S. Roccella, F. Vecchi, M.C. Carrozza, P. Dario
15.1 Introduction
15.2 Mechanical Structure
15.3 Sensory System
15.4 Materials and Methods
15.4.1 Slider Position Sensor
15.4.2 Tendon Tensiometer
15.4.3 Thumb Position Sensor
15.4.4 Force Sensor
15.5 Conclusions

16 Design and Testing of WREX
Tariq Rahman, Whitney Sample, Rahamim Seliktar
16.1 Introduction
16.2 Design of WREX
16.3 Gravity Balancing with x ≠ 0
16.4 Clinical Testing
16.5 Results

Part IV Intelligent Wheelchairs

17 A Concept for Control of Indoor-Operated Autonomous Wheelchair
Dimitar Stefanov, Alexander Avtanski, Z. Zenn Bien
17.1 Introduction and Related Works
17.1.1 Methods for Navigation
17.1.2 Path Planning and Navigation to the Goal
17.2 Conception of Wheelchair Navigation
17.2.1 Problem Statement
17.2.2 Initial Assumptions
17.3 Localization of the Wheelchair Position
17.4 Scenario of the Wheelchair Control
17.5 Navigation System
17.6 Computer Simulation of the Control Algorithm
17.6.1 Wheelchair Kinematics
17.6.2 Modeling of the Sensors and Their Arrangement on the Wheelchair Platform
17.6.3 Navigation Algorithm of the Simulator
17.7 Evaluation of the Control Algorithm
17.7.1 Navigation to Multiple Goals
17.7.2 Obstacle Avoidance
17.7.3 Avoiding a "Trap"
17.7.4 Navigation in a Complex Environment
17.7.5 Route Generation in Partially Known Environment
17.8 Future Plans and Concluding Remarks

18 Design of an Intelligent Wheelchair for the Motor Disabled
Chong Hui Kim, Jik Han Jung, Byung Kook Kim
18.1 Introduction
18.2 Related Works
18.3 Requirements
18.4 System Architecture
18.4.1 Hardware Configuration
18.4.2 Software Design for Real-Time System
18.5 Navigation
18.5.1 Localization
18.5.2 Hierarchical Control Architecture
18.6 Experiments
18.7 Conclusion

Part V Mechatronic Devices for Assistance in Walking

19 Electrically Assisted Walker with Supporter-Embedded Force-Sensing Device
Saku Egawa, Ikuo Takeuchi, Atsushi Koseki, Takeshi Ishii
19.1 Introduction
19.2 Electrically Assisted Walker
19.3 Supporter-Embedded Force Sensor
19.3.1 Requirements for the Force Sensor
19.3.2 Sensor Structure
19.3.3 Sensing Method
19.3.4 Advantages
19.4 Experiments
19.5 Discussion
19.6 Summary

20 Human-Friendly Care Robot System for the Elderly
Dong Hyun Yoo, Hyun Seok Hong, Han Jo Kwon, Myung Jin Chung
20.1 Introduction
20.1.1 The Functions of Do-u-mi Robot
20.2 Overall System of Do-u-mi Robot
20.3 Sound Localization
20.4 Face Tracking
20.4.1 Face Candidate Extraction
20.5 Autonomous Navigation
20.6 Conclusion

21 Newly Designed Rehabilitation Robot System for Walking-Aid
Choon-Young Lee, Kap-Ho Seo, Changmok Oh, Ju-Jang Lee
21.1 Introduction
21.2 Electric Motor Based Gait Rehabilitation System
21.2.1 System Description
21.2.2 Experiments
21.3 Newly Developed Gait Rehabilitation System
21.3.1 System Description
21.3.2 Control Method
21.4 Conclusion

Part VI Robot-Assisted Neurorehabilitation

22 A Gentle/S Approach to Robot Assisted Neuro-Rehabilitation
Rui Loureiro, Farshid Amirabdollahian, William Harwin
22.1 Abstract
22.2 Background to Stroke
22.3 Gentle/S
22.3.1 Assumptions
22.4 Clinical Prototype for Machine Mediated Neurorehabilitation
22.4.1 Antigravity Mechanism for the Shoulder and Elbow
22.4.2 Exercises & Movement Guidance
22.4.3 Different Therapy Modes
22.5 Clinical Trials
22.5.1 Outcome Measures
22.5.2 Data Analysis and Statistical Methodology
22.5.3 Results
22.6 Conclusions

23 Wire Driven Robots for Rehabilitation
Paolo Gallina
23.1 Introduction
23.1.1 Advantages of Wire Driven Robots
23.1.2 Problems Related to Wire Driven Robots
23.2 Manipulability and Wire Tension Computation
23.3 NeRebot: An Example of Wire Driven Robot for Rehabilitation
23.3.1 Software and Control
23.3.2 Treatment Protocol
23.4 Conclusions and Future Research

24 A Wrist Extension for MIT-MANUS
Hermano Igo Krebs, James Celestino, Dustin Williams, Mark Ferraro, Bruce Volpe, Neville Hogan
24.1 Introduction
24.2 Specification for a New Wrist Device
24.2.1 Kinematic Selection
24.2.2 Actuator Placement and Transmission Selection
24.2.3 Actuator Selection
24.2.4 Sensor Selection
24.3 Alpha-Prototype Overview
24.4 Robotic Therapy
24.5 Conclusions

25 Post Stroke Shoulder–Elbow Physiotherapy with Industrial Robots
András Tóth, Gusztáv Arz, Gábor Fazekas, Daniel Bratanov, Nikolay Zlatov
25.1 Introduction
25.2 Analysis of Spastic Upper Limb Physiotherapy
25.3 System Design and Development
25.3.1 Mechanical Design
25.3.2 The Instrumented Orthoses
25.3.3 Control Design
25.3.4 User Interface and Programming
25.3.5 Safety Measures and Devices
25.4 Testing and Calibration
25.5 Clinical Results
25.5.1 Subjects of the Clinical Trial
25.5.2 Assessment Results
25.5.3 Analysis of Assessment Results
25.6 Conclusions

26 STRING-MAN: A Novel Wire-Robot for Gait Rehabilitation
Dragoljub Surdilovic, Rolf Bernhardt, Tobias Schmidt, Jinyu Zhang
26.1 Introduction
26.2 Development Goals
26.3 Robotic Mechanisms Design
26.4 Human/Robot Interface
26.5 Sensory Systems
26.6 Control Algorithms
26.7 Conclusion

Part VII Perspectives and Trends of the Rehabilitation Robotics

27 Great Expectations for Rehabilitation Mechatronics in the Coming Decade
H.F. Machiel Van der Loos, Richard Mahoney, Chantal Ammi
27.1 Introduction
27.2 Emerging Demographics and Healthcare Trends
27.3 Emerging Technologies Relevant to Robotics
27.4 Roadblocks and Enablers of Robotic Applications in Rehabilitation
27.5 Mechatronic/Robotic Applications to Rehabilitation
27.6 Conclusions

Subject Index
Author Index
About the Editors
List of Contributors
Abdulrazak, Bessam GET/Institut National des T´el´ecomunications–INSERM U.483, Handicom Lab. Evry, France
[email protected] Amirabdollahian, Farshid The University of Newcastle, C.R.E.S.T., Stephenson Building, Claremont Road, Newcastle Upon Tyne, NE1 7RU, UK,
[email protected] Ammi, Chantal Dept. of Business Administration, National Telecommunications Institute, 9, rue Charles Fournier, 91011 Evry, France,
[email protected] av Arz, Guszt´ Budapest University of Technology and Economics, Department of Manufacturing Engineering, Egry J. u. 1. Budapest 1111, Hungary
[email protected] Avtanski, Alexander Savvion Inc., 5104 Old Ironsides Dr, Santa Clara, CA 95054, USA
[email protected] Bernhardt, Rolf Fraunhofer Institute IPK-Berlin, Department Robotics and Automation, Pascalstraße 8-9, 10587 Berlin, Germany
Bien, Z. Zenn Department of Electrical Engineering and Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
[email protected]
Bratanov, Daniel University of Rousse, Department of Manufacturing Engineering, Automation & Robotics Laboratory, 8 Studentska str. 7017, Rousse, Bulgaria
[email protected]
Cappiello, Giovanni Centro INAIL RTR, Via Vetraia 7, 55049 Viareggio (LU), Italy
[email protected]
Carrozza, Maria Chiara ARTS Lab, Scuola Superiore Sant’Anna, Polo Sant’Anna Valdera, viale Rinaldo Piaggio, 34-56025 Pontedera (PI), Italy; Research Centre on Rehabilitation Bioengineering of the Centro Protesi INAIL, Vigorso di Budrio (Bologna), via della Vetraia 7, 55049 Viareggio (Lucca), Italy
[email protected]
XX
List of Contributors
Celestino, James Department of Mechanical Engineering, Massachusetts Institute of Technology, Room 3-173, 77 Massachusetts Ave, Cambridge, MA 02139, USA
[email protected] Chung, Myung Jin Dept. of Electrical Engineering and Computer Science, KAIST, 373-1, Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
[email protected] Dario, Paolo ARTS Lab, Scuola Superiore Sant’Anna, Polo Sant’Anna Valdera, viale Rinaldo Piaggio, 34-56025 Pontedera (PI), Italy; Research Centre on Rehabilitation Bioengineering of the Centro Protesi INAIL, Vigorso di Budrio (Bologna), via della Vetraia 7, 55049 Viareggio (Lucca), Italy
[email protected] Di Lauro, G.A. Scuola Superiore Sant’Anna, ARTS (Advanced Robotics Technology Systems) Lab, c/o Polo Sant’Anna Valdera, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy; Research Centre on Rehabilitation Bioengineering of the Centro Protesi INAIL, Vigorso di Budrio (Bologna), via della Vetraia 7, 55049 Viareggio (Lucca), Italy
[email protected] Driessen, B.J.F. TNO TPD, PO-BOX 155, 2600 AD Delft, The Netherlands,
[email protected] Egawa, Saku Mechanical Engineering Research
Laboratory, Hitachi, Ltd., 502 Kandatsu, Tsuchiura, Ibaraki 300–0013, Japan
[email protected] Fazekas, G´ abor National Institute for Medical Rehabilitation, Szanatorium u. 19 Budapest 1528 Hungary
[email protected] Ferraro, Mark Burke Rehabilitation Hospital, 785 Mamaroneck Avenue, White Plains, NY 10605, USA
[email protected] Feki, M.A. GET/Institut National des T´el´ecomunications–INSERM U.483, Handicom Lab. Evry, France
[email protected] Fukase, Azuma Intelligent Systems Laboratory, SECOM Co., Ltd., R& D Center, 8-10-16, Shimorenjaku, Mitaka, Tokyo 181–8528, Japan
[email protected] Gallina, Paolo Department of Energetics, University of Trieste, Trieste, via A. Valerio 10, 34127 Trieste (Italy)
[email protected] Gr¨ aser, Axel Institute of Automation, University of Bremen, Otto Hahn Allee NW1, 28359 Bremen, Germany,
[email protected] Grandjean, Bernard GET/Institut National des T´el´ecomunications–INSERM U.483, Handicom Lab. Evry, France
[email protected]
List of Contributors
XXI
Guglielmelli, E. Scuola Superiore Sant’Anna, ARTS (Advanced Robotics Technology Systems) Lab, c/o Polo Sant’Anna Valdera, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy; Research Centre on Rehabilitation Bioengineering of the Centro Protesi INAIL, Vigorso di Budrio (Bologna), via della Vetraia 7, 55049 Viareggio (Lucca), Italy
[email protected]
of Technology, 1030 Shimo-ogino, Atsugi-shi, Kanagawa, 243-0292, Japan
Harwin, William The University of Reading, School of Systems, Engineering, Department of Cybernetics, Whiteknights, Reading, RG6 6AY, UK
[email protected]
Ishii, Sumio Intelligent Systems Laboratory, SECOM Co., Ltd., R& D Center, 8-10-16, Shimorenjaku, Mitaka, Tokyo 181–8528, Japan
[email protected]
Hillman, Michael Bath Institute of Medical Engineering, Wolfson Centre, Royal United Hospital, Bath BA1 3NG, UK
[email protected]
Ishii, Takeshi Mechanical Engineering Research Laboratory, Hitachi, Ltd., 502 Kandatsu, Tsuchiura, Ibaraki 300–0013, Japan
[email protected]
Hogan, Neville Department of Mechanical Engineering, Massachusetts Institute of Technology, Room 3-173, 77 Massachusetts Ave, Cambridge, MA 02139, USA; Brain and Cognitive Sciences Dept., Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
[email protected] Hong, Hyun Seok Dept. of Electrical Engineering and Computer Science, KAIST, 373-1, Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
[email protected] Hoya, Ichiro Department of Welfare Systems Engineering, Kanagawa Institute
Ikuta, Koji Department of Micro System Engineering, Graduate School of Engineering, Nagoya University
[email protected] Ishii, Hideki Department of Micro System Engineering, Graduate School of Engineering, Nagoya University
[email protected]
Johnson, M.J. Scuola Superiore Sant’Anna, ARTS (Advanced Robotics Technology Systems) Lab, c/o Polo Sant’Anna Valdera, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy; Research Centre on Rehabilitation Bioengineering of the INAIL Centro Protesi, Vigorso di Budrio (Bologna), via della Vetraia 7, 55049 Viareggio (Lucca), Italy
[email protected] Jung, Jik Hang Department of Electrical Engineering & Computer Science, KAIST. 373-1 Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
[email protected]
XXII
List of Contributors
Jung, Jin-Woo Department of Electrical Engineering and Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
[email protected] Kawarazaki, Noriyuki Department of Welfare Systems Engineering, Kanagawa Institute of Technology, 1030 Shimo-ogino, Atsugi-shi, Kanagawa, 243-0292, Japan
[email protected] Kim, Byung Kook Department of Electrical Engineering & Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
[email protected] Kim, Chong Hui Department of Electrical Engineering & Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea,
[email protected] Kim, Dae-Jin Department of Electrical Engineering and Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
[email protected] Koseki, Atsushi Mechanical Engineering Research Laboratory, Hitachi, Ltd., 502 Kandatsu, Tsuchiura, Ibaraki 300–0013, Japan
[email protected] Kouzmitcheva, Olena Institute of Automation, University of Bremen, Otto-Hahn-Allee, NW1, 28359 Bremen, Germany,
[email protected]
Krebs, Hermano Igo Massachusetts Institute of Technology, Department of Mechanical Engineering, 77 Massachusetts Ave, 3-137, Cambridge, MA 02139 USA; Weill Medical College of Cornell University, The Winifred Masterson Burke Medical Research Institute, Department of Neurology and Neuroscience
[email protected] Kwon, Han Jo Dept. of Electrical Engineering and Computer Science, KAIST, 373-1, Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
[email protected] Laschi, C. Scuola Superiore Sant’Anna, ARTS (Advanced Robotics Technology Systems) Lab, c/o Polo Sant’Anna Valdera, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy
[email protected] Lee, Choon-Young Digital Content Research Division, ETRI, 161 Gajeong-dong Yuseong-gu, Daejeon, 305-350 Korea
[email protected] Lee, Ju-Jang Department of Electrical Engineering and Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
[email protected] Loureiro, Rui The University of Reading, School of Systems, Engineering, Department of Cybernetics, Whiteknights,Reading, RG6 6AY, UK
[email protected]
List of Contributors XXIII Mahoney, Richard Rehabilitation Technology Division, Applied Resources Inc., 1275 Bloomfield Ave., Fairfield, NJ, 07004 USA
[email protected] Martens, Christian RHEINMETALL-DEFENCEELECTRONICS, Br¨ uggeweg 54, 28309 Bremen, Germany,
[email protected] Mokhtari, Mounir GET/Institut National des T´el´ecomunications–INSERM U.483, Handicom Lab. Evry, France
[email protected] Nishihara, Kazue Department of Welfare Systems Engineering, Kanagawa Institute of Technology, 1030 Shimo-ogino, Atsugi-shi, Kanagawa, 243-0292, Japan
[email protected]
Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
[email protected] Peters, Geer RTD hetDorp, Heijenoordseweg 130, NL-6813 GC, Arnhem, The Netherlands,
[email protected] Rahman, Tariq A.I. duPont Hospital for Children, 1600 Rockland Rd, Wilmington, DE 19899
[email protected] Roccella, Stefano Centro INAIL RTR, Via Vetraia 7, 55049 Viareggio (LU), Italy
[email protected] R¨ omer, Gert Willem Exact Dynamics, Edisonstraat 96, NL-6942 PZ, Didam, The Netherlands,
[email protected]
Nokata, Makoto Department of Robotics, Faculty of Science and Engineering, Ritsumeikan University
[email protected]
Sample, Whitney A.I. duPont Hospital for Children, 1600 Rockland Rd, Wilmington, DE 19899
[email protected]
Oh, Changmok Department of Electrical Engineering and Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
[email protected]
Schmidt, Tobias Fraunhofer Institute IPK-Berlin, Department Robotics and Automation, Pascalstraße 8-9, 10587 Berlin, Germany
Pape, Andreas Robert BOSCH GmbH, Gasoline Systems, GS/EFA3, Postfach 30 02 40, 70442 Stuttgart,
[email protected] Park, Kwang-Hyun Department of Electrical Engineering and Computer Science, KAIST, 373-1
Sebastiani, Francesco Centro INAIL RTR, Via Vetraia 7, 55049 Viareggio (LU), Italy
[email protected] Seliktar, Rahamim School of Biomedical Engineering, Drexel University, 32 and Chestnut Sts, Philadelphia PA 19104
[email protected]
XXIV List of Contributors Seo, Kap-Ho Department of Electrical Engineering and Computer Science, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
[email protected]
Tejima, Noriyuki Department of Robotics, Faculty of Science and Engineering, Ritsumeikan University
[email protected]
She, Haiying Institute of Automation, University of Bremen, Otto-Hahn-Allee, NW1, 28359 Bremen, Germany,
[email protected]
T´ oth, Andr´ as Budapest University of Technology and Economics, Department of Manufacturing Engineering, Egry J. u. 1. Budapest 1111 Hungary
[email protected]
Soyama, Ryoji SECOM Co., Ltd., Intelligent Systems Laboratory, Medical Welfare Division, 8-10-16 Shimorenjaku, Mitaka, Tokyo 181–8528, Japan
[email protected] Stefanov, Dimitar Cardiff & Vale NHS Trust, Rehabilitation Engineering Unit, Cardiff, CF5 2YN, UK; Institute of Mechanics, Bulgarian Academy of Sciences, Acad. G. Bonchev Street, Block 4, 1113 Sofia, Bulgaria D
[email protected] Stuyt, Harry Exact Dynamics BV, Edisonstraat 96, NL-6942 PZ, Didam, The Netherlands
[email protected] Surdilovic, Dragoljub Fraunhofer Institute IPK-Berlin, Department Robotics and Automation, Pascalstraße 8-9, 10587 Berlin, Germany
[email protected] Takeuchi, Ikuo Mechanical Engineering Research Laboratory, Hitachi, Ltd., 502 Kandatsu, Tsuchiura, Ibaraki 300–0013, Japan
[email protected]
Van der Loos, H.F. Machiel, Rehabilitation R& D Center, VA Palo Alto Health Care System, 3801 Miranda Ave. # 153, Palo Alto, CA, 94304 USA
[email protected] van Woerden, J.A. TNO TPD, PO-BOX 155, 2600 AD Delft, The Netherlands,
[email protected] van Woerden, Koos TNO TPD, Stieltjesweg 1, NL-2628 CK, Delft, The Netherlands,
[email protected] Vecchi, Fabrizio Centro INAIL RTR, Via Vetraia 7, 55049 Viareggio (LU), Italy
[email protected] Versluis, A.H.G. TNO TPD, PO-BOX 155, 2600 AD Delft, The Netherlands,
[email protected] Volosyak, Ivan Institute of Automation, University of Bremen, Otto-Hahn-Allee, NW1, 28359 Bremen, Germany,
[email protected]
List of Contributors Volpe, Bruce Department of Neurology and Neuroscience, Weill Medical College Cornell University, Burke Medical Research Institute, 785 Mamaroneck Avenue, White Plains, NY 10605, USA; Burke Rehabilitation Hospital, 785 Mamaroneck Avenue, White Plains, NY 10605, USA
[email protected] Yoo, Dong Hyun Dept. of Electrical Engineering and Computer Science, KAIST, 373-1, Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
[email protected] Yoshidome, Tadashi Department of Welfare Systems Engineering, Kanagawa Institute of Technology, 1030 Shimo-ogino, Atsugi-shi, Kanagawa, 243-0292, Japan
[email protected]
XXV
Williams, Dustin Interactive Motion Technologies, Inc., 56 Highland Ave, Cambridge, MA 02139, USA
[email protected] Zecca, Massimiliano Scuola Superiore Sant’Anna, ARTS Labs, Polo Sant’Anna Valdera, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy
[email protected] Zhang, Jinyu Fraunhofer Institute IPK-Berlin, Department Robotics and Automation, Pascalstraße 8-9, 10587 Berlin, Germany Zlatov, Nikolay Cardiff University, Manufacturing Engineering Centre, The Parade, PO Box 925, Cardiff CF24 0YF, Wales, UK
[email protected]
List of Abbreviations
ACC – active compliance control
ADL – activities of daily living
AGW – automatically guided wheelchairs
ARM – assistive robotic manipulator
BWS – body weight support
COP – center of pressure
DLS – double limb support
DoF – degree of freedom
DoFs – degrees of freedom
EMG – electromyography, electromyographic
FMMNN – Fuzzy Min-Max Neural Networks
FSR – force sensing resistors
HMM – Hidden Markov Model
ICORR – International Conference on Rehabilitation Robotics
LNA – low noise amplifier
LRF – laser range finder
LPM – log-polar mapping
QOL – quality of life
ROL – respect of living
RR – rehabilitation robot
RRs – rehabilitation robots
RTAI – Real-Time Application Interface
sls – single limb support
TOD – task-oriented design
1 Advances in Human-Friendly Robotic Technologies for Movement Assistance/Movement Restoration for People with Disabilities Dimitar Stefanov and Z. Zenn Bien
Abstract. Rehabilitation robots (RR) are expected to play an important role in supporting the independent life of older persons and persons with disabilities. Such intelligent devices, embedded in the home environment, can provide the resident with 24-hour movement assistance. Modern home-installed robots tend to be not only physically versatile in functionality but also emotionally human-friendly, i.e. they should be able to perform their functions without disturbing the user and without causing him/her any pain, inconvenience, or movement restriction, instead possibly providing him/her with comfort and pleasure. This chapter analyzes the main categories of RR and then discusses some important issues in the future development of an intelligent residential space with human-friendly rehabilitation robots integrated into it.
1.1 Introduction

Recent statistics show a trend of rapid growth in the number of persons with physical disabilities and aged people who need external help in their everyday movement tasks [1]. The problem of caring for older persons and persons with physical disabilities will become more serious in the near future, when a significant part of the increasing global population will be in the group aged 65 years or over and the existing welfare model will not be capable of meeting the increased needs. It is obvious that such a problem cannot be solved simply by increasing the number of care-givers. On the other hand, an optimistic view is growing that the quality of life of people with movement limitations can be significantly improved by means of various modern technologies, in particular through rehabilitation robots [2, 3]. It is expected that rehabilitation robots (RR) will have a strong positive emotional impact on persons with physical disabilities and older persons, improving their quality of life, increasing their movement independence, and giving them privacy. Such assistance will also reduce, to some extent, the medical care costs per person.
The idea of rehabilitation robots evolves with the development of relevant technology. While the initial conceptions of RR design were mainly concerned with a manipulator controlled in a direct mode, the latest design tendencies target multi-agent robotic solutions in which several home-installed robots are integrated with other home-installed devices. Whereas many service robots for non-impaired people easily meet requirements that are standardized to some extent, the problems and needs of older persons and persons with physical disabilities vary significantly from person to person, which adds many serious requirements to RR design. Robots for people with special needs should apply control algorithms based on a small number of commands relevant to the specifics of the user's own motions. The technological solutions should also consider the individual's habits and should meet all safety regulations. The development of sophisticated RR has been strongly influenced by the recent fast progress of various intelligent technologies such as fuzzy logic, artificial neural networks, and evolutionary algorithms; thus, such RRs are sometimes termed "intelligent RRs". The concept of RR for older persons and persons with physical disabilities has been considered important by many social and economic institutions in the advanced countries. In fact, numerous research and demonstration projects on RR have already been completed or are under development all over the world. Many of these projects are funded by international R&D organizations and involve participants from different countries. Research activity on RR is relatively high in Japan, Europe, and the USA, where strong growth of the aged population is combined with the availability of broad high-technology achievements. Some RRs are available on the market as commercial products and have become important helpers for an increasing number of patients, significantly improving their quality of life. The objective of this chapter is to provide a summary of some recent works related to the notion of RR for the service of older persons and persons with physical disabilities. The subject matter lies in the interdisciplinary area of many different branches of science and technology, and successful design is possible only as the result of the joint efforts of many specialists in different areas. We find that the technologies and solutions for RR should be human-friendly, i.e. the RR should be designed according to the notion of human-centeredness and should possess a high level of intelligence in their control, actions, and interactions with the users, offering them a high level of comfort and functionality. In this chapter we have included some ideas and tendencies that have been proposed and developed recently. Comments on RR research projects, products, and conceptions are made with a focus on the technological innovations that have taken place. This chapter does not treat non-technical components of RR design such as interior design, organization of medical care and care-giver servicing, maintenance, etc. The chapter is organized as follows. In Sect. 1.2, we give a brief classification of RRs with regard to their application and introduce some recent projects and commercially available RR products. Next, an overview of the place of RRs in the arrangement of some smart-home projects is given. Our vision of the futuristic robotic smart house is also presented.
1.2 Areas of RR Application

Rehabilitation robots are intended to meet different aspects of users' needs. Although the area of rehabilitation robotics is relatively new, its intensive development during the last 20 years has formed several distinct domains.

1.2.1 Robotic Systems for Movement Assistance

Most rehabilitation robots are designed to assist disabled individuals in their everyday movement needs, such as eating, drinking, object replacement, etc. [4]. Currently, three main types of robot scheme are in operation: desktop-mounted robots (workstations), wheelchair-mounted robots, and mobile autonomous robots. In some simple applications, the robot is fixed to a desk or to the floor. The operator is located in a suitable position near the worktable and controls the robot, which performs unaided pick-and-place ADL tasks [5]. Wheelchair-mounted robots [6, 7] can be used for both indoor and outdoor assistance. The attachment of a robot to the wheelchair significantly increases the movement independence of the user, who can move freely to different locations in the house and can perform manipulative tasks in each position with the help of the rehabilitation robot. The drawbacks of such a solution are the inclination of the wheelchair due to the weight of the robot, the enlargement of the wheelchair width (which is critical for passing through narrow doors), and changes in the dynamic characteristics of the wheelchair. Mobile robots are remotely controlled devices that navigate autonomously through the home environment and serve a user who is located at a certain position (bed, chair, etc.). The cognitive load on the user can be reduced considerably if the robot automatically performs repetitive movements (in a pre-programmed mode of control). Programs can be executed successfully if the robot, the user, and the manipulated objects remain in the same initial position every time the concrete task is performed. In the case of a wheelchair-mounted manipulator, the relative position of the user with respect to the manipulator remains the same, but the relative position between the manipulator and the objects may depend on the wheelchair position. To avoid this problem, either a technique of vision-based automatic navigation of the gripper is adopted to handle the grasped object (KARES I, KARES II, TAURO, etc.) [8–10], or the user performs end-point control, in which the trajectory, orientation, and velocity of the gripper are adjusted directly (HOPE) [5]. Automatically guided wheelchairs (AGW), also known as "go-to-goal wheelchairs", are intended to facilitate the transportation of individuals with severe dexterity limitations of the hands. Because of their automatic guidance in the home environment, these wheelchairs are often considered a special class of mobile rehabilitation robots for the transportation of a user. Most devices of this category are oriented toward application in indoor environments, but recently some results from the design of outdoor autonomous wheelchairs have been reported [11]. Different from standard wheelchairs, in which the user directly controls the movement, AGWs
autonomously navigate toward the goal. After receiving the user's instruction about the destination point, the navigation system first generates the travel route and then independently steers the wheelchair to the desired position. The automatic control of go-to-goal wheelchairs dramatically reduces the cognitive load of the user and makes safe movement through narrow doors and corridors possible. In most wheelchair projects, wheelchair movement is assumed to occur in a structured or semi-structured home environment. Localization of the current wheelchair position is based either on fixed-location beacons strategically placed at pre-defined locations within the operating home environment [12, 13] or on natural landmarks of the structured environment [14]. Beacon-based systems can be further grouped into systems that refer to active beacons (most often fixed active-beacon emitters installed on the walls) and systems that get navigational parameters from passive targets [12]. Usually, passive beacons are of low cost and can easily be attached to the walls, but the procedures for detecting such markers (typically with CCD sensors) and extracting the coded information are rather complicated. Localization systems based on active-beacon emitters typically involve simple sensors to identify the beacon positions and apply a simple information-decoding procedure. However, such a solution does not allow flexible change of the beacon positions, because each sensor must be separately powered and controlled. Guidepath systems for wheelchair navigation can be considered a special class of beacon-based systems in which the guide tracks are embedded in the floor. The magnetic-tape guidepath scheme [15] involves a strip of flexible magnetic material attached to the floor surface. The magnetic-stripe follower is based on an array of fluxgate or Hall-effect sensors mounted on board the vehicle. Although the approach is widely used in many material-handling applications, its use for wheelchair guidance in the home environment is limited due to the complexity of the movement routes and the need for frequent reconfiguration of the path. An additional limitation comes from the requirement to embed the guidepath in the floor. Natural-landmark navigation does not require the installation of special beacons, and the algorithms allow faster adaptation of the wheelchair to an unknown home environment. The proposed navigation solutions vary from detecting the location of ceiling-mounted lamps [16] to detecting doorframes and furniture edges [14, 17]. New travel routes can easily be added to the computer memory without the assistance of specialized staff. On the other hand, solutions based on natural landmarks require much more complicated vision sensors, involve complex algorithms for analysis of the visual scene, and must cope with artifacts caused by ambient light. To achieve successful wheelchair navigation when some beacons or landmarks are absent or malfunctioning, most control algorithms identify the current wheelchair location not only from the information from the beacon navigation system but also from information on the angular position of the driving wheels (dead-reckoning procedure).
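To make the dead-reckoning procedure concrete, the sketch below updates a wheelchair pose estimate from incremental wheel-encoder readings using standard differential-drive kinematics. It is a minimal illustration rather than the method of any cited system; the wheel radius and track width are hypothetical values.

```python
import math

# Hypothetical geometry of a powered wheelchair (metres).
WHEEL_RADIUS = 0.17   # drive-wheel radius
TRACK_WIDTH = 0.56    # distance between the two drive wheels

def dead_reckon(pose, d_left_rad, d_right_rad):
    """Update (x, y, heading) from incremental wheel rotations (radians).

    Standard differential-drive odometry: each wheel rotation is converted
    to an arc length; the mean gives the forward travel and the difference
    gives the change of heading.
    """
    x, y, theta = pose
    d_left = d_left_rad * WHEEL_RADIUS     # left-wheel arc length
    d_right = d_right_rad * WHEEL_RADIUS   # right-wheel arc length
    d_center = (d_left + d_right) / 2.0    # forward displacement
    d_theta = (d_right - d_left) / TRACK_WIDTH
    theta_mid = theta + d_theta / 2.0      # integrate along the mean heading
    x += d_center * math.cos(theta_mid)
    y += d_center * math.sin(theta_mid)
    return (x, y, (theta + d_theta) % (2.0 * math.pi))

# Example: 50 identical steps in which the right wheel turns slightly
# faster, so the pose estimate curves gently to the left.
pose = (0.0, 0.0, 0.0)
for _ in range(50):
    pose = dead_reckon(pose, 0.10, 0.11)
print(pose)
```

Because encoder errors accumulate over distance, such an estimate is usable only when it is periodically corrected against the beacons or landmarks described above.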
Apart from automatic navigation to the goal, most automatically guided wheelchairs can also perform obstacle-avoidance maneuvers automatically, referring to information from range sensors (mostly sonar or optical retroreflective sensors) [18]. In many wheelchair solutions, the same range sensors are used to run the wheelchair in a semi-autonomous mode in which the user's instructions are modified in conjunction with sensor information on nearby objects. For semi-autonomous control, several schemes have been proposed and tested, such as wall following, people following/avoidance, and narrow-corridor passage [14].
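One common way such a semi-autonomous mode can be realized is sketched below: the user's joystick command is blended with a repulsive term computed from range-sensor readings, so the chair brakes and steers away from nearby obstacles. This is a generic illustration under assumed sensor geometry, not the algorithm of any specific project cited here.

```python
import math

def semi_autonomous_command(v_user, w_user, sonar, safe_range=1.0):
    """Blend a user command with sonar-based obstacle repulsion.

    v_user, w_user -- requested linear (m/s) and angular (rad/s) velocity
    sonar          -- list of (bearing_rad, range_m) readings; bearings are
                      measured counterclockwise from the chair's heading
    """
    v_rep, w_rep = 0.0, 0.0
    for bearing, rng in sonar:
        if rng < safe_range:
            weight = (safe_range - rng) / safe_range   # closer -> stronger
            v_rep -= weight * math.cos(bearing)   # brake for frontal obstacles
            w_rep -= weight * math.sin(bearing)   # turn away from side obstacles
    # The closer the nearest obstacle, the more authority the sensors get.
    nearest = min((rng for _, rng in sonar), default=safe_range)
    alpha = min(1.0, max(0.0, nearest / safe_range))   # 1.0 = pure user control
    v = alpha * v_user + (1.0 - alpha) * v_rep
    w = alpha * w_user + (1.0 - alpha) * w_rep
    return v, w

# Obstacle just left of the heading: the forward command is braked to near
# zero and the steering is biased away from it (negative = rightward here).
print(semi_autonomous_command(0.8, 0.0, [(0.3, 0.4), (-1.2, 2.5)]))
```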
The design of a robotic device that combines an autonomously navigated wheelchair with a manipulator installed on it is considered in [19]. Such a wheelchair can be used in three modes: (1) for autonomous indoor transportation of the user; (2) as a mobile robot that is remotely controlled to deliver different objects to the user when he/she is in bed; (3) as a home-inspection robot that is remotely controlled and sends TV images from different places in the user's home. Powered feeders are rehabilitation robots especially designed for feeding patients with severe movement limitations of the upper limbs. Controlling the robot by themselves, such users can eat a normal meal at their own pace. Handy 1 [20, 21], My Spoon [22, 23], and ISAC [24–26] belong to this class. Handy 1 also has optional functions for brushing teeth, shaving, and applying make-up. ISAC uses image recognition to determine the exact position of the user's mouth.

1.2.2 Robots for Physical Support and Indoor Navigation

Devices from this group are intended to assist users with both movement weakness and visual impairments. Potential users of these RRs are aged people and people with multiple impairments. Such machines are usually designed as a motorized base that gives physical support to the user. The forces exerted by the user's body are monitored and used to control the platform. The robot control system determines the current location of the platform and outputs voice-synthesized navigation instructions or warnings about potential obstacles on the intended route. Good examples of this conception are the HITOMI project [27] and the PAM-AID project [28–31], [W1]. The WHERE system (Walking and moving HElper Robot) developed at KAIST [32, 33] aims to assist the user in walking and gait rehabilitation and provides body weight support. The system automatically detects the user's intention regarding walking speed and movement direction. A picture of the WHERE system is shown in Fig. 1.1.

1.2.3 Robots for Physical Rehabilitation

Different from electromechanical devices for passive motion rehabilitation (such as the Artromot system of ORTHOMED Medizintechnik Ltd) [W2], robot-assisted systems for movement rehabilitation can perform various movement programs and can sense the user's force reactions. In relation to this application, we may mention the robotic device for stroke rehabilitation in the GENTLE/S project [34], the robotic therapy system called the Stanford Driver's SEAT [35], the MIME project [36] of the VA Palo Alto Rehabilitation R&D Center, the robotic
system for neuro-rehabilitation of the Newman Laboratory at MIT [37], the system of the University of California, Irvine [38], the REHAROB project [39, 40], etc. These robotic systems can easily be programmed to implement different rehabilitation exercises that fit the concrete needs of particular users, and they offer flexible adjustment of various movement parameters such as the range of flexion and extension, the pause between sequential motions, force, speed, etc.
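In software terms, the flexibility described above amounts to exposing each exercise as a small set of adjustable parameters. The sketch below shows one hypothetical way such a movement program could be parameterized and validated; the field names and safety bounds are illustrative, not those of any cited system.

```python
from dataclasses import dataclass

@dataclass
class ExerciseProgram:
    """Hypothetical parameter set for one robot-assisted therapy exercise."""
    flexion_deg: float = 90.0      # upper limit of the range of motion
    extension_deg: float = 10.0    # lower limit of the range of motion
    speed_deg_s: float = 20.0      # joint speed during the motion
    pause_s: float = 2.0           # pause between sequential motions
    max_force_n: float = 30.0      # force ceiling for patient safety
    repetitions: int = 10

    def validate(self):
        # Reject settings outside conservative, therapist-approved bounds.
        assert 0 <= self.extension_deg < self.flexion_deg <= 150
        assert 0 < self.speed_deg_s <= 60
        assert 0 < self.max_force_n <= 80

# A therapist reuses a template and tunes only what this patient needs.
program = ExerciseProgram(flexion_deg=70.0, speed_deg_s=12.0, repetitions=15)
program.validate()
print(program)
```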
Fig. 1.1. Walking and moving helper robot system (WHERE) developed at KAIST (Courtesy of Ju-Jang Lee). The system is used for gait rehabilitation and assists users in walking by providing body weight support. The robot automatically detects user intention regarding walking speed and movement direction
1.2.4 Vocational RR

Different from robots primarily designed to help paralyzed users in their everyday movement activities, vocational rehabilitation robots assist the paralyzed user in concrete occupational activities, such as office work [41, 42], programming (RAID and Epi-RAID) [43], work in chemical and biological laboratories (Walky) [44], visual inspection of hybrid integrated circuits (IRVIS) in a real manufacturing environment [45, 46], and operation of a commercial lathe [47].
1.2.5 Emotional Interactive Entertainment Robots

Similar to virtual reality systems for communication with virtual subjects or game play, emotional interactive entertainment robots [48] are intended to increase emotional comfort and give some emotional relief to people who live alone. Different from virtual reality software products, in which the virtual creatures appear only as images on the computer screen, entertainment robots are mechatronic devices that exhibit animal-like behavior. Pet robots are one of the latest tendencies in the development of entertainment home robots. Within the HII House (Home Information Infrastructure House) project, National Panasonic has demonstrated a conceptual idea for a user-friendly interface for older people in their homes: an electronic home interface and memory jogger designed as cuddly toys – Tama the robocat and Kuma the robot bear. A speech synthesis device can reproduce a number of phrases as a voice response to particular voice-activated user inputs. The device can also be programmed to remind the user to take his/her medication at a particular time. Failure to respond to the device can activate a direct-dial call to the care-giving staff, who can then check whether or not the user's condition is normal. The interactive robot BECKY [49], developed at KAIST, Korea, demonstrates different behaviour in accordance with the emotional status of the user. BECKY recognizes the current emotion of the user by observing the user's response to its actions and considering environmental information that can affect the user's emotional status. BECKY can adapt to the user's preferences through a learning mechanism based on neural networks. In addition, BECKY can choose and play music to lift the user's spirits. The seal robot and the cat robot developed in Japan [50, 51] are recent results of research on the physical interaction between humans and pet robots. The robots were tested with aged people in a hospital in Japan. In the same category we may place the AuRoRA project [52, 53], [W3], which studies the application of robots to education and therapy for children with autism, helping them to develop and increase their communication and social interaction skills. The robot is used as an interactive "toy" whose behaviour can change depending on the child's response. Applications of rehabilitation robots are not limited to those listed above. Contemporary technological achievements are a premise for the design of more sophisticated robots that meet further aspects of users' needs. As a result, we may expect in the near future some new categories of RR that address more sophisticated tasks, such as helping the user with bathing, changing linen, assisting with changing clothes, helping the user recover after a fall, lifting the patient from the bed to the wheelchair, cooking, etc.
1.3 Specialized Human-Machine Interface

A Human-Machine Interface (HMI) can translate the user's commands for proper operation of rehabilitation robots, wheelchairs, or other home equipment such as lamps, TV sets, telephones, doors, home security systems, etc. in an easy and efficient way. The movement limitations of users with severe paralysis pose a serious problem for reliable control of home-installed assistive systems. As HMIs, head-tracking devices are widely used because of their ability to produce up to three independent proportional signals that correspond to forward-backward head tilting, left-right head rotation, and lateral head tilting. This manner of control is very natural for the user. Recently, some new head-tracking techniques involving facial detection [54] and optoelectronic detection of light-reflective head-attached markers (Tracker2000, Head Mouse) have been proposed [55, 56]. New technologies such as eye-movement control, brain control [57], gesture recognition [58], and facial expression recognition give new opportunities for human-friendly interaction between the user and home-installed devices and will become a basis for new interface solutions in the near future. Voice control is also considered a natural and easy way to operate home-installed devices, but its application is still limited because of the high dependence of the recognition rate on the specifics of the voice and on the ambient noise level. Recently, some new voice recognition algorithms based on neural networks have given optimism that these drawbacks will soon be overcome and that voice control can be applied in noisy environments [59]. Soft computing techniques1 offer new perspectives on the application of EMG signals to the control of the home environment, allowing successful extraction of informative signal features even when strong noise interferes with the useful EMG signals [60].

1 Soft computing differs from conventional (hard) computing in that it is tolerant of imprecision, uncertainty, and partial truth. It includes neural networks, fuzzy logic, evolutionary computation, rough set theory, probabilistic reasoning, and expert systems.
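As a concrete illustration of extracting a usable control event from a noisy EMG signal, the sketch below detects muscle-activation onsets with a moving-window RMS and a fixed threshold. This is a deliberately simple baseline, not the soft-computing method of [60]; a real system would replace the fixed threshold with learned, user-specific classification.

```python
import math
import random

def rms_window(samples, start, width):
    """Root-mean-square amplitude of one analysis window."""
    window = samples[start:start + width]
    return math.sqrt(sum(s * s for s in window) / len(window))

def detect_activations(emg, fs_hz=1000, win_ms=50, threshold=0.2):
    """Return the times (s) at which the windowed RMS first crosses threshold."""
    width = int(fs_hz * win_ms / 1000)
    onsets, active = [], False
    for start in range(0, len(emg) - width, width):
        level = rms_window(emg, start, width)
        if level > threshold and not active:
            onsets.append(start / fs_hz)   # rising edge -> one command event
        active = level > threshold
    return onsets

# Synthetic test signal: quiet baseline, a burst of activity, quiet again.
random.seed(0)
emg = [random.gauss(0, 0.05) for _ in range(1000)]    # rest
emg += [random.gauss(0, 0.5) for _ in range(300)]     # contraction
emg += [random.gauss(0, 0.05) for _ in range(700)]    # rest
print(detect_activations(emg))                        # approximately [1.0]
```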
1.4 Rehabilitation Robots in the Smart House Design

Intelligent houses for persons with physical disabilities and for older persons should provide their residents with better environmental conditions for independent indoor life. The numerous smart devices and systems installed in the house should be capable of linking with each other, processing information from the inhabitant or from his/her environment, and making independent decisions and taking actions for the survival of the resident in cases of emergency. During the last decade, different concepts of a smart house have been developed and tested under various projects. These developments are oriented toward different groups of people with special needs (PSN) and refer to different social infrastructure
(which can vary from a single house to a village, nursing home, hospital, etc.). Rehabilitation robots have become an important part of recently developed smart-house models. The "Robotic Room" is a project developed at the Sato Laboratory of the Research Center for Advanced Science and Technology at the University of Tokyo [61, 62]. The Robotic Room can be considered a step toward the realization of intelligent houses for persons with physical disabilities and a further development of the idea of the Techno Houses. The environment of the Robotic Room consists of a ceiling-mounted robot arm (called the "long reach manipulator") and an intelligent bed with pressure sensors for monitoring the inhabitant's posture. Its modules can monitor the person's respiration without attaching any additional sensors to the user's body. The robot arm is intended to bring various objects to the user. The developed life-support infrastructure is meant to comply with the needs of the rapidly aging society. In the "Intelligent Sweet Home" project at KAIST [63], gestures are adopted for the control of home-installed devices. Two rehabilitation robots are part of the scenario. The first is mounted on the user's bed and helps the user in activities such as book reading, object replacement, quilt adjustment, massage, scratching, etc. The experimental design employs a Manus robot; a supplementary mechanical module provides translation of the robot along the bed. The second robot is mobile, and its role is to perform transportation tasks. (A Pioneer robot from ActivMedia Robotics, LLC was used in this design.) Ceiling-mounted cameras are used for localization of the current robot position; the same cameras detect the positions of the quilt ends. The interface, called the "soft remocon", consists of three ceiling-mounted video cameras that detect the orientation of the user's hand. By pointing at the robots, the television, or the curtains, the user chooses the device to be controlled. Special light signals confirm the user's selection. After choosing the device, the user gives his/her instruction with pre-defined hand gestures. A voice-generated message confirms the recognized gesture command before its execution. A picture of the KAIST intelligent robotic room is presented in Fig. 1.2.
Fig. 1.2. The intelligent robotic room at KAIST, Korea
Figure 1.3 shows the main idea of the “soft remocon”.
Fig. 1.3. Gesture-based human-machine interface. a Soft remocon – the user's hand gesture is automatically recognized by the TV-based image recognition system and the desired action ("Turn the TV on!") is executed; b Pointing recognition system – by pointing at a concrete object, the user specifies the object that should be moved, and the robot performs the predefined task
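The soft remocon's control flow (select a device by pointing, confirm the selection, then issue a gesture command) can be written as a small dispatcher, as sketched below. The device names, gesture labels, and callback hooks are hypothetical placeholders; the vision-based recognition itself is assumed to happen elsewhere.

```python
# Hypothetical command vocabulary: per-device gesture -> action mapping.
DEVICES = {
    "tv":      {"palm_up": "power_on", "palm_down": "power_off"},
    "curtain": {"swipe_left": "open", "swipe_right": "close"},
    "robot":   {"beckon": "come_here", "point_floor": "pick_up_object"},
}

def soft_remocon(pointed_device, gesture, confirm, announce):
    """Dispatch one recognized (pointing, gesture) pair to a home device.

    pointed_device -- device name produced by the pointing recognizer
    gesture        -- label produced by the gesture recognizer
    confirm        -- callback asking the user to confirm (e.g. light signal)
    announce       -- callback for the voice-synthesized command echo
    """
    actions = DEVICES.get(pointed_device)
    if actions is None or gesture not in actions:
        announce(f"Command not understood for '{pointed_device}'.")
        return None
    if not confirm(pointed_device):      # user rejected the selection
        return None
    action = actions[gesture]
    announce(f"Executing: {pointed_device} -> {action}")
    return (pointed_device, action)      # handed on to the device controller

# Console stand-ins for the light-signal and speech-synthesis feedback.
print(soft_remocon("tv", "palm_up", confirm=lambda d: True, announce=print))
```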
1.5 Functional Integration of the Robotic Environment

In some new developments, RRs are components of the intelligent home environment and act in conjunction with other home-installed devices. In such an arrangement, RRs need to be controlled in a coordinated manner. M3S (Multiple Master Multiple Slave) is a communication strategy especially designed for the functional integration of home-installed rehabilitation devices. The M3S specification started from the TIDE project [64, 65] and was later developed into an open, freely available standard. M3S allows users to assemble a specific, complete modular system and to extend or modify the system later in a plug-and-play manner. In case of emergency, the user can halt the operation of the whole M3S system with a Dead Man Switch. The efficiency of this integration strategy has been demonstrated by several evaluations with users from several European countries. The ICAN project (Integrated Control for All Needs) developed the idea of functional integration of home-installed devices further [66], [W4].
The main objective of the project is to propose optimal control of all home-installed devices through a single interface device, such as a joystick or a switch input. A portable device, named the function carrier, distributes the commands of the single interface device to various output devices. For example, depending on the setting of the function carrier, the user is able to control the wheelchair or the rehabilitation robot using one joystick. The function carrier itself can be designed as a portable computer or palmtop PC connected to the interface modules of the separate rehabilitation devices. The overall integration method applies the M3S architecture and communication protocols. ICAN is a collaborative project supported by the Telematics Applications Programme of the European Commission [67].
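A minimal sketch of the function-carrier idea follows: one joystick, several output devices, a mode switch that selects which device currently receives the commands, and a dead-man switch that halts everything, in the spirit of the M3S safety philosophy. The class and method names are invented for illustration and are not part of the M3S or ICAN specifications.

```python
class FunctionCarrier:
    """Route a single input device to one of several controllable devices."""

    def __init__(self, devices):
        self.devices = devices              # name -> callable(v, w)
        self.active = next(iter(devices))   # currently selected device
        self.halted = False

    def select_next(self):
        """Mode switch: cycle to the next controllable device."""
        names = list(self.devices)
        self.active = names[(names.index(self.active) + 1) % len(names)]

    def reset(self):
        """Explicit operator action required to leave the halted state."""
        self.halted = False

    def joystick(self, x, y, dead_man_pressed):
        """Forward one joystick sample; releasing the dead-man switch
        stops every connected device until reset() is called."""
        if not dead_man_pressed:
            self.halted = True
        if self.halted:
            for stop in self.devices.values():
                stop(0.0, 0.0)              # command every device to zero
            return
        self.devices[self.active](x, y)

carrier = FunctionCarrier({
    "wheelchair": lambda x, y: print(f"wheelchair drive {x:+.1f} {y:+.1f}"),
    "manipulator": lambda x, y: print(f"arm jog {x:+.1f} {y:+.1f}"),
})
carrier.joystick(0.5, 0.0, dead_man_pressed=True)    # drives the wheelchair
carrier.select_next()
carrier.joystick(0.0, 0.3, dead_man_pressed=True)    # jogs the manipulator
carrier.joystick(0.9, 0.9, dead_man_pressed=False)   # emergency halt
```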
1.6 Commercialization of RR

Commercialization of rehabilitation robots is often impeded by several factors, such as high cost, low efficiency, and existing welfare regulations. Despite these problems, some rehabilitation robots, such as Manus (Netherlands) [68, 69], Handy 1 (UK) [20, 21], Raptor (USA) [70], and My Spoon (Japan) [22, 23], have already become commercially available products that are used daily by an increasing number of end-users, offering them enhanced movement assistance and comfort in operation. Although research in RR continuously explores new service areas for people with disabilities, it seems that people with disabilities have not taken enough advantage of this technology so far. When MRI was introduced for musculoskeletal imaging, clinicians were resistant to learning the technology because it did not appear to offer anything over CT scans; however, it has now been accepted as a vital part of diagnostic imaging. Similarly, it can be expected that, after a certain period of improvement and promotion, RR will become a vital part of the treatment of people with disabilities. In general, the decision to buy a rehabilitation robot (DB) can be expressed as:
DB = f((efficiency, appearance, safety, ease of control) / price)                (1.1)

The efficiency of an RR can be assessed by finding good answers to the following important questions:
1. Which activities can be assisted with the robot?
2. How important is each performed task for everyday life and the user's movement freedom? For example, scratching is a quite natural human action that is important for the user, but other activities, such as feeding, bathing, and handling fine objects, have dominant priority in everyday life. The impact of an RR on the quality of life of individuals with movement disabilities can be enhanced by a careful choice of the activities in which the RR helps. However, the gap between academic laboratory research and clinical practice continues to exist; to significantly advance the application of RR in rehabilitation practice, the link between scientific investigations in rehabilitation robotics and the clinical application of their results must be strengthened.
3. Does the robot contribute to the user's privacy? Can the robot help with activities that are strictly private? For instance, changing clothes and bathing are strictly private activities, and it is important whether the robot can be helpful with them.
4. For what kinds of movement impairment can the robot be applied? A larger user group means better chances of manufacturing the robot in large series.
5. What is the robot's contribution to the completion of each task? For instance, when we expect robot assistance in eating, we must clarify whether the robot can prepare the food by itself or whether it is used to serve food that has been prepared in advance by a human helper.
6. Knowing the robot's characteristics, for how many hours a day is the robot expected to be used, i.e. for how many hours a day will the robot replace the human helper, and how much movement independence will the robot provide?
7. What is the speed of execution of a certain task with the robot compared with the speed of natural motions? For example, if a non-impaired person using his/her hands usually completes a certain task within 20 seconds, what is the expected speed of execution of a similar task by a user operating the robot, i.e. what is the intensity of the help? On the other hand, the speed should be relatively low for safety reasons, and it is strongly influenced by the skills of the particular user in interacting with the robot.
8. What kind of HMI is used? Does it require any sensors to be attached to the user?
9. What is the robot's reliability? What are the results of the test evaluations and the users' feedback about the robot?
10. What is the noise level when the robot is in operation?
11. What kind of energy is used for robot operation, and for how long can the robot operate on fully charged batteries?
12. What space can be accessed with the robot? Can the robot be used to pick up distant objects (for instance, objects located on the floor)?
13. Is the robot gripper precise enough to handle thin objects and objects with sophisticated shapes?
14. What are the weight and dimensions of the robot? In the case of a wheelchair-mounted robot, it should be assessed how the attachment of the robot affects wheelchair characteristics such as the ability to pass through narrow doors and the shift of the centre of gravity.
15. What are the lifting power of the robot and its maximal speed?

Ease of control relates to the following questions:
1. How much time does a user need to learn how to operate the robot?
2. Does the robot control demand the user's instructions at every moment of the task execution? What is the level of automatic task performance?
3. Does the robot respond only to the user's exact commands, or can its control system respond correctly to tasks that are set with a certain level of "fuzziness"?
For instance, if the user wants to drop a letter into a mailbox, many adjustment maneuvers controlled by the user are usually required until the gripper is correctly oriented with respect to the mail slot. The user's task becomes much easier if the gripper adjustment can be done automatically on the basis of sensor information. Other aspects of the automation of some tasks are: dosing the force applied to the object; preventing the grasped object from sliding; and maintaining the initial object orientation during pick-and-place tasks with liquid containers (cup, spoon, etc.). The appearance of the rehabilitation robot is a very important issue from the aesthetic and psychological points of view. An appropriate design should make the robot look natural and should not attract others' attention. Safety is an item of major consideration in robot design. Different from other robots, RRs are intended to work in close proximity to the face and head of a user whose own motions are limited. RRs are complicated mechatronic devices produced in very small numbers, and the price of such robots is currently relatively high. Users' readiness to meet the asking price depends on two main factors: 1. the amount provided by insurance companies, government agencies, charity organizations, etc.; 2. the end-user's own financial contribution. In addition to the purchase price of the robot, the maintenance costs during the period of its usage should also be taken into account. The maintenance costs include labor from highly qualified personnel and increase considerably if the user's house is far from the technical service center. When commenting on the commercialization of rehabilitation robots, we should take into account that the design of robots for the service of people with movement disabilities is a relatively new area and that currently existing applications cover only a small number of users' needs. Due to the small number of users, the average cost of an RR is quite high: currently, RRs are only a tiny part of service robots, which in turn are a small portion of all manufactured robots. The situation will probably change dramatically when home robots are developed. Many successful research projects give optimism that we are not far from the era when personal robots will be just a part of the home environment, similar to the refrigerator, video, or computer. In such a changed situation, RRs will perhaps differ from ordinary home robots only in some special added functions. Apart from their primary functions, such as object replacement, cooking, serving meals, house guarding, partnering in games, and conversation, the new generation of RRs will possess an advanced human-friendly interface and will be able to help persons with disabilities and aged people much more efficiently.
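Equation (1.1) leaves the function f unspecified. Purely as an illustration of how such a buying decision might be operationalized, the sketch below scores a candidate robot as a weighted sum of normalized criteria divided by a normalized price; the weights, scores, and reference price are invented numbers, not data from any actual evaluation.

```python
# Hypothetical criterion scores on a 0..1 scale and invented weights.
WEIGHTS = {"efficiency": 0.4, "appearance": 0.1,
           "safety": 0.3, "ease_of_control": 0.2}

def buying_decision_score(scores, price_eur, reference_price_eur=10000.0):
    """One possible instantiation of f in (1.1): value per unit of price."""
    value = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return value / (price_eur / reference_price_eur)

candidate = {"efficiency": 0.7, "appearance": 0.6,
             "safety": 0.9, "ease_of_control": 0.5}
print(round(buying_decision_score(candidate, price_eur=25000.0), 3))
```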
1.7 Some Issues for a Futuristic Intelligent Robotic House Model

A research trend in smart house design concerns the futuristic residential space equipped with an advanced monitoring system that observes not only the physical status of the inhabitants but also their behavior and emotional condition. It is also expected that one of the biggest reforms in the home arrangement will be the implementation of various service robots that help the inhabitants in many ways. The actions of the RR will be matched with the operation of other home-installed devices; in this sense, the residential space can be considered a multi-agent robotic system. We refer to this new concept as the "Intelligent Robotic House" (IRH). One example of such an intelligent structure is shown in Fig. 1.4.
Fig. 1.4. A Futuristic Intelligent Robotic House Model. The intelligent robotic house integrates advanced technology for non-invasive health monitoring, movement assistance, and leisure activities, and offers easy human-machine interaction
Each subsystem in the residential space will be capable of solving its local tasks, while a central control unit will coordinate the work of all subsystems and will build the general control strategy. Actions performed in the intelligent residential space will be initiated either by the user's command or by analysis of the information from home-installed sensors. The human-friendly HMI will be based on general user instructions addressed to the common control system of the IRH. Complex tasks may involve many home-installed devices. For instance, a user command to bring some food will first activate special automatic cooking devices to prepare the food; next, a mobile robot will serve the food, and another robot will feed the paralyzed user. In this example, the user does not control the robot directly.
Instead, a special computer plans the task and distributes the separate subtasks to the different devices. A block diagram of the Intelligent Robotic House is given in Fig. 1.5. Here, some of the sensor information is used by several subsystems. For example, the information on the user's temperature and heart rate is used by the health monitoring system, the system for emotional status recognition, and the home environment controller, while the visual information from the home-installed TV cameras is utilized for gesture recognition, facial expression recognition, walk pattern recognition, posture recognition, and home security. The same visual information is also applied to monitoring the health and emotional status of the user.
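The "bring some food" scenario is, in planning terms, the decomposition of one user-level command into an ordered list of device-level subtasks. A toy sketch of such a central planner follows; the task library and device names are hypothetical.

```python
# Hypothetical task library: user command -> ordered (device, subtask) plan.
TASK_LIBRARY = {
    "bring_food": [
        ("cooking_unit", "prepare_meal"),
        ("mobile_robot", "carry_meal_to_user"),
        ("feeding_robot", "feed_user"),
    ],
    "prepare_bed": [
        ("bed_robot", "adjust_quilt"),
        ("environment", "dim_lights"),
    ],
}

def execute_command(command, dispatch):
    """Central planner: expand a user command and run its subtasks in order.

    dispatch(device, subtask) performs one step and returns True on success;
    the plan aborts at the first failure so devices stay in a known state.
    """
    plan = TASK_LIBRARY.get(command)
    if plan is None:
        raise ValueError(f"unknown command: {command}")
    for device, subtask in plan:
        if not dispatch(device, subtask):
            print(f"aborting: {device} failed during {subtask}")
            return False
    return True

# Console stand-in for the real device interfaces.
ok = execute_command("bring_food",
                     dispatch=lambda dev, sub: print(f"{dev}: {sub}") or True)
print("done:", ok)
```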
Fig. 1.5. Functional architecture of a model Intelligent Robotic House. All home-installed devices and robotic systems are controlled by a common control system. The user can use different modalities to communicate with his/her house. Some information is used by several subsystems
The example structure includes three service robots: a kitchen service robot, a robot for movement assistance, and an entertainment robot. The entertainment robot is controlled not only on the basis of information from its local vision sensors but also on information from the home-installed vision sensors. Audio and video programs in the intelligent home can be selected automatically according to the recognized current emotional state of the inhabitant. By monitoring the user's behavior while he/she listens to music or watches a video program, the system learns the user's preferences and includes favorite programs in future play lists.
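A minimal version of this preference-learning behaviour keeps one running score per program and nudges it with each observed reaction. The exponential-average scheme below is an illustrative stand-in for whatever learning mechanism a real system would use.

```python
class PreferenceModel:
    """Learn per-program preference scores from observed user reactions."""

    def __init__(self, learning_rate=0.2):
        self.scores = {}          # program id -> score in [0, 1]
        self.lr = learning_rate

    def observe(self, program, reaction):
        """reaction: 1.0 = the user clearly enjoyed it, 0.0 = disliked it."""
        old = self.scores.get(program, 0.5)        # neutral prior
        self.scores[program] = old + self.lr * (reaction - old)

    def playlist(self, k=3):
        """Pick the k best-rated programs for future sessions."""
        ranked = sorted(self.scores, key=self.scores.get, reverse=True)
        return ranked[:k]

model = PreferenceModel()
for prog, reaction in [("news", 0.2), ("jazz", 0.9), ("jazz", 1.0),
                       ("quiz_show", 0.7), ("news", 0.1)]:
    model.observe(prog, reaction)
print(model.playlist(2))          # e.g. ['jazz', 'quiz_show']
```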
It is expected that the intelligent robotic house will be based on advanced techniques for sensing and health monitoring. Some recent vision-based health monitoring systems rely on recognition of the facial expression and the facial color (paleness) as indications of the current health status of the patient. Emotion monitoring is another challenging subject for providing information on the human state of emotion; an initial result exploring this idea can be found in [71]. New interface designs will offer more convenient, human-friendly, natural, and autonomous, or simply "intelligent", ways of human-machine interaction. Facial expressions and gestures are indicative of human intention and emotion. According to some statistics [72], 93% of the messages in face-to-face communication are transmitted via facial expression, gestures, and voice tone, while only 7% are transmitted via linguistic words. Home-installed RRs should be highly efficient and capable of responding not only to the user's exact commands but also to intentions expressed with a high level of "fuzziness", treating and executing them properly. It is expected that the new generation of interface devices will be able to adapt to the user's specifics and will be highly resistant to various artifacts. We anticipate that a major reform of the future residential space will take place with the advent of various service robots. Low-cost robots with increased functionality and high reliability will extend the range of activities in which persons with disabilities are supported. The IRH will implement strong fusion among different sources of sensor information. Data collected from wearable sensors will be used not only for monitoring the inhabitant's health status but also in the control of the home environment. For example, the room temperature can be adjusted in consideration of the current health condition of the user.
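The room-temperature example can be expressed as a simple fusion rule over the shared sensor data. The sketch below assumes invented sensor names and thresholds purely for illustration; a deployed system would defer such decisions to clinically validated rules.

```python
def adjust_room_setpoint(base_setpoint_c, wearable):
    """Adapt the heating/cooling setpoint to the user's current condition.

    wearable -- dict with the latest readings from body-worn sensors,
                e.g. {"skin_temp_c": 37.8, "heart_rate_bpm": 96}
    """
    setpoint = base_setpoint_c
    # Fever-like skin temperature: cool the room slightly.
    if wearable.get("skin_temp_c", 36.5) > 37.5:
        setpoint -= 1.5
    # Elevated resting heart rate: avoid additional heat stress.
    if wearable.get("heart_rate_bpm", 70) > 90:
        setpoint -= 0.5
    return setpoint

print(adjust_room_setpoint(22.0, {"skin_temp_c": 37.8,
                                  "heart_rate_bpm": 96}))   # -> 20.0
```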
1.8 Concluding Remarks

Rehabilitation robots have become an important development goal with strong social and economic motivations. Their realization will provide a powerful solution to many existing problems in a society with increasing numbers of aged, physically weak, and disabled people, and will make human life more pleasant and easier. In this chapter, we have tried to formulate and classify some recent tendencies of research in the area so as to present our vision of future tendencies in the development of the technology and its organization. Through examples, we have shown that the development strategy of intelligent home-installed technology has changed from the design of separate devices (at the beginning) to a form of integrated system arrangement in which many home-installed devices communicate with each other and synchronously serve/monitor different parameters of the house. Because the problem of the aging society is common to many countries, the research problems of smart-home technology with service robots have already become an important subject of international research. Diverse forms of international cooperation, such as international
conferences, international joint projects for development, and result evaluation, are in progress. From the design point of view, we observe that the development strategy is also experiencing a rapid evolution. Although the first designs of home-installed devices were hardware-oriented, recent strategies are mainly oriented toward intelligent algorithms, where the software solution takes the main part of the whole design. In addition, we have stressed the fact that future RRs will include human-centered technologies in which important technological components provide human-friendly interaction with the user. Home-installed technology will further be oriented toward a custom-tailored design in which the modular components of the smart house meet the individual user's needs, emotional characteristics, and preferences.
References

1. Saito M (2000) Expanding welfare concept and assistive technology. In: Proc. IEEK Annual Fall Conference, Ansan, Korea
2. Warren S, Craft R (1999) Designing smart health care technology into the home of the future. In: Proc. 1st Joint BMES/EMBS Conf., Atlanta, GA, USA, p 677
3. Lindström JI (2001) From R&D to market products – the TIDE Bridge phase. In: Marinček C, Bühler C, Knops H, Andrich R (eds) Assistive Technology – Added Value to the Quality of Life. Amsterdam, The Netherlands: IOS Press, pp 688–692
4. Harwin W, Rahman T, Foulds R (1995) A review of design issues in rehabilitation robotics with reference to North American research. IEEE Trans. Rehab. Eng 3(1): 3–13
5. Stefanov D (1994) Model of a special orthotic manipulator. Mechatronics 4(4): 401–415
6. Kwee H (2001) Integrating control of MANUS and wheelchair. In: Proc. 7th Int. Conf. Rehabilitation Robotics, Evry, France, pp 107–112
7. Rosier JC, van Woerden JA, van der Kolk LW, Driessen BJF, Kwee HH, Duimel JJ, Smits JJ, Tuinhof de Moed AA, Honderd G, Bruyn PM (1991) Rehabilitation robotics: the MANUS concept. In: Proc. 5th Int. Conf. Advanced Robotics, Pisa, Italy, pp 893–898
8. Song WK, Lee H, Bien Z (1999) KARES: Intelligent wheelchair-mounted robotic arm system using vision and force sensor. Robotics and Autonomous Systems 28(1): 83–94
9. Bien Z, Kim DJ, Stefanov DH, Han JS, Park HS, Chang PH (2002) Development of a novel type rehabilitation robotic system KARES II. In: Keates S, Langdon P, Clarkson PJ, Robinson P (eds) Universal Access and Assistive Technology. London, UK: Springer, pp 201–212
10. Pauly M (1995) TAURO – Teilautonomer Serviceroboter für Überwachungsaufgaben. In: Dillmann R, Rembold U, Lüth T (eds) Autonome Mobile Systeme. Berlin, Germany: Springer, pp 30–39
11. Prassler E, Scholz J, Fiorini P (2001) A robotic wheelchair for crowded public environments. IEEE Robotics and Automation Magazine 7(1): 38–45
12. Baumgartner E, Skaar S (1994) An autonomous vision-based mobile robot. IEEE Trans. Automat. Control 39(3): 493–502
13. Yoder JD, Baumgartner E, Skaar S (1996) Initial results in the development of a guidance system for a powered wheelchair. IEEE Trans. Rehab. Eng 4(3): 143–302
14. Gomi T, Griffith A (1998) Developing intelligent wheelchairs for the handicapped. In: Proc. Evolutionary Robotics Symp., Tokyo, Japan, pp 461–478
15. Wakuami H, Nakamura K, Matsumara T (1992) Development of an automated wheelchair guided by a magnetic ferrite marker lane. J. of Rehab. Research and Development 29(1): 27–34
16. Wang H, Kang CU, Ishimatsu T, Ochiai T (1996) Auto navigation on a wheelchair. In: Proc. 1st Int. Symp. Artificial Life and Robotics, Beppu, Oita, Japan
17. Kreutner M, Horn O (2001) Contribution to rehabilitation mobile robotics: Localization of an autonomous wheelchair. In: Proc. 7th Int. Conf. Rehabilitation Robotics, Evry, France, pp 207–214
18. Yanco H (1998) Wheelesley: A robotic wheelchair system: indoor navigation and user interface. In: Mittal VO, Yanco HA, Aronis J, Simpson R (eds) Assistive Technology and Artificial Intelligence – Application in Robotics, User Interfaces and Natural Language Processing. Heidelberg, Germany: Springer, pp 256–286
19. Stefanov D (1999) Integrated control of a desktop mounted manipulator and a wheelchair. In: Proc. 6th Int. Conf. Rehabilitation Robotics (ICORR'99), Stanford University, USA, July 1–2, 1999, pp 207–214
20. Finney R, Topping M (1997) After sales care provision for the Handy 1 robotic aid to independence. In: Proc. 7th Int. Conf. Rehabilitation Robotics, Bath, UK
21. Topping M, Smith J (1999) The development of Handy 1, a robotic system to assist the severely disabled. In: Proc. Int. Conf. Rehabilitation Robotics (ICORR'99), Stanford, CA, pp 244–249
22. Ishii S, Tanaka S, Hiramatsu F (1995) Meal assistance robot for severely handicapped people. In: Proc. IEEE Robotics and Automation Conf., pp 1308–1313
23. Soyama R, Ishii S, Fukase A (2003) The development of meal-assistance robot 'My Spoon'. In: Proc. 8th Int. Conf. Rehabilitation Robotics, pp 88–91
24. Bagchi S, Kawamura K (1994) ISAC: A robotic aid system for feeding the disabled. In: AAAI Spring Symp. Physical Interaction and Manipulation, March 1994
25. Kawamura K, Bagchi S, Iskarous M, Bishay M (1995) Intelligent robotic systems in service of the disabled. IEEE Transactions on Rehabilitation Engineering 3(1): 14–21
26. Kawamura K, Peters II RA, Wilkes MW, Alford WA, Rogers TE (2000) ISAC: foundations in human-humanoid interaction. IEEE J Intelligent Systems 15: 38–45
27. Mori H, Kotani S, Kiyohiro N (1998) HITOMI: Design and development of a robotic travel aid. In: Mittal VO, Yanco HA, Aronis J, Simpson R (eds) Assistive Technology and Artificial Intelligence – Application in Robotics, User Interfaces and Natural Language Processing. Heidelberg, Germany: Springer, pp 221–234
28. Lacey G, Mac Namara S, Dawson-Howe KM (1998) Personal adaptive mobility aid for the infirm and elderly blind. In: Mittal VO, Yanco HA, Aronis J, Simpson R (eds) Assistive Technology and Artificial Intelligence – Application in Robotics, User Interfaces and Natural Language Processing. Heidelberg, Germany: Springer, pp 211–220
29. Lacey G, Dawson-Howe KM (1997) Evaluation of a robot mobility aid for older blind persons. In: Proc. Symp. Intelligent Robot Systems, Stockholm, Sweden
30. MacNamara S, Lacey G (1999) A robotic walking aid for frail visually impaired people. In: Proc. 6th Int. Conf. Rehabilitation Robotics, Stanford, CA, USA, pp 163–169
31. MacNamara S, Lacey G (1999) PAM-AID: a passive robot for frail visually impaired people. In: Proc. RESNA Annual Conf., Long Beach, CA, USA, pp 358–361
32. Lee CY, Seo KH, Oh C, Lee JJ (2000) A system for gait rehabilitation with body weight support: Mobile manipulator approach. Journal of HWRS-ERC 2(3): 16–21
33. Lee CY, Seo KH, Kim CH, Oh SK, Lee JJ (2002) A system for gait rehabilitation: Mobile manipulator approach. In: Proc. 2002 IEEE Int. Conf. Robotics and Automation, Washington, DC, May 2002, pp 3254–3259
34. Harwin W, Loureiro R, Amirabdollahian F, Taylor M, Johnson G, Stokes E, Coote S, Topping M, Collin C, Tamparis S, Kontoulis J, Munih M, Hawkins P, Driessen B (2001) The GENTLE/S project: A new method of delivering neuro-rehabilitation. In: Marinček C, Bühler C, Knops H, Andrich R (eds) Assistive Technology – Added Value to the Quality of Life. Amsterdam, The Netherlands: IOS Press, pp 36–41
35. Johnson MJ, Van der Loos HFM, Burgar CG, Shor P, Leifer LJ (1999) Designing a robotic stroke therapy device to motivate use of the impaired limb. In: Proc. 7th Int. Conf. Rehabilitation Robotics, Evry, France, pp 123–132
36. Lum PS, Burgar CG, Shor PC, Majmundar M, Van der Loos HFM (2002) Robot-assisted movement training compared with conventional therapy techniques for the rehabilitation of upper limb motor function after stroke. Archives of PM&R 83: 952–959
37. Krebs HI, Hogan N, Aisen ML, Volpe BT (1998) Robot-aided neurorehabilitation. IEEE Trans. Rehab. Eng 6(1): 75–86
38. Lum PS, Reinkensmeyer DJ, Lehman SL (1993) Robotic assist devices for bimanual physical therapy: Preliminary experiments. IEEE Trans. Rehab. Eng 1(3): 185–191
39. Arz G, Toth A (2001) REHAROB: A project and a system for motion diagnosis and robotized physiotherapy delivery. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age, ICORR'2001, 7th International Conference on Rehabilitation Robotics, Evry, France. Amsterdam, The Netherlands: IOS Press, pp 93–100
40. Toth A, Arz G, Varga Z, Varga P (2001) Conceptual design of an upper limb physiotherapy system with industrial robots. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age, ICORR'2001, 7th International Conference on Rehabilitation Robotics, Evry, France. Amsterdam, The Netherlands: IOS Press, pp 109–116
41. Van der Loos HFM, Hammel J, Lees DS, Chang D, Perkash I (1990) Field evaluation of a robot workstation for quadriplegic office workers. Eur. Rev. Biomed. Tech 5(12): 317–319
42. Van der Loos HFM (1995) VA/Stanford rehabilitation robotics research and development program: Lessons learned in the application of robotics technology to the field of rehabilitation. IEEE Trans. Rehab. Eng 3(1): 46–55
43. Eftring H (1994) Robot control methods and results from user trials on the RAID workstation. In: Proc. 4th Int. Conf. Rehabilitation Robotics, Wilmington, DE, USA, pp 97–101
44. Neveryd H, Bolmsjö G (1995) WALKY, an ultrasonic navigating mobile robot for persons with physical disabilities. In: Proc. 2nd TIDE Congress, Paris, France, pp 366–370
45. Keates S, Clarkson PJ, Robinson P (2001) Designing a usable interface for an interactive robot. In: Proc. 7th Int. Conf. Rehabilitation Robotics, Evry, France, pp 156–162
46. Clarkson PJ, Keates S, Dowland R (1999) The design and control of assistive devices. In: Proc. Int. Conf. Engineering Design, pp 425–428
47. Oderud T, Tyrmi G (2001) One touch is enough… In: Marinček C, Bühler C, Knops H, Andrich R (eds) Assistive Technology – Added Value to the Quality of Life. Amsterdam, The Netherlands: IOS Press, pp 144–147
48. Shibata T, Tanie K (2001) Emergence of affective behaviours through physical interaction between human and mental commit robot. Journal of Robotics and Mechatronics 13(5): 505–516
49. Do JH, Park KH, Bien Z, Park KC, Oh YT, Kang DH (2001) A development of emotional interactive robot. In: Proc. 32nd Int. Symp. Robotics, Seoul, Korea, pp 544–549
50. Shibata T, Tashima T, Tanie K (1999) Emergence of emotional behaviour through physical interaction between human and robot. In: Proc. IEEE Int. Conf. Robotics and Automation, Detroit, MI, USA, pp 2868–2873
51. Shibata T, Tashima T, Tanie K (1998) Emergence of emotional behaviour through physical interaction between human and pet robot. In: Proc. IARP 1st Int. Workshop Humanoid and Human Friendly Robotics, Tsukuba, Japan, pp 1–6
52. Dautenhahn K (2003) Roles and functions of robots in human society – implications from research in autism therapy. Robotica 21: 443–452
53. Werry I, Dautenhahn K, Harwin W (2000) Challenges in rehabilitation robotics: A mobile robot as a teaching tool for children with autism. In: Proc. Int. Workshop Recent Advances in Mobile Robots, June 29, 2000, De Montfort University, Leicester, UK
54. Xu G, Sugimoto T (1998) Rits Eye: a software-based system for real-time face detection and tracking using a pan-tilt-zoom controllable camera. In: Proc. 14th Int. Conf. Pattern Recognition, Brisbane, Australia, pp 1194–1197
55. MacBride C, Fleming B, Tanberg BJ (2001) Interdisciplinary team approach to AAC assessment and intervention. In: Proc. 16th Annual CSUN Conf. Technology and Persons with Disabilities, Los Angeles, CA
56. Evans DG, Drew R, Blenkhorn P (2000) Controlling mouse pointer position using an infrared head-operated joystick. IEEE Trans. Rehab. Eng 8(1): 107–117
57. Levine SP, Huggins JE, BeMent SL, Kushwaha RK, Schuh LA, Rohde MM, Passaro EA, Ross DA, Elisevich KV, Smith BJ (2000) A direct brain interface based on event-related potentials. IEEE Trans. Rehab. Eng 8(2): 180–185
58. Bien Z, Kim JB, Jung JW, Park KH, Bang WC (2000) Issues of human-friendly man-machine interface for intelligent residential system. In: Proc. 1st Int. Workshop Human-friendly Welfare Robotic Systems, Taejon, Korea, pp 10–14
59. Kasabov N, Kozma R, Kilgour R, Laws M, Taylor J, Watts M, Gray A (1997) A methodology for speech data analysis and a framework for adaptive speech recognition using fuzzy neural networks. In: Proc. 4th Int. Conf. Neural Information Processing, Dunedin, New Zealand, pp 1055–1060
60. Han JS, Stefanov DH, Park KH, Lee HB, Kim DJ, Song WK, Kim JS, Bien Z (2001) Development of an EMG-based powered wheelchair controller for users with high-level spinal cord injury. In: Proc. Int. Conf. Control, Automation and Systems, Jeju Island, Korea, pp 503–506
61. Nakata T, Sato T, Mizoguchi H, Mori T (1996) Synthesis of robot-to-human expressive behaviour for human-robot symbiosis. In: Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, Minneapolis, MN, USA, pp 1608–1613
1 Advances in Human-Friendly Robotic Technologies
23
62. Eguchi I, Sato T, Mori T (1996) Visual behaviour understanding as a core function of computerized description of medical care. In: Proc. IEEE/RSJ Int. Con. Intelligent Robots and Systems, Minneapolis, MN, USA, pp 1573–1578 63. Bien Z, Park KH, Bang WC, Stefanov DH (2002) LARES: An intelligent sweet home for assisting older persons and the handicapped. In: Proc. 1st Cambridge Workshop Universal Access and Assistive Technology, Cambridge, UK, pp 43–46 64. Dallaway J, Jackson R, Timmers P (1995) Rehabilitation robotics in Europe. IEEE Trans. Rehab. Eng 3(1): 35–45 65. Nelisse M (1995) M3S: A general-purpose integrated and modular architecture for the rehabilitation environment. In: Proc. 2nd Int. CAN Conf., London, UK, pp 10.2–10.9 66. Willems C (1999) ICAN Integrated Communication and Control for All Needs; Assistive technology on the threshold of the new millennium. In: Proc. 5th AAATE Conf., Dusseldorf, Germany 67. Allen B (1999) Bus systems in a three tiered world, experiences from the ICAN project. In: Proc. Int. Conf. Smart Homes and Telematics, Eindhoven, Netherlands 68. Eftring H, Boschian K (1999) Technical results from MANUS user trials. In: Proc. 6th Int. Conf. Rehabilitation Robotics, Stanford, CA, USA, pp 136–141 69. Gelderblom GJ, de Witte L, van Soest K, Wessels R Dijcks B, van’t Hoofd W, Goossens M, Tilli D, and van der Pijl D (2001) Evaluation of the MANUS robot manipulator. In: Marinček C, Bühler C, Knops H, Andrich R (eds) Assistive Technology – Added Value to the Quality of Life. Amsterdam, The Netherlands: IOS Press, pp 268–273 70. Mahoney R (2001) The Raptor wheelchair robot system. In: Proc. 7th Int. Conf. Rehabilitation Robotics, Evry, Paris, pp 135–141 71. Bien Z, Do JH (2000) Interactive robot for emotion monitoring. In: Proc. Korea-Japan Joint Workshop on Network based Human Friendly Mechatronics and Systems, Seoul, Korea, pp 62–65 72. Mehrabian A (1968) Communication without words. Psychology Today 2(9): 52–55.
2 Rehabilitation Robotics from Past to Present – A Historical Perspective

Michael Hillman
2.1 Introduction

Popular culture presents the image of a robot as a mechanical, humanoid device, often evil in intent. Those with slightly more informed technical knowledge would point to the industrial robot arm of the automotive factory. Even professional engineers are affected to some extent by these popular images. In attempting to survey four decades of the development of rehabilitation robotics it is wise to start from an official definition. The Robot Institute of America defined a robot as "A re-programmable, multifunctional manipulator designed to move material, parts, tools or specialized devices through variable programmed motions for the performance of a variety of tasks." Although this definition was obviously intended for industrial robots, it identifies the key features of programmability, flexibility and movement.

Robotics has obviously moved on from that early definition. While robots were initially employed as handling machines in factories, their application is now much wider. In 1987 the Department of Trade and Industry in the UK ran an Advanced Robotics initiative to encourage the wider use of robotics in areas other than factories. They used this definition of Advanced Robotics: "The integration of enabling technologies and attributes embracing manipulators, mobility, sensors, computing (IKBS, AI) and hierarchical control to result ultimately in a robot capable of autonomously complementing man's endeavours in unstructured and hostile environments." This is a very wide-ranging definition, but one of its key features is the "integration of technologies". The aim of this integration of advanced technologies is to produce a device that can operate autonomously in an environment which may be unstructured and/or hostile.

More recently the term "Mechatronics" has been coined as "the synergistic combination of precision mechanical engineering, electronic control and systems thinking in the design of products and processes" (Festo Didactic GmbH Co 1998). In many ways mechatronics and robotics cover much the same territory, and it is the application of this level of technology to "the restoration of a person to an optimal level of physical, mental, and social function and well being" that we are concerned with here.

Not just in the UK, but across the world, there are now many examples of "advanced robotics". What is key is not just the level of technology, but that the
robots have moved out of the factory and into the wider, unstructured and sometimes hostile world. The examples cover a wide area and include devices for the exploration both of Mars and earth's oceans, as well as more domestic applications such as filling a car with fuel, mowing the lawn and vacuuming the floor. The use of robotics in rehabilitation is another major area in which robots are coming "out of the factory".

The use of robots in rehabilitation is often associated with assistive devices, some used in a vocational environment, some as aids to daily living or more specifically as feeding devices. There are, however, other areas where robotic technology has been applied or might be. One such area is mobility, increasing the usefulness of more traditional powered wheelchairs. Prosthetics and orthotics is another area where robotics has already been applied. A major area at present is robot mediated therapy. Education is an area where robotic technology has been applied in some instances and where modern robotic toys might have an impact, as well as equipment more specifically aimed at the disabled. Finally, communication is an area which has used advanced computer technology, but where there is scope for incorporating more mechanical technologies as well.

This survey covers 40 years of the development of rehabilitation robotics. Rather than attempting to mention every research group and project, this survey is deliberately selective. The emphasis is on those projects that represent the first in a line of development, those that seem particularly innovative, those that have a commercial significance and those which have been used by the greatest number of real life users. The choice of projects is also unashamedly personal, in that these are the projects that have most influenced the author. The term "robotics" should be interpreted as widely as possible, in an inclusive rather than an exclusive way. The definition of mechatronics above is the one that most simply describes the scope of our survey.
2.2 Earliest Work

Most reviews of rehabilitation robotics cite the work at the CASE Institute of Technology in the early 1960's [24] as the first application of robotics technology to a rehabilitative manipulator. This was a powered orthosis with four degrees of freedom. The exoskeletal structure supported the user's paralysed arm while performing pre-recorded manipulative tasks, these sequences being taught by an able-bodied assistant during training. Interestingly, when so much current work uses electrical actuators, this device used pneumatic actuators, with closed loop position control achieved using incremental encoders.

Another early project was the Rancho Los Amigos "Golden Arm" (Fig. 2.1) [24]. This was a seven degree of freedom, battery powered electric orthosis. Several versions were built, and at least one was wheelchair mounted. It was controlled using a form of joint-by-joint control, which was found during evaluation to be not very intuitive.
Fig. 2.1. Rancho Golden Arm
In considering these two early devices it is instructive to put them in the context of the technology of the day. The early 1960's was a time when the integrated circuit had just been invented, ten years before the microprocessor. Computers were beginning to come down in size from filling a room to a more compact cabinet. Neil Armstrong and Buzz Aldrin were not to set foot on the moon until the end of the decade, in July 1969.
2.3 Assistive Robotics

Assistive robotics can be divided into three main areas based on the mobility of the device: firstly, those that operate at a fixed site; secondly, those that may be moved around from one location to another; and thirdly, those devices that are attached to a wheelchair. Many people [30] have surveyed the potential uses of robotics to assist people with physical disabilities, and the following areas have been identified:

• Eating & drinking
• Personal hygiene – washing, shaving, applying make up
• Work & leisure – particularly computer use, equipment such as hi-fi and video systems, also games
• Mobility – opening doors, windows
• General reaching – up to shelves, down to the floor.

These are all valuable areas. There are many devices dedicated to specific tasks, some designed for people with disabilities, others readily available on the general market. In this case the choice and installation of such devices (not without consultation with the user) predetermines what tasks he or she can carry out. Most assistive robots, however, are designed as a general-purpose tool intended to be used as the user desires, rather than for any predetermined task. Independence only comes when the user can decide at any time what activity they would
like to do. A good example of this from our own work is the user who gained great satisfaction from using his robot to open his Christmas presents.

2.3.1 Fixed Site

Apart from the two orthotic devices mentioned above, work in the more specific area of assistive rehabilitation robotics started in the mid 1970's. One of the earliest projects was the workstation based system designed by Roesler [32] in Heidelberg, West Germany. The purpose designed, five degree of freedom manipulator was placed in a specially adapted desktop environment, using rotating shelf units.

Another early workstation system was that of Seamone and Schmeisser [33] at the Johns Hopkins University, supported by the Veterans Administration in the United States from 1974. The arm of this system was based around an electrically powered prosthetic arm, mounted on a horizontal track. Various items of equipment (e.g. telephone, book rest, computer discs) were laid out on the simple but cleverly designed workstation table and could be manipulated by the arm using pre-programmed commands. The system thus required that items be in precisely known positions, as there were no sensors on the arm. User input was by scanning switch selection of routines on a simple LED display (a sketch of this style of single-switch input is given below).

In France, an early project was the Spartacus robot [21], based around a large, high power manipulator from the nuclear industry. The table-mounted arm was able to reach down to the floor or up to a shelf. User control was by an analogue input, particularly a head position operated joystick. Safety was of particular importance with this relatively high power device, and early training of users was done with the arm behind a clear screen. This project is of particular significance in that it led to the Manus project in Holland and the Master project in France.

In any review of work in rehabilitation robotics there must be recognition of the continuing work at Stanford University, initiated by Larry Leifer in the Department of Mechanical Engineering, with Machiel van der Loos at the Palo Alto VA Center. They built four generations of DeVAR (Desktop Vocational Assistive Robot) systems [11, 38]. DeVAR III was a tabletop system laid out for daily living tasks, while DeVAR IV was used in a vocational environment. The DeVAR IV system (Fig. 2.2) used the Puma 260 arm, a standard industrial manipulator, mounted upside down on an overhead track, thus making much better use of the available space. This highlights the problem of using a commercially available robot: the work environment has to be tailored around the arm. How this is achieved can be crucial to how successful the system is. The usefulness of the system depends on how many "tasks" can be laid out within the work environment. Whenever a certain task is not available to the robot or the user, the usefulness of the system comes to a halt and a colleague has to be called in to intervene. Obviously in an office environment this can be allowed for, but the ability to work independently has been compromised.
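As an aside, the single-switch scanning input used on systems like the JHU/APL workstation is easy to illustrate in a few lines of code. The sketch below is a generic modern illustration, not the original implementation; the routine names are invented.

    # Generic single-switch scanning selection of pre-programmed routines.
    # Routine names are invented for illustration.
    import itertools

    ROUTINES = ["answer telephone", "turn page", "load computer disc", "raise book rest"]

    def scan(presses):
        """The highlight steps through the routines; each False in 'presses'
        advances it one step, and the first True selects the current routine."""
        highlighted = itertools.cycle(ROUTINES)
        current = next(highlighted)
        for pressed in presses:
            if pressed:
                return current
            current = next(highlighted)
        return None

    # The user lets the scan advance twice, then presses the switch:
    print(scan([False, False, True]))   # -> "load computer disc"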
Fig. 2.2. DeVAR IV workstation
Besides the engineering work, this project is notable for its use in a real working environment, eight hours a day, at the Pacific Gas and Electric Company by Bob Yee. The Stanford group has also done a lot of work to justify the high cost of such a system ($50,000–$100,000) in terms of the financial saving relative to the cost of employing a human assistant.

While the Stanford system used the Puma arm, initially designed for an industrial application, many other projects have been based around the RT series robot. The RTX robot was designed by Tim Jones of UMI (Universal Machine Intelligence, UK) in 1985. The arm was of what is known as a modified SCARA configuration. This has one vertical degree of freedom and two rotational joints with vertical axes that allow the main arm to move in a horizontal plane. This configuration has proved particularly appropriate for rehabilitation applications. One of the early application areas promoted by the manufacturers was its use in rehabilitation (another application area was in the laboratory). The original RTX robot was followed by higher powered versions, the RT100 and the RT200. A mobile version, R-Theta, was also investigated but never commercialised.

One early use of the RTX was a workstation system developed by Caroline Fu [9] of Boeing in Seattle for one of their employees. Interestingly, in a number of cases the impetus for using robotics has come from employers, to enable them to retain valuable staff after accidents and to keep their expertise within the company.

Another significant use of the RT series robot was by the Master project [5] in France. Their approach to maximising the workspace was to mount the arm at the back of the workstation. This gave good visibility of the whole work area. In setting up any workstation it is vital that the arm does not obscure the user's view of the environment – in this case either the large vertical column or the bulk of the arm or gripper in different orientations.

One significant area of work using the RT series robot has been not just in designing workstations but also in developing and modifying the software and hardware for rehabilitation applications. The use of interchangeable or alternative grippers is one such area, and this has been a part of the Master system. Several
rehabilitation-friendly programming languages have been developed for the RT robots, although different drivers could allow the software to be applied to other robots. One example is CURL, the Cambridge University Robot Language [4].

The RT robots were used as the basis for the RAID project [17], funded by the European Commission. The RAID project, as with all projects under the TIDE initiative (Telematics for the Integration of Disabled and Elderly people), was collaborative and multinational. Amongst the partners were Oxford Intelligent Machines (OxIM, UK), who were then the manufacturers of the RT robots, and the Master project team. The RT series robot was set up in an extended workstation. This is another example of the workspace of the basic robot being extended, in this case by mounting the robot on a horizontal track, enabling it to retrieve paperwork etc. from shelving units. The outcome of the RAID project was commercialised by OxIM (who have now ceased trading) and by Afma Robots in France.

So far all the systems we have looked at have used commercially available robots, whether the Puma or the RT robot, with the attendant problem of integrating such robots into a work environment. The other approach is to design and build a manipulator to best suit the environment. The main advantage is that the robot is designed to be best suited to the likely tasks and working environment, rather than modifying the tasks and environment to suit the robot. Commercially there is the advantage of not being reliant on the continuing availability of a commercial device. Against this there are several possible disadvantages. The resulting device, produced in small numbers, is likely to be more expensive than one made for a wider market in higher volumes. This disadvantage may be countered by making compromises to the design appropriate to the specific system requirements. There is likely to be a longer development time and a higher development cost. Any finished device needs to be of a quality and reliability comparable with the best commercial standards.
Fig. 2.3. Regenesis workstation
This was the approach taken at the Neil Squire Foundation in designing their Regenesis manipulator (Fig. 2.3) [3] to best suit the environment and tasks. The manipulator was based on a horizontal beam, around which an extending arm could translate and rotate. This gave access to a large working volume, and the beam could be mounted at the back of a desk or across a bed, for example. This device was, for a while, made commercially available.
Fig. 2.4. Handy 1
In terms of numbers sold, probably the most successful rehabilitation robot is the Handy 1 (Fig. 2.4) [36], sold by Rehab Robotics (UK). The project originated from the Master's degree project of Mike Topping at the University of Keele (UK). Topping had a neighbour, a young boy named Peter, who had severe problems eating. He used a cheap educational robot to produce a device that allowed Peter to eat independently, at the speed he chose, for the first time. The company has sold at least 250 systems, with very positive feedback on its effectiveness. One of the strengths of this project, and one of the reasons for its success, is that it stemmed from the real problems of an individual client. Another feature is that originally it made no attempt to be multifunctional. More recently, however, extensions of the system have been developed for it to be used for applying make up, painting, washing and shaving.

The Handy 1 is essentially a feeding aid based around a robot. By comparison, the Winsford feeder (RTD-ARC, New Jersey, US) has been available for a long while as a feeding aid, but only recently has been promoted as being a robotic device. With around 2000 units having been sold, its commercial impact is obviously greater than that of the Handy 1. The MySpoon (Secom Co. Ltd, Tokyo, Japan) is available in Japan and is similar in concept. In the UK the Neater Eater (Buxton, UK) was initially designed as a purely manual device, utilising a damped arm, to assist those with a tremor to eat. More recently a powered, programmable version has emerged which may be considered to be a robotic device.

2.3.2 Mobile Robots

Compared with the workstation based devices, the number of mobile assistive robots is very small and their commercial impact has been negligible.

One of the best known is probably the MoVAR system [39] from Stanford University, which is essentially a DeVAR on wheels. The mobile base had sophisticated omni-directional Mecanum wheels. The MoVAR was controlled
from a console with several monitor screens giving feedback to the user from an on-board camera, as well as a map of the environment and a control environment. It was a very capable system, but at the time it was not packaged in a way which would appeal to a general user. With advances in computer technology the idea could be revisited and a more marketable product achieved.

As the founder of the Unimation Company, Joe Engelberger has been called the father of robotics. He has had an interest in service robotics and particularly in medical applications. He proposed [6] the use of his Helpmate robot as a fetch-and-carry robot for a disabled person. However, while the Helpmate has been successfully used for moving supplies around a hospital, it has not been used successfully in a rehabilitation application. The cluttered environment of a home is not appropriate for such a mobile robot.

More recently the KARES II robot system [2] has been developed at KAIST in Korea. This is a wide-ranging project investigating various modes of user control, including the use of visual servoing, an eye mouse and a haptic suit, as well as the design of the robot arm itself. The arm has been mounted in a number of different configurations, but primarily on a remote controlled mobile base.
Fig. 2.5. Wessex robot
A different approach was investigated at the Bath Institute of Medical Engineering with their Wessex robot (Fig. 2.5) [13]. Having identified the shortcomings of a fixed site robotic workstation in a domestic environment, the group designed a non-powered mobile base. While a workstation system works well in a vocational environment, it may be very restricting in a domestic environment, where different tasks are normally carried out in different rooms of the home – for example washing in the bathroom, listening to music in the living room and eating in the kitchen or dining room. The mobile base was intended to be moved from room to room by a carer, or might be clipped to the front of a wheelchair.
2.3.3 Wheelchair Mounted Manipulators

If a mobile robot can be seen as a mechanical servant or slave, the concept of mounting a manipulator onto a wheelchair provides what may be termed a third arm. One very early wheelchair mounted robot was designed by Carl Mason [27] at the VA Prosthetics Center in New York. Mechanically it was a well engineered system, able to reach from the floor to the ceiling. Apparently, though, it was too springy, and the hook end effector (as used by many prosthetic hand wearers) was not very successful. The control was quite basic, being operated on a joint-by-joint basis, which has been found to be unintuitive and time-consuming.

By comparison, placing a simple educational robot on a wheelchair is the most basic of arrangements. Zeelenberg's son was diagnosed as having muscular dystrophy. His parents wanted him to make the fullest development of his abilities and skills. His father obtained an educational robot and simply mounted it on the wheelchair tray [41]. With muscular dystrophy it is possible to use a small push button controller, and this was chosen for the input device. This wouldn't pretend to be a technically sophisticated device, but it arose out of a real need and close collaboration between the developer and user. Amongst the uses of the robot are opening the door, moving chess pieces and using the telephone. One reason why this is a very significant part of our historical survey is that out of this work came the Manus project.
Fig. 2.6. Manus
Manus (Fig. 2.6) [22] is extremely well known and, although it hasn't sold as many units as the Handy 1, it is at least as well respected. The work started as far back as 1984, involving collaboration between IRV (Institute for Rehabilitation Research) in Hoensbroek, led by Hok Kwee; the Institute for Applied Physics and the TNO Product Centre in Delft; and the Netherlands Institute of Preventive Health Care. It is a sophisticated robotic manipulator able to be mounted on a number of different wheelchairs. It has seven degrees of freedom, as well as a simple gripper.
The extra degree of freedom extends its vertical range, while allowing it to fold compactly at the side of the wheelchair. The mounting of the arm, protruding from the side of the chair, raises the crucial issue for all wheelchair mounted robots of integrating the arm with the wheelchair, not least to ensure there is no unacceptable increase in overall width, or compromise of the stability of the wheelchair. Many units have been sold to rehabilitation centres, with much development going on around the system, but more importantly there have been significant sales to end-users. Manus is seen as the standard against which other rehabilitation robotic systems are measured and has been commercialised by Exact Dynamics. Further development of Manus is being carried out both by the manufacturers and under the European Commanus project [7].
Fig. 2.7. Raptor
The other wheelchair-mounted manipulator that is available commercially is the Raptor (Fig. 2.7) [26], which is being produced by the Rehabilitation Technologies Division of the Applied Resources Corporation. It makes an interesting comparison with Manus. While Manus is a relatively high cost, sophisticated device, the Raptor has introduced compromises to bring the cost down to about a third of that of Manus. In particular it has only four degrees of freedom. It will be very interesting to see how these two devices perform, commercially and in terms of their effectiveness, given their differences in cost and functionality. It will also be interesting to see how the larger American market affects their viability.

A slightly off-beat approach to wheelchair-mounted manipulators came from Jim Hennequin in the UK. He was best known for his Spitting Image satirical puppets, which appeared on UK television. The puppets used pneumatic air muscles, and these were used for the drive motors of the Inventaid [15] wheelchair mounted robot. He claimed a high power-to-weight ratio for the air muscles. He also claimed that the device was simple enough to be maintained by a back-street fitter.
2.3.4 Human Machine Interface

A mobile assistive robot may be envisaged as a "slave" – it can be instructed to fetch an item from the kitchen or to place a book on the shelf. If the technology is adequate for it to operate autonomously, the instructions can be in what is virtually a natural language. A similar approach can be taken to a workstation environment; since the environment is more structured, it is easier for the robot to operate autonomously. By comparison, a wheelchair-mounted manipulator may be seen as a "third arm" and would normally be controlled in some direct way, for example to move the arm forward or to grip an object. It is one of the great challenges to come up with a control system that can even begin to compete with the way in which those without disabilities freely move their arms and hands.

This distinction need not be hard and fast. It is obviously possible to drive a mobile robot around the home in the same way one would drive a remote controlled car, perhaps with the benefit of an on-board camera. Similarly, task-type commands can be given to a wheelchair-mounted manipulator – for example "grip the red mug in front of you", or simply "reach down to the floor". A sketch contrasting the two styles of control is given at the end of this subsection.

Different media can be used to communicate such commands to the controlling computer. For natural language commands speech recognition may well be used, while for direct control a joystick might be seen as more appropriate. For those not able to control a joystick, some form of scanning system can be used. The chosen solution in either case may well be a combination of different media and will also depend on the abilities of the user. While in earlier days the interface was a fixed part of the overall system, more recently the user is able to choose the most appropriate interface device.
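The contrast between the two styles of control can be sketched in code. The following is a deliberately simplified illustration – the Arm class, its methods and the command vocabulary are all hypothetical, and a real system would of course need sensing, planning and safety layers.

    # Sketch contrasting task-level ("slave") and direct ("third arm") control.
    # The Arm class and all command names are hypothetical illustrations.

    class Arm:
        def __init__(self):
            self.pos = [0.0, 0.0, 0.0]              # Cartesian position (m)
        def jog(self, dx, dy, dz):                  # small relative move
            self.pos = [p + d for p, d in zip(self.pos, (dx, dy, dz))]
        def move_to(self, target):                  # autonomous point-to-point move
            self.pos = list(target)
        def close_gripper(self):
            print("gripper closed at", self.pos)

    def handle_direct(arm, cmd):
        """Direct control: each user event (e.g. a joystick deflection)
        maps onto one small motion of the arm."""
        steps = {"forward": (0.02, 0.0, 0.0), "up": (0.0, 0.0, 0.02)}
        if cmd == "grip":
            arm.close_gripper()
        else:
            arm.jog(*steps[cmd])

    def handle_task(arm, task, world):
        """Task-level control: one command is resolved by the controller
        into a whole motion sequence."""
        if task == "fetch mug":
            arm.move_to(world["mug"])               # would need sensing in reality
            arm.close_gripper()

    arm = Arm()
    handle_direct(arm, "forward")
    handle_task(arm, "fetch mug", {"mug": (0.5, 0.2, 0.1)})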
2.4 Mobility

Within mainstream robotics, a major area of both research and commercial application is that of "Automatic Guided Vehicles" (AGVs). This technology obviously has potential in addressing the mobility needs of people with a wide range of disabilities.

All powered wheelchairs operate in two degrees of freedom, conventionally under the direct control of the user, and so are not that much different from a two degree of freedom telemanipulator. Modern powered wheelchairs will often add other powered functions, such as leg raisers and seat tilt. Most modern powered wheelchairs also have programmable controllers. Such chairs may be technically sophisticated, but would not make the claim of being robotic. As soon as sensors are added, together with a control system which can react to the output of those sensors, the wheelchair becomes an AGV. Such devices are normally referred to as smart wheelchairs. These use sensors to detect objects in the environment. On-board processing of the constantly changing relative environment can offer the user functions such as tracking a wall, going through a door, or docking at a table or desk (a sketch of this kind of shared control is given at the end of this section).
One approach is to adapt a standard commercial base. For example, the CALL Centre in Edinburgh, UK, has many years' experience in this area. In their initial work [28] they used a standard electric wheelchair to produce a smart wheelchair for children and teenagers. In their latest smart wheelchair, the Smart Controller acts as if it were a second joystick plugged into the DX (Dynamic, New Zealand) wheelchair bus system. Various Smart Wheelchair "tools" can be easily selected in different combinations to suit the pilot and environment.

Alternatively, it is possible to build a wheelchair that is "smart" from the outset. Such an approach was used by the CEC TIDE funded OMNI project [14]. This was an omni-directional wheelchair integrated with autonomous control features.

The big difference between an AGV and a smart wheelchair is that a powered wheelchair is not normally required to be completely autonomous. The issue is how to handle the hand-over between the user having an appropriate level of control of the chair and the smart processor taking over, and vice versa.

A different approach to mobility comes from Dean Kamen, who invented the iBOT wheelchair. While the chair may be driven in a conventional way, gyroscopic sensors allow the chair to balance on two wheels or to climb stairs. The safety issues of relying on gyroscopes and processors to provide the basic stability of the device are paramount. In common with other safety-critical "fly by wire" systems, multiple redundancy is used. A commercial company, Independence Technology (US), has recently received FDA approval for the iBOT in the US and hopes to start making them available to selected clinics and rehabilitation centres towards the end of 2003.

Not all mobility implies that the person needs to be transported by the device. In 1977 Meldog [35] was developed at the Mechanical Engineering Labs at Tsukuba Science City in Japan. It provided mobility for a blind person by guiding them around city streets, downloading a basic map and using landmark sensors. It would function in much the same way as a guide dog would be used in other cultures. With the increasing miniaturisation of electronics and GPS positioning, it is possible that the same functionality could be obtained today with a body worn device, without the problems of kerbs and steps. Many people have investigated simple electronic white sticks [8], which have met with limited success, but there may be scope for a much more sophisticated device.
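To illustrate the kind of shared control involved in a smart wheelchair, the sketch below blends a user's joystick demand with a wall-tracking correction derived from a side-facing range sensor. The gains, the blending weight and the sign conventions are invented for the example and do not describe any of the wheelchairs mentioned above.

    # Minimal "shared control" sketch for a smart wheelchair: the user's
    # demand is blended with a wall-tracking correction from a range sensor.
    # All gains, weights and conventions are illustrative assumptions.

    def shared_control(user_speed, user_turn, side_range,
                       desired_range=0.5, k_wall=1.2, authority=0.6):
        """Return (speed, turn) commands. 'authority' sets how strongly the
        smart controller may override the user's turn demand (0 = user only).
        Positive turn is taken to steer away from the wall."""
        wall_turn = k_wall * (desired_range - side_range)
        turn = (1.0 - authority) * user_turn + authority * wall_turn
        return user_speed, turn

    # Drifting too close to the wall (0.3 m instead of 0.5 m): the
    # correction steers the chair away while the speed stays the user's.
    print(shared_control(user_speed=0.8, user_turn=0.0, side_range=0.3))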
2.5 Prosthetics and Orthotics

It is clear from the early work mentioned above that prosthetics and orthotics have been closely associated with rehabilitation robotics. It is useful at this stage to define a prosthesis as an artificial limb (although the term can also be used for an internal organ or joint) and an orthosis as a device to support or control part of the body.

The early devices at CASE and Rancho Los Amigos (mentioned above) were orthotic systems. More recently the Mulos project, funded under the EU TIDE
funding initiative [40], was a powered upper limb orthotic system. Rahman and colleagues at the AI duPont Institute (Wilmington, US) have been intimately involved in the rehabilitation robotics field. They have been involved in the design of both powered and non-powered arm orthoses. In particular, their anti-gravity arm orthosis [31] is noteworthy, although, being a balanced system with no external power supply, it may not come within our definition of a robotic system.

Although there is a lot of commercial work in prosthetic arms and hands, very little of this has used robotic technology; rather it has been a development from existing technologies. However, one computer controlled upper limb prosthesis is the Utah/MIT artificial arm and dextrous hand developed by Jacobsen [16]. Another longstanding project is the work that originated with the Southampton Hand [23] at Southampton University and progressed at the Nuffield Orthopaedic Centre in Oxford. Initially a complex five fingered hand, the mechanism has been simplified, while retaining the capability of forming the hand into several functional configurations, both for precision and power. As a continuation of this work, the ToMPAW project [29] combines the earlier Leverhulme hand prosthesis with the prosthetic arm developed at Edinburgh.

There are two main issues in powered prosthetics and orthotics: miniaturisation and power. The problem of miniaturisation is particularly critical for a hand prosthesis, where the complete system has to fit within the outline of a human hand. We have already noted the problems of integrating a robotic system onto a wheelchair; the problems for prosthetics are of a far greater magnitude. Although a hand prosthesis and an upper arm orthosis require different levels of power, both present the problem of how to store enough energy to give a day's use of the device before recharging is required (a rough energy budget is sketched at the end of this section). With a hand prosthesis the requirement is to fit batteries with sufficient capacity within the hand. For an upper arm prosthesis the energy requirement is far greater; although the volume/mass constraints are not so difficult, for a truly portable system this is still a major problem area. Besides electrical batteries, compressed CO2 has also been used as a power supply.

Blatchford's Intelligent Knee prosthesis (Basingstoke, UK), however, is a non-powered device. It uses sensors to regulate the swing of the knee, dependent on the rate of walking and other programmable values. Although it is a passive rather than an active device, it is truly a robot – yet nowhere in Blatchford's publicity is the word "robot" used.
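A rough back-of-envelope calculation makes the energy problem concrete. The power levels and the battery energy density below are illustrative assumptions, not measurements of any device mentioned above.

    # Back-of-envelope battery mass for a day's use before recharging.
    # All figures are illustrative assumptions (the energy density is
    # roughly in lithium-ion territory).

    BATTERY_WH_PER_KG = 150.0

    def battery_mass_kg(avg_power_w, hours_per_day=12.0):
        return avg_power_w * hours_per_day / BATTERY_WH_PER_KG

    print(battery_mass_kg(2.0))    # hand prosthesis, ~2 W average  -> 0.16 kg
    print(battery_mass_kg(30.0))   # powered arm orthosis, ~30 W    -> 2.4 kg

The first figure fits comfortably within a hand-sized envelope; the second illustrates why a truly portable powered arm system remains a major problem.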
2.6 Robot Mediated Therapy

The use of robotics to provide movement therapy for the rehabilitation of patients following stroke has been an area of major growth within rehabilitation robotics over the past few years. It is interesting to see the growth in the number of papers presented in this area at the International Conference on Rehabilitation Robotics (ICORR). Before 1999 there were at most a couple of papers; since then the number has grown steadily, reaching 15 in 2003.
The reason for the increase is obviously the potential both for more effective therapy and for more cost effective therapy. Such potential exists in all areas of rehabilitation robotics, but is perhaps most easily quantifiable in this area. A robot could be used to replicate the exercise regime used by a physiotherapist, but it also has potential for other regimes not easily carried out by a human. For rehabilitation following stroke there are three main ways [25] in which robots have been applied (a minimal control-law sketch contrasting the three is given at the end of this section):

• Passive – the movement is externally imposed by the robot while the patient remains relaxed. This movement can maintain the range of motion at the joints.
• Active assisted – the patient initiates the movement, but the robot assists along a predefined path.
• Active resisted – the opposite case, where the patient must move against a resistance generated by the robot.

At Palo Alto the MIME system [34] can be used in a passive mode, an active mode, or a bilateral mode in which the patient attempts to move both the affected and unaffected limbs. While MIME uses two six degree of freedom Puma robot arms, the ARM Guide developed by Reinkensmeyer and colleagues uses a one degree of freedom robotic device working in a similar fashion to a trombone slide [18]. The MIT-Manus system [20] is a two degree of freedom system, similarly designed for stroke rehabilitation, and is now available as a commercial product. Another stroke rehabilitation project is the GENTLE/S project [1], which encourages the patient to move against a resisted haptic arm in a computer-generated virtual 3D room.

With several different approaches it is important to be able to demonstrate clinical effectiveness. A lot of useful information has been accumulated for the MIME project. After one and two months the group receiving therapy from the robot showed a faster improvement, although after six months there was little difference from those who had received more conventional therapy. While the robot seems to be at least as effective as conventional therapy, it seems to be working in a different fashion, and it may be that different approaches will be needed for different levels of impairment.

Most of the current work is involved with stroke rehabilitation, but in the past robots have also been used with patients with cerebral palsy and following orthopaedic surgery. At Santa Clara University two planar robot arms have been used for the rehabilitation of joints following surgery [19]. The two arms, each with force sensors at base and gripper, firmly hold two adjacent limb segments (e.g. upper and lower leg). Using the two robots, the leg is manipulated, with the joint kept under compression for effective rehabilitation.
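The three regimes can be captured in a one-dimensional control sketch. The gains and the linear force laws below are invented for clarity; they are not the control laws of MIME, the ARM Guide, MIT-Manus or GENTLE/S.

    # Minimal 1-D sketch of the three robot-mediated therapy regimes.
    # Gains and force laws are illustrative assumptions only.

    def therapy_force(mode, x, x_ref, v, k=50.0, b=5.0):
        """Force (N) applied by the robot at patient position x (m),
        patient velocity v (m/s), given the reference path point x_ref (m)."""
        if mode == "passive":
            # The robot imposes the movement: stiff servo toward the path.
            return k * (x_ref - x) - b * v
        if mode == "active_assisted":
            # The patient initiates; the robot helps only when the patient
            # lags behind the predefined path.
            error = x_ref - x
            return k * error if error > 0.0 else 0.0
        if mode == "active_resisted":
            # The patient moves against a viscous resistance.
            return -b * v
        raise ValueError(f"unknown mode: {mode}")

    # Patient slightly behind the reference point, moving forward:
    for mode in ("passive", "active_assisted", "active_resisted"):
        print(mode, therapy_force(mode, x=0.10, x_ref=0.12, v=0.05))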
2.7 Robotics in Special Needs Education

The use of robots in education for those with physical and learning disabilities has received attention over the years. For example, a young child learns much from
simple play activities. However, if he has impaired mobility, these opportunities are greatly restricted. The Cambridge/CUED robot [12], based on a UMI RTX robot with a vision recognition system, allowed the child to interact with his environment in various ways, ranging from simply dropping a toy brick onto a drum to painting or playing board games.
Fig. 2.8. Cambridge University Educational Robot
Many electronic toys are now cheaply available on the mainstream market, and these may provide special benefit for the disabled. In addition, AnthroTronix (Maryland, US) are developing what they describe as telerehabilitation tools to motivate and integrate therapy, learning, and play.
2.8 Robotics in Communications

We have already identified that most communication aids are not robots. This certainly applies when communication is through an acoustic medium. Communication can, however, use other senses – especially vision, but also touch. The Dexter hand [10] is intended to act as a finger spelling hand for those who are deaf-blind. Such people use finger spelling for communication, but this language is not widely known by the general population. Dexter enables someone to input text, for example at a keyboard; the text is then converted to finger spelling so that it can be read by the person who uses that language.

One difficulty which many physically disabled people encounter is the desire to read a book, magazine or newspaper. Page turning is one of the tasks that comes
highest on the list of priorities for an assistive robot. It is also one of the most difficult, although there are ways to achieve it. There are several page-turners on the market, most of them bulky and expensive; they are often not very effective and are limited in what they can achieve. Surely this is a prime area for the application of robotic technologies. For what is essentially a single function device, cost may be a constraint, but if it were reliable and effective there would be a market for such a device.
2.9 Historical Perspective

Looking back at the past 40 years, it is sobering to consider what has been achieved, although challenging to imagine what might be achieved in the next 40. This survey has outlined the progress in rehabilitation robotics from the early orthotic devices, through workstation based systems, to the start of the commercial products Handy 1 and Manus in the 1980's. The first International Conference on Rehabilitation Robotics (ICORR) was held in 1990 at the AI duPont Institute, Delaware, where 12 papers were presented. For several years it alternated between the US and Europe, and most recently it was held in Korea in 2003, with 66 oral presentations and 27 demonstrations and posters.

The term "robot" was first used in Capek's "Rossum's Universal Robots" in the 1920's, from the Czech word for forced labour. The film Metropolis in 1927 presented the familiar humanoid robot loved by science fiction. In practical terms, robotics can be dated back to the DeVilbiss paint sprayer in the 1930's. In the 1940's the science fiction author Asimov formulated the three laws of robotics. In the 1950's the Unimation robot company was formed. There are now 750,000 robots in industrial use.

The progress in surgical/medical robotics has been very impressive. From the early work in the 1980's, such robots are now used in neurosurgery, orthopaedic surgery, and assistance during endoscopic operations.

In rehabilitation, wheelchairs are a much more traditional product – easily defined in what they can and should achieve. Everest & Jennings launched the first modern wheelchair in the 1930's and the first powered wheelchair in the 1950's. Since then they have sold their one millionth wheelchair, and there are many other companies worldwide. It is an industry in which many of the products follow a very traditional design, and perhaps the first truly novel product is the iBOT wheelchair.
2.10 Commercialisation

The real benefit of rehabilitation robotics lies in devices being readily available on the open market, so I believe the number of systems sold commercially is of paramount importance. Table 2.1 is a summary of those systems that are currently commercially available. In a number of cases it has not been possible to quote
numbers of systems sold, as the manufacturers hold this as commercially sensitive information.
Table 2.1. Current (as at April 2003) commercially available rehabilitation robots

Type                  Device               No. sold   Cost (USD)
Workstation           AfMaster             ?          $50,000
Wheelchair mounted    Manus                >150       $35,000
                      Raptor               13         $12,500
Feeder                Handy 1              >250       $6,300
                      Winsford             2000       $2,499
                      Neater               100        $3,600
                      MySpoon              ?          $3,200
Therapy               MIT Manus – Planar   20         $70,000
                      MIT Manus – Wrist    2          $65,000
Although a number of workstation systems have been available in the past, the AfMaster is the only one currently available. For wheelchair mounted robots there is the interesting comparison between Manus and Raptor. It will be interesting to see which will win out: the technically and functionally superior Manus, or the lower cost Raptor with the benefit of the huge US market. Feeders are an interesting lower cost area, and the Winsford feeder has been available for many years. Handy 1 is the device that is most obviously marketed as being robotic, but its cost is the highest. Therapy based robots are beginning to become available. Besides the MIT-Manus, the ARM Guide and MIME projects are being commercialised by the Rehabilitation Technologies Division of ARC as the ARC-MIME system.
2.11 Alternatives to Robotics in Rehabilitation

Before concluding this survey, the alternatives to using robots in rehabilitation should be considered. These alternatives should be seen not as competition, but as complementary. Most assistive robots aim to be multifunctional, but stand-alone devices, whether sold specifically for the disabled market or on the mainstream market, can have elements of the same functionality. One area of growth at the moment is the integration of different technological devices and approaches, particularly within a smart house environment. Nowadays many activities can be carried out on a computer without the need to interact with the real world; examples are computer art and music, computer chess, and the whole area of 3D computer games and virtual reality.
Animals are often used in rehabilitation – particularly dogs for the blind, but also for those with mobility and hearing impairments. Monkeys have also been used, although they are more difficult to train. Human carers will always be important. Against the issues of independence must be balanced the need for human interaction and companionship; however sophisticated our technical systems, they are unlikely to match the abilities of a human. In parallel with the development of robotic devices for rehabilitation, much research is continuing into the origins and treatment of debilitating diseases and conditions.
2.12 Conclusions

However good the research may be, success is ultimately measured in assistance being given to patients and disabled people in real life. Research must be seen as a stepping stone to commercial products. This will be through devices being sold and bought commercially (whether through private, institutional or state funding). For devices to succeed commercially, the correct balance between function and cost must be achieved.

One way in which this field will expand is with a move away from the traditional idea of a robot "arm" to the concept of using robotic technologies in the most appropriate fashion – sometimes as a multifunctional device, sometimes as a single function tool.

Finally, the question: does it matter whether it's a robot – should we use the "R" word? Commercially the word robot may be used positively, to give an impression of technical sophistication. On the other hand, many consumers are frightened by the word. But ultimately the most important aspect is the benefit to disabled people and their carers.
References

1. Amirabdollahian F, et al. (2001) Error correction movement for machine assisted stroke rehabilitation. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 60–65
2. Bien Z, et al. (2002) Development of a novel type rehabilitation robotic system KARES II. In: Keates S, et al. (eds) Universal Access and Assistive Technology. Springer, London, pp 201–212
3. Cameron WM (1986) Manipulative appliance development in Canada. In: Foulds R (ed) Interactive Robotic Aids. World Rehabilitation Fund Monograph #37, World Rehabilitation Fund, New York, pp 24–28
4. Dallaway JL, et al. (1993) An interactive robot control environment for rehabilitation applications. Robotica 11: 541–551
5. Detriche JM, et al. (1991) Development of a workstation for handicapped people including the robotized system Master. In: Proc. ICORR, Atlanta
6. Engelberger J (1989) Robotics in Service. The MIT Press, Cambridge, Massachusetts
7. Evers HG, et al. (2001) MANUS towards a new decade. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 155–161
8. Farcy R, Belik Y (2002) Locomotion assistance for the blind. In: Keates S, et al. (eds) Universal Access and Assistive Technology. Springer, London, pp 277–284
9. Fu C (1986) An independent vocational workstation for a quadriplegic. In: Foulds R (ed) Interactive Robotic Aids. World Rehabilitation Fund Monograph #37, World Rehabilitation Fund, New York, pp 42–44
10. Gilden D, Jaffe D (1988) Dexter, a robotic hand communication aid for deaf-blind people. Int'l Jnl of Rehabilitation Research 11(2): 188–189
11. Hammel J, et al. (1989) Clinical evaluation of a desktop robotic assistant. J. of Rehabilitation Research and Development 26(3): 1–16
12. Harwin WS, Ginige A, Jackson RD (1986) A potential application in early education and a possible role for a vision system in a workstation based robotic aid for physically disabled persons. In: Foulds R (ed) Interactive Robotic Aids. World Rehabilitation Fund Monograph #37, World Rehabilitation Fund, New York, pp 18–23
13. Hillman M, Gammie A (1994) The Bath Institute of Medical Engineering assistive robot. In: Proc. ICORR '94, Wilmington, US, pp 211–212
14. Hoyer H, et al. (1997) An omnidirectional wheelchair with enhanced comfort features. In: Proc. ICORR 97, Bath, UK, pp 31–34
15. Jackson RD (1993) Robotics and its role in helping disabled people. Engineering Science and Education Journal 2(6): 267–272
16. Jacobsen SC, et al. (1982) Development of the Utah artificial arm. IEEE Transactions on Biomedical Engineering 29(4): 249–269
17. Jones T (1999) RAID – towards greater independence in the office and home environment. In: Proc. ICORR 99, Stanford, US, pp 201–206
18. Kahn LE, et al. (2001) Comparison of robot assisted reaching to free reaching in promoting recovery from chronic stroke. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 39–44
19. Khalili D, Zomlefer M (1988) An intelligent robotic system for rehabilitation of joints and estimation of body segment parameters. IEEE Transactions on Biomedical Engineering 35: 138
20. Krebs HI, et al. (2003) Robotic applications in neuromotor rehabilitation. Robotica 21(1): 3–12
21. Kwee HH, et al. (1983) First experimentation of the Spartacus telethesis in a clinical environment. Paraplegia 21: 275
22. Kwee HH, et al. (1989) The MANUS wheelchair-borne manipulator: system review and first results. In: Proc. IARP Workshop on Domestic and Medical & Healthcare Robotics, Newcastle
23. Kyberd PJ, et al. (2001) The design of anthropomorphic prosthetic hands: a study of the Southampton Hand. Robotica 16(6): 593–600
24. Leifer L (1981) Rehabilitative robotics. Robotics Age, May/June 1981, pp 4–15
25. Lum P, et al. (2002) Robotic devices for movement therapy after stroke: current status and challenges to clinical acceptance. Topics in Stroke Rehabilitation 8(4): 40–53
26. Mahoney RM (2001) The Raptor wheelchair robot system. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 135–141
27. Mason CP, Peizer E (1978) Medical manipulator for quadriplegic. In: Proc. Int'l Conf. on Telemanipulators for the Physically Handicapped. IRIA
28. Nisbet P, et al. (1988) The CALL Centre smart wheelchair. In: Proc. First Int'l Workshop on Robotic Applications in Medical and Healthcare, Ottawa, Canada
29. Poulton AS, et al. (2002) Progress of a modular prosthetic arm. In: Keates S, et al. (eds) Universal Access and Assistive Technology. Springer, London, pp 193–200
30. Prior SD (1990) An electric wheelchair mounted arm – a survey of potential users. Jnl of Medical Engineering and Technology 14(4): 143–154
31. Rahman T, et al. (2001) An anti-gravity arm orthosis for people with muscular weakness. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 31–36
32. Roesler H, et al. (1978) The medical manipulator and its adapted environment: a system for the rehabilitation of severely handicapped. In: Proc. Int'l Conf. on Telemanipulators for the Physically Handicapped. IRIA
33. Seamone W, Schmeisser G (1986) Evaluation of the JHU/APL robot arm workstation. In: Foulds R (ed) Interactive Robotic Aids. World Rehabilitation Fund Monograph #37, World Rehabilitation Fund, New York, pp 51–53
34. Shor PC, et al. (2001) The effect of robotic-aided therapy on upper extremity joint passive range of motion and pain. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 79–83
35. Tachi S, et al. (1985) Electrocutaneous communication in a guide dog robot (MELDOG). IEEE Transactions on Biomedical Engineering 32: 461
36. Topping M (2001) Handy 1, a robotic aid to independence for severely disabled people. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS, Netherlands, pp 142–147
37. Topping M, Smith J (1999) The development of Handy 1, a robotic system to assist the severely disabled. In: Proc. ICORR 99, Stanford, US, pp 244–249
38. Van der Loos M (1995) VA/Stanford rehabilitation robotics research and development program: lessons learned in the application of robotics technology to the field of rehabilitation. IEEE Transactions on Rehabilitation Engineering 3(1): 46–55
39. Van der Loos M, Michalowski S, Leifer L (1986) Design of an omnidirectional mobile robot as a manipulation aid for the severely disabled. In: Foulds R (ed) Interactive Robotic Aids. World Rehabilitation Fund Monograph #37, World Rehabilitation Fund, New York, pp 61–63
40. Yardley A, et al. (1997) Development of an upper limb orthotic exercise system. In: Proc. ICORR 97, Bath, UK, pp 59–62
41. Zeelenberg AP (1986) Domestic use of a training robot-manipulator by children with muscular dystrophy. In: Foulds R (ed) Interactive Robotic Aids. World Rehabilitation Fund Monograph #37, World Rehabilitation Fund, New York, pp 29–33
3 Toward a Human-Friendly User Interface to Control an Assistive Robot in the Context of Smart Homes

Mounir Mokhtari, Mohamed Ali Feki, Bessam Abdulrazak, and Bernard Grandjean
Abstract

The design of robots dedicated to people with disabilities necessitates user involvement in all steps of product development: designing the solution, prototyping the system, choosing the user interfaces, and testing with users in real conditions. Before designing any system, however, it is necessary to understand and meet the needs of the disabled users. In this chapter we describe our research on the integration of a robotic arm into the environment of disabled people who have lost the ability to use their own arms to perform daily living tasks, and who are able to use an adapted robot to compensate, even partly, for their manipulation problems. To develop a human friendly interface it is necessary to act on the system itself to make it more flexible and easy to use. Improvements to the assistive robot's functionality must fit the user's environment, which is composed of several assistive aids complementary to the robot. To meet this target we need a thorough knowledge of users' needs, taking into account their specific types of disability, their restricted capabilities, and their level of acceptance of technology. This requires multidisciplinary competencies in several research areas – computer science, networking, robotics, home automation, and also ergonomics – in order to provide standardized functionality which allows efficient use of assistive technological aids. This chapter describes the adaptation software architecture developed for an assistive robotic arm, the Manus manipulator, in the context of smart homes, where the robot is considered one object among others.
3.1 Introduction

Our environment is usually not adapted for people who have lost the ability to use their own lower limbs to walk, or their own arms to perform daily living tasks such as opening a door, eating, or even having access to a computer. To compensate for their incapacities, people with disabilities often have recourse to assistive technological aids: an electrical wheelchair to compensate for lost mobility, a robot manipulator to move objects in their environment, environmental control systems to control the home environment, and communication
systems to improve their ability to communicate with people or to get access to information through a computer. Consequently the user is confronted with several heterogeneous systems, imposing several user interfaces, providing multiple and complementary functionalities, and forming a complex whole that we describe as a smart environment. This situation is usually described in the literature, and by some industrial companies, as the smart home concept, which is not necessarily limited to the home environment but extends to the hospital environment and outside (school, train station, leisure places…).

Assistive robotics consists mainly of developing systems that compensate for the motor capabilities of people who have lost the ability to use their own arms, usually due to spinal cord injuries or muscular dystrophies. Several systems aimed at performing daily living tasks have appeared over the last two decades, such as the Manus manipulator used in this research work. Smart homes, known in Europe as domotics, consist of acting on the user's environment to make it more accessible by adding automated controlled systems operated through a common user interface, defined as an environmental control system. In terms of tasks, we could say that smart homes are dedicated to controlling systems in the environment, such as doors, windows, lights, TV, and VCR, whereas assistive robotics is mainly dedicated to manipulating objects – gripping an object from the floor, drinking, eating, and so on.

The aim of our work is to develop a human-friendly user interface, independent of the controlled system and of the communication protocols, which must be flexible and personalized for each end user. The objective is a generic and unified user interface able to control not only the robot manipulator but also any available equipment in the user's environment, such as the electrical wheelchair, telephone, TV, doors, and so on. This implies experimenting with existing and emerging technologies to fit the needs of people with disabilities.
3.2 MANUS Assistive Robot

The MANUS tele-manipulator is a robot mounted on an electric wheelchair (Fig. 3.1). Its objective is to favour the independence of severely handicapped people who have lost their upper and lower limb mobility, by increasing their potential activity and by compensating for prehension motor impairments. Manus is a robot with six degrees of freedom, with a gripper at the extremity of the arm that permits grasping objects (payload of 1.5 kg) in all directions, and a display. All are controlled by a 4×4 button keypad or, in the latest prototype version, by a joystick, and soon by a mouse or a touch screen. The 4×4 keypad gives the user the possibility of handling MANUS, and the display unit gives the current state of the MANUS.
Fig. 3.1. Manus robot used in a supermarket
3.3 Networking Technologies and Developments

To design a mobile system optimized to offer mobility to handicapped people, we need to determine the main parameters and technological means that allow supporting solutions in residential and outdoor environments. Human-machine interaction in this system depends not only on the user interface, but also on the wireless and wired network protocols that control the devices. Indeed, the human-machine interface makes it possible to control various environment equipment while taking its current status (feedback) into account, whereas wireless networks ensure the desired mobility of the users [2]. Figure 3.2 shows the hardware architecture of our smart home concept and attempts to outline the networking problems of heterogeneous systems in the user's environment. The remote controller integrates the human-machine interface and is accessible through an input suited to each end user (joystick, touch screen, microphone, etc.). The user interface plays a crucial role in managing the functionalities of the various pieces of equipment. Among the equipment we distinguish several types of devices: electrical devices (white goods), household equipment (brown goods), data-processing equipment (gray goods), and also mobile devices (mobile phones, pocket PCs, etc.). The diversity of these products brings a wide range of networking protocols needed to manage the whole smart environment, such as radio (e.g. Bluetooth and 802.11), infrared (e.g. IrDA), Ethernet, and power line communications. The solution consists of designing a generic user interface independent of the communication protocols.
This approach yields an acceptable response time without overloading the supervisor. Indeed, the supervisor plays the central role by processing the various interconnections between protocols, allowing control communication with the corresponding specific devices [5].
Fig. 3.2. Smart homes concept
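To make the protocol-independence idea concrete, the following Python sketch shows one way a supervisor could route generic user-interface commands to protocol-specific adapters. All class and method names are our own illustrations, not the actual Commanus implementation:

from abc import ABC, abstractmethod

class ProtocolAdapter(ABC):
    # Hides one communication protocol (IrDA, Bluetooth, power line, ...).
    @abstractmethod
    def send(self, device_id, command):
        ...

class InfraRedAdapter(ProtocolAdapter):
    def send(self, device_id, command):
        print(f"[IR] {device_id} <- {command}")   # would emit an IR frame

class RadioAdapter(ProtocolAdapter):
    def send(self, device_id, command):
        print(f"[RF] {device_id} <- {command}")   # would emit a radio packet

class Supervisor:
    # Routes user-interface commands to the adapter of each device's protocol.
    def __init__(self):
        self.routes = {}                 # device id -> protocol adapter

    def register(self, device_id, adapter):
        self.routes[device_id] = adapter

    def execute(self, device_id, command):
        self.routes[device_id].send(device_id, command)

supervisor = Supervisor()
supervisor.register("tv", InfraRedAdapter())
supervisor.register("manus", RadioAdapter())
supervisor.execute("tv", "power_on")     # same call whatever the protocol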
3.4 General Software Architecture

Based on the Commanus software architecture, we have developed a new software architecture that considers the Manus robot as an object of the environment, at the same level as an electric wheelchair or a domotic system such as a TV or VCR (Fig. 3.3). This software architecture is decomposed into three main layers:
− User Interface layer: manages user interface events from any input device (keypad, joystick, voice recognition, etc.) selected and configured with the OT software for each end user. Remote maintenance of the system is also planned through the Tele-Maintenance Unit (TMU) [6].
− Human-Machine Interface (HMI) layer: converts user events into actions according to the selected output devices (Manus, TV, VCR, etc.).
− Low-Level Controller: deals with the specific characteristics of each output device according to its communication protocol (infrared, radio, etc.).
Full control of the Manus robot has been implemented through this software architecture. We are currently implementing control of home devices through a unified user interface.
Fig. 3.3. General software architecture
3.5 User Interface Adaptation

Redesigning the software control architecture is not sufficient to give people with severe disabilities access to the smart environment. The problem is that each end user, with his deficiencies and individual needs, is a particular case who requires a specific configuration of any assistive system. Selecting the most suitable input device is the first step; the objective is then to adapt the available functionalities to the user's needs. For this purpose we have developed a software configuration tool, called OT (Occupational Therapist interface), which allows a non-expert in computer science to easily configure any selected input device with the help of different menus containing activities associated with the action commands of any system, including the Manus robot. The idea is to describe each device using XML, to generate the corresponding functionalities automatically, and to display the actions in an interactive graphical user interface. According to the user's needs and the selected input devices, the OT offers the means to graphically associate the selected actions with input device events (buttons, joystick movements, etc.). The OT software is currently running and fully compatible with the Manus robot. Its extension to other home equipment is under development.
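As an illustration of the XML-based device description, the sketch below parses a hypothetical description and binds the generated actions to input-device events; the actual schema used by the OT tool is not reproduced in this chapter:

import xml.etree.ElementTree as ET

DEVICE_XML = """
<device name="manus">
  <action id="open_gripper" label="Open gripper"/>
  <action id="close_gripper" label="Close gripper"/>
</device>
"""

def load_actions(xml_text):
    # Generate the available actions from the XML device description.
    root = ET.fromstring(xml_text)
    return {a.get("id"): a.get("label") for a in root.findall("action")}

actions = load_actions(DEVICE_XML)
# The occupational therapist then binds input-device events to actions:
bindings = {"button_1": "open_gripper", "button_2": "close_gripper"}
event = "button_1"                       # event coming from the keypad
print(event, "->", actions[bindings[event]])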
Below we focus on the improvements made to the Manus controller to facilitate the use of the robot in real conditions, in a non-deterministic environment such as the home.
3.6 Implementation of a Path Planner

To favour the integration of Manus in the home environment, it was necessary to facilitate its control by providing natural movements of the arm that take obstacles into account.

3.6.1 Gesture Library

In human physiology, any complete natural gesture is described as being two-phased: an initial phase that transports the limb quickly towards the target location, and a second, longer phase of controlled adjustment that allows reaching the target accurately. These two phases are defined respectively as a transport component and a grasp component [4]. Each component is a spatio-temporal transformation between an initial state and a final state of the arm (Fig. 3.4).
Fig. 3.4. Robot configurations characterising a gesture: the gripper trajectory links an initial configuration Oi = (xi, yi, zi, yawi, pitchi, rolli) to a final configuration Of = (xf, yf, zf, yawf, pitchf, rollf)
In our approach we are interested in automating the first phase. The second one requires complex sensors (such as cameras and effort sensors) that are, from a usability point of view, not suitable for integration on the Manus. The gesture library contains a set of generic global gestures corresponding to the transport component. Each gesture Gi is characterised by an initial operational variable of the robot workspace, Oi, corresponding to the initial robot arm configuration, and a final operational variable, Of, corresponding to the final robot arm configuration. Each variable is defined in the Cartesian space by the gripper position (x, y, z) and orientation (yaw, pitch, roll). The gestures generated by our system are linked only to the final operational variables: from any initial arm configuration, the path planner is able to generate the appropriate trajectory to reach the final configuration.
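The following sketch illustrates the gesture library principle: each entry stores only the final operational variable, and a planner, here reduced to straight-line interpolation standing in for the real path planner, produces a trajectory from any initial configuration. All names and poses are invented for illustration:

from dataclasses import dataclass, astuple

@dataclass
class OperationalVariable:
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

# Library of transport gestures indexed by task name; the final poses here
# are invented purely for illustration.
GESTURES = {
    "reach_table": OperationalVariable(0.45, 0.10, 0.75, 0.0, -1.2, 0.0),
    "reach_floor": OperationalVariable(0.50, 0.00, 0.10, 0.0, -1.57, 0.0),
}

def plan(initial, final, steps=10):
    # Stand-in for the path planner: straight-line interpolation between the
    # initial and final operational variables in Cartesian space.
    a, b = astuple(initial), astuple(final)
    return [OperationalVariable(*(ai + (bi - ai) * i / steps
                                  for ai, bi in zip(a, b)))
            for i in range(steps + 1)]

# From any current arm configuration, only the gesture's final variable is
# needed to generate the trajectory:
here = OperationalVariable(0.2, 0.0, 0.5, 0.0, 0.0, 0.0)
trajectory = plan(here, GESTURES["reach_table"])
print(len(trajectory), trajectory[-1])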
3.6.2 Obstacle Avoidance

To improve the point-to-point mode, which performs movements blindly without taking environmental obstacles into account, we have designed a new strategy based on the dynamic generation of 3D obstacles. One task commonly performed with the Manus is gripping a glass from a table, as shown in Fig. 3.5. The path planner takes into account obstacles located inside the working space of the robot between the initial and final configurations of the gripper. Physical obstacles are virtually encapsulated in boxes playing the role of forbidden areas. In this case a first box represents the arm column, which must not be crossed during the movement, and a second box represents the table. Intermediate points defining the robot trajectory are generated by an avoidance algorithm based on 3D geometrical calculations [2, 3, 10, 11]; a simplified sketch is given below. Currently, the 3D virtual boxes are defined statically to validate the path planner functionalities. A dynamic definition of forbidden areas is under design to allow the user to define obstacles according to his own changing environment. The path planner integrates the intermediate points when calculating an automatic gesture as defined above. Consequently, control of the arm is simplified for the user, offering gains in time and control efficiency.
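The sketch below illustrates the principle on a single axis-aligned forbidden box: a slab test detects whether the straight gripper path crosses the box, and one intermediate point is inserted above it. This is a simplified illustration, not the avoidance algorithm of [2, 3, 10, 11]:

def segment_hits_box(p0, p1, box_min, box_max):
    # Slab test: does the segment p0 -> p1 intersect the axis-aligned box?
    tmin, tmax = 0.0, 1.0
    for a in range(3):
        d = p1[a] - p0[a]
        if abs(d) < 1e-9:
            if not (box_min[a] <= p0[a] <= box_max[a]):
                return False
        else:
            t0 = (box_min[a] - p0[a]) / d
            t1 = (box_max[a] - p0[a]) / d
            if t0 > t1:
                t0, t1 = t1, t0
            tmin, tmax = max(tmin, t0), min(tmax, t1)
            if tmin > tmax:
                return False
    return True

def avoid(p0, p1, box_min, box_max, clearance=0.05):
    # Insert one waypoint above the box if the direct path is blocked.
    if not segment_hits_box(p0, p1, box_min, box_max):
        return [p0, p1]
    mid_x = (p0[0] + p1[0]) / 2
    mid_y = (p0[1] + p1[1]) / 2
    return [p0, (mid_x, mid_y, box_max[2] + clearance), p1]

# Gripping a glass across a table modelled as a box:
path = avoid((0.2, 0.0, 0.9), (0.6, 0.0, 0.8),
             box_min=(0.3, -0.5, 0.0), box_max=(0.5, 0.5, 0.85))
print(path)   # a waypoint at z = 0.90 lifts the gripper over the table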
Fig. 3.5. Path planning process avoiding obstacle
This concept is complementary to the co-autonomy concept, described below, where, in the case of an undefined obstacle, the user should always have the ability to modify the trajectory generated by the planner.
3.7 Towards the Co-autonomy Concept

The co-autonomy concept was recently introduced as a promising way to design assistive robots intended to meet the needs of disabled people [1]. This concept is based on control sharing between the human and the assistive robot. The approach was also proposed for obstacle avoidance in telerobotic systems in hazardous environments [3]. Three types of situations were mentioned to define the co-autonomy concept:
1. The user is in total control.
2. The machine is in total control.
3. The user and the machine share the control.
The software command architecture is designed to fit this co-autonomy concept. In the first version of the command architecture, the first and third situations can occur. Users are in total control when using the Cartesian Mode, and share control with an autonomous controller when using the Point-to-Point Mode. As described in [4], a gesture in the Point-to-Point mode is controlled by the user by pressing, for example, a keypad button continuously until completion; the gesture stops if the button is released and continues otherwise (we can qualify such control as pseudo-shared control). This was designed to prevent collisions with the user, other persons, or obstacles. However, pressing a keypad button or pushing a joystick until completion of the gesture may be exhausting for some users with severe disabilities. To prevent this fatigue, we decided to include the second type of situation of the co-autonomy concept in the command architecture and to integrate the user in the autonomous control loop, i.e. to allow him/her to intervene during automated gestures. The user may then, while the arm progresses towards the target, make gripper position adjustments. For example, the path planner could generate a trajectory that would go through an obstacle, causing a collision of the arm with the obstacle. The user may then act on the input device to avoid this collision. Such an intervention proceeds, as shown in Fig. 3.6, in three phases: an automatic phase, where the end-effector follows the trajectory processed by the path planner; a semi-automatic phase, where the user intervenes to avoid the obstacle; and finally another automatic phase, when the user stops intervening and a new trajectory towards the target is generated. We have called this control mode the Pointing-and-Doing Mode, which is complementary to the Point-to-Point Mode. As shown in Fig. 3.6, the task is performed in the following phases:
− 1st phase (autonomous): the end-effector follows the trajectory processed by the path planner.
− 2nd phase (semi-autonomous): the user intervenes during the autonomous phase to avoid the obstacle.
− 3rd phase (autonomous): the user stops intervening and a new trajectory towards the target is generated.
Fig. 3.6. Pointing-and-Doing mode
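A minimal sketch of the three-phase logic in one dimension: the arm moves autonomously toward the target, the user's input overrides it while the input device is active, and the controller reconverges to the target when the user releases control. All numbers and names are illustrative, not the actual Manus controller:

def step(position, target, user_velocity, dt=0.1, speed=0.2):
    # One control cycle of the gripper position (1-D for clarity).
    if user_velocity is not None:        # 2nd phase: the user intervenes
        return position + user_velocity * dt
    direction = 1.0 if target > position else -1.0
    return position + direction * min(speed * dt, abs(target - position))

pos = 0.0
inputs = [None, None, 0.5, 0.5, None, None, None]   # user pushes mid-gesture
for u in inputs:                         # phases 1 -> 2 -> 3
    pos = step(pos, target=0.1, user_velocity=u)
    print(round(pos, 3))                 # overshoots, then reconverges to 0.1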
3.8 Conclusion

In this chapter we have outlined the developments made on the Manus robot through the Commanus project, mainly on the software architecture, and the evolution of the control system towards adaptation to the smart environment. The system is designed, on the one hand, to reduce manipulation problems for users operating the robot in their environment and, on the other hand, to solve the problems linked to the user interface. With its new functions, we expect to reduce the task time and the number of commands needed for complex tasks. Applying the results obtained in assistive robotics to the context of smart homes means considering the Manus robot as a standard object in an environment composed of several objects. The main advantage of this approach is, on the one hand, to promote the use of assistive robotics as a function of a whole system and, on the other hand, to favour the integration of a unified and adaptable user interface.
This work is currently supported by GET1 through a national project on smart homes for dependent people. The continuation of this research to improve the Manus robot and facilitate its integration into users' daily environments is ensured through the AMOR2 project, which is starting with the support of the European Commission.
Acknowledgment The authors would like to thank the people who actively participated in the research work presented here, in particular C. Rose from AFM and J.P. Souteyrand from INSERM U.483 for graphical design. Funds for this project are provided by GET and the Foundation Louis Leprince Ringuet through the Smart Homes project, in association with ENST Bretagne and ENST Paris, and by the European Commission through the AMOR project.
References
1. Chatila R, Moutarlier P, Vigouroux N (1996) Robotics for the impaired and elderly persons. IARP Workshop on Medical Robots, Vienna, Austria
2. Feki MA (2002) Communication development system in case of smart homes. Engineering Handicam Lab report, ENIS-Sfax, Tunisia
3. Guo C, Tarn TJ, Xi N, Bejczy AK (1995) Fusion of human and machine intelligence for telerobotic systems. IEEE International Conference on Robotics and Automation, Nagoya, Japan, pp 3110–3115
4. Jeannerod M (1981) Intersegmental coordination during reaching at natural visual objects. In: Long J, Baddeley A (eds) Attention and Performance IX. Lawrence Erlbaum Associates, Hillsdale, NJ, pp 153–169
5. Mokhtari M, Abdulrazak B, Feki MA (2003) Human-smart environment interaction in the case of severe disability. Proc. 10th International Conference on Human Computer Interaction (HCI'2003), Greece
6. Truche C, Mokhtari M, Vallet C (1999) Telediagnosis and remote maintenance system on the Internet for the Manus robot. Fifth European Conference for the Advancement of Assistive Technology (AAATE'99), IOS Press, Düsseldorf, Germany
1 GET: Groupe des Ecoles des Télécommunications, which federates several telecommunication engineering schools, including INT.
2 AMOR project, EEC Growth programme: Mechatronic upgrade and wheelchair integration of the Manus arm manipulator. Partners involved: Exact Dynamics, TNO-TPD, and Koningh in the Netherlands; Ideasis and ExpertCam in Greece; Lund University in Sweden; HMC in Belgium; and INT and AFM in France.
4 Welfare-Oriented Service Robotic Systems: Intelligent Sweet Home & KARES II Z. Zenn Bien, Kwang-Hyun Park, Dae-Jin Kim, and Jin-Woo Jung
4.1 Introduction

Utopia – it would be a society where the welfare of the people is properly guaranteed. In such a society, each constituent would live his/her life with the blessed feeling of equality. In particular, it would be a society where the elderly and even the handicapped would live well, independently, and comfortably alongside people without disabilities. It is instructive to note that the number of the elderly is increasing drastically, along with the number of people handicapped by the variety of accidents in our complicated and diversified society [64]. In order to realize a welfare-driven society, it is essential to build an infrastructure with a variety of convenient facilities, high-tech equipment, and systems based on advanced, human-friendly technologies. Rehabilitation robotics is mostly concerned with applying robotic technology to the rehabilitative needs of people with disabilities as well as the growing population of the elderly [28]. Rehabilitation robotic systems aim to solve daily living problems in individual activities. One may say that the primary role of rehabilitation robotic systems is to endow as much independence as possible so as to improve the quality of human life. Typically, rehabilitation robotic systems are classified as a kind of service robot in the area of robotic technology, while they are also considered a form of assistive device in rehabilitation engineering. In fact, such robotic systems have two major functions: one is replacement (or rehabilitation therapy) of the user's handicapped function [46, 52, 60] and the other is assisting the user to carry out necessary tasks. In this chapter, we concentrate on the second type of robotic system, with assistive functions, and, to this end, consider the development of a welfare-driven smart home for high quality of daily life and an indispensable assistive robotic system for the severely handicapped. The importance of smart homes for the elderly and the handicapped may be well understood from various existing studies such as the AID project [7], the Smart House project at the University of Sussex, HS-ADEPT [25], HERMES, the Smart Home project at Brandenburg Technical University, the Gloucester Smart House, the SmartBO project [21], the smart house at Colorado University [55], the Welfare Techno Houses in Japan, the Robotic Room at the University of Tokyo [56], etc. Considering the existing smart homes, we comment on some of the concepts implemented in our Intelligent Sweet Home project. In the proposed smart home, some recent technological innovations are considered, as well as some specifics of
the lifestyle and traditions in Korea. Since R&D on smart homes is urgently in demand, both to meet people's needs for more convenient and safe living and to deal with the increasing numbers of the elderly and the handicapped, Intelligent Sweet Home for assisting the elderly and the handicapped, developed at the Human-friendly Welfare Robot System Engineering Research Center at KAIST, aims at developing and testing new ideas for the future smart home and its control. Our work is based on the idea that the technologies and solutions for such a smart home should be human-friendly, i.e. smart homes should possess a high level of intelligence in their control, actions, and interactions with the users, offering them a high level of comfort and functionality. According to their implemented form, assistive robotic systems are divided into three kinds: 1) workstation-based systems, 2) mobile robot-based systems, and 3) wheelchair-based systems. Some of the workstation-based systems assist the user by using voice commands. DeVAR (Desktop Vocational Assistant Robot) [22], TIDE-RAID (Robot for Assisting for Integration for the Disabled) [27], ISAC (Intelligent Soft Arm Control) [39], IST-MATS (Mechatronic Assistive Technology System) [32] and AFMASTER [1] are well-known examples. Basically, the workstation-based system performs various delicate tasks in a stable mode, but its operation is confined to a predefined limited workspace owing to its lack of mobility. The mobile robot-based system consists of a robotic arm and a mobile platform. This system is used for transporting small baggage, guiding the user, and so on. Walky [18], MoVAR (Mobile Vocational Assistant Robot) [66], TIDE-MOVAID (MObility and actiVity AssIstance system for the Disabled) [16], Care-O-bot I/II [9] and Helpmate [31] are good examples in this category. The wheelchair-based system currently focuses on assisting the daily living activities of the elderly and the physically handicapped. This type of assistive robotic system adopts various user interfaces. MANUS [47], FRIEND [53] and RAPTOR [15] are well-known examples of this category. Since, in assistive robotic systems, human-robot interaction technology becomes increasingly important for the user's convenience and safety, in addition to the autonomous function of the robotic operations, we report some important results in designing and evaluating KARES II, newly developed at KAIST, considering various human-friendly interfaces and adaptability to the user. The following items are considered very important factors for the future direction of R&D on rehabilitation robotics [18, 33]:
− Intelligent interaction/interfaces that are adaptable to the level of disability
− Human-friendly design that assures the user's comfort
− Development of technology for the user's safety
− Increase of the system's autonomy to compensate for the user's laborious direct control.
Considering these factors, this chapter introduces the development of welfare-oriented service robotic systems: Intelligent Sweet Home and KARES II.
4.2 Intelligent Sweet Home

Intelligent Sweet Home consists of four main parts, chosen on the basis of a statistical survey [40, 57] and a questionnaire survey [45] of user demand: an intelligent bed robot system, an intelligent wheelchair system, a transferring system, and human-friendly interfaces including a soft remote control system, an intention reading mechanism, and a health monitoring system. Information exchange between the subsystems is performed over the home network using wired and wireless communications. An overall view of the current scenario of the Intelligent Sweet Home is shown in Fig. 4.1. Based on our statistical survey, the target users of the system are primarily the elderly and the physically handicapped, and the functionalities of the system are derived from user demand.
Fig. 4.1. Overall view of Intelligent Sweet Home
4.2.1 Questionnaire Survey

We conducted a questionnaire survey of people with limb disabilities. More than 70% of them were elderly, and most had stayed in a hospital or rehabilitation center for a long time. The total number of participants was 70. We first asked a number of questions to understand the participants' inconveniences in daily life; Table 4.1 shows the survey results on what they feel is most inconvenient in basic living. Their responses concerned eating meals, easing nature, and small activities such as handling nearby things and slightly moving one's body.

Table 4.1. Causes of inconveniences in daily life

Function                                 Percentage of respondents
Eating meal                              51%
Moving on the bed                        58%
Bedsore                                  55%
Transferring between bed & wheelchair    76%
Easing nature                            96%
Using home appliances                    76%
Putting on/Taking off one's clothes      90%
Our system is developed to eliminate or ease those inconveniences by assisting the users' activities. In the following, the systems serving this purpose are described in more detail.

4.2.1.1 Questionnaire Survey on Intelligent Bed Robot System

We asked the potential users several questions aimed at designing an efficient bed robot system. These questions concern the lifestyle of the users and the functions they would like the bed to perform. In the survey, the users who stay in bed for more than 10 hours a day were about twice as many as those who stay for less than 10 hours. Table 4.2 shows that the main inconveniences of a current bed system felt by the handicapped are movement on the bed and movement between bed and wheelchair, since they cannot use their arms and legs freely and they tumble down and slip on the bed. To surmount this type of difficulty, we have designed the robotic structure to have a supporting bar.

Table 4.2. Inconveniences in bed

Function                               # of respondents
Movement between bed and wheelchair    10
Movement on bed                        10
Management of feces and urine          7
Reading book or newspaper              5
Avoidance of decubitus                 5
Eating                                 4
Hobbies                                2
4.2.1.2 Questionnaire Survey on Intelligent Wheelchair System

The powered wheelchair is an important rehabilitation device for the handicapped and the elderly. We carried out a survey to identify the various requirements of potential wheelchair users. The survey covered 62 handicapped persons, among whom 12 were powered wheelchair users and 50 manual wheelchair users. Most of them had a spinal cord injury (SCI), and the powered wheelchair users in particular had SCI at C4–C6. We found that about 51% of respondents spend over 9 hours daily in the wheelchair. Thus, the wheelchair can be a major rehabilitation device for the handicapped in maintaining daily life. In spite of its indispensability, many potential users do not employ the powered wheelchair because of cost, difficulty of transferring between bed and wheelchair, safety, unfriendly appearance, maintenance, and so on. In response to a question regarding the preferable control input device, 55% of those surveyed preferred voice recognition, while 19% wanted a touch screen and 26% a joystick. Most of the participants wished that the wheelchair would support automatic battery charging, prevention of decubitus (which frequently occurs in the hips), door passage, and autonomous navigation, as shown in Table 4.3. Some participants suggested that driving a
car in the wheelchair, a standing/lying function for blood circulation, smooth motion control, and adjustment of the chair height are also desirable.

Table 4.3. Most necessary function in wheelchair

Function                      Percentage of responses
Automatic battery charging    19.3%
Prevention of decubitus       18.4%
Door passage                  18.4%
Autonomous navigation         15.8%
Home appliance control        14.0%
Others                        14.1%
4.2.1.3 Questionnaire Survey on Transferring System between Bed and Wheelchair

The purpose of this survey of the handicapped was to find out what kind of help in moving between bed and wheelchair they would find convenient and necessary. In total, 45 people answered the questions. Among them, 35 can move between bed and wheelchair by themselves, while the other 10 need assistance. Table 4.4 shows the result of the survey.

Table 4.4. Transfer form between wheelchair and bed

By oneself (can move by own force):
− Transfer process: 1. Put away the armrest of the wheelchair; 2. Put the legs down on the floor between bed and wheelchair; 3. Use both arms and move the hips to the bed
− Necessity of an assistive system: necessary, 13 persons; unnecessary, 10 persons; no response, 6 persons
− Main reason: comfort

By help of family (difficult to move by oneself):
− Transfer process: 1. Put away the armrest of the wheelchair; 2. Put the legs down on the floor between bed and wheelchair; 3. Hold the arms toward the bed and transfer the body with the helper's assistance
− Necessity of an assistive system: necessary, 14 persons; unnecessary, 1 person; no response, 1 person
− Main reason: safety
Among those who responded to the survey, we found that about 70% say that the current way of transferring is very inconvenient or uncomfortable, and that most of them can neither take a shower nor use the toilet without an assistant.

4.2.1.4 Questionnaire Survey on Home Appliance Control by Intelligent Man-Machine Interface

We asked participants to choose the reason why conventional remote controllers are inconvenient for the handicapped from among the following options:
1. It is difficult to bring it to him
2. The size of its buttons is too small to push
3. It is a heavy task to push the button
4. It is cumbersome to need an individual controller for each appliance
5. No problem.
Among these options, most respondents checked nos. 1 and 4, as shown in Fig. 4.2. From this result, we have confirmed that it is necessary to develop a system that can control most home appliances in a natural and easy way.
Fig. 4.2. The reason for inconvenience of conventional remote controllers
4.2.2 Assistive Systems

4.2.2.1 Intelligent Bed Robot System [37]

Based on the survey that we conducted, we have developed an intelligent bed robot system composed of a pressure sensor-laid bed and a manipulator, as shown in Fig. 4.3. Most previous research has focused on systems that monitor the patient's posture and motion on the bed [56]. We, however, propose a robotic system that can actively help the patient using a robotic manipulator. While the patient is on the bed, the pressure sensors monitor his posture and motions. When he moves on the bed, the robotic manipulator can support his body.
Fig. 4.3. Intelligent bed robot system
In developing the pressure sensor-laid mattress, a set of Force Sensing Resistors (FSRs) is used as the pressure sensors, as shown in Fig. 4.4. The resistance value
of the FSR decreases in proportion to the force applied on the active surface. The measurement range of force is up to 10 kg/cm², and the size of the pressure mattress is 1900 mm × 800 mm. The spatial interval between sensors is 70 mm in the vertical direction and 50 mm in the horizontal direction. The sensor pad is divided into three modules in order to fit reclining beds. The sampling frequency of the pressure images is 10 Hz, with a resolution of 10 bits. From these pressure distribution images, posture and gross movement estimation are realized.
Fig. 4.4. Arrangement of pressure sensors in bed mattress
In designing the robotic manipulator, a parallel manipulator was proposed for supporting the patient's body. The mechanism consists of three actuated arms attached to the mobile base via driven revolute joints, as shown in Fig. 4.5; a 50 W DC motor is used as the actuator. Three passive arms are also attached to an upper platform via passive revolute joints. This parallel manipulator is built on a moving platform that is actuated by a 50 W DC motor. The parallel manipulator is responsible for subtle motion at a fixed position of the mobile platform, and the mobile platform can move between the ends of the bed so that the manipulator can reach any position on the bed.
Fig. 4.5. Linkage mechanism in bed robot
4.2.2.2 Intelligent Wheelchair System [41]

We have developed an intelligent wheelchair that can help in the daily life of the elderly and the physically handicapped. We first analyzed a commercial powered wheelchair and developed an interface board between the wheelchair and a PC controller based on real-time Linux (RTAI). The system has two incremental encoders and a laser range finder to sense the environment, localize its position, and detect obstacles, as shown in Fig. 4.6. Fig. 4.7 shows the output of the laser range finder of the intelligent wheelchair, and Fig. 4.8 shows the consecutive localization result obtained in the Intelligent Sweet Home of Fig. 4.1.
Fig. 4.6. Intelligent wheelchair
Fig. 4.7. Result of laser range finder
Fig. 4.8. Localization result
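As an illustration of how the two incremental encoders support localization, the following sketch integrates encoder counts into a pose estimate for a differential-drive base; the laser range finder would then correct the drift of this dead-reckoning estimate. Wheel radius, encoder resolution, and track width are assumed values, not the wheelchair's actual parameters:

import math

WHEEL_RADIUS = 0.17     # m (assumed)
TICKS_PER_REV = 2048    # encoder counts per wheel revolution (assumed)
TRACK_WIDTH = 0.56      # m, distance between the drive wheels (assumed)

def odometry_update(x, y, theta, left_ticks, right_ticks):
    # Integrate one sampling period of encoder counts into the pose estimate.
    per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    dl = left_ticks * per_tick          # distance travelled by each wheel
    dr = right_ticks * per_tick
    ds = (dl + dr) / 2                  # forward displacement of the base
    dtheta = (dr - dl) / TRACK_WIDTH    # change of heading
    x += ds * math.cos(theta + dtheta / 2)
    y += ds * math.sin(theta + dtheta / 2)
    return x, y, theta + dtheta

pose = (0.0, 0.0, 0.0)
for _ in range(10):                     # drive gently to the left
    pose = odometry_update(*pose, left_ticks=90, right_ticks=110)
print(pose)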
Since charging the wheelchair battery is one of the tiresome burdens for the handicapped, we concurrently developed a battery charging station and plug for autonomous charging. Fig. 4.9 shows the prototype system. In order to ensure mechanical and electrical safety, we have used several micro-switches and relays.
In the undocked state, all electrodes are electrically disconnected from the charging station and plug. During the docking process, charging begins under relay control by the PC.
Fig. 4.9. Battery charging station and plug
4.2.2.3 Transferring System

We have developed a robotic system that can transfer the handicapped between bed and wheelchair. This system consists of a mobile part, a man-machine interface part, and a charging system. Fig. 4.10 and Table 4.5 show the mobile part of the system and its specification, respectively. The sling part was specially designed for safety and comfort. We have developed the man-machine interface part, shown in Fig. 4.11, to be usable by both left- and right-handed persons, and even by the handicapped. We have also designed a charging system that is applicable to the other subsystems and guarantees safety, as shown in Fig. 4.12. Since charging a battery is one of the tiresome burdens for the handicapped, the battery charging station and plug will be very helpful.
Fig. 4.10. Transferring system
Table 4.5. Specification of the mobile part

Part                             Value
Overall length                   1330 mm
Narrowest external base width    650 mm
Docking foot inside width        650 mm
Docking foot height              250 mm
Unit net weight                  100 kg
Max speed                        0.4 m/s
Fig. 4.11. Man–machine interface of transferring system
Fig. 4.12. Automatic charging system in transferring system
4.2.2.4 Home Network and Management System [62]

Information exchange between the subsystems is performed over the home network using wired and wireless communications based on Ethernet. The configuration of the network adopts both server-client and peer-to-peer methods. When a new subsystem is added or an existing subsystem is removed, the server updates the address map and sends it to each subsystem. The server collects the states of the subsystems and presents the information in a graphical user interface. On the other hand, some operations are achieved using peer-to-peer communication; for example, when the user wants to move from the bed to the wheelchair, the corresponding subsystems communicate directly with each other so that they can move to the specified place according to a predefined procedure. For effective management of Intelligent Sweet Home, outdoor management is also investigated: even when the elderly or handicapped person is left alone at home, the conditions of the home environment can be controlled and checked by caregivers over the Internet.
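A minimal sketch of the server side of this scheme, with network I/O stubbed out and all names invented for illustration: subsystems register with the server, and the updated address map is pushed to every subsystem:

class HomeServer:
    # Maintains the address map of all subsystems on the home network.
    def __init__(self):
        self.address_map = {}           # subsystem name -> network address

    def register(self, name, address):
        self.address_map[name] = address
        self.broadcast()

    def remove(self, name):
        self.address_map.pop(name, None)
        self.broadcast()

    def broadcast(self):
        # A real server would push the map over Ethernet to each subsystem.
        for name, address in self.address_map.items():
            print(f"update -> {name}@{address}: {sorted(self.address_map)}")

server = HomeServer()
server.register("bed_robot", "192.168.0.11")
server.register("wheelchair", "192.168.0.12")
server.remove("bed_robot")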
4.2.3 Intelligent Man-Machine Interfaces

4.2.3.1 Soft Remote Control System [5, 19, 58]

A significant part of Intelligent Sweet Home is devoted to developing innovative solutions for a more natural, human-oriented interface for controlling home-installed devices. From the survey results, we have learned that frail handicapped users would feel much more comfortable if the applied HMI did not require any sensors to be attached to the user. In our study, we propose a soft remote control system and a voice recognition system. The soft remote control system is a concept for controlling the robot and home appliances by predefined hand gestures sensed remotely by ceiling-mounted CCD cameras. There are two modes of control of the home environment [5]:
− Simple mode: the user selects a device by pointing to it, and then dwelling or voice is used to confirm the selection and activate the selected device. Pointing to the same device again turns it off. This mode is applied to on-off control of home appliances such as the TV, lamps, and curtains (opening/closing).
− Extended mode: the user first activates, with a hand gesture, a mode in which a list of tasks and services appears on the TV screen. Then, pointing to the TV and moving his hand, the user selects from the menu a command to be executed. Finally, by taking a certain hand posture or using voice commands, the user confirms the selected command and initiates its execution. This mode can be used for changing TV channels; setting home environmental parameters such as indoor temperature, light intensity, and the loudness of audio devices; and selecting pre-programmed tasks to be executed automatically by the robot or other home-installed devices.
Three ceiling-mounted CCD color cameras with pan/tilt motion are used to acquire the image of the room. For simple identification of the commanding hand against a complex background, the user is assumed to wear a colored (red & blue) hand band. The hand band is tracked by means of the condensation algorithm [30]. Then, image segmentation is applied to extract the hand color region from the neighborhood of the hand band region. For representation of the raw data, a feature extraction procedure is also included. It is followed by a pointing recognition procedure that recognizes the pointing gesture and calculates the orientation angle and pointing direction of the hand. The control procedure ends with sending the appropriate IR signal for controlling the home appliance. We have tested pointing gestures in Intelligent Sweet Home as shown in Fig. 4.13. When the user points at the TV, the pointing gesture is recognized and the TV is turned on/off. As an extended version, we are testing pointing gestures with a 3×3 menu matrix on the TV (Fig. 4.14). With the menu, we can control
appliances including home robots. By taking hand posture and orientation into consideration, this system extends the number of possible commands and enhances the user's freedom of movement.
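As an illustration of the pointing recognition step, the following sketch selects the device whose bearing from the hand is closest to the recovered pointing direction, within an angular tolerance. Device coordinates and the tolerance are invented for illustration:

import math

DEVICES = {"tv": (3.0, 1.0, 1.0), "lamp": (0.5, 2.5, 1.8)}   # room positions (m)

def select_device(hand, direction, max_angle_deg=10.0):
    # Pick the device whose direction from the hand best matches the
    # pointing direction, if the angular error is within the tolerance.
    best, best_angle = None, math.radians(max_angle_deg)
    dn = math.sqrt(sum(c * c for c in direction))
    for name, pos in DEVICES.items():
        v = [p - h for p, h in zip(pos, hand)]
        vn = math.sqrt(sum(c * c for c in v))
        cosang = sum(a * b for a, b in zip(direction, v)) / (dn * vn)
        angle = math.acos(max(-1.0, min(1.0, cosang)))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

print(select_device(hand=(1.0, 1.0, 1.2), direction=(1.0, 0.0, -0.1)))  # tv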
Fig. 4.13. Soft remote controller used in Intelligent Sweet Home: hand pointing recognition
Fig. 4.14. Extended mode of Soft Remote Controller with 3×3 menu matrix
4.2.3.2 Intention Reading in Bed [37]

Until now there have been many intelligent bed systems, but most of them have focused on monitoring the user's behavior on the bed based on temperature, pressure, or vision sensors. Such a system can tell what the user is doing but cannot tell what he wants to do. In contrast, our research also focuses on finding what the user wants to do, i.e. on the user's intention as an input to the system, as shown in Fig. 4.15. This function is very important for the elderly and the handicapped since they have limited mobility and difficulty manipulating devices. To recognize the user's intention, we use the pressure sensors on the bed mentioned in Sect. 4.2.2.1. When the user intends to lift his/her body up, the center of pressure (COP) moves toward the hips and the total contacting area (TA) becomes bigger. When the user intends to lower his/her body down, the COP moves toward the head and
the TA becomes smaller. Based on these observations, we use the COP and TA as features for the recognizer, and we use a Hidden Markov Model (HMM) as the recognizer since the intention feature data form a sequence, as shown in Fig. 4.16. From an experiment with six persons, we found that the intention recognition rate is directly proportional to the ratio of the intention interval to the motion interval. Fig. 4.17 shows the recognition outputs of the body-lowering HMM and body-lifting HMM during the motion; the y-axis is the recognition rate and the x-axis is time. The motion started at x = 60 and ended at x = 140. By setting a proper intention interval, we could obtain the user's moving intention. We are now studying the effect of the mattress on the pressure distribution and the classification of further intentions, including turning and moving.
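The following sketch shows the feature extraction described above for one frame of the pressure image: the COP as a pressure-weighted centroid and the TA as the number of active cells. A sequence of such features would then be scored against the trained lifting and lowering HMMs; the array shape and contact threshold are assumptions:

import numpy as np

def cop_and_ta(frame, threshold=0.05):
    # frame: 2D array of pressure values from the sensor mattress.
    active = frame > threshold                 # cells in contact with the body
    total = frame[active].sum()
    if total == 0:
        return None                            # nobody on the bed
    rows, cols = np.nonzero(active)
    weights = frame[rows, cols]
    cop_row = (rows * weights).sum() / total   # pressure-weighted centroid
    cop_col = (cols * weights).sum() / total
    return cop_row, cop_col, int(active.sum())

frame = np.zeros((27, 16))                     # toy pressure image
frame[5:15, 4:12] = 1.0                        # a body-like blob
print(cop_and_ta(frame))                       # COP near (9.5, 7.5), TA = 80

# A sequence of such feature vectors would then be classified by choosing
# the HMM (body lifting vs. body lowering) with the higher log-likelihood.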
Fig. 4.15. Interaction between user and intelligent bed robot
Fig. 4.16. Intention recognition algorithm
Fig. 4.17. Recognition rates: (a) body lowering, (b) body lifting
4.2.3.3 Health Monitoring System [4, 49]

A human health monitoring system is one of the intelligent man-machine interfaces in the sense that it can detect unconscious human intention [67]. For example, when someone catches a cold, his or her body will generate heat during sleep. In this case, the health monitoring system can detect the heat and take a proper action such as raising the room temperature. Since the elderly and the handicapped are apt to become sick suddenly, a health monitoring system is much needed and important [38]. Thus, the health monitoring system must be able to detect basic vital signs by telemetry, assess the urgency level of a disease, and notify the doctors or caregivers over the network, as illustrated in Fig. 4.18. If the system has prior information about the user's health, its reliability will be higher. Finally, the elderly and the handicapped can gain confidence about their health by using this health monitoring system.
Fig. 4.18. Diagram for human health monitoring system
For Intelligent Sweet Home, we have developed telemetry for various bio-signals with a 4-channel wireless bio-signal monitoring system, as shown in Fig. 4.19.
Test results for various bio-signals are shown as examples in Figs. 4.20, 4.21 and 4.22.
Fig. 4.19. 4-ch wireless biosignal monitoring system
Fig. 4.20. EMG test result during human walking
Fig. 4.21. Human ECG test result
Fig. 4.22. EEG test result during human sleep
4.3 KARES II System

In 1996 we developed the KARES I system, a wheelchair-based rehabilitation robot system [64]. KARES I provides four basic tasks (picking up a cup from a table, picking up an object from the floor, bringing a cup to the user, and operating a switch on the wall) to help severely physically handicapped persons. KARES I consists of a 6-DOF robotic arm with a mono-vision system for visual servoing, a voice recognizer, a 6D force-torque sensor, and a 3D input device (SpaceBall™). All the subsystems are integrated into a powered wheelchair platform. Although KARES I contains many of the factors pointed out by [18, 33], human-friendliness and adaptability to the level of disability have to be considered for a better system. Furthermore, owing to the flexibility of the rubber tires of the powered wheelchair, vibration of the robotic base makes operation of the robotic arm very unstable during tasks. The lessons learned from KARES I and the analysis of many conventional rehabilitation robotic systems lead us to address two technical issues as follows:
− A new kind of rehabilitation robot system is required for comfortable and robust operation. Although KARES I is a compact integration of various sensors and technologies, exact operation is hard to achieve owing to vibration of the robotic base caused by the rubber tires of the powered wheelchair. One may say that a workstation-based system (e.g. ISAC of Vanderbilt University [39]) is one solution to the vibration problem, since such a system enables the robot to operate in a stable mode. However, the workstation-based system cannot provide enough workspace because of its limited mobility. Thus, a novel combination of a wheelchair-based system and a workstation-based system can be adopted to realize a futuristic rehabilitation robotic system. In this case, a mobile base plays a key role, thanks to its stable operation while stopped as well as its free-to-move capability.
− To cope with the variety of handicaps, man-machine interaction/interfaces should be realized in a modular form. In contrast to conventional rehabilitation robot systems, a futuristic rehabilitation robot system has to provide a wide range of services for a variety of handicaps. The modularized approach to dealing with different degrees of disability gives the user the benefit of minimum redundancy in components/subsystems, and thus improves affordability. In the KARES II system, various kinds of interaction/interfaces are implemented so that the user can choose according to his/her degree of disability. Compared with conventional rehabilitation robots, KARES II shows several unique features:
− KARES II, a product based on "Task-Oriented Design" (TOD), is adaptable to the user according to his/her level of disability. From the able-bodied to the severely spinal cord injured (lesion level C4 or C5), KARES II can support twelve predefined tasks. The twelve tasks were collected from six months of
surveys of the handicapped in factories and clinics. Based on the surveys, those twelve specific tasks were defined according to their usability, feasibility, and relationship to the rehabilitation purpose.
− KARES II possesses high-level autonomy along with various human-robot interaction interfaces.
− KARES II is a successful combination of a fixed workstation-type system and a wheelchair-based system. Through our previous experience with KARES I, we have ascertained that this hybrid-type system is effective in operation in many cases.
− Opinions from user trials have been used to refine not only the technical aspects but also the aesthetic (or human-friendly) design of the KARES II system.
This section describes some important requirements and the design philosophy, as well as the overall system structure, of the KARES II system, which we have developed for assisting the daily living activities of the physically handicapped.

4.3.1 Questionnaire Survey

The KARES II system is implemented according to the principle of Task-Oriented Design (TOD) [10], which confirms realization of the predefined tasks; as a byproduct, it may further attain additional tasks owing to the flexible nature of a robotic system. Specifically, as a first step we surveyed the basic activities of end-users (i.e. people with spinal cord injury and other physically handicapped people) and caregivers for six months (see Table 4.6).

Table 4.6. Information for samples of survey

Period             1999.1 – 1999.6
Locations          hospital (1), industrial workplace (3), asylum (6)
Type (numbers)     quadriplegia (21), poliomyelitis (9), mental disorder (6), others (4)
Living situations  inpatient (24), outpatient or dwelling at home (16)
After examining the activities collected through the surveys, about 150 items were listed as possible tasks in a brainstorming process. These items were then categorized according to their usability, feasibility, and suitability for assistive purposes. Finally, twelve basic tasks were determined, as described in Table 4.7. These tasks are the ultimate target for TOD, which has guided our development of the subsystems such as the robotic arm, the necessary user interfaces, and other hardware modules. Along with the notion of TOD, we have taken the concept of "human-friendliness" into consideration as a design philosophy. Since the robotic arm is very likely to interact with human users, safety should be guaranteed when the robotic arm makes contact with them. For the user interfaces, easy accessibility to the system is required, since most handicapped people are not experienced in operating robotic systems. For human-friendliness of the robotic arm, we have
adopted active compliance control for the safety of a user in contact with the robotic arm. For easier accessibility to the system, all the user interfaces have the capability of fast execution of each task. In addition, attention has been paid to the appearance of every subsystem of KARES II so that the subsystems look human-friendly and comfortable. Another important design philosophy of KARES II is "modularization of subsystems". In consideration of the variety of levels of handicap, the accessible interfaces should differ, for cost effectiveness and simplicity. The modularized subsystems make it possible to construct a personally optimized system.

Table 4.7. Twelve tasks for KARES II

Task no.  Task name                           Distance between user & robot hand
1         Serving a meal                      Near
2         Serving a beverage                  Near
3         Wiping/Scratching face              Near
4         Shaving                             Near
5         Picking up objects                  Far
6         Turning switches on/off             Far
7         Opening/Closing doors               Far
8         Making tea                          Far
9         Pulling a drawer                    Far
10        Playing games                       Near/Far
11        Changing CD/tapes                   Near/Far
12        Removing papers from printer/fax    Near/Far
4.3.2 Overall Structure

4.3.2.1 H/W Structure of KARES II System

As shown in Fig. 4.23, the KARES II system consists of the wheelchair platform with various user interfaces (positions 3–6) and the mobile platform with a robotic arm for compliance control and visual servoing (positions 1 and 2 in Fig. 4.23). Here, the mobile platform is essential for performing tasks that involve operations far away from the user. In the mobile platform, the mobile base provides mobility and extends the workspace of the KARES II system. Considering the twelve tasks in Table 4.7, we have found that the mobile platform is very effective for performing tasks at spots far from the user, such as picking up an object, turning switches on/off, and opening/closing doors. We have also concluded that the mobile base need not be omni-directional, but should be autonomous. For robotic arm manipulation, a six-DOF robotic arm with all revolute joints was developed to perform the twelve predefined tasks. It has PUMA-type Denavit-Hartenberg parameters [17], and the link lengths are optimized for the predefined tasks. The design procedure begins with the task points of the twelve predefined tasks and, as shown in Fig. 4.24, yields the optimized arm [10].
Fig. 4.23. KARES II system: conceptual view
Fig. 4.24. Design procedure of KARES II
As the wheelchair platform for the human-robot interfaces, a powered wheelchair with programmable capability is adopted [29], so that the interface developed by us can control the wheelchair itself.

4.3.2.2 S/W Structure of KARES II System: I/O Relations and Control Architecture

Fig. 4.25 shows the I/O relations among all the subsystems of KARES II. As can be seen from Fig. 4.25, the KARES II system includes various human-robot interfaces for smooth communication and comfortable interaction between the user (the elderly and people with spinal cord injury) and the robotic arm (position 1). Each interface commands velocity and position in response to the inputs from the user. The user can control the robotic arm as well as the wheelchair itself using various interfaces, such as the EMG interface (position 6), Eye-mouse (position 3), head interface (position 4), and shoulder interface (position 5); the choice may be dictated by the level of disability. In the case of the shoulder interface, the user can acquire status information of the system through a haptic feedback function. Also, the visual servoing subsystem (position 2) provides two kinds of services:
(i) an autonomous service in which the user's intervention is not necessary, and (ii) a human-friendly service in which the user's intention is acquired via the user's facial expressions. The latter is considered a form of feedback from the user to the robot.
Fig. 4.25. KARES II system: I/O relations among each subsystem
Realization of the KARES II system as a whole is possible only if the system is constructed on an efficient control architecture. Fig. 4.26 shows the control architecture of KARES II. If the user selects a certain task through the GUI, an overall task sequencer (OTS) decides the necessary interfaces and the sequence for the task. Based on the arrangement made by the OTS, a sub-module task sequencer (STS) decides the corresponding module's actions, such as requesting information from a sub-module, commanding the actuators, and notifying the result (if any). The sequencer acts as the central coordination unit within the control architecture [54]. Even though we have confirmed that the integrated system based on the above control architecture works reasonably well, there is room for optimization. For improvement, we find that a top-down architectural approach is preferable to the evolutionary design approach in consideration of the following aspects [54]:
− Enhancement and reuse of current software
− Modularity of subsystems to support easy exchange of components
− Implementation of the distributed system concept in order to achieve scalable computing performance.
Fig. 4.26. Control architecture for KARES II system
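A minimal sketch of the two-level sequencing idea: the OTS expands a selected task into an ordered list of sub-module steps, and each STS executes its own step and reports the result. Task and module names are invented for illustration:

TASK_PLANS = {
    "serve_beverage": [("vision", "locate_cup"),
                       ("arm", "grasp_cup"),
                       ("arm", "bring_to_user")],
}

class SubmoduleSequencer:
    # STS: executes the steps belonging to one sub-module.
    def __init__(self, name):
        self.name = name

    def execute(self, step):
        # A real STS would command actuators and report back to the OTS.
        return f"{self.name}: {step} done"

class OverallTaskSequencer:
    # OTS: decides the sequence of sub-module steps for a selected task.
    def __init__(self, modules):
        self.modules = modules

    def run(self, task):
        for module, step in TASK_PLANS[task]:
            print(self.modules[module].execute(step))

ots = OverallTaskSequencer({"vision": SubmoduleSequencer("vision"),
                            "arm": SubmoduleSequencer("arm")})
ots.run("serve_beverage")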
4.3.3 Soft Robotic Arm with Visual Servoing

In this section we describe two human-robot interaction technologies employed for the robotic arm of KARES II. First, the robot system is designed to have a compliance function; notably, as shown in Table 4.7, Tasks 3 and 4 require it in the course of task execution. The compliance function increases the safety level when an unexpected collision with the user occurs. Moreover, it can provide more comfortable services to the user. Second, the robotic arm is equipped with a visual servoing function. This function is required not only for detecting and locating an object autonomously, but also for basic intention reading by analyzing the user's facial expressions.

4.3.3.1 Active Compliance Control of the Robotic Arm [10, 11, 12]

Since our robotic arm should exhibit different values of compliance according to the twelve tasks, active compliance control (ACC) is implemented on the arm. In our approach, we have implemented ACC without any force or torque sensors. For ACC, the controller requires not only the position measurement but also the force, which is usually measured by force or torque sensors. However, we have adopted a sensorless measurement method for a simple and low-cost design. The method for sensorless torque measurement is as follows. In a static contact situation, the output torque of a motor (τ_motor) is equal to the external torque (τ_ext) due to the contact. Using this fact, we can easily know
τ_ext while the motor of the robotic arm is controlled to exert τ_motor. Unfortunately, this simple method imposes hard constraints such as negligible backlash
and friction. These constraints can hardly be satisfied by robotic arms with speed-reducing gears, since most gears, by nature, have friction and backlash. As a remedy, a cable-driven mechanism is adopted for speed reduction. The cable-driven mechanism is known to have negligible friction and backlash [65]; accordingly, it enables sensorless torque sensing. To realize the desired compliance, time-delay control based compliance control is proposed [12]. This concept is easily implemented while providing efficient control performance. The compliance increases the level of safety in unexpected collisions and, furthermore, makes the user feel more comfortable when performing contact-type tasks such as shaving and wiping of the face. To confirm the proposed compliance control algorithm, a simple experiment was conducted: for a static configuration of the robotic arm, an external force is applied as shown in Fig. 4.27. The desired compliances are given by C_d1 = 2.21 deg/Nm, C_d2 = 1.17 deg/Nm, and C_d3 = 0.39 deg/Nm. The results show that the desired compliances are realized at the first three joints, which verifies that the torque-sensorless compliance control works well.
Fig. 4.27. Schematic diagram for the experiment
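The compliance relation verified in this experiment can be summarized in a few lines: under the sensorless assumption, the external torque equals the commanded motor torque at static contact, and each joint yields by C_d · τ_ext. The sketch below uses the compliance values from the text but does not reproduce the time-delay control loop itself:

# Desired joint compliances from the experiment above (deg/Nm).
DESIRED_COMPLIANCE = [2.21, 1.17, 0.39]

def compliant_reference(q_ref_deg, tau_motor_nm):
    # Sensorless assumption: at static contact, tau_ext equals the commanded
    # motor torque, so each joint reference yields by C_d * tau_ext.
    tau_ext = tau_motor_nm
    return [q + c * t for q, c, t
            in zip(q_ref_deg, DESIRED_COMPLIANCE, tau_ext)]

print(compliant_reference([10.0, 20.0, 30.0], [1.0, 2.0, 3.0]))
# -> [12.21, 22.34, 31.17]: each joint deflects in proportion to the torque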
4.3.3.2 Visual Servoing [43, 63]

The visual servoing module (Fig. 4.25, position 2) is adopted to provide vision-based control for autonomy of the robotic arm [13, 59] and to implement a human-friendly interface for face recognition and intention reading. In the first version of our wheelchair-based rehabilitation robotic system, KARES I [64], we found that visual servoing is not an easy task, owing to the requirements of real-time control and robustness to varying illumination; in particular, the performance deteriorates because of vibration of the robotic base supported by the flexible rubber tires of the wheelchair. In the KARES II system, we have separated the robotic arm from the wheelchair platform and have used a vision technique called "space-variant vision" for real-time control and robustness to varying illumination. For effective execution of the predefined tasks, we have used a novel stereo camera head in an eye-in-hand configuration [63]. The developed small-sized, light-weight stereo camera head is installed on the robotic arm in the eye-in-
hand configuration [14]. For fast image processing, log-polar mapping (LPM) is adopted, which is a kind of space-variant vision technique [6]. Since the LPM image is invariant to scaling and rotation, in addition to its high image reduction ratio (22:1 in our system), it is very suitable for visual servoing with an eye-in-hand camera configuration [36]. Here we report an experiment on "intention reading" that utilizes the visual images obtained through visual servoing. We assumed that the user can show his/her intention to drink or not to drink by opening or closing the mouth. Thus, we implemented an intention reading skill based on information about the user's mouth [43]. Fig. 4.28 shows sequential images of the user's face with different degrees of mouth openness and the result of intention reading. From the extracted features of the user's mouth, we can easily estimate the positive/negative level of the user's intention to drink. Over 110 facial images, this method achieves a reasonable classification rate (92.7%) [43].
a
b
Fig. 4.28. Intention reading from sequential images: (a) face images, (b) extracted intention
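As an illustration of the space-variant idea, the sketch below samples an image on a log-polar grid; the parameter values are ours, not those of the KARES II vision module. In the mapped image, scaling of the input becomes a shift along the ring axis and rotation becomes a shift along the wedge axis, which is what makes LPM attractive for eye-in-hand servoing.

```python
import numpy as np

def log_polar_map(img, n_rings=32, n_wedges=64, r_min=1.0):
    """Map a grayscale image to log-polar coordinates about its centre.

    A minimal sketch of log-polar sampling; ring/wedge counts and
    sampling strategy are illustrative assumptions.
    """
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cx, cy)
    out = np.zeros((n_rings, n_wedges), dtype=img.dtype)
    for u in range(n_rings):
        # radius grows exponentially with ring index: full resolution
        # near the centre (fovea), heavy reduction in the periphery
        r = r_min * (r_max / r_min) ** (u / (n_rings - 1))
        for v in range(n_wedges):
            phi = 2.0 * np.pi * v / n_wedges
            x, y = int(cx + r * np.cos(phi)), int(cy + r * np.sin(phi))
            if 0 <= x < w and 0 <= y < h:
                out[u, v] = img[y, x]
    return out
```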
4.3.4 Intelligent Human-Robot Interfaces
For the user of KARES II, there are four types of human-robot interfaces, as will be described shortly, and it is proposed that the user choose a proper combination of interfaces according to his/her level of disability. Such a combination has the advantage of guaranteeing better reliability of the system. Table 4.8 gives a guide for selecting appropriate interfaces according to the level of disability. In fact, the selection of an interface is determined not only by the level of disability but also by the residual functional ability of the user.

Table 4.8. Appropriate human-robot interfaces according to the level of disability and the residual functional ability (Ο: usable, △: partially usable, ×: not usable)

Residual ability          Eye-mouse   Head interface   Shoulder interface   EMG interface
Head/neck                 Ο           Ο                ×                    Ο
C4, shoulder (partial)    Ο           Ο                △                    Ο
C5, shoulder              Ο           Ο                Ο                    Ο
arms (partial)            Ο           Ο                Ο                    Ο
4.3.4.1 Eye-Mouse [42, 68]
For people with severe motor disability, such as a C4 lesion, to use the KARES II system, the Eye-mouse system is recommended as an input device. The users can indicate the position of the object that they want to grab and command the robot to do something with the object by using the Eye-mouse on a computer mounted on the wheelchair. Many techniques for obtaining the eye-gaze direction have been reported [23]. These methods can be divided into two types: contact methods and non-contact methods. In a non-contact method, no device is attached to the user's head; instead, a sensor near the user estimates the eye-gaze direction. CCD cameras [3, 20] have been widely used, since they require no attached device that might inconvenience the user. However, this approach has lower accuracy than the contact methods and needs a head-tracking system under the free-head condition. In a contact method, the head pose and eye movement are obtained using a device attached to the user's head [2, 61]. This approach is more accurate than the non-contact methods and needs no head-tracking system; its disadvantage is the inconvenience of the attached device. In spite of this inconvenience, we adopt the contact method, because an interface system for the handicapped should be accurate and reliable under the free-head condition. Some commercial systems adopting the contact method are available (e.g., [2, 61]), but their designs are not suitable for supporting the handicapped in daily life, so we have developed our own system. We have developed a human-friendly Eye-mouse system based on the opinions of handicapped users (Fig. 4.29). Here, the head pose is measured by a magnetic sensor receiver on a cap, and the eye-gaze direction is acquired by an image-based method using a CCD camera, an IR LED, and a mirror [20, 44, 48] (the underlying gaze geometry is sketched after the figure).
Fig. 4.29. Proposed Eye-mouse system
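The chapter does not give the estimation equations; as a sketch of the geometry, suppose the magnetic sensor yields the head pose as a rotation $R$ and position $t$ in monitor coordinates, and the camera/IR-LED unit yields a unit gaze direction $d$ in head coordinates. The gaze ray and its intersection with the monitor plane $\{x : n^{\top}x = c\}$ are then

$$
g(s) = t + s\,R\,d, \qquad s^{*} = \frac{c - n^{\top}t}{n^{\top}R\,d}, \qquad p = t + s^{*}R\,d,
$$

where $p$ is the on-screen gaze point that drives the mouse pointer (in practice the ray origin is the eye centre, whose offset from the magnetic receiver must be calibrated).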
4.3.4.2 Biosignal-Based Interface [26]
The EMG (electromyogram) signal is an electrical manifestation of the neuromuscular activation associated with contracting muscle. KARES II adopts an EMG interface for users with disability who can move their shoulders or head, for controlling a robotic arm or a powered wheelchair. We have developed a small-sized LNA (low-noise amplifier)-type EMG amplifier with a differential amplifier to remove common-mode noise, two biquad (2nd-order) notch filters to reduce hum noise [34], and a band-pass filter to remove the high-frequency band. To extract the user's intentions from the muscle movement of the shoulders, we defined basic motions as shown in Fig. 4.30. To tackle the user-dependency problem exhibited in many previous works, we propose an algorithm capable of classifying the biosignals obtained from different subjects into the predefined classes, using a fuzzy c-means algorithm and a rough-set-based technique that selects a necessary and sufficient set of features out of all extracted feature combinations [26]. The overall signal processing procedure is briefly as follows (a sketch of the front end is given after the figure). EMG signals of the predefined motions are measured from four predetermined muscles (channels) with electrodes attached to each subject. A second-order high-pass Butterworth filter with a 30 Hz cut-off frequency is used to reduce low-frequency noise such as motion artifacts. Well-known features such as integral absolute value (IAV), variance (VAR), zero crossings (ZC), and frequency ratio (FR) are extracted to classify the predefined motions from the denoised EMG signals. By applying a well-established feature extraction algorithm [26] to these numerous extracted features, we obtain minimized feature combination sets which carry enough information for complete classification. The minimized feature combination sets extracted by the proposed algorithm are used as input-output pairs to make fuzzy min-max neural networks (FMMNN) learn the motions. After learning, the FMMNN gives the classified results and actuates the robotic arm based on the user's movement. In the experiment¹, the basic motions are recognized with success rates of approximately 90% for four untrained users.
Fig. 4.30. The eight basic motions
¹ Each user has 10 trials of each motion for validation.
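The following is a minimal Python sketch of this signal-processing front end, assuming a sampling rate of 1 kHz (not stated in the text); the rough-set feature selection and the FMMNN classifier of [26] are beyond the scope of the sketch.

```python
import numpy as np
from scipy.signal import butter, lfilter

def preprocess(emg, fs=1000.0):
    # 2nd-order high-pass Butterworth at 30 Hz against motion artifacts
    b, a = butter(2, 30.0, btype="highpass", fs=fs)
    return lfilter(b, a, emg)

def features(x):
    # features named in the text; FR is omitted because the required
    # frequency-band split is not specified in the chapter
    iav = np.sum(np.abs(x))            # integral absolute value (IAV)
    var = np.var(x)                    # variance (VAR)
    zc = np.sum(x[:-1] * x[1:] < 0)    # zero crossings (ZC)
    return np.array([iav, var, zc])

def feature_vector(channels, fs=1000.0):
    """One feature vector per motion sample from the four EMG channels."""
    return np.concatenate([features(preprocess(c, fs)) for c in channels])
```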
4.3.4.3 Head & Shoulder Interface [50, 51]
The head interface is a two-DOF interface for people with a C4 lesion, used for body-operated control of a wheelchair and a robotic arm. The force-sensitive resistor (FSR) is a suitable element for building a human-robot interface satisfying the guidelines because of its characteristics: low price, easy force measurement, arbitrary shape, and thinness. Human head motion was analyzed in order to determine the motion detection range of the head interface; the average maximum tilt angles are 41° to the front, 73° to the rear, and 60° to the right and left sides. A head interface valid over the analyzed range (73°) has been developed, as shown in Fig. 4.31(a). The shoulder interface is a wearable sensor suit converting human body motion into a useful command [50, 51]. Human shoulder motion was analyzed in the same manner; the average maximum ranges of shoulder motion are 7.5 cm to the front, 7 cm to the rear, 10.1 cm upward, and 2.5 cm downward. We decided that the lifting motion of the shoulder is most useful for human-robot interaction, and a tension sensor measuring this lifting motion has been developed, as shown in Fig. 4.31(b). A sketch of a possible sensor-to-command mapping is given after the figure.
(a)
(b)
Fig. 4.31. Main components of head/shoulder interface: (a) angle sensor, (b) tension sensor
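The chapter does not describe how the raw sensor readings are turned into direction commands; a plausible minimal mapping, with a threshold we introduce for illustration, might look as follows.

```python
def head_command(fsr_front, fsr_rear, fsr_left, fsr_right, thr=0.5):
    """Map normalized FSR readings (0..1) from the head interface to one
    of four 2-DOF direction commands. The winner-take-all rule and the
    threshold are our assumptions, not the authors' decision logic."""
    readings = {"forward": fsr_front, "backward": fsr_rear,
                "left": fsr_left, "right": fsr_right}
    direction, value = max(readings.items(), key=lambda kv: kv[1])
    return direction if value > thr else "neutral"
```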
4.3.5 User Trials
We have performed user trials by letting six subjects with spinal cord injury use the various functions of the KARES II system. The six subjects were rehabilitants of the Korean National Rehabilitation Center (NRC) in Seoul, where they were undergoing rehabilitation programs (see Table 4.9). Each user trial basically consisted of the following steps. First, we showed the subjects how to execute the scenario of "Task 2" in Table 4.7 using the integrated system; we then helped the subjects understand the objective of the KARES II system. Next, we let the subjects gain experience by actually operating and observing the various interface subsystems/modules of the system. In every case, short questionnaires were given to the users with detailed
explanations by us. Next, based on the collected qualitative answers and quantitative measurements for each subsystem, we drew figures depicting the distribution of satisfaction degrees (0–100%) according to the predefined evaluation aspects of each subsystem². From these procedures, we obtained the results of the user trials.

Table 4.9. Information for subjects (Ο: usable, △: partially usable, ×: not usable)

subject ID   sex   age   lesion level   residual motor ability          technical aids
                                        Head/neck   shoulder   arms
A            M     33    C4             △           Ο          ×        none
B            M     36    C5             Ο           Ο          ×        none
C            M     35    C4             △           Ο          ×        none
D            M     51    C5             Ο           Ο          Ο        powered wheelchair
E            M     21    C4             Ο           △          ×        none
F            M     31    C5             Ο           Ο          △        powered wheelchair
4.3.5.1 Robotic Arm
When the task of shaving or cleaning the face is conducted by the robotic arm, there exists some form of contact between the trial subject and the robot hand. For such a task, the robot must have some level of compliance for the safety of the user, and the appropriate magnitude of compliance may depend on the individual and on the kind of task. In order to implement comfortable shaving and face-cleaning tasks, we investigated the preferred compliance level for each task by having the users perform the task, followed by a set of questionnaires. Three levels of compliance were used in the user trial for both tasks, as shown in Table 4.10.

Table 4.10. Three levels of compliance [deg/Nm]

           1st axis   2nd axis   3rd axis
Level 1    1.232      0.7815     0.1309
Level 2    2.053      1.303      0.218
Level 3    4.107      2.61       0.436
Compliance is realized at each of the 1st, 2nd, and 3rd axes in the form of joint compliance, taking the link lengths into consideration. In Table 4.10, Level 1 is the lowest compliance level (the highest stiffness) and Level 3 is the highest compliance level (the lowest stiffness). Based on the investigation of human arm compliance in [24], we designed the three levels so that the first is lower than that of the human arm, the second is nearly the same, and the third is higher.

² Predefined evaluation aspects are diverse due to each subsystem's unique characteristics.
For the shaving task, we applied the three levels of compliance with the six subjects and found that the preference of the handicapped users is distributed as shown in Fig. 4.32³. Fig. 4.32 (a) and (b) show the degree of safety and the degree of satisfaction for each compliance level, obtained from the interviews for the shaving task. As shown in Fig. 4.32, Level 2 and Level 3 receive higher preference with regard to safety and comfort. This result shows that, for the shaving task, the trial subjects prefer compliance Level 2, which corresponds to the compliance of a weak nurse's arm [24]. In the face-cleaning task, we found that all three levels of compliance in Table 4.10 fail to give a satisfactory result: the degree of satisfaction is very low. We learned that the six subjects preferred a compliance lower than the values presented in Table 4.10, which means that the face-cleaning task needs a stronger contact force with higher stiffness to be performed satisfactorily. We also found it desirable to hold a towel at a fixed position and let the trial subjects move their heads to clean their faces.
a
b
Fig. 4.32. Evaluation of shaving task by the subjects: (a) safety, (b) ease of use
4.3.5.2 Visual Servoing
The visual servoing subsystem consists mainly of the newly developed stereo camera head in an eye-in-hand configuration, a function for object recognition, a function for face recognition, and a function for intention reading based on facial expression recognition. In the user trials, we asked all the subjects for their opinions on the physical appearance of the visual servoing mechanism. Fig. 4.33 shows the evaluation summary of the visual servoing subsystem. As shown in Fig. 4.33, the stereo camera head should be redesigned to provide a more human-friendly appearance. Also, most of the subjects pointed out some difficulty with the language of our GUI due to its small font size and English-language expressions. For easy use and a higher satisfaction degree, the GUI should be organized in the users' mother tongue with a bigger character size. Concerning the object recognition function, the set of recognizable objects should be predefined by the user candidates and should be enlarged so as to be capable of handling various
³ In every figure, each degree of satisfaction is represented by a boxplot [8].
objects encountered in real life. Finally, we report that all the users were satisfied with the intention reading function. For more general usage of intention reading, recognition of other parts of the face (besides the mouth) is recommended.
Fig. 4.33. Results from user trials for visual servoing
4.3.5.3 Eye-Mouse
We have carried out user trials with the proposed Eye-mouse system. The accuracy of the estimated eye-gaze direction in Table 4.11 was obtained with a distance of about 500 mm between the user and the monitor. The two numbers in each cell of the first and second rows of the table are the horizontal error and the vertical error, respectively. From these errors, a suitable size for the buttons of the interface program was computed, as shown in the third row of the table. The fourth row gives the corresponding maximum resolution for a 15″ monitor. Note that the result is worse than the resolution (14×12) obtained in the laboratory [48], and that the results differ between users. The possible reasons are as follows:
1. To extract the eye-gaze direction using the Eye-mouse, the centre coordinate of the eye must be obtained anatomically with respect to the receiver of the magnetic sensor.
2. Since eye motions include saccades, concentration is needed for the users to fix their gaze on one point.
3. Because the proposed pupil-tracking method is vision-based, it can be affected by the illumination of the surroundings.
From the survey results shown in Fig. 4.34, we may say that most of the users were satisfied with the structure of the interface program. In particular, the convenience of the additional 'OK' button for the click operation was praised. The users also said that it is easy to control the pan/tilt unit thanks to the 'automatic pushing' function triggered when the mouse pointer moves into a button. The satisfaction degree
regarding the design and the ease of wearing the system turned out to be acceptable. The users wanted the system to be light and not too tight. They also pointed out the problem of perspiration at the contact surface. Since the proposed method is a contact method, tiredness will be a critical problem when using the device for a long period of time; thus, the level of tiredness needs to be observed over a longer period of user trials.

Table 4.11. Experimental results of accuracy of Eye-mouse

Subject                             1              2             3             4              5              6
Error Mean (pixel)                  (-36.1, -6.7)  (6.3, -7.1)   (1.3, 20.0)   (-44.5, -8.9)  (-49.9, -4.8)  (66.4, -5.7)
Error STD (pixel)                   (50.4, 45.1)   (98.4, 88.4)  (32.6, 15.4)  (63.9, 79.8)   (46.1, 34.6)   (31.3, 43.1)
Possible Button Size (mm)           51.5 × 31.1    62.3 × 57.2   20.2 × 21.2   64.6 × 53.2    57.2 × 23.6    58.2 × 29.2
Possible Monitor Resolution (15″)   5.9 × 7.4      4.9 × 4.0     15.1 × 10.8   4.7 × 4.3      5.3 × 9.6      5.2 × 7.9
Fig. 4.34. Results of inquiries about Eye-mouse
4.3.5.4 Head & Shoulder Interface
Two experiments were performed to evaluate the performance of the head and shoulder interfaces, each of which produces 2-DOF signals. The experimental procedure is as follows:
1. Wear the new interface and try to produce the four direction signals: forward, backward, right, and left. This step serves to familiarize the user with the new interface.
2. Operate the real wheelchair while adapting oneself to the moving mechanism with the new interface.
3. While driving the wheelchair along the predefined path shown in Fig. 4.35, measure the total elapsed time from start to end point, the number of collisions, and the recognition rate of the interface.
4. Evaluate the performance of the new interface.
Fig. 4.35. Predefined path for wheelchair control
The results for wheelchair control along the path⁴ are given in Table 4.12 for both the head interface and the shoulder interface. We conducted another experiment in which the same procedure was carried out by an able-bodied person operating a joystick by hand; in this case the elapsed time was 21 s and the average number of collisions was 0.5. Compared with this able-bodied joystick case, the traveling time of the handicapped subjects was about twice as long, and collisions occurred more frequently, as many as five times. However, people with spinal cord injury do not need to spend much adaptation time to control the powered wheelchair with our new interfaces: they drove the wheelchair along the path with two collisions on average. This user trial result shows great potential for applicability, in that even the severely handicapped can control the powered wheelchair without any assistance. The results also show that our interfaces may not work perfectly for all handicapped users. In the shoulder interface test, the first and the fifth subjects could not use their shoulders independently, so the experiment was not carried out with them; in the head interface test, the first subject had difficulty tilting the head for the roll motion and could not turn in the appropriate direction. From the survey results in Fig. 4.36 (a) and (b), we may say that the subjects were satisfied with the overall impression of both interfaces. Most of the subjects who used the shoulder interface worn under an overcoat said that it is a very attractive device because it does not draw other people's attention.
⁴ Based on Jones et al. [35], we modified the path.
Table 4.12. Experimental results of the two interfaces (—: experiment not carried out; see text)

Interface type            Head interface                    Shoulder interface
Subject                   1    2    3    4    5    6        1    2    3    4    5    6
Elapsed time (s)          —   45   42   58   92  120        —   34   30  113   —   28
Number of collisions      —    2    0    2    2    5        —    2    3    2   —    1
Recognition rate (%)      —   80   80   65   80   60        —   80   65   55   —   80
a
b
Fig. 4.36. Survey results about the two interfaces: (a) head interface, (b) shoulder interface
4.3.5.5 EMG Interface
Wheelchair control was also performed with the EMG interface. To assess the performance of the EMG interface objectively, we measured the total elapsed time and the number of collisions while driving along the predefined path shown in Fig. 4.35; after the experiment, we interviewed the users with a questionnaire for a subjective evaluation. We tested the two control modes shown in Table 4.13. The difference between Mode 1 and Mode 2 is the command for forward movement. In Mode 1, the wheelchair goes forward only while the user holds the forward command motion (both shoulders up). In Mode 2, the same motion acts as a toggle switch that makes the wheelchair go straight or stop according to its current state. We attached four electrodes (two channels, bipolar type) to both trapezius muscles for measuring the EMG signals.

Table 4.13. Motion commands for controlling the wheelchair

Motion              Wheelchair motion in Mode 1   Wheelchair motion in Mode 2
Initial state       Stop                          Current state hold (forward/stop)
Both shoulders up   Forward movement              Forward/Stop (toggle)
Right shoulder up   Right movement                Right movement
Left shoulder up    Left movement                 Left movement
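As an illustration of the two command mappings in Table 4.13, the sketch below implements both modes; how a lateral command interacts with the held forward state in Mode 2 is our assumption, since the table does not specify it.

```python
from enum import Enum

class Motion(Enum):
    NONE = 0
    BOTH_UP = 1
    RIGHT_UP = 2
    LEFT_UP = 3

def step_mode1(motion):
    """Mode 1: forward only while 'both shoulders up' is held."""
    if motion is Motion.BOTH_UP:
        return "forward"
    if motion is Motion.RIGHT_UP:
        return "right"
    if motion is Motion.LEFT_UP:
        return "left"
    return "stop"

def step_mode2(motion, moving):
    """Mode 2: 'both shoulders up' toggles forward/stop; otherwise the
    current forward/stop state is held. Returns (command, new_moving)."""
    if motion is Motion.BOTH_UP:
        moving = not moving
    if motion is Motion.RIGHT_UP:
        return "right", moving
    if motion is Motion.LEFT_UP:
        return "left", moving
    return ("forward" if moving else "stop"), moving
```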
After the experiment, we found that the forward command in Mode 1 made the users tired, because they had to maintain the motion until the wheelchair reached the target position. In Mode 2, however, the users gave the forward command easily by raising both shoulders. The difficulty with the forward command in Mode 2 is the wheelchair's response delay, which makes the user unsure whether the controller has accepted the command correctly. After using the controller for a while, however, the users adapted to this delay and felt more comfortable than with the forward command of Mode 1. The users also commented on the electrode attachment and its outward appearance; some disliked the procedure of attaching electrodes to the skin (including skin preparation). The overall wheelchair control performance of the subjects is not as good as that of able-bodied persons, but we have ascertained that an EMG interface based on head or shoulder movement can be applied as a wheelchair controller for users with spinal cord injury at lesion levels C4 and C5 (see Fig. 4.37).
Fig. 4.37. Results from user trial for EMG interface
4.4 Concluding Remarks
As a preliminary result of developing welfare-oriented robotic systems, we have proposed and implemented several subsystems that help the elderly and the handicapped live independently in daily life. We have further collected feedback from the design stage onward through evaluations by handicapped users. We believe that one of the most important points in welfare-oriented robotic systems is to adopt various service robots that help the inhabitants in many ways. The idea of the Intelligent Sweet Home is at present treated not as science fiction but as an important goal with strong social and economic aspects. Its realization will solve many existing problems of the welfare society and will make life, for the handicapped and the elderly in particular, much more pleasant and easier. We have also shown that, by hybridizing a workstation-based frame and a wheelchair-based one, a novel type of rehabilitation robotic system can be realized
that combines the advantages of the two types, and that a modularized man-machine interface/interaction can be realized to cope with the variety of handicaps. For realizing the user's input commands and the interaction mechanism, various human-robot interfaces, including the Eye-mouse, the head/shoulder interfaces, and the EMG signal interface, were developed to cope with different levels of disability. Based on our experience of developing the Intelligent Sweet Home and KARES II, and on the feedback from the users, we have concluded that the proposed systems need further improvement in several aspects:
− Further study is needed to design a convenient operation methodology of the system for novice users and for long-term handling. More sensitive and wider intention reading capabilities of various kinds are desirable for human-friendly interaction.
− Although each subsystem performs its own functions well, we find that a central decision maker is desirable as a means of communication between subsystems for exchanging necessary information. With such a decision maker installed, each subsystem could work fully without additional software programming; the subsystems would only need to send the information requested by the decision maker. Also, if a new task must be added, the system operator only needs to modify the decision maker. In this way, the system can be made simpler and more flexible.
− It is necessary for each subsystem to check whether the information communication is working properly. This capability is needed to prevent a loss of safety due to communication failure.
Acknowledgement
This research was supported by the Human-friendly Welfare Robot System Engineering Research Center (sponsored by KOSEF) at KAIST and by the Ministry of Science and Technology of Korea as a part of the Critical Technology 21 Program on "Development of Intelligent Human-Robot Interaction Technology". We would like to acknowledge various forms of support from Prof. Ju-Jang Lee, Prof. Byung Kook Kim, Prof. Jin-Oh Kim, Prof. Jong-Tae Lim, Prof. Heyoung Lee and their student staffs in developing the Intelligent Sweet Home, as well as helpful comments on the extended mode of the soft remote control system from Dr. Dimitar Stefanov. We would also like to acknowledge a variety of aid from Prof. Pyung-Hun Chang, Prof. Myung Jin Chung, Prof. Dong-Soo Kwon, and their student staffs in developing the KARES II system, and assistance from Dr. Byung-Sik Kim and his staff of the National Rehabilitation Center, Korea, in the user trials.
References
1. Afma-robots (2003) AFMASTER. http://www.afma-robots.com
2. ASL501 (2003) Model 501. http://www.a-s-l.com/501_home.htm
3. ASL504 (2003) Model 504. http://www.a-s-l.com/504_home.htm
4. Bang W, Stefanov D, Jung J, Kim M, Lee J, Lee H, Bien Z (2001) Human-friendly health monitoring system for service to the elderly and disabled. Proceedings of the 8th International Conference on Rehabilitation Robotics (ICORR2001), Evry Cedex, France, pp 333–339
5. Bien Z, Park KH, Kim JB, Do JH, Stefanov D (2003) User-friendly interaction/interface control of intelligent home for movement-disabled people. Proceedings of the 10th International Conference on Human-Computer Interaction, Crete, Greece, Vol. 4, pp 304–308
6. Bolduc M, Levine MD (1998) A review of biologically motivated space-variant data reduction models for robotic vision. Computer Vision and Image Understanding 69(2): 170–184
7. Bonner S (1998) AID HOUSE: Edinvar housing association smart technology demonstrator and evaluation site. Proceedings of the 3rd TIDE Congress, Helsinki, Finland, pp 396–400
8. Boxplot (2002) http://www.shodor.org/interactivate/activities/boxplot/
9. Care-O-bot (2003) http://www.care-o-bot.de/english/Care-O-bot_2.php
10. Chang PH, Park HS (2003) Development of a robotic arm for handicapped people: a task-oriented design approach. Autonomous Robots 15(1): 81–92
11. Chang PH, Park HS, Park J, Jung JH, Jeon BK (2001) Development of a robotic arm for handicapped people: a target-oriented design approach. Proceedings of the 7th International Conference on Rehabilitation Robotics (ICORR2001), pp 84–92
12. Chang PH, Kang SH, Park HS, Kim ST, Kim JH (2003) Active compliance control for the disabled with cable transmission. Proceedings of the 8th International Conference on Rehabilitation Robotics (ICORR2003), Daejeon, Korea, pp 84–87
13. Chen N, Parker GA (1994) Inverse kinematic solution to a calibrated PUMA 560 industrial robot. Control Engineering Practice 2: 239–245
14. Choi J (2001) Design of a behavior-based controller using a novel camera head and its application to service robots (in Korean). MS Thesis, KAIST, Korea
15. Colello MS, Mahoney RM (2002) Commercializing assistive and therapy robotics. Universal Access and Assistive Technology, Keates S et al. (eds), pp 223–234
16. Conte G, Longhi S, Zulli R (1996) Motion planning for unicycle and car-like robots. International Journal of Systems Science 27(8): 791–798
17. Craig JJ (1989) Introduction to robotics: mechanics and control. Addison-Wesley Publishing Co.
18. Dallaway JL, Jackson RD, Timmers PHA (1995) Rehabilitation robotics in Europe. IEEE Transactions on Rehabilitation Engineering 3: 35–45
19. Do JH, Kim JB, Park KH, Bang WC, Bien ZZ (2002) Soft remote control system using hand pointing gesture. International Journal of Human-friendly Welfare Robotic Systems 3(1): 27–30
20. Ebisawa Y (1998) Improved video-based eye-gaze detection method. IEEE Transactions on Instrument and Measurement 47(4): 948–955
21. Elger G, Furugren B (1998) SmartBO – an ICT and computer-based demonstration home for disabled people. Proceedings of the 3rd TIDE Congress, Helsinki, Finland, pp 392–395
22. Erlandson RF (1995) Applications of robotic/mechatronic systems in special education, rehabilitation therapy, and vocational training: a paradigm shift. IEEE Transactions on Rehabilitation Engineering 3: 22–32
23. Glenstrup AJ, Engell-Nielsen T (1995) Eye controlled media: present and future state. B.S. Dissertation, Copenhagen University
24. Gomi H, Kawato M (1997) Human arm stiffness and equilibrium-point trajectory during multi-joint movement. Biological Cybernetics 76: 163–171
25. Hammond J, Sharkey P, Foster G (1996) Integrating augmented reality with home systems. Proceedings of the 1st International Conference on Disability, Virtual Reality and Associated Technologies ECDVRAT '96, pp 57–66
26. Han JS, Bang WC, Bien ZZ (2002) Feature set extraction algorithm based on soft computing techniques and its application to EMG pattern classification. Journal of Fuzzy Optimization and Decision Making 1: 269–286
27. Harwin WS, Rahman T, Foulds RA (1995) A review of design issues in rehabilitation robotics with reference to North American research. IEEE Transactions on Rehabilitation Engineering 3: 3–13
28. Hillman M (1998) Introduction to the special issue on rehabilitation robotics. Robotica 16: 485
29. Hillman M, Hagan K, Hagan S, Jepson J, Orpwood R (2002) The Weston wheelchair mounted assistive robot – the design story. Robotica 20: 125–132
30. Isard M, Blake A (1998) Condensation-conditional density propagation for visual tracking. International Journal of Computer Vision 29(1): 5–28
31. ISRA (1995) The service robot market, an in-depth study from the international service association. ISRA
32. IST-MATS (2003) http://www.bcdi.be/en/projects/data.html
33. Iwata H, Hoshino H, Morita T, Sugeno S (1999) A physical interference adapting hardware system using MIA arm and humanoid surface covers. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp 1216–1221
34. Johnson DE (1975) Rapid practical designs of active filters. John Wiley & Sons
35. Jones DK, Cooper RA, Albright S, DiGiovine M (1998) Powered wheelchair driving performance using force- and position-sensing joysticks. Proceedings of the IEEE 24th Annual Northeast Bioengineering Conference, pp 130–132
36. Jruger V (1995) Optical flow computation in the complex logarithmic plane. Diploma Thesis, University of Kiel, Germany
37. Jung JW, Lee CY, Lee JJ, Bien ZZ (2003) User intention recognition for intelligent bed robot system. Proceedings of the 8th International Conference on Rehabilitation Robotics (ICORR2003), Daejeon, Korea, pp 100–103
38. Kawarada A, Takagi T, Tsukada A, Sasaki K (1998) Evaluation of automated health monitoring system at the 'welfare techno house'. Proceedings of the 20th IEEE/EMBS, pp 1984–1987
39. Kawamura K, Isakarous M (1994) Trends in service robots for the disabled and the elderly. Proceedings of IROS'94, pp 1647–1654
40. KIHASA (2000) National survey of the disabled persons. Korea Institute for Health and Social Affairs
41. Kim CH, Jung JH, Kim BK (2003) Design of intelligent wheelchair for the motor disabled. Proceedings of the 8th International Conference on Rehabilitation Robotics (ICORR2003), Daejeon, Korea, pp 92–95
42. Kim DH, Kim JH, Chung MJ (2001) A computer interface for the disabled using eye-gaze information. International Journal of Human-friendly Welfare Robotic Systems 2(3): 22–27
43. Kim DJ, Song WK, Han JS, Bien Z (2003) Soft computing based intention reading techniques as a means of human-robot interaction for human centered system. Journal of Soft Computing 7: 160–166
44. Kim JH, Lee BR, Kim DH, Chung MJ (2003) Eye-mouse system for people with motor disabilities. Proceedings of the 8th International Conference on Rehabilitation Robotics (ICORR2003), Daejeon, Korea, pp 159–163
45. Kim Y, Park KH, Seo KH, Kim CH, Lee WJ, Song WG, Do JH, Lee JJ, Kim BK, Kim JO, Lim JT, Bien ZZ (2003) A report on questionnaire for developing Intelligent Sweet Home for the disabled and the elderly in Korea living conditions. Proceedings of the 8th International Conference on Rehabilitation Robotics (ICORR2003), Daejeon, Korea, pp 171–174
46. Krebs HI, Hogan N, Volpe BT, Aisen ML, Edelstein L, Diels C (1999) Robot-aided neuro-rehabilitation in stroke: three-year follow-up. Proceedings of ICORR1999, pp 34–41
47. Kwee HH (1998) Integrated control of MANUS manipulator and wheelchair enhanced by environmental docking. Robotica 16(5): 491–498
48. Lee BR (2002) A real-time eye-gaze tracking system using infrared rays and vision sensor (in Korean). M.S. Dissertation, Korea Advanced Institute of Science and Technology
49. Lee H, Bien Z (2002) Variable bandwidth filter for reconstruction of bio-medical signals with time-varying instantaneous bandwidth. Proceedings of the 2nd Joint EMBS/BMES Conference, Houston, USA, pp 141–142
50. Lee K, Kwon DS (2000) Sensors and actuators of wearable haptic master device for the disabled. Proceedings of the 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp 371–376
51. Lee K, Kwon DS (2001) Wearable master device for spinal injured persons as a control device of motorized wheelchairs. Journal of Artificial Life and Robotics 4(4): 182–187
52. Lum PS, Burgar CG, Shor PC, Majmundar M, Van der Loos HFM (2002) Robot-assisted movement training compared with conventional therapy techniques for the rehabilitation of upper limb motor function after stroke. Archives of Physical Medicine and Rehabilitation 83: 952–959
53. Martens C, Ivlev O, Graser A, Lang O, Ruchel N (2001) A FRIEND for assisting handicapped people. IEEE Robotics and Automation Magazine 8(1): 57–65
54. Martens C, Kim DJ, Han JS, Graeser A, Bien Z (2002) Concept for a modified hybrid multi-layer control architecture for rehabilitation robots. Proceedings of the 3rd International Workshop on Human-friendly Welfare Robotic Systems, Daejeon, Korea, pp 49–54
55. Mozer MC (1999) An intelligent environment must be adaptive. IEEE Intelligent Systems and Their Applications 14(2): 11–13
56. Nakata T, Sato T, Mizoguchi H, Mori T (1999) Synthesis of robot-to-human expressive behavior for human-robot symbiosis. Proceedings of the 1996 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp 1608–1613
57. NSO (2001) The future estimated population. National Statistical Office
58. Park KH, Bien ZZ (2003) Intelligent Sweet Home for assisting the elderly and the handicapped. Proceedings of the 1st International Conference on Smart Homes and Health Telematics (ICOST2003), Paris, France, pp 151–158
59. Peters II RA, Bishay M, Cambron ME, Negishi K (1996) Visual servoing for service robot. Robotics and Autonomous Systems 18: 213–224
60. Rao R, Agrawal SK, Scholz JP (2000) A robot test-bed for assistance and assessment in physical therapy. Advanced Robotics 14(7): 565–578
61. SMI (2003) 3D VOG Video-oculography. http://www.smi.de/3d/index.htm
62. Song WG, Lim JT (2003) Design and management in smart home. Proceedings of the 1st International Conference on Smart Homes and Health Telematics (ICOST2003), Paris, France, pp 33–37
63. Song WK, Bien Z (2003) Blend of soft computing techniques for effective human-machine interaction in service robotic systems. Fuzzy Sets and Systems 134: 5–25
64. Song WK, Lee H, Bien Z (1999) KARES: Intelligent wheelchair-mounted robotic arm system using vision and force sensor. Robotics and Autonomous Systems 28(1): 83–94
65. Townsend WT (1988) The effect of transmission design on force-controlled manipulator performance. PhD Thesis, MIT
66. Van der Loos HFM (1995) VA/Stanford rehabilitation robotics research and development program: lessons learned in the application of robotics technology to the field of rehabilitation. IEEE Transactions on Rehabilitation Engineering 3: 46–55
67. Yamaguchi A, Ogawa M, Tamura T, Togawa T (1998) Monitoring behavior in the home using positioning sensors. Proceedings of the 20th IEEE/EMBS, pp 1977–1979
68. Yoo DH, Chung MJ (2002) Vision-based eye gaze estimation system using robust pupil detection and corneal reflections. International Journal of Human-friendly Welfare Robotic Systems 3(4): 2–8
5 „FRIEND“ – An Intelligent Assistant in Daily Life
O. Kouzmitcheva, C. Martens, A. Pape, H. She, I. Volosyak, and A. Gräser
Abstract
Research and development in the field of rehabilitation robotics has produced a multiplicity of rehabilitation robots available as off-the-shelf products or laboratory prototypes (e.g. [1]). The application scope of these systems is large and covers areas such as support for everyday tasks, assistance in vocational surroundings, and support in health care. An in-depth analysis reveals that rehabilitation robots intended for flexible use, rather than for individual special applications, offer services only on a relatively low level of abstraction, i.e. the direct low-level control of the system [2] remains with the user. This leads to a high cognitive load with accompanying loss of concentration, especially for persons depending on interfaces like speech control or eye movement trackers. In order to relieve the users of this kind of tiresome control, the treatment of tasks on a higher abstraction level becomes desirable [3]. The system should be able to perform chains of actions that recur in daily life tasks autonomously and/or with the minimum necessary user interaction.
5.1 Basic Concepts and Hardware
5.1.1 The FRIEND Project
With the motivation to overcome the situation explained above, the Institute of Automation (IAT), University of Bremen, has been developing the rehabilitation robotic system FRIEND since 1997. The system belongs to the category of "intelligent wheelchair-mounted manipulators". It focuses on users with high spinal cord injury who are unable to control the manipulator by means of a keyboard or joystick. The system shall offer support during daily life activities, so that its users become independent of care personnel. The strategic objective of the FRIEND project is to offer life autonomy for approximately 2 hours per day. Besides the fact that this is one of the main requirements mentioned by the users, the fulfilment of this objective would have a strong impact on the economic acceptance of the rehabilitation robotic system. In order to reach this strategic objective, the IAT has chosen the so-called "pour a drink task" as the first daily life task to be offered by the FRIEND system. The realization of a robust and (semi-)autonomous execution of this task unveils a number of challenging technical problems to be solved that are also representative
for other tasks. Therefore, the investigation and realization of the "pour a drink task" have the potential to yield a general method for robust high-level task execution in rehabilitation robotic systems. Within this chapter, the hardware and software structure of FRIEND are presented; afterwards, Section 5.2 focuses on the realization of the "pour a beverage task" with the help of FRIEND.
5.1.2 Hardware Structure of FRIEND
FRIEND (see Fig. 5.1) consists of an electric wheelchair (SPRINT, Meyra, Germany) and a 6-DOF robot arm (MANUS, Exact Dynamics, Netherlands). The arm is connected via a CAN bus interface to a PC mounted on the back of the wheelchair. As a user interface, an off-the-shelf speech recognition system is used; other input devices that are better adapted to the user's needs may replace speech recognition.
Fig. 5.1. Front view of FRIEND
FRIEND operates in a flexible, human-centred environment and must be able to react dynamically to environmental changes during robotic arm movements. For this purpose, the system possesses a number of sensors for environmental perception. For visual perception, the system is equipped with an adjustable stereo-camera system, mounted behind the user's neck, and a gripper-mounted camera. Using visual sensors for environmental perception has two sides: on the one hand, it offers the highest amount of information about the current environmental state.
On the other hand, the complexity of signal processing and pattern recognition algorithms based on visual information is tremendous, and in many cases the reliability of the perceived information is insufficient for robust task execution. Therefore, the IAT developed a "smart" tray that is mounted at the front side of the wheelchair. The "smart" tray (see Fig. 5.2) provides information about the weights of objects placed on it through a built-in scale, and the location of objects relative to a coordinate system fixed to the tray is measured through a touchpad. As shown in the succeeding sections, the reliable information offered by the tray, in combination with the visual information perceived by the cameras, facilitates robust task execution. Besides the sensor systems mentioned so far, the gripper of the robotic arm is equipped with a force-sensitive foil to perform sensitive grasping actions.
a
b
Fig. 5.2. Smart Tray. a top-view on scale surface, b matrix foil of position sensor
5.1.3 Multi-layered Control Architecture of FRIEND
The integration of sensors, actuators and human-machine interfaces into a computer program is a challenging task from the viewpoint of software engineering. A control architecture is required that offers an infrastructure for task planning, user interaction and resource administration, as well as for the dynamic activation and deactivation of closed-loop control processes for the realisation of reactive operations. A dominating design principle for this kind of problem is the hybrid multi-layer architecture. This type of architecture combines the main design principles for autonomous physical agents, i.e. the reactive and the deliberative agent. Example architectures based on this principle are 3T [4], TCA [5] and SmartSoft [6]. Hybrid multi-layer architectures consist of three layers. The bottom layer realizes the reactive part of the robotic system (the autonomous physical agent), i.e. the direct coupling of sensorial input to actuator control. Within this layer,
concurrently running and interacting behaviours determine the interaction of the system with the environment. The top layer, called the deliberator, plans operators on the highest level of abstraction; here, methods from the field of classical artificial intelligence are used. Between these two layers resides the sequencer. The sequencer creates task schemes based on reactive behaviours that are offered as higher-level abstraction operators to the deliberator. Besides this "gluing functionality", this layer is responsible for deadlock-free execution of reactive behaviours that have to share limited system resources. Even though the latter aspect is crucial for safe execution, the avoidance of such situations cannot be guaranteed by some implemented architectures [5]. Here, the formal specification and analysis of all possible action sequences on the sequencer level, as presented in Section 5.2.5, is necessary.
As shown in [7], traditional three-layered control architectures do not meet the needs of rehabilitation robotic systems. For this purpose, a modified layered control architecture was designed that enables autonomous execution interrupted by man-machine interactions (see Fig. 5.3). The main design principle is still based on the hybrid multi-layer concept, so that the integration of reactive behaviours as well as deliberative capabilities remains possible. Within the modified architecture, a human-machine interface (HMI) replaces the deliberator. Additionally, the architecture offers a direct control path from the HMI to the actuators and sensors; the system can rely on the user's cognitive capabilities whenever necessary. The sequencer plays the role of a discrete event controller (DEC) that is responsible for the proper generation of action sequences related to high-level commands. The arrow from the HMI to the sequencer indicates such a command, like pouring and offering a drink.
After the receipt of a command, the sequencer performs the following steps. First, it loads command-related task knowledge to fix the information necessary for determining the current internal and environmental state. The state information is required for the succeeding generation of the command-related action sequence and is stored within the symbolic part of the world model. On this level of abstraction, a state is defined as a binary vector in which each element represents a single fact about the environment or the system itself. The facts necessary for a complete state description are determined during the process of task knowledge modelling. In order to acquire the current state at the beginning of task execution, the sequencer queries the world model about the desired facts. If no information is available, monitoring commands or user interactions are activated by the sequencer in order to update the state description within the world model. After the initial state has been determined successfully, the sequencer generates sequences of executable elementary operations (EEOPs) that are passed over to the reactive layer or the human-machine interface. From the sequencer's point of view, EEOPs fall into the following categories:
• Reactive operators (direct sensor and actuator coupling, like visually controlled gripping of an object)
• Monitoring commands (e.g. identification of objects)
• User commands (e.g. moving a camera in the direction of an object)
• Calculation operators (e.g. trajectory planning)
• Direct actuator control (e.g. gripper movements).
Fig. 5.3. Modified hybrid multi-layer control architecture
In order to indicate its execution result, an EEOP returns a discrete value, like Success or NotFound, to the sequencer. Depending on this value, the sequencer decides whether to proceed with the execution or to modify the generated sequence, i.e. to perform a re-planning step. The reactive layer is located on the lowest level of the control architecture. This layer consists of software servers that encapsulate the hardware of sensors (e.g. cameras) and actuators (e.g. the robot arm). Each server offers its services, i.e. monitoring operations or actuator control, via client objects. Each client object can be
instantiated within the context of another process, i.e. the sequencer or the human-machine interface. By means of the client objects, EEOPs for monitoring, direct control, reactive operation or user interaction are constructed. For instance, if a reactive operation has to be performed, e.g. visually guided grasping of objects, the necessary clients for sensor input and actuator control are instantiated and connected dynamically within the context of the EEOP. The exchange of information between EEOPs is performed via the sub-symbolic part of the world model. Here, besides the administration of the required resources, the sequencer controls the related flow of information. For instance, if visual position information about an object is required by a reactive operation, a monitoring operation that produces this information has to be executed first. If the monitoring operation fails, the sequencer is informed via an appropriate return value and can react to this unforeseen situation. The following section describes the development of the executable elementary operators that are necessary for the execution of a "beverage serving task". First, each operator and its integration into the FRIEND system are described separately by means of a stepwise explanation of the task scenario. Afterwards, a task planning approach that makes use of these operators and that is used within the control architecture is presented.
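Before turning to the operators themselves, the sketch below illustrates, in Python, the EEOP/sequencer interaction described above; the class and value names are ours, not FRIEND's actual software interfaces.

```python
from enum import Enum

class Result(Enum):
    SUCCESS = 0
    NOT_FOUND = 1
    FAILURE = 2

class EEOP:
    """Executable elementary operation: runs with dynamically connected
    sensor/actuator clients and reports a discrete result."""
    def execute(self, world_model):
        raise NotImplementedError

class Sequencer:
    def run(self, eeops, world_model):
        for op in eeops:
            result = op.execute(world_model)
            if result is not Result.SUCCESS:
                # unforeseen situation: trigger a re-planning step
                return self.replan(op, result, world_model)
        return Result.SUCCESS

    def replan(self, failed_op, result, world_model):
        # placeholder: the chapter's planner is introduced in Sect. 5.2.5
        ...
```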
5.2 Application and Control
This section explains the realization of the "beverage serving task" by means of the FRIEND system. First, the scenario is described on an abstract and intuitive level. This sketchy explanation is followed by a detailed description of the technical realisation of the executable elementary operators for object detection and object manipulation, such as visually guided object grasping, obstacle avoidance, and the pouring process itself. To obtain robust behaviour, the pouring operation is executed under closed-loop control that exploits weight information from the "smart" tray and adapts a trajectory obtained via demonstration-based programming. A conclusion of the task explanation is that task planning becomes a prerequisite if autonomous task execution is required. Therefore, a real-time-capable task planning approach based on enhanced assembly planning methods is introduced. The resulting task planning component is integrated into the sequencer of FRIEND's control architecture and makes use of its executable elementary operators. Finally, the demonstration-based programming method for the creation of operators is presented in more detail. This programming method helps to create executable elementary operations more easily than traditional engineering approaches, which enhances the system's flexibility and robustness during task execution.
5.2.1 The „Beverage Serving“ Task
The scenario can be described as follows: a person (e.g. care personnel) arbitrarily places a glass and an already opened bottle on top of the tray. Afterwards, the user enters the single command "Serve Drink", and the system shall fill the glass with the beverage from the bottle and move it to the user's mouth autonomously. After the user finishes drinking, the glass is put back on the tray, ready for the next drink-serving process. Even though some hard restrictions, like an empty glass and an already opened bottle, have been introduced, the autonomous execution of this task is still a great technical challenge. First, the system has to detect and grasp the bottle. Second, it has to locate the neck of the bottle relative to the glass to perform the succeeding pour-in action. Third, the pouring process has to be performed and observed to fill the glass sufficiently. Afterwards, the bottle has to be placed back on the tray and the glass has to be grasped. Finally, the glass has to be moved into the vicinity of the user's mouth. In the following, the realization of these steps is described in detail.
5.2.1.1 Object Detection
The ability of object detection is a fundamental requirement for autonomous task execution. Within this scenario, object detection based on natural characteristics is required in order to avoid artificial object markings. It turned out that the colour of an object is suitable for this purpose. Unfortunately, using colour information introduces some additional problems compared to grey-value image processing. External influences like variable lighting conditions can cause large fluctuations in objects' colours, and differently coloured objects may show the same colour within the captured image. In order to overcome this problem, our object detection algorithm makes use of the HSV (hue-saturation-value) colour space. The main advantage of this colour space is that the HS channels of objects of the same colour are almost independent of changes in illumination conditions. On the basis of this colour space, a new method for automatic object recognition based on a fuzzy decision system is used for object classification [8]. The result of the method is shown in Fig. 5.4 (a), (b). The system detects the bottle as well as the glass in both images of the stereo-camera system, symbolized by the bounding ellipses in the pictures to the right. The detection process works at a frame rate of 10 fps, achieved on a Pentium IV 2.8 GHz platform with an image size of 512 × 256 pixels. This frame rate is necessary for smooth movements of the robot arm when it operates in visual servoing control mode. The main disadvantage of the presented object identification and detection method is that it does not reason about the detected objects: within a cluttered and unstructured environment, wrong objects might be identified because of changing illumination conditions. By using the 'smart' tray, we can cope with this problem, since the information from the tray allows unreasonable hypotheses to be excluded. Fig. 5.4 depicts the objects needed for the pouring action, identified by using image information in combination with the touchpad.
a
b
c
d
Fig. 5.4. Object identification. a stereo camera view, b detected objects, c raw sensor information, d touchpad view
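As an illustration of the HSV-based idea, the following sketch locates an object by its hue/saturation signature and returns a bounding ellipse as drawn in Fig. 5.4; the thresholding is ours, whereas the chapter's actual classifier is a fuzzy decision system [8].

```python
import cv2
import numpy as np

def detect_colored_object(bgr, h_range, s_range):
    """Find the largest region whose hue and saturation fall in the given
    ranges; V is ignored because H and S vary little with lighting."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, _ = cv2.split(hsv)
    mask = ((h >= h_range[0]) & (h <= h_range[1]) &
            (s >= s_range[0]) & (s <= s_range[1])).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    big = [c for c in contours if len(c) >= 5]  # fitEllipse needs >= 5 pts
    if not big:
        return None
    largest = max(big, key=cv2.contourArea)
    return cv2.fitEllipse(largest)  # bounding ellipse, as in Fig. 5.4
```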
5.2.1.2 Object Approaching, Grasping, and Crossover
Based on the image information obtained by object detection, the pouring action can be performed. In order to implement this part of the scenario, the gripper of the robot arm has to grasp the identified bottle and move it to the vicinity of the glass to execute the pouring action in closed-loop mode. For this purpose, image-based visual servoing as well as the "look-and-move" approach are used.
Vision-Based Object Manipulation
A human being grasps objects almost invariably with the aid of vision, using visual information to locate and identify objects and to decide how to grasp them. Visual information is also used for obstacle avoidance during the movement of the hand towards the desired object, as well as for the correct alignment of the hand. The latter aspect, i.e. hand-eye coordination, is the basic principle of the visual servoing method for robust robot arm control. The method uses visual information as the feedback value in a closed control loop, so that approaching objects becomes independent of calibration errors of the vision system, like camera or position calibration. In visual servoing, it is necessary to generate gripper motions from visual observation. The visual controller (see Fig. 5.5) uses the location of features on the image plane directly for feedback.
Fig. 5.5. Conventional visual controller
The controller includes an image Jacobian matrix, which describes the relationship between changes in image-plane and in world coordinates [9]. During the control process, the system simultaneously tracks the robot arm and the target, e.g. the bottle. Within this context, the image error $e^I$ (i.e. the control error) is defined as the distance in both images between the reference point $r^I_{actual}$, e.g. the gripper, and the target point $r^I_{desired}$, e.g. the bottle. Driving this error to zero in both images is equivalent to a 3D movement of the gripper into the vicinity of the object to be gripped (see Fig. 5.6; $u^W$ is the output of the controller and $r^W$ the position of the robot gripper in world coordinates). This conventional visual servoing is a well-known method, realised in many applications worldwide. Its disadvantage is that fixed cameras with constant, usually small, focal length are used. This reduces the complexity of the control algorithm, but at the same time restricts the workspace of the system.
Fig. 5.6. Visual servoing based grasping of a glass
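The chapter does not spell out the control law; a standard image-based visual servoing law consistent with Fig. 5.5 (our notation and gain, not the authors' exact formulation) is

$$
e^I = r^I_{desired} - r^I_{actual}, \qquad u^W = \lambda\, J^{+}\, e^I, \qquad \lambda > 0,
$$

where $J^{+}$ denotes the (pseudo-)inverse of the image Jacobian $J$ that maps world-frame gripper velocities to image-plane feature velocities; driving $e^I$ to zero in both images moves the gripper to the target.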
In order to enhance the flexibility of the system, it must be possible to manipulate objects that are not placed on the tray. Even in the "gripping from the tray" scenario, the robot gripper may leave the common field of view of the fixed cameras, so that vision-based control becomes impossible. To overcome this problem, adjustable cameras are used: both cameras are mounted on pan-tilt heads and have variable focal lengths, so that the common field of view covers the whole workspace of the robot arm [10]. To exploit the additional degrees of freedom, two additional control loops (pan-tilt and zoom) are included for each camera (see Fig. 5.7).
Fig. 5.7. Extended visual controller
The pan-tilt control loop keeps the object in the centre of the image and the zoom loop controls the image size so that the objects of interest always have a sufficient resolution (see Fig. 5.8).
Fig. 5.8. Adaptation of image resolution and camera position during pan-tilt head (PTH) control
The application of these enhanced control loops increases the robustness of the visual servoing based grasping process. To increase robustness further, visual information and information from the 'smart' tray are merged; with this additional information, the gripper position can be adjusted with the necessary accuracy.
"Look-and-Move" Based Object Manipulation
First, we briefly recall the classical 'look-and-move' paradigm. This approach uses visual sensor information to compute the 3D position and orientation of the target object in order to report a 3D pose for the robot to achieve. The ability to compute the relative 3D position and orientation of the object in Cartesian space implies that:
1. a 3D model of the object is available,
2. the visual features used to recognize and locate the object are represented in the model,
3. the visual sensor is calibrated in order to be able to estimate Cartesian positions and orientations.
In order to avoid time-consuming calibration and to exclude the necessity of image models for objects, the 'smart' tray is used as an additional sensor. This tray provides information about the placement of the objects relative to the tray coordinate frame. By combining this information with the results of the colour image processing described in Sect. 5.2.1.1, the 3D position of identified objects can be calculated easily with sufficient accuracy. At first glance, this modified 'look-and-move' method seems to be a suitable technique to control the robot arm in order to manipulate different objects. However, it still has some disadvantages:
1. no real-time correction of the robot path is possible;
2. errors in calibration directly affect the accuracy with which the desired position is determined;
3. information about 3D object characteristics, e.g. height, is required in order to compute the target position of the robot arm;
4. the provided sensor information is restricted to the X and Y directions;
5. the number of executable tasks is restricted to a priori known objects that have to be placed on the tray.
In order to overcome the problem of storing object-related data and to be able to grasp and manipulate different objects anywhere in the workspace of the robot arm, a combination of visual servoing and 'look-and-move' is used. This avoids the disadvantages of the look-and-move approach and the inaccuracy of visual servoing due to the small focal length. For grasping and manipulating objects on the tray, the result is as follows:
1. After the first object identification, the robot gripper is moved into the vicinity of the object to be gripped by means of visual servoing.
2. Using the information provided by the 'smart' tray, the gripper position is adjusted with sufficient accuracy.
3. After the gripper has reached its target position relative to the desired object, a pre-programmed grasp action is executed.
In order to execute the pouring task, the bottle has to be moved into a predetermined position relative to the glass. For the execution of this motion, both techniques described above can be used. The difference from the 'gripping task' lies in the definition of the desired position and the reference point for control. In this case, the gripped bottle determines the reference point instead of the gripper. The desired position is calculated with respect to the desired relative position of the bottle and the glass determined by the pouring trajectory. After the desired position is reached, the beverage pouring can start.

5.2.1.3 Beverage Pouring
To guarantee that no beverage splashes out of the glass, the pouring trajectory as well as the flow have to be controlled. The pouring trajectory is determined by the movement of the bottle tip with respect to the glass and the slope of the bottle. While it is difficult to model this trajectory, it is much easier to obtain it through
demonstration [11]. That is, a human instructor demonstrates how the bottle moves during a pouring process. The demonstrated movement is measured and recorded with a position sensor providing 6-DOF information. This information is transformed to obtain the pose of the bottle's tip with respect to the centre of the glass opening. Hence, the acquired pouring trajectory is independent of specific bottles and glasses, and can easily be applied in a pouring task for various objects. In addition, the demonstrated trajectory is chosen as a general one, which covers most pouring processes regardless of the filling level. For utilization in the system, the trajectory is stored in the form of a list data structure. Figure 5.9 shows the whole procedure. As a result, a human being's experience of beverage pouring has been observed and stored in the robot system.
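A possible form of this list data structure is sketched below; the sampling parameters, field names, and the transform routine are illustrative assumptions.

```python
import time

def record_pouring_trajectory(sensor, to_tip_in_glass_frame,
                              duration_s=40.0, period_s=0.2):
    """Record a demonstrated pouring movement as a list of samples.

    sensor.read() is assumed to return the 6-DOF pose of the position
    sensor attached to the bottle; the supplied transform maps it to the
    pose of the bottle tip relative to the centre of the glass opening,
    which makes the stored trajectory object-independent.
    """
    trajectory = []
    t0 = time.time()
    while time.time() - t0 < duration_s:
        pose = sensor.read()                       # raw 6-DOF sample
        trajectory.append(to_tip_in_glass_frame(pose))
        time.sleep(period_s)                       # e.g. 200 ms sampling period
    return trajectory
```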
Fig. 5.9. Generation of the pouring trajectory. a Demonstration, b List of trajectory, c Transformation
When a pouring process begins, the acquired trajectory is used as a reference for the movement of the bottle, while the actual pose of the bottle is controlled by the beverage flow, i.e., the actual trajectory is a modification of the originally observed trajectory. Since the initial filling level of the bottle is unknown, the observed trajectory may be repeated partly or wholly until the desired filling weight is achieved. If the whole trajectory has been executed and the filling set point is still not reached, the system stops the pour-in task and issues a warning message, because the bottle is empty.
To eliminate the influence of the initial conditions on the actual pouring task, such as the shape of the bottle and the glass and the initial liquid level, a closed control loop is applied. The 'smart' tray measures the filling weight and derives the beverage flow. Figure 5.10 shows the set-up when beverage pouring begins.
Fig. 5.10. System set-up for pouring beverage
The control loop is shown in Fig. 5.11, where Win, Fref and Factual stand for the desired weight to be filled, the desired flow range, and the actual flow, respectively. As a result, the pouring process is accomplished without user interaction. Thanks to the closed-loop control, a very simple structure is able to handle different initial and boundary conditions.
Fig. 5.11. Schema of pouring control loop
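One possible realisation of this loop is sketched below: the demonstrated trajectory supplies the bottle motion, while the flow derived from the tray's weight signal gates the progress along it. All interfaces, thresholds, and the repetition limit are assumptions for illustration.

```python
def pour(trajectory, tray, bottle, w_in, f_min, f_max, max_passes=3):
    """Closed-loop pouring: replay the demonstrated trajectory,
    modulated by the measured flow (F_actual) and the filling weight.
    w_in: desired weight to be filled; [f_min, f_max]: flow range F_ref.
    """
    for _ in range(max_passes):            # trajectory may be repeated
        for pose in trajectory:
            bottle.move_to(pose)
            if tray.filled_weight() >= w_in:
                return True                # filling set point reached
            flow = tray.flow()             # derivative of the weight signal
            if flow > f_max:
                bottle.tilt_back()         # too fast: beverage may splash
            # flow < f_min: keep advancing along the trajectory
    return False                           # set point never reached: bottle empty
```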
5.2.1.4 Put an Object Down
After the pouring process is finished, the bottle has to be placed back on the tray. Visual servoing, as used by a human, would imply that:
1. Image processing is able to recognize the tray shape and all objects which are still placed on it;
2. A free position on the tray has been calculated in both camera images;
3. The contact of the two planes in the stereo images (the object resting on the tray bottom) must be determined.
The visual servoing procedure is very robust and accurate as long as the image error between two real objects has to be reduced. In the case that the image error is defined as the difference between two virtual points in the image, the inaccuracy of the cameras and the correspondence problem in image processing become noticeable. To realize the put-down action accurately and collision-free, the image processing must recognize small details in the scene with high accuracy. This requires elaborate image-processing and calibration algorithms, which can become very time-consuming. The use of the "smart" tray avoids most of the problems mentioned above. First, the system has to determine a free position based on the touchpad information. In order to simplify the search, the biggest object-free area on the tray is determined (Fig. 5.12).
Fig. 5.12. Flow chart "put down object"
On the basis of this area, the put-down position is calculated relative to the tray coordinate system, and the gripper can be moved above this position. Starting from this position, the gripper is moved downwards towards the tray surface. During this process, the system continuously checks the tray data in order to detect the contact between the object and the tray: if the system detects an increase of the weight and the touchpad detects a "new" object, an additional plausibility check is executed. If the new object appears within the expected area, the system stops the movement of the robot and opens the gripper.

5.2.2 Obstacle Avoidance
In the preceding sections, it was assumed that no obstacle has to be considered. Within realistic scenarios, the possibility of obstacles has to be taken into account and obstacle avoidance has to be performed. First, the obstacles have to be detected. In our scenario, each object that is located between the gripper and the target is treated as an obstacle, regardless of its actual depth position. For the manipulator action, only obstacles that are in the neighbourhood of the manipulator and observable in both camera images are taken into consideration. For simplicity, it is assumed that the obstacles can be distinguished sufficiently from the background by their colour. They are described by surrounding rectangles, and it is assumed that the obstacles, or at least parts of them, are in the same plane as the gripper and the target. To detect an obstacle, a small rectangular area (region of interest, ROI) between the gripper and the object is defined. The area is bounded by the centre of gravity (COG) of the blobs at the target and the blue light emitting diode at the gripper (see Fig. 5.13). If an object is detected in this region within both images and the two ratios of the object width to the ROI width are equal, i.e. w11 : w12 ≅ w21 : w22, it can be concluded that there is an obstacle between the gripper and the target object.
Fig. 5.13. Defining the region of interest. a left camera view, b right camera view
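The width-ratio test can be written down directly; the sketch assumes the blob detection has already returned the object and ROI widths in pixels for both images.

```python
def obstacle_between(w11, w12, w21, w22, tol=0.1):
    """Decide whether a blob inside the ROI is a real obstacle.

    w11, w12: object width and ROI width in the left image (pixels)
    w21, w22: object width and ROI width in the right image (pixels)
    An obstacle between gripper and target shows approximately the
    same width ratio in both camera images (w12, w22 must be non-zero).
    """
    return abs(w11 / w12 - w21 / w22) < tol
```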
After the detection of the obstacle, it is necessary to generate a suitable trajectory which excludes a collision. For the realisation of the manipulator movement
from the start to the target position, the path is divided into discrete steps. The procedure generates a provisional set point for each movement step. Afterwards, this set point is used by the image-based visual servoing system. After each step, the system checks whether the objects are still in the centre of the image and whether the resolution is sufficient; otherwise, the pan-tilt angles and the focal lengths are adjusted. This method guarantees that the robot arm and the objects of interest are kept in the common field of view. A fixed trajectory could be calculated easily with the help of epipolar geometry [12]. However, this would reduce the ability to react to different situations. For example, the robot arm might move out of the field of view while executing a significantly larger obstacle avoidance motion. Another error source could be inadequate camera resolution, so that the gripper cannot be detected in the image. Figure 5.14 shows the provisional set points for the movement of the robot arm. A provisional set point is a virtual point without a detectable image feature in the real scene. The gripper and the object are also represented by virtual points, i.e. the position of the light emitting diode above the gripper has to be transformed to the centre of the gripper to determine the image-based grasping point [13].
Fig. 5.14. Front view of the image
The gripper {G}, the object to be grasped {O}, and the obstacle are shown in Fig. 5.14. The gripper moves to point {1}, where the image processing detects the obstacle. Then, the gripper performs an evasive motion towards {2}. This procedure is repeated until the object {O} is approached. After each movement step, the PTH and zoom control guarantee that the image resolution suffices for gripper as well as object detection within the next step. The foil sensor of the "smart" tray, which offers redundant information about the position and width of objects and obstacles, supports the detection process.
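The stepwise procedure can be summarised as a loop that alternates servoing and camera adjustment. All interfaces below are hypothetical, and the details of the set-point generation are omitted.

```python
def approach_with_avoidance(servo, cameras, detector, set_points, target):
    """Move along provisional set points until the direct path is free.

    set_points: virtual intermediate image points, e.g. {1}, {2}, {3}
    detector:   reports whether an obstacle still blocks the direct path
    """
    for point in set_points:
        servo.move_to(point)           # image-based visual servoing step
        cameras.keep_in_view()         # adjust pan-tilt angles and focal lengths
        if not detector.obstacle_between_gripper_and(target):
            break                      # direct path is free again
    servo.move_to(target)              # final approach to the object
```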
5.2.3 Task Planning
So far, the description of the sequence of operations that are necessary for the execution of the "beverage serving task" was quite straightforward. The bottle has to be grasped, moved into a position relative to the glass so that the pouring process can be started, and placed back on the tray afterwards. A deeper investigation of the situation reveals that the assumptions made implicitly, like "the bottle can be grasped directly" or "the cameras will offer all necessary information about the manipulated object", cannot be guaranteed under real-world conditions. The following example underlines these statements and offers a method to derive plans for the control of unknown situations. The method is explained for a particular case. In order to perform a pouring action safely, the system has to determine whether the bottle can be grasped directly or whether an object that is standing close to the bottle has to be relocated first, so that the bottle can be grasped afterwards. For this decision, the system analyses the location of objects on the tray. For each object, a "safe gripping" area is defined (Fig. 5.15). If these areas of two objects intersect, relocation is necessary. For instance, if the glass stands in front of the bottle (from the gripper's point of view it hides the bottle), the system has to put the glass on another "free" place. The "relocation" consists of the following subtasks: grasp the glass in front of the bottle, lift the glass, and put it down on a free place on the tray. Afterwards, the bottle can be grasped and moved close to the glass, ready for the pouring process.
Fig. 5.15. Analysis of the objects’ placement on the tray. a pour in process can be performed directly, b relocation is required
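A minimal sketch of the test illustrated in Fig. 5.15, assuming the "safe gripping" areas are axis-aligned rectangles in tray coordinates:

```python
def needs_relocation(area_a, area_b):
    """True if two safe-gripping rectangles intersect, i.e. one object
    has to be relocated before the other one can be grasped.
    Each area is given as (x_min, y_min, x_max, y_max)."""
    ax0, ay0, ax1, ay1 = area_a
    bx0, by0, bx1, by1 = area_b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1
```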
The relocation process described above can be interpreted as the result of a task planning process performed by the robotic system. First, the system reasons about the current environmental situation. Afterwards, on the basis of this situation, it plans the actions to be performed next in order to reach the target situation related to the task to be solved. It is evident that task planning becomes a necessary prerequisite if robust task execution on a relatively high level of abstraction is demanded.
The demand for more tasks to be executed by the system reinforces this requirement. Task planning problems have been investigated in the field of artificial intelligence (AI) for the last 30 years, with the objective of creating fully autonomous agents, e.g. robots. Until now, domain-independent planners from this area, like STRIPS, ABSTRIPS, NOAH, MOLGEN, DEVISER, and SIPE [14, 15], have not been robust and efficient enough to work in real-world robotic systems like flexible assembly systems or service robots [16]. For instance, these systems suffer from the combinatorial explosion of the search space and do not take the uncertainty of the environment into consideration. In order to overcome these drawbacks, domain-dependent assembly planning methods have been enhanced, so that the planning of tasks on a higher level of abstraction becomes possible for real-world applications [16]. Within these approaches, as much knowledge about the task as possible is integrated in advance, so that the planning problem can be solved with search methods of low computational cost. Here, the choice of an adequate task representation data structure is essential.

5.2.3.1 Task Representation
In the field of assembly planning, so-called AND/OR-graph data structures are frequently used to represent domain-dependent state relations between parts of a product within an assembly process. These data structures are compact and efficient to search. In the following, AND/OR-graphs as well as their task-planning counterparts, the so-called AND/OR-nets, are informally introduced. For a formal definition of AND/OR-nets see [16]. Figure 5.16 depicts the structure of an AND/OR-graph. Each node of the AND/OR-graph represents either a single part (e.g. {A}), a subassembly (e.g. {A,B,C}), or the final product (e.g. {A,B,C,D}). Nodes are connected via AND-hyperarcs that represent feasible assembly or disassembly operations performed, e.g., by a manipulator. One side of the arc refers to the target node that represents the result of an assembly operation. The other side is connected with at least two nodes that represent the parts or subassemblies before the assembly operation. For example, {A,B,C,D} is connected with {A,B,C} and {D}; alternatively, {A,B,C,D} can be constructed via the assembly of {A,B} and {C,D}. Cao and Sanderson [16] proposed an extension of AND/OR-graphs, so-called AND/OR-nets, which consider certain state representations emerging especially within robotic task planning scenarios. For this application, nodes represent objects and their geometrical relations within certain task scenarios. In contrast to AND/OR-graphs, the definition of AND/OR-nets allows two nodes to contain the same objects. Such nodes differ in an internal state that represents a current geometrical object constellation or object state.
Fig. 5.16. AND-OR graph example
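In code, the nodes and hyperarcs of such a graph could be represented as follows. This is a schematic data structure for illustration, not the representation used in [16].

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """AND/OR-net node: a set of objects plus an internal state index,
    e.g. Node(frozenset({'Bo', 'Gl', 'Tr'}), 0) for 'full bottle and
    empty glass standing on the tray'."""
    objects: frozenset
    state: int = 0

@dataclass(frozen=True)
class AndArc:
    """AND-hyperarc: an operator (AOP or DOP) connecting at least two
    source nodes with the single node that results from applying it."""
    operator: str
    sources: tuple     # nodes before the operation
    target: Node       # node after the operation
```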
The structure of an AND/OR-net will be explained by means of the "beverage serving" task: Fig. 5.17 depicts the corresponding AND/OR-net. The bottle {Bo}, the tray {Tr}, the robot arm {Ro}, and the glass {Gl} appear in different situations. For instance, the node [Bo,Gl,Tr]0 represents the object constellation in which the empty glass stands on the tray next to the filled bottle, whereas [Bo,Gl,Tr]1 represents the filled glass standing on the tray next to the empty bottle. Cao and Sanderson distinguish such nodes by a predetermined internal state number (attached to the set of objects). Operators that are associated with the AND-arcs are labelled AOP (assembly operator) and DOP (disassembly operator). The AND/OR-net structure fixes all operation sequences that are possible for the execution of a task. In order to generate a feasible task plan based on the net depicted in Fig. 5.17, the initial as well as the desired target state have to be acquired. The target is specified during the process of task modelling, because it has a fixed association with a high-level command. The initial state, however, has to be determined within a monitoring process. Therefore, it is necessary to enhance the information offered by the AND/OR-net structure, so that particular states of objects as well as relations between objects can be determined and associated with a state. For this purpose, facts are introduced and connected with the nodes of the AND/OR-net [17]. For instance, node [Bo,Gl,Tr]0 can be associated with the facts:
1. StandsOn( Bo, Tr ) = TRUE
2. StandsOn( Gl, Tr ) = TRUE
3. IsEmpty( Gl ) = TRUE
4. …
Fig. 5.17. Example AND/OR-Net for the “Beverage Serving” Task
It has to be guaranteed that each fact that appears inside the AND/OR-net can be determined within an initial monitoring process. The initial monitoring process determines the initial state either by means of a monitoring operation, like the object detection described above, or with the help of the user. For the example of the "beverage serving task", the initial state is established by the subsumption of the nodes [Ro]0 and [Bo,Gl,Tr]0. This represents the state in which the gripper of the robot arm is empty and the filled bottle as well as the empty glass are standing on the tray. The target state is established by the nodes [Bo,Gl,Tr]1 and [Ro]0. That represents the filled glass and the empty bottle standing on the tray, and the empty gripper of the robot arm. The problem of creating a feasible sequence of operators (task plan) that, when executed, will drive the system from the initial to the target state is now reduced to a graph search problem. Because of the finite size of the net as well as its implicit restrictions on feasible operations, the search methods used in [16] are real-time capable and do not suffer from the halting problem.

5.2.3.2 High-Level-Plan Generation
As shown in [16], an arbitrary AND/OR-net can be transformed into an equivalent Petri-Net. This offers the possibility to perform the planning process based on the reachability graph of the resulting Petri-Net. An informal description of the AND/OR-net to Petri-Net transformation algorithm is depicted in Fig. 5.18.
Fig. 5.18. AND/OR-Net to Petri-Net transformation
The Petri-Net resulting from the transformation of the AND/OR-net in Fig. 5.17 is depicted in Fig. 5.19. The markings in the places [Bo,Gl,Tr]0 and [Ro]0 represent the initial state of the "beverage serving task". In the case of a Petri-Net, a task plan is equivalent to a sequence of transitions that transforms the initial marking of the Petri-Net into the target marking. For the "beverage serving task", the target marking is to have marks in [Ro]0 and [Bo,Gl,Tr]1.
Fig. 5.19. Petri-Net corresponding to the “pour in a drink” task
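For a safe (1-bounded) net such as this one, the plan search can be sketched as a breadth-first traversal of the reachability graph. Modelling a marking as a set of marked places is an assumption that holds only for 1-bounded nets; the data layout is illustrative.

```python
from collections import deque

def find_plan(initial, target, transitions):
    """Shortest firing sequence from an initial to a target marking.

    initial, target: frozensets of marked places,
        e.g. frozenset({'[Bo,Gl,Tr]0', '[Ro]0'})
    transitions: iterable of (name, preset, postset) tuples, where
        preset and postset are frozensets of places.
    """
    queue, visited = deque([(initial, [])]), {initial}
    while queue:
        marking, plan = queue.popleft()
        if target <= marking:
            return plan                           # task plan found
        for name, pre, post in transitions:
            if pre <= marking:                    # transition is enabled
                nxt = (marking - pre) | post      # fire the transition
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append((nxt, plan + [name]))
    return None                                   # target marking unreachable
```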
The high-level plan generated from the Petri-Net depicted in Fig. 5.19 is:
1. GripObject( Ro, Bo )
2. MoveObject( Ro, Bo )
3. PourIn( Ro, Bo, Gl )
4. MoveObject( Ro, Bo )
5. PutDownObject( Ro, Bo, Tr )
6. Depart( Ro ).
A detailed description of the "planning" algorithm can be found in [18]. From the description of the technical realization of the EEOPs required for the "beverage serving task", it is obvious that the operations on the AND/OR-net level are not executable by the system directly. Each operator on this level of abstraction consists of a composition of the EEOPs that are necessary for the realization of the operator's sub-task. Due to this kind of composition, operators on the AND/OR-net level are called composed operators (short: COP).

5.2.3.3 Low-Level-Plan Generation and Execution
In order to create a sequence of EEOPs that can be executed by the system directly, the COPs of the AND/OR-net level plan have to be decomposed. For this purpose, a library of COP-related generic Petri-Nets is provided. Each generic Petri-Net describes the behaviour of the system during the execution of the COP from the sequencer's point of view (see Fig. 5.3). The decomposition of a task planning problem into different levels of abstraction is a natural approach, because even humans plan their activities in a rough manner first, before thinking about the details afterwards. Besides the accompanying ergonomic advantages, the hierarchical representation of a task planning problem results in a smaller search space (in the case of Petri-Nets, a smaller reachability graph) and therefore reduces the computational cost of task plan generation, if independent modules can be assumed [14]. Each generic Petri-Net possesses a list of formal parameters that represent the actuator, the objects to be manipulated, or object-specific data. Within the decomposition step, the formal parameters of this list are replaced with actual parameters that represent the actual objects of the task scenario. This establishes the connection between concrete objects and the resulting EEOP-level plan. The atomic constructional elements of the COP-associated Petri-Nets are EEOPs, descriptions of facts inside the world model, and representations of system resources (e.g. sensors) required for EEOP execution. Because it is possible that the AND/OR-net level plan contains COPs that are executable in parallel, a mutually exclusive use of system resources has to be guaranteed. In the case of shared resources, all Petri-Nets related to parallel executable COPs are merged into a single Petri-Net that is planned afterwards. As depicted in Fig. 5.20, each place of the Petri-Net that represents a fact within the world model starts with the keyword "FAC". The initial markings of these places have to be queried from the world model or determined with the help of monitoring EEOPs. The initial markings of the remaining places have to be known in advance. They are used for net-flow-control purposes, like the initial activation of a monitoring operation. Each transition of the net represents a possible realization of an EEOP, i.e. the execution of an EEOP together with one of its possible return values. The transitions that belong to the same EEOP are gathered in groups of at
least two members. Each of these EEOP realizations within a group returns a different value, so that all possible execution results of an EEOP are taken into account within the Petri-Net model. To underline this aspect graphically, these transition groups are surrounded by a rectangle that produces the impression of a single transition¹ (short: EXOR-transition). Because the chosen way of modeling unknown behavior does not require modifications of the Petri-Net formalism, the simulation and verification of the nets is still possible by means of off-the-shelf software tools. During the modeling process, i.e. the creation of the COP-related Petri-Nets, all possible return values of the EEOPs have to be taken into consideration. The idea is that all possibilities for erroneous behavior can be implemented in advance and verified within the context of Petri-Nets with reachable target markings. In order to avoid infinite loops of automatic fault elimination steps, user commands are integrated into the Petri-Nets directly. The decomposition of a COP during the planning process will be explained by means of a simplified Petri-Net version of the COP MoveObject() (see Fig. 5.20). Due to its low structural complexity, this operator is suitable for describing the basic design concepts. As shown in the AND/OR-net depicted in Fig. 5.17, MoveObject() is responsible for the transportation of an already gripped object to a free place within the workspace of the manipulator. It is assumed that MoveObject() subsumes the EEOPs CoarseApproach(), SearchFreePos() and User(SearchFreePos). The movement part of the operator is performed with the help of CoarseApproach(). As described in the introduction, CoarseApproach() uses the stereo-camera system (SCam) for the determination of a collision-free gripper trajectory, so that a calculated target position can be reached. In the case of MoveObject(), the target position is defined as the free position in the workspace. The pre-place FreePosKnown announces the necessity of this information. If a free position is unknown to the system, the monitoring operation SearchFreePos() has to be started first. It is assumed that this operation also makes use of the stereo-camera system. In case the monitoring operation fails, the user is involved in the search procedure. Then, he or she has the opportunity to provide the required information by controlling system parts directly, or to abort the task execution. This kind of user involvement reduces the complexity of the system, because a complex deliberative process is handed over to the user. The formal parameters of the COP MoveObject() are Robot and Object. If MoveObject() represents the COP instance that connects the nodes [Bo,Gl,Ro,Tr]2, [Gl,Tr]1 and [Bo,Ro]1 within the AND/OR-net of Fig. 5.17, the actual parameters are Ro and Bo. These parameters have already been predefined during the AND/OR-net construction. The instantiation of the parameters determines the semantic interpretation of places and transitions within the generic Petri-Net. For instance, FAC.IsGripped( Robot, Object ) changes to FAC.IsGripped( Ro, Bo ), so that the actual, i.e. initial, marking of this place can be queried from the world model or with the help of a monitoring EEOP.
¹ Within the class of Fuzzy-Petri-Nets, such a transition type is called a mutual-exclusive transition [16]
Fig. 5.20. Generic Petri-Net for COP MoveObject
For the generation of a sequence of EEOPs from the instantiated generic Petri-Net, the same algorithms as for the AND/OR-net can be used. The EEOP sequence generated from the instantiated Petri-Net depicted in Fig. 5.20 is:
1. SearchFreePos( FreePos.EEL².Pos ) = Known
2. CoarseApproach( Ro.EEL.Pos, FreePos.EEL.Pos ) = Success.
This plan is passed over to the execution system within the sequencer (see Fig. 5.3). After an EEOP has been processed, the execution system compares its expected return value with the actual one. In case of a difference, a new EEOP sequence has to be generated in order to react to the unpredicted behavior. For this purpose, the instantiated Petri-Net related to the COP fires the EXOR-transition with the actual return value, so that a new initial marking emerges. Starting from this initial marking, a new path leading to the target marking can be searched.
² Elementary Executable Level: addresses the geometrical data part of the world model
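The interplay of execution and replanning described above can be sketched as follows. The Petri-Net interface is hypothetical, and find_plan stands for a reachability search as in the earlier sketch.

```python
def execute(net, plan, target_marking):
    """Run an EEOP sequence; replan on an unexpected return value.

    plan: list of (eeop, expected_return) pairs. net.fire() is assumed
    to fire the EXOR-transition matching the actual return value, which
    moves the instantiated Petri-Net to a new marking.
    """
    for eeop, expected in plan:
        actual = eeop.execute()
        net.fire(eeop, actual)                # EXOR-transition for the result
        if actual != expected:                # unpredicted behaviour:
            new_plan = find_plan(net.marking, target_marking, net.transitions)
            if new_plan is None:
                return False                  # no recovery path (e.g. user abort)
            return execute(net, new_plan, target_marking)
    return True
```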
5.2.4 Demonstration-Based Programming
Until now, the task planning described above has been based on the prerequisite that task knowledge has already been implemented in the robot system. In other words, the robot system possesses a list of individual actions and a sequence of these actions as well, as already presented in Fig. 5.19. Here, the term "action" has the same meaning as COP (composed operator). A question arises immediately: how can the actions and the sequence be transferred to the robot? The effort to answer this question begins with transferring single-action knowledge to the robot. A common way to achieve this goal is the classical engineering method, which consists of an in-depth task analysis followed by the modelling of the action. For this, the relationships between the parameters of the objects concerned need to be known (e.g. between bottle and flow). Only with an exact model of each action and enough knowledge of the relationships between parameters is it possible to program a robot to accomplish the action. But how can this expertise be explored and built? It is very natural to come to the idea of observing how a human being achieves this action, as has been done for the "pour in" action, because a normal human can execute a daily-life action such as pouring a drink without apparent difficulty. The observation brings the benefit of exploring the skill a human being uses for the action. Here, "skill" denotes "the learned power of doing a thing competently", which a human exerts without conscious effort. As an example, the heuristic observation of a human demonstrator pouring a drink from a bottle into a glass on a table yields the following facts:
1. Before the pouring begins, the bottle is gripped by the demonstrator and directed to an initial position relative to the glass. This position depends on the sizes of the glass and the bottle.
2. The bottle rotates around one axis at a suitable speed until the beverage comes out; meanwhile, the distance between the bottle tip and the center of the glass opening, as well as that between the bottle bottom and the table, is kept at a constant value.
3. After the first beverage flows out, the flow of the liquid is controlled to be not so large that the liquid splashes out of the glass, and not so small that the liquid drips.
4. The filled liquid level is monitored continuously to avoid the glass being overfilled.
5. At the end of pouring, the bottle is moved in a special way to avoid dripping.
6. In the whole pouring process, the human being applies a closed-loop structure to monitor the liquid outflow.
At this point, the above qualitative information from observation seems enough for programming the robot to execute a "pour in" action. One could set the reference values, like the value of the flow, by means of trial and error, but this turns out to be very tedious and time-consuming. It is much easier to measure and evaluate these set
values directly from the demonstration process, for which an observation system is needed. At first view, the construction of an observation system increases the complexity and cost of solving the problem. But a good trade-off can be found if one takes into account that the observation system can be utilized for the observation of other daily-life actions, that the acquired results are optimal because they come from human demonstration, and that parts of the observation equipment are already components of the robot system. In the current FRIEND system, only a 6-DOF Polhemus sensor had to be added for the observation purpose. The camera system serves as part of both the robot and the observation system. Though the initial motivation for demonstration is to build an action model, or in other words to acquire the skill for executing the action, the question unavoidably arises whether it is necessary to program the robot to execute the actions with the same sensory mechanism as that of a human being. A human demonstrator depends mainly on visual information, and it is very natural to conclude that a robot should also take the visual information from the camera system. Due to the high data flow and complex image-processing algorithms, however, the efficiency of the visual system in robot applications is not very high. This problem can be solved if simpler sensors are applied. As an example, a scale-based tray is applied in FRIEND to obtain the flow information by differentiating the weight data. An additional advantage of simpler sensors is that a shorter cycle time can be realized, which allows a higher bandwidth of the control loop. This scale-based tray can be taken as part of the observation system as well. Once an observation system has been set up, a quantitative analysis of the demonstration is possible. Figure 5.21 shows a demonstrated trajectory with a sampling period of 200 ms. It can be seen that the position of the bottle tip with respect to the center of the glass opening in the x-axis direction is within ±10 mm. In the y-axis direction, it is also within ±10 mm after the liquid starts to flow out (at this point the roll angle is about –75°). In the z-direction, it is about 130 mm at the beginning, and when the tilt angle reaches approximately 80°, it stays around 33.4 mm until the end of the pouring. The roll angle changes relatively fast before the liquid flows out and more slowly after that. In contrast to the huge change in the roll angle, the other angles stay within a limited range: 0° to –12° for pitch and –4° to 17° for yaw. This trajectory reflects the intuitive pouring motion of a human being.
Fig. 5.21. Time history of a pouring trajectory. a position history (X, Y, Z in mm over time), b orientation history (roll, pitch, yaw in degrees over time)
Figure 5.22 shows the flow history of demonstration trials with different bottles and glasses. In this figure, three parts are indicated: the start-up, intermediate, and final sections. In the start-up and final sections, the flow changes very rapidly. In the intermediate section, though the flow shows a high variance as well, it is kept above a minimum level. In this part, the flow depends on the filling level, and thanks to the minimum flow, dripping is avoided.
Fig. 5.22. Flow depending on filling ratio for different demonstrations (flow in g/s over filling weight ratio, tests 1 to 8)
The information from the quantitative analysis above can be applied in robot programming in two ways: the first is simply to implement the acquired information in the robot system; the second is to abstract the information into a skill strategy and then design a simple function to accomplish the skill. In the first case, as already described, the demonstrated pouring trajectory is taken as a reference for a closed-loop control of the flow to achieve the pouring action. In the second case, a composed trajectory that consists of a rotation and an artificial method to keep the bottle tip at a constant distance from the center of the glass opening can, together with the closed-loop control, achieve a similar result. This is logical because the function of the pouring trajectory is to keep the relative position and orientation of the bottle tip with respect to the center of the glass opening. There exist many trajectories that serve the same purpose; the demonstrated trajectory is only one possibility. Even a human being applies different trajectories for the same pouring action. However, some skills remain unchanged, like the constant distance between the bottle tip and the center of the glass opening. Demonstration is needed to acquire such a skill, but it is not necessary to copy the demonstrated data themselves if a more simplified design based on the knowledge of the skills can achieve the same functionality. The same conclusion can also be drawn from the flow control. The behavior of a human demonstrator for the flow control, as shown in Fig. 5.22, shows great variation. Nevertheless, regardless of these variations, there exist common skills, as already mentioned above. If a flow control is implemented, it is not necessary to copy a flow history exactly; but the skill behind such a flow control, with its flow set points, should be implemented via an appropriate design. Here, one important fact to be emphasized is that during the demonstration phase the human being is part of a closed control loop.
With the help of the sensory system (the eyes) and the controller (the brain), the human being measures and evaluates the actual values of important variables, compares the actual value with the set point, and controls the movement of the body. This closed-loop control behavior dominates the demonstration phase in structure as well as in the recorded data, and is responsible for the great robustness of human behavior and the ability to deal with variations of the initial values. This also explains the fact that, regardless of the different behavior of a human demonstrator even in the same action, the action goal can still be achieved: the closed-loop structure guarantees that the key skills needed in the action are kept. In this sense, we come to the conclusion that it is essential to implement such a closed-loop structure in robot programming as well. Until now, it has been stated that the method we are using is demonstration-based programming (DbP). It is necessary to distinguish it from the method of programming by demonstration (PbD). The distinction can be explained from the aspects of both motivation and implementation. The purpose of DbP is to help a robot application engineer with the programming procedure. This is reflected in the exploration of the action model and the analysis and acquisition of the skill for the action. The analysis and programming of the skill are left to the human engineer. PbD, on the other hand, is a concept that originated from the idea of automatic programming, which leaves robot programming to inexperienced users. Conceptually, the only inputs required to generate the control command sequences for the robot system are the description of the objects involved in the task and the high-level task specification. Thus, the key idea of PbD is to offer the user a proper programming and cooperation interface, through which the system can observe a human performing a task, understand it, and perform the task with minimal human intervention. This is further illustrated in Fig. 5.23 from the perspective of the programmer and the robot.
Fig. 5.23. Difference between DbP and PbD
In this context, a robot system capable of PbD is built on the following conditions:
1. A demonstration system is available.
2. The robot system has prior knowledge of the elemental operations. This is actually the database of actions that the robot has been programmed with. An action is a sensor-motor primitive that allows the robot to interact with its environment.
3. The robot has the capability to robustly execute these actions.
4. The connection between demonstration and program generation has been developed. This means that the functions for the automatic analysis of the task, as well as for the sequence generation, have been developed.
When considering the fact that a daily-life task is actually a combination of basic elemental actions, as described in Fig. 5.19, it becomes immediately evident that a PbD system with the functionalities mentioned above is very valuable in service robot applications. In this sense, the action sequences as well as the transitions among the actions can be recognized by PbD and translated directly into a robot command sequence. This not only means that the sequence of a task can be transferred to the robot automatically, which answers the second question raised at the beginning of this section, but also opens the prospect of teaching the robot new tasks in a much easier way. In conclusion, the demonstration-based programming method DbP focuses more on programming the robot with actions like "pour in", whereas PbD is more suitable for programming a complete action sequence. Our current efforts in the FRIEND system are directed at equipping the system with robust elemental actions like "pour in". A future plan is to extend the system with PbD capability.
5.3 Summary
In this chapter, the rehabilitation robotic system FRIEND has been presented. After the technical description of its hardware and software structure, the realization of a 'beverage serving' task was explained. Even though the requirements for this task are relatively restrictive, the realisation of its autonomous execution turns out to be a great technical challenge: objects involved in the task have to be detected, grasped, and moved into different positions. Additionally, the pouring process itself has to be observed and controlled autonomously. Here, the basic principle of our approach is to introduce feedback structures, so that the different sub-tasks of the task execution can be performed autonomously and are robust against environmental changes. Within this example, visual servoing has been used for grasping and moving objects. The automated pouring process, executed within a closed control loop, also uses information obtained via programming by demonstration in combination with weight information from a 'smart' tray. The automatic combination of these sub-tasks, realized within an overall control architecture, results in the execution of the 'beverage serving' task with minimum user interaction. The collaboration between semi-autonomous actions offered by the system and user interactions for the supply of required information (e.g. object identification) or direct control turns out to be a promising concept for the realization of assistive devices, especially within an evolutionary development process. The system shall draw on the user's cognitive capabilities whenever full autonomy leads to an unmanageable technical complexity, so that a robustly functioning system can be offered right from the start of the project. Further development steps can concentrate
on the reduction of necessary user interactions. We claim that the exploitation of this principle will offer more independence in daily life for handicapped people, especially those who are not able to control robotic systems directly. A future plan of this research is to further increase the robustness of the autonomous execution of the FRIEND system. As an example, the glass should not be restricted to being empty; the system should be able to recognize such a situation and take the right decision and corresponding action. Another concern is to improve the flexibility of the system, i.e. to accomplish other tasks, such as opening a door.
References
1. Dallaway JL, Jackson RD, Timmer PHA (1995) Rehabilitation Robotics in Europe. In: IEEE Transactions on Rehabilitation Engineering 3: 33–45
2. Dario P et al. (2002) EURON Research Roadmaps 2002. Research Roadmaps of the European Robotics Research Network 2002, http://www.euron.org
3. Kawamura K, Bagchi S, Iskarous M, Bishay M (1995) Intelligent robotic systems in service of the disabled. In: IEEE Transactions on Rehabilitation Engineering 3(1): 14–21
4. 3T A User Guide. Metrica Inc. Robotics and Automation Group, NASA Johnson Space Center, Houston, TX 77058, February 13, 1996
5. Bastia J, Fedor C, Goodwin R, Simmons R (1997) Task Control Architecture – Programmers Guide to Version 8.0, Manual version: May 1997
6. Schlegel C, Wörz R (1999) The software framework SmartSoft for implementing sensorimotor systems. Proc. IROS, Kyongju, Korea, October 1999, pp 1610–1616
7. Martens C, Kim DJ, Han JS, Gräser A, Bien Z (2002) Concept for a Modified Hybrid Multi-Layer Control-Architecture for Rehabilitation Robots. Proc. Third International Workshop on Human-friendly Robotic Systems, Daejeon, South Korea, January 21–22, 2002, pp 49–54
8. Volosyak I, Gräser A (2003) Automatic object recognition using fuzzy decision system for the reha-robot FRIEND. In: Proceedings of the 8th ICORR, Daejeon, South Korea, 23–25 April 2003, pp 76–79
9. Hager GD, Chang WC, Morse AS (1995) Robot Hand-Eye Coordination Based on Stereo Vision. In: IEEE Control Systems Magazine 15: 30–39
10. Radchenko O, Pape A, Gräser A (2003) Visual Servoing with adjustable zoom-cameras. In: Proceedings of the 8th ICORR, Daejeon, South Korea, 23–25 April 2003, pp 51–54
11. She H, Martens C, Gräser A (2003) Application of Programming by Demonstration in the Rehabilitation Robotic System FRIEND. In: Proceedings of the 8th ICORR, Daejeon, South Korea, 23–25 April 2003, pp 39–42
12. Hosoda K, Sakamoto K, Asada M (1995) Trajectory Generation for Obstacle Avoidance of Uncalibrated Stereo Visual Servoing without 3D Reconstruction. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots, Pittsburgh, PA, USA, pp 29–34
13. Pape A, Radchenko O, Gräser A, Jang H, Bien Z (2003) Obstacle avoidance with visual control of adjustable zoom-cameras. In: Proceedings of the 8th ICORR, Daejeon, South Korea, 23–25 April 2003, pp 294–297
14. Norvig P, Russell S (1995) Artificial Intelligence – A Modern Approach. Prentice Hall Series in Artificial Intelligence, Prentice Hall
15. Qiang Y (1997) Intelligent Planning – A Decomposition and Abstraction Based Approach. Springer Verlag
16. Cao T, Sanderson AC (1996) Intelligent Task Planning Using Fuzzy Petri Nets. World Scientific Publishing
17. Martens C, Schüttler J, Gräser A (2003) Logical Verification of AND/OR-net Structures for Task-Knowledge Representation in Service Robotics Scenarios. In: Proceedings of the 8th ICORR, Daejeon, South Korea, 23–25 April 2003, pp 16–19
18. Martens C (2003) Generation of parallel executable control sequences for rehabilitation robotic systems on the basis of hierarchical Petri-Nets. In: Lohmann B, Gräser A (eds) Methoden und Anwendungen der Automatisierungstechnik, pp 73–85
6 GIVING-A-HAND System: The Development of a Task-Specific Robot Appliance M.J. Johnson, E. Guglielmelli, G.A. Di Lauro, C. Laschi, M.C. Carrozza, and P. Dario
Abstract
The rapidly changing demographics in industrialized nations create a pressing need for effective personal assistive aids that appeal to elderly and disabled users. Our goal is to design robotic aids that are not only affordable and commercially viable but also have universal appeal and benefit. To do so, we explore the creation of the robot appliance, a personal robotic aid with the ability to function within a localized assistive system in a specific environment of the home, such as the kitchen. This chapter presents our concept of the robot appliance and the details of one of two design studies involving our concept for task-specific robotic aids. We discuss the GIVING-A-HAND system concept and the results of interviews with elderly and medium-to-high disabled persons that prioritized and refined the requirements for the robotic appliance component of the system: a small, counter-top mobile robot, "Addams Hand", that users can remotely control to interact with common kitchen appliances to perform fetch-and-carry tasks.
6.1 Introduction
The rapidly changing demographics in industrialized nations create a pressing need for effective personal assistive aids that appeal to elderly and disabled users. In industrialized nations such as the US and Italy, it is predicted that by the year 2025 about one in two persons will be over the age of 65. If we follow the disability trends associated with aging, we can predict that in this new society 16% or more of these persons over the age of 65 will be living with one or more impairments that disrupt their ability to complete activities of daily living in their homes. Therefore, our goal is to design personal robotic aids that are not only affordable and commercially viable but also have universal appeal and benefit for this increasing pool of potential users. In designing personal robots to assist elderly and disabled persons in their home, two main approaches can be envisaged: the first approach is to develop the human-like personal assistant, which is a general-purpose, do-it-all robot that is able to assist in multiple activities of daily living [2]; the second approach is to
develop the task-specific personal robot, which is dedicated to completing one activity of daily living [8]. The first approach can be seen in the futuristic ideas exemplified by the humanoid robot waiters in films such as the American movie Bicentennial Man, written by Nicholas Kazan. One concrete example is MOVAID [3], a general-purpose personal assistant for the home. The second approach is exemplified by Handy 1, which can assist persons with severe disability to eat, to apply make-up, or to paint [13]. Humanoid assistants would be ideal helpers because they would seamlessly integrate within our home environments and provide truly versatile assistance. However, the technology for individually affordable, safe, and fully acceptable humanoid assistants for elderly and disabled persons is still many years away. Until the technology becomes commonplace, the cost of humanoid personal systems will remain prohibitive, with cost-to-benefit ratios that are too high. In the face of these challenges, we are exploring implications within the second approach. We advocate a movement away from creating robots that are multipurpose, with the ability to function throughout the residential environment, toward creating robots that are appliance-like, with the ability to function as one of a localized system of assistive aids in a specific environment within the home. The advent of smaller and cheaper microprocessors and micro-technology makes this idea of the affordable "robot appliance" even more attainable in the short term than a humanoid assistant.
6.2 Background
Our evolution toward the perspective of personal assistance in the home as a localized, modular system of appliance-like aids followed a sequence that closely patterns the general evolution of the field of rehabilitation robotics. The general-purpose robot assistant was first envisioned as fixed workstations such as DEVAR [5], as intelligent wheelchairs such as TIDE-OMNI, as wheelchair-mounted manipulators such as the Manus arm in the SPRINT-IMMEDIATE project, and later on as mobile robots as in the URMAD project [2]. Later efforts to combine the performance of fixed workstations with the versatility of mobile robots, such as in the MOVAID project [2, 3], were positive movements toward creating more favorable cost-to-benefit ratios in personal home assistance. The MOVAID system was a distributed system that consisted of a mobile, semi-autonomous robot and a number of fixed workstations to which the mobile unit could physically dock. The workstations were in the kitchen and bedroom and allowed the performance of activities of daily living such as preparing a meal, pouring a glass of water, cleaning the countertop surfaces, and removing soiled linens from the bed. Despite user acceptance during validation trials with the MOVAID system, the lessons learned indicated that a complete solution for personal assistance, especially for disabled persons, should be more distributed, more reconfigurable, and better able to integrate existing domotic, telematic, and consumer products.
The concepts of modularity and distribution were stressed further, up to the formulation of the idea of an integrated modular assistive system, as in the P3 project [4, 7]. These concepts are reviewed in the following section in terms of the philosophy behind personal robotic assistance and the lessons learned during validation and simulation experiments. The accumulated design experience and lessons lead us now to explore the creation of robot appliances: low-cost, customizable, modular, and more task-focused robots. These personal robotic aids would have the ability to function within a localized system of assistive aids tied to a specific environment within the home, such as the kitchen.

6.2.1 Domotic-Robotic Integrated System
The introduction of the integrated modular home system concept represents a move toward real application and cost reduction in technological assistance in the home [7]. This concept proposed the sub-division of the personal assistant into robotic modules and off-the-shelf domotic modules that can be integrated through a domotic network to create a smart home environment. Figure 6.1 illustrates this concept. The robotic system included technological aids such as robotic arms, mobile bases, and electric wheelchairs, while the domotic network included standard domotic devices such as the home lighting system, air conditioning, and door and window control. The devices shared tasks in intuitive and cost-saving ways. The addition of the domotic components lowered costs, due to the use of largely commercially available products, and lowered complexity, by replacing the need for the robot to perform some complex tasks in the home. For example, domotic door control devices instead of robots were used to open doors; thus, they reduced the complexity of the required robot controller. In order to better design the functionality and the modularity of the system, clinical trials were conducted with potential end users (disabled people and assistants), with existing robotic and domotic devices, in real-life scenarios, in tasks that were chosen based on the users' higher-priority tasks. Trials with these individuals helped identify the priorities for the system functionality and led to the following requirements for the robotic modules and the network:
• reprogrammable (use flexible and accessible interfaces)
• multifunctional (useful for various priority tasks)
• modular (adaptable to the user's injury level)
• reconfigurable (useful in priority areas of the home such as the bathroom and the kitchen)
• integrated with a domotic system
• compliant controllers (ability to apply variable force to modulate interaction with the environment)
• portable (reasonable weight and size).
Fig. 6.1. Integrated Modular System of Aids
6.2.2 Localized System of Appliances
The requirements detailed in the P3 project indicated that the robotic modules should be smaller, more task-specific, and embedded within a networked system that permits quick and easy re-configuring and programming. As a result, the next evolution introduces a network of domotic, telematic, and appliance-like robot modules that function separately or as a unit within a local environment in the home. Figure 6.2 illustrates an example of this localized network in the kitchen environment, consisting of standard appliances such as the microwave oven and a sample robot appliance for fetch-and-carry tasks. The concept of the appliance contains within it several features: the idea of simplicity, adequate performance, reasonable reliability, a direct mapping between task and user need, and reasonable cost. The term "appliance" is applied to a device or instrument designed to perform a specific function, specifically an electric device, such as a microwave or coffeemaker, for household use. On the other hand, the term "information appliance" is applied to the emerging generation of home appliances, which specialize in information (knowledge, facts, graphics, images, video, or sound) and are designed to perform a specific activity and to share information within a family of appliances [9].
Fig. 6.2. The concept of using domotic and telematic technology to network appliances in the kitchen
From these descriptions, our basic definition of the robot appliance emerges. The robot appliance is a task-specific device, i.e., it is designed to perform a specific activity or set of activities. The robot appliance goes beyond both standard and information appliances in that it must not only be specific in function and capable of receiving, storing, and sharing information, but also capable of acting autonomously or semi-autonomously on the information received. Many types of task-specific mechatronic and robotic assistive aids already exist within the home environment. Some examples are small mobile robots that haul food, cooking robots such as the Bimby, and feeding aids such as the Neater Eater and the Handy 1 [13]. A localized modular system of aids capitalizes on home automation and domotic networks to create an integrated system in which existing task-specific robotic aids and new robot appliances can be incorporated and made to communicate and interact with each other. Salient to this concept, of course, is the need to design special domotic assistants to negotiate the interactions between users and the system and between users and an individual robot appliance. These innovative control interfaces must be designed to optimize the degree of interaction between the user and the product during task performance, while taking advantage of the user's capabilities [4]. We conducted design case studies of two task-restricted robot appliances for a local kitchen environment: a fetch-and-carry robot appliance (GIVING-A-HAND) and a robot appliance for eating (SELFEED). Here we describe in detail only the GIVING-A-HAND concept, the user-centered development process used, and the resulting user-centered priority requirements for the fetch-and-carry robot appliance, Addams Hand.
6.3 Design Concept for the Giving-A-Hand System

We arrived at the initial system concept from a series of preliminary interviews conducted with highly disabled potential users with severe upper-arm dysfunction. The interviewees uniformly expressed a desire to use the kitchen. Key tasks they wanted to accomplish in the kitchen without fatigue were getting a drink independently, making a snack independently, and preparing a meal alone or together with relatives or a caregiver. From these insights, we developed a design protocol to meet users' access needs in many kitchen tasks. A kitchen task can be divided into three major stages: the setting-up stage, the processing stage, and the enjoying stage. For the example task of cooking, Fig. 6.3 shows that the setting-up stage involves cutting up the food, the processing stage involves some type of tool or appliance to do the cooking, and the enjoying stage involves eating or drinking the products of the processing stage. Non-disabled persons can navigate these stages without problems, while elderly users may need some assistance in the setting-up stage. Medium- to high-level disabled users need even more assistance in all three stages, especially the last two.
Fig. 6.3. Three stages in a typical kitchen task such as cooking
Fig. 6.4. The Giving-A-Hand system design concept
The GIVING-A-HAND design concept builds on the cooperative use of domotic, telematic, and robotic technologies to drive down the cost of assistive technology and increase access. As illustrated in Fig. 6.4, the concept assumes that human assistance is available and can be given during the setting-up stage, and it proposes the use of domotic/telematic technologies in the processing stage and robotic assistance in the enjoying stage. The scope of human assistance depends on the task and the user's ability. For example, high-level disabled users would most likely need a human assistant to prepare the meal and place it on the countertop so that, when needed, the user can reheat and eat it with the help of the technology assistants.
6.4 Domotic/Telematic and Robotic Assistance

The concept proposes the use of domotic and telematic assistants, in the form of wireless access technology and board-level Ethernet controllers, to provide a local area network (LAN). The LAN connects all kitchen appliances together (i.e., those used in the food processing stage, such as the microwave) and permits them to be controlled alternatively through a portable PC-based universal remote. The system would also connect any robotic assistant into the network (Fig. 6.2). We are designing a remote control that permits users with varying levels of disability to direct high-level actions of the appliances (e.g., setting the time on the microwave) and low-to-high-level actions of a mobile robot assistant (e.g., steering). Robotic assistance enables persons with severe disabilities to participate independently in the enjoying stage. One specific idea proposed for this stage is the development of a low-cost mobile robot appliance that can interface with key appliances and assist in the completion of complex manipulation tasks, specifically the priority tasks highlighted during our preliminary interviews. The robot would be controlled using the universal remote and programmed to recognize each appliance and perform the manipulations needed for each one.
6.5 The Fetch-and-Carry Robot Appliance Development

Figure 6.5 illustrates the scenario-of-use for the robotic assistant called "Addams Hand". The robot is envisioned as a small, countertop, mobile aid that is programmed and controlled by the disabled person (or any user) to take food from an appliance such as the microwave oven and deliver it to a tray or to an eating robotic assistant such as the Neater Eater [12]. It could also be programmed to fetch items from another appliance, such as a cooking robot (Bimby), and deliver them to the eating surface. The Addams Hand concept is unique in that it offers a smaller, lower-cost, and less invasive robotic solution than existing kitchen aids such as MOVAID [3] and CAPDI [1].
Fig. 6.5. A scenario-of-use depicting the robotic assistant called Addams Hand interfacing with two appliances to assist the user in the enjoying stage
Fig. 6.6. A concept scene showing the robot interacting with a microwave and using the ARTS Lab prosthetic hand [10]
The concept for the first robot prototype is shown in Fig. 6.6. With a functional goal similar to that of the general-purpose MOVAID robot, the mobile robot
appliance within the localized network compensates for the mobility limitations typical of persons with severe upper limb impairments. Unlike MOVAID, this robot appliance (in the kitchen network) is restricted to completing fetch-and-carry tasks on a countertop, i.e., moving items from one appliance to another, such as a plate or a beverage from the microwave to an eating robot. A fully functional prototype of the fetch-and-carry robot appliance (Addams Hand) is still being designed, with the following main requirements:
• low cost (< $4000)
• small footprint (31 cm in diameter)
• movable on a countertop
• capacity to manipulate a variety of objects up to 10 N with a prosthetic hand
• portable (< 7.5 kg)
• member of a domotic network
• ability to be controlled by a universal remote
• ability to move autonomously or semi-autonomously
• ability to communicate with other network appliances.
6.6 User-Centered Development

To identify additional requirements for Addams Hand, interviews were conducted. Subjects completed a questionnaire assessing their attitude toward technology in the kitchen and a questionnaire on the desired functional characteristics of the robot. A total of nine elderly users and four medium-to-severely disabled users participated. All four disabled participants and six of the nine elderly participants completed all aspects of the interview. The elderly participants (82 ± 9.62 years) were limited in completing manipulations required for kitchen tasks such as taking objects from the refrigerator, opening and closing bottles, cutting food, pouring a drink, or grasping objects. The disabled participants (41.25 ± 9.62 years) had physical limitations due to spinal cord injuries or multiple sclerosis and required full assistance with preparing meals. All subjects were cognitively able to understand the questionnaires; assistance was given to any subject who had difficulty completing a questionnaire due to impairment. To assess subjects' attitudes toward technology in the kitchen, participants were shown a slide-show presentation featuring current and fictional assistive aids for the kitchen environment. These aids were characterized as low-, medium-, or high-technology designs. For each category of aids shown and described, subjects were asked to decide whether they liked or disliked the concept; if unsure, they were asked to mark the category "I don't know." To gather users' ideas on the functional characteristics that should be incorporated into the robot, subjects were asked to complete a questionnaire listing eighteen items. The requirements encompassed design issues such as
controllability, safety, appearance, environment-of-use, cost, accuracy, and usability. After a discussion of the concept for the robot and its scenario-of-use, participants scored each of the items with a 9 (very important), 6 (important), 3 (less important), or 1 (not important).

Priority Functional Requirements

Only the functional requirements survey results are reported here (Table 6.1). The data were analyzed by taking the median of the scores given for each functional characteristic. Table 6.1 shows the functional characteristics ranked by the level of importance given to them, as captured by the median score of the responses of all participants. The results indicate that the participants considered 8 of the 18 functional requirements "very important." Elderly and disabled subjects were in agreement on only 4 of these 8 "very important" requirements: they all agreed that it was very important that the robot be easy to control, safe when interacting with the user and its environment during the handling of objects, and affordably priced. In addition, elderly subjects wanted the robot to be usable safely on the kitchen countertop, while disabled users wanted the robot to handle objects such as a plate or a beverage glass, with or without contents, and to respond to their demands in a timely and robust manner. Users' desire to control the robot easily translates into our need to create a user interface with high usability: effective in transmitting the desired actions of the users to the robot, efficient in allowing users' commands to be interpreted accurately, and satisfying in giving the user comfort and pleasure during use. Users' concerns with affordability and safety are common themes in the rehabilitation robotics literature [5, 6, 11]. In the case of the Addams Hand prototype, creating a low-cost system is a priority. We aim to lower system cost by decreasing the number of degrees of freedom of the robot, by utilizing commercially available components, by simplifying the number of tasks the robot needs to perform, and by creating a modular environment in which the robot performs tasks. We intend to use vision, proximity sensors, and a combination of supervisory and autonomous control to improve system safety on all three interaction levels. Disabled users' desire for a robot that can successfully assist in key manipulation tasks involving reaching and gripping agrees with the results reported by Stranger et al. 1994 [11], whose study found that disabled users' task priorities were reaching, gripping, picking up objects from shelves and floors, and preparing food and drinks along with eating and drinking. Given the small number of participants, it was important to determine whether participants' responses were useful inputs to the design process. We assessed whether the data were consistent with larger questionnaire studies such as the one conducted under the MOVAID project [3], the user task priority review published by Stranger et al. 1994 [11], and other reviews [5, 14]. We saw that the
data derived from these interviews with potential users were for the most part consistent with the literature and thus offer valid insights into the design of the GIVING-A-HAND system prototype.
Table 6.1. Functional characteristics ranked in terms of the median score derived from the data of all participants. The median scores for the elderly and disabled subgroups are also shown. Raw scores: 9 (very important), 6 (important), 3 (less important), 1 (not important)

Rank | Functional requirement                                                     | Elderly (N = 6) | Disabled (N = 4) | All (N = 10)
1    | Easy to control (by all: elderly and disabled)                             | 9   | 9   | 9
2    | Picks up objects of various sizes and weights (e.g., straw, plate, glass)  | 6   | 9   | 9
2a   | Picks up a glass of water (empty or full)                                  | 7.5 | 9   | 9
2b   | Picks up a plate (empty or filled with food)                               | 7.5 | 9   | 9
3    | Delivers objects safely (user interaction)                                 | 9   | 9   | 9
4    | Delivers objects intact (not broken or spilled)                            | 9   | 9   | 9
5    | Usable on the kitchen countertop (safely)                                  | 9   | 7.5 | 9
6    | Affordable (low cost)                                                      | 9   | 9   | 9
7    | Interacts with distinct appliances (e.g., Bimby, microwave, etc.)          | 7.5 | 9   | 9
8    | Available on demand and usable for a specific period                       | 7.5 | 9   | 9
9    | Manages unexpected situations                                              | 6   | 9   | 7.5
2c   | Picks up a straw                                                           | 6   | 8   | 6.5
10   | Completes task in reasonable time                                          | 6   | 9   | 6
11   | Easy to transport (move from one location to another)                      | 6   | 6   | 6
12   | Quick device set-up (turns on quickly and easily)                          | 4.5 | 6   | 6
13   | Gets objects from tabletop and appliances                                  | 3   | 7.5 | 6
14   | Moves on floor or tabletop                                                 | 6   | 7.5 | 6
15   | Recognizes and stops at appliances                                         | 6   | 6   | 6
16   | Delivers objects to accurate place                                         | 6   | 6.5 | 6
17   | Allows user to see objects being grasped                                   | 6   | 7.5 | 6
18   | Design makes the product attractive                                        | 3   | 3   | 3
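As an illustration of the median-based ranking used to produce Table 6.1, a short sketch in Python follows. The raw scores entered here are hypothetical, not the actual survey data.

```python
import statistics

# Hypothetical raw scores (9, 6, 3, or 1) from ten participants,
# keyed by functional requirement.
scores = {
    "Easy to control": [9, 9, 9, 9, 6, 9, 9, 9, 9, 9],
    "Affordable (low cost)": [9, 9, 6, 9, 9, 9, 9, 9, 9, 9],
    "Design makes the product attractive": [3, 1, 3, 3, 6, 3, 3, 1, 3, 3],
}

# Rank requirements by the median score over all participants,
# as done for Table 6.1.
ranked = sorted(scores.items(),
                key=lambda item: statistics.median(item[1]),
                reverse=True)

for requirement, raw in ranked:
    print(f"{requirement}: median = {statistics.median(raw)}")
```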
In summary, the results indicated that eight of the 18 requirements will be given priority in our design. Of these 8, the 4 requirements shared by the two user groups are given the highest priority. The 3 (of 8) requirements cited only by disabled users will also be included so as to satisfy their special needs. By continuing to query users throughout the design process, we will further define each requirement and assess acceptable levels of implementation for each.
6.7 Prototype of a Local Network with the Robot Appliance

A functional prototype of a networked system of two standard appliances and a prototype of a robot appliance-like aid for fetch-and-carry tasks were developed as an early prototype of a modular integrated assistive system under the GIVING-A-HAND project. Figure 6.7 illustrates the domotic assistant and the local network of appliances, which consists of two standard appliances and the simple prototype of the fetch-and-carry robot appliance. The appliances are a microwave oven and a multifunctional device for preparing and cooking food. This network was developed primarily to enable the user to control these three appliances through a domotic assistant adapted to their use. Figure 6.8 illustrates the resulting prototype of the robot appliance and the next-generation prototype. The prototype consisted of a modified mobile base from Parallax and a Robot Oz 4-DoF arm. The mobile base was equipped with infrared sensors to avoid obstacles and a simple line-tracking sensor to permit structured and safe movement on the countertop. A distributed control architecture, consisting of two dedicated PIC microcontrollers (Basic Stamp boards from Parallax) that receive and send control and information signals to the mobile base and arm, implements inexpensive behavior-based protocols to reduce system control cost and overhead. The lack of standard protocols for accessing and modifying the functions of appliances necessitated the creation of special control interfaces. Control of the Bimby and the microwave via the domotic assistant was facilitated by dedicated control boards and specially designed interface boards. The control boards permit the appliances to be connected directly to the local network and their control panels to be driven via digital outputs. The interface cards converted the currents and voltages of the appliances to appropriate TTL signals. An applet on the domotic assistant communicates with a control card through a socket, using the IP address of the card and a predefined port; the applet permits the user to change the state of the outputs of the card. A wireless connection and control protocol enabled the robot to move freely on the countertop. A serial onboard wireless LAN bridge (Symbol CB1000) transferred the data coming from a serial RS232 connection onto a wireless TCP/IP network, and a wireless access point (Buffalo Air Connect) connected the bridge to the network. The lack of a standard protocol for sharing information between appliances emerged as a possible cause of integration difficulties, and special communication protocols were developed to connect our appliances.
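The applet-to-control-card link described above is an ordinary TCP socket. A minimal client sketch follows; the card's IP address, port number, and one-byte output-state message are assumptions for illustration, not the protocol actually implemented in the project.

```python
import socket

CARD_IP = "192.168.0.42"   # hypothetical address of the appliance control card
CARD_PORT = 5000           # hypothetical predefined port

def set_outputs(state_byte: int) -> None:
    """Open a socket to the control card and send a new digital-output state.

    Each bit of state_byte is assumed to drive one digital output wired to
    a button on the appliance's control panel.
    """
    with socket.create_connection((CARD_IP, CARD_PORT), timeout=2.0) as sock:
        sock.sendall(bytes([state_byte & 0xFF]))

# Example: "press" the output wired to the microwave start button (bit 0).
set_outputs(0b00000001)
```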
Fig. 6.7. Local network prototype in the kitchen
Fig. 6.8. A simple functional prototype of Addams Hand: a fetch-and-carry robot appliance for the kitchen
The prototype of the fetch-and-carry robot appliance gave insights into the mechanical, control, and cost issues involved in the design. In developing a lower-cost and more task-specific robot, we learned that we must often sacrifice performance, precision, and flexibility. In developing more functional robot appliances, we must manage trade-offs: reducing technical complexity and cost, balancing residual user ability and machine controllability, and increasing performance while minimizing invasiveness into the environment of use.
6.8 Summary and Conclusions

Our goal is to design personal robotic aids that are not only affordable and commercially viable but also of universal appeal and benefit to the increasing pool of potential disabled and elderly users. We proposed the development of robot appliances that are task-specific and part of a network in which they can interact with other information and robotic appliances, and we discussed the details of the GIVING-A-HAND system. Work continues on the design and development of both the feeding robot appliance and the fetch-and-carry robot appliance, as well as on the implementation of the local network. Implementing this network of aids permits us to examine the challenges inherent in integrating and controlling both robot appliances and standard appliances within one domotic network. A functional prototype of a domotic network, consisting of a prototype robotic appliance for fetch-and-carry tasks and standard appliances for the kitchen environment, revealed that the lack of standard protocols for sharing information and for modifying on-board functions was the main barrier to implementation of our proposed system of aids. The next steps of the work will be the implementation and clinical validation of two different prototypes of a novel robotic feeding appliance and the realization of a second-generation prototype of the assistive system with the fetch-and-carry robot.
Acknowledgments

This work was supported by a National Science Foundation-NATO Postdoctoral Fellowship (DGE-0107998) and by the core funds of the INAIL RTR Centre and the ARTS Lab of the Scuola Superiore Sant'Anna.
References

1. Casals A, Merchan R, Portell E (1999) CAPDI: a robotized kitchen for the disabled and elderly. In: Bühler C, Knops H (eds) Assistive Technology on the Threshold of the New Millennium. IOS Press
2. Dario P, Guglielmelli E, Allotta B (1996) Robotics in medicine. IEEE Robotics and Automation Society Magazine 3(3): 739–752
3. Dario P, Guglielmelli E, Laschi C, Teti G (1999) MOVAID: a personal robot in everyday life of disabled and elderly people. Technology and Disability 10(2): 77–93
4. Guglielmelli E, Dario P, Laschi C, Fontanelli R, Susani M, Verbeeck P, Gabus J (1996) Humans and technologies at home: from friendly appliances to robot interfaces. IEEE Int'l Workshop on Robot and Human Communication, pp 71–79
5. Hammel J, Hall K, Lees D, Leifer L, Van der Loos HFM, Perkash I, Crigler R (1995) Clinical evaluation of a desktop robotic assistant. Journal of Rehabilitation Research & Development 26: 1–16
6. Harwin WS, Rahman T, Foulds RA (1995) A review of design issues in rehabilitation robotics with reference to North American research. IEEE Transactions on Rehabilitation Engineering 3(1): 3–13
7. Laschi C, Guglielmelli E, Teti G, Dario P (1999) A modular approach to rehabilitation robotics. 2nd EUREL Workshop on Medical Robotics, Pisa, Italy, pp 85–89
8. Mahoney R (1997) Robotic products for rehabilitation: status and strategy. Proc. of the Int. Conf. on Rehabilitation Robotics (ICORR), Bath, UK, pp 12–22
9. Norman DA (1998) The Invisible Computer: why good products can fail, the personal computer is so complex, and information appliances are the solution. MIT Press, Cambridge, Massachusetts
10. Sebastiani F, Suppo C (2000) Analisi e sviluppo di meccanismi sotto-azionati per una protesi di arto superiore [Analysis and development of under-actuated mechanisms for an upper-limb prosthesis]. Scuola Superiore Sant'Anna, Pisa, Italy
11. Stranger CA, Anglin C, Harwin WS, et al (1994) Devices for assisting manipulation: a summary of user task priorities. IEEE Transactions on Rehabilitation Engineering 2(4): 256–265
12. The Neater Eater (http://www.michaeli.u-net.com/main.htm)
13. Topping M, Smith J (1999) The development of Handy 1: a robotic system to assist the severely disabled. Proceedings of the International Conference on Rehabilitation Robotics (ICORR 99), San Francisco, CA, pp 244–249
14. Van der Loos HFM (1995) VA/Stanford rehabilitation robotics research and development program: lessons learned in the application of robotics technology to the field of rehabilitation. IEEE Transactions on Rehabilitation Engineering 3(1): 46–55
7 Cooperative Welfare Robot System Using Hand Gesture Instructions Noriyuki Kawarazaki, Ichiro Hoya, Kazue Nishihara, and Tadashi Yoshidome
Abstract

This paper describes a cooperative work system in which manipulators and humans work together using hand gesture instructions. The goal of our system is for the manipulator to work with humans in the same working space according to instructions given by hand gestures. Our cooperative welfare robot system is composed of a manipulator, a PC, and trinocular stereovision hardware. Since the system has to recognize the position and posture of the hand in the three-dimensional workspace, we use trinocular stereovision hardware. The three-dimensional positions of the hand and object are obtained from range images. The gesture is recognized based on the length and width of the hand. We propose a new method in which the hand area is divided into two blocks in order to recognize the hand gesture rapidly. Experimental results show the effectiveness of our system.
7.1 Introduction

Robots have been used extensively in production settings such as factories. Expectations are now high for the development of intelligent robot systems that work cooperatively with human beings in daily life, medical treatment, and welfare (Fig. 7.1). Smooth interfacing between human beings and robots is essential if robots are to be operated by people in daily life. Anyone can operate a robot with ease by giving it instructions using gestures, just as people communicate with gestures. As the motion of the human hand naturally and viscerally expresses information such as task assignments, human intentions, and instructions, much can be learned by observing the motion of the human hand. This interaction has been the subject of extensive research in recent years. An intelligent manipulator system using tracking vision has been developed [1], a control algorithm for a service robot performing hand-over tasks was proposed [2], and a new user interface called "Active Interface" for interacting with human beings has been presented [3]. Human actions are utilized in human-robot interaction [4]. A method for recognizing head/hand gestures from a sequence of range images was presented, and algorithms for real-time visual recognition of human action sequences were developed [5, 6].
A human gesture recognition method using pattern space trajectories was presented [7], "interactive sensing" was proposed to let a robot find a human against a complex background [8], and a method using multiple color extraction, stereo tracking, and template matching was developed for recognizing human pointing actions [9]. Various types of image processors for robots have also been proposed [10], and a high-performance robot vision system implemented as a transputer-based vision system has been developed [11, 12]. This paper presents a cooperative welfare robot system using hand gesture instructions. The goal of our system is for the manipulator to work with humans in the same workspace according to instructions given by hand gestures. Since the system has to recognize the position and posture of the hand in the three-dimensional workspace, we use trinocular stereovision hardware. The three-dimensional positions of the hand and object are obtained from range images. Since recognizing the hand gesture by pattern matching of the whole hand is a very time-consuming process, we use characteristic dimensions of the hand. Moreover, we propose a new method in which the hand area is divided into two blocks in order to recognize the hand gesture rapidly. This paper is organized as follows. The concept of our cooperative robot system is presented in Sect. 7.2. The measurement of distance using stereo images is presented in Sect. 7.3. The detection of the hand and the target object is described in Sect. 7.4, and recognition of the hand gesture is presented in Sect. 7.5. Several experimental results are discussed in Sect. 7.6. Conclusions are given in Sect. 7.7.
Fig. 7.1. An intelligent robot system
7.2 Cooperative Robot System

Our cooperative welfare robot system is shown in Fig. 7.2. The system is composed of a manipulator, trinocular stereovision hardware, and a PC. The manipulator used here has six degrees of freedom of motion and a mechanical hand. Since the system has to recognize the position and posture of the hand in real time, we use trinocular stereovision hardware. The three-dimensional positions of the hand and object are calculated from range images obtained from the trinocular stereovision hardware. The goal of our system is for the manipulator to work with the human in the same working space according to the instructions of hand gestures. In our system, the operator gives hand gestures to the manipulator conversationally. For example, when the operator points at an object with the forefinger, the manipulator picks up the object and hands it over to the operator.
Fig. 7.2. Cooperative Robot System
7.3 Measurement of Distance Using Stereo Images

The trinocular stereovision hardware consists of a three-camera module that simultaneously obtains three images of the scene (Fig. 7.3). The system is able to determine the distance to an object in the scene. It calculates the amount of shift between the left image and the right image using the Sum of Absolute Differences (SAD) correlation method. The horizontal disparity d_h and the three-dimensional position (X_h, Y_h, Z_h) of the target pixel based on the horizontal baseline are given by

    d_h = x_l - x_r                          (7.1)
    X_h = b_h (x_l + x_r) / (2 d_h)          (7.2)
    Y_h = b_h (y_l + y_r) / (2 d_h)          (7.3)
    Z_h = b_h f / d_h                        (7.4)

where f is the focal length, b_h the horizontal baseline, (x_l, y_l) the position of the target pixel in the left image, and (x_r, y_r) the position of the target pixel in the right image. The three-dimensional position (X_v, Y_v, Z_v) of the target pixel based on the vertical baseline is calculated by the same procedure from the top image
and the right image. The actual three-dimensional position (X, Y, Z) of the target pixel is selected from (X_h, Y_h, Z_h) and (X_v, Y_v, Z_v) according to the correlation of the images.
Fig. 7.3. Geometric relationship between three images
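As a concrete check on the geometry above, here is a direct transcription of Eqs. (7.1)-(7.4) into Python. The pixel coordinates, focal length, and baseline in the example call are illustrative values only, not parameters of the actual hardware.

```python
def stereo_position(xl, yl, xr, yr, f, bh):
    """Compute the 3D position (Xh, Yh, Zh) of a target pixel from the
    horizontal stereo pair, following Eqs. (7.1)-(7.4).

    (xl, yl), (xr, yr): target pixel positions in the left and right images
    f: focal length (pixels), bh: horizontal baseline (output units).
    """
    dh = xl - xr                      # horizontal disparity, Eq. (7.1)
    if dh == 0:
        raise ValueError("zero disparity: point at infinity")
    Xh = bh * (xl + xr) / (2.0 * dh)  # Eq. (7.2)
    Yh = bh * (yl + yr) / (2.0 * dh)  # Eq. (7.3)
    Zh = bh * f / dh                  # Eq. (7.4)
    return Xh, Yh, Zh

# Illustrative values: 6 px disparity, f = 500 px, 100 mm baseline.
print(stereo_position(320, 240, 314, 240, f=500.0, bh=100.0))
```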
7.4 Detection of the Hand and the Target Object

7.4.1 Detection of the Hand Area Using the Color Image

First, the system has to detect the hand area in the image of the workspace. The hand area is detected based on the RGB pixel values of the flesh tint in the color image. The color image is digitized as 24-bit RGB pixel values, so that each element of RGB (red, green, and blue) has 8 bits, or 256 levels of brightness [13, 14]. We examined the RGB pixel values of the flesh tint of several people. In order to detect the hand area in the color image, we define the "RGB range" as follows:

    R: 96-177                        (7.5)
    G: 53-118                        (7.6)
    B: 49-109                        (7.7)
    |R-G| / |G-B| = 5.5-10.8         (7.8)
The RGB values are easily influenced by the lighting. Therefore, we also use the hue of the flesh tint in order to reduce the influence of the light. The transformation from RGB values to hue is

    H = tan^(-1) [ (0.7R - 0.59G - 0.11B) / (-0.3R - 0.59G + 0.89B) ]    (7.9)

We obtained the hue of the flesh tint experimentally and define the "hue range" as

    H: 105-135                       (7.10)
The flesh-tint area is detected roughly in the color image using the "hue range", and noise is then removed using the "RGB range".

7.4.2 Tracking of the Hand Using the CP

After the hand area is detected using the "RGB range" and "hue range" of the color image, we determine the center position of the hand, called the CP, in order to trace the hand. The position of each flesh-tint pixel is obtained from the stereo images. Since a human fist is approximately the size of a sphere of radius 40 mm, the system searches for the center of the sphere with the maximum density of flesh-tint pixels (Fig. 7.4). The center of this sphere is regarded as the CP of the hand. As shown in Fig. 7.5, the flesh-tint area of the hand is obtained from the "RGB range" and "hue range", and the hand is then located based on the CP. Once the CP is detected, the hand is traced by tracking the CP. A short code sketch of these detection steps follows Fig. 7.5.
Fig. 7.4. Pixels of flesh tint
Fig. 7.5. Detection of the hand
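A compact sketch (Python/NumPy) of the detection steps of Sects. 7.4.1 and 7.4.2. Combining the two threshold tests in one mask and the brute-force neighborhood search are simplifications for illustration; the original system applies the hue range first and uses the RGB range for noise removal.

```python
import numpy as np

def flesh_mask(rgb):
    """Boolean mask of flesh-tint pixels in an RGB image (H x W x 3),
    using the "RGB range" of Eqs. (7.5)-(7.7) and the "hue range" of
    Eq. (7.10)."""
    img = rgb.astype(np.float32)
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    in_rgb = ((R >= 96) & (R <= 177) &
              (G >= 53) & (G <= 118) &
              (B >= 49) & (B <= 109))
    # Hue as defined in Eq. (7.9); arctan2 resolves the quadrant so that
    # flesh tints land in the 105-135 degree band.
    H = np.degrees(np.arctan2(0.7 * R - 0.59 * G - 0.11 * B,
                              -0.3 * R - 0.59 * G + 0.89 * B))
    return in_rgb & (H >= 105) & (H <= 135)

def find_cp(points, radius=40.0):
    """Find the hand's center position (CP): the 3D point whose 40 mm
    neighborhood contains the most flesh-tint points.

    points: N x 3 array of 3D positions of flesh-tint pixels (mm).
    Brute force (O(N^2)) for clarity; a voxel grid would be faster.
    """
    best_idx, best_count = 0, -1
    for i, p in enumerate(points):
        count = int(np.sum(np.linalg.norm(points - p, axis=1) <= radius))
        if count > best_count:
            best_idx, best_count = i, count
    return points[best_idx]
```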
7.4.3 Detection of the Object Using Gesture Instruction

It is assumed that objects are placed on a table whose height is known. The system has to detect an object on the table when the operator points at it with the forefinger. The farthest point from the CP in the hand area, which is the tip of the forefinger, is called the FP. After detecting the FP, we define the pointing vector V directed from the CP to the FP. The pointing vector V intersects the surface of the table at the object position OP. The OP is calculated from the geometric relationship (Fig. 7.6); the equations for the OP (X, Y, Z) are

    X = C_x - (C_x - F_x)(Y - C_y) / (F_y - C_y)                               (7.11)
    Y = C_h - T_h                                                              (7.12)
    Z = C_z + (C_x - F_x)(Y - C_y)(F_z - C_z) / ((F_y - C_y)(C_x - F_x))       (7.13)

where CP = (C_x, C_y, C_z), FP = (F_x, F_y, F_z), C_h is the height of the camera from the floor, and T_h is the height of the table from the floor. Because the object lies around the OP on the table, the system searches around the OP in order to detect the object.

Fig. 7.6. Geometric relationship between CP, FP, and OP
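Under the coordinate conventions defined above, Eqs. (7.11)-(7.13) amount to intersecting the line through CP and FP with the table plane. A minimal sketch:

```python
def object_position(cp, fp, camera_height, table_height):
    """Intersect the pointing vector (CP -> FP) with the table plane to get
    the object position OP, following Eqs. (7.11)-(7.13).

    cp, fp: (x, y, z) of the hand center and fingertip, with y measured
    downward from the camera (so the table plane is at y = C_h - T_h).
    """
    cx, cy, cz = cp
    fx, fy, fz = fp
    y = camera_height - table_height   # Eq. (7.12)
    if fy == cy:
        raise ValueError("pointing vector is parallel to the table")
    t = (y - cy) / (fy - cy)           # parameter along the CP -> FP line
    x = cx - (cx - fx) * t             # Eq. (7.11)
    # Eq. (7.13): the common factor (C_x - F_x) in the printed equation
    # cancels, leaving the usual line-plane intersection for z.
    z = cz + (fz - cz) * t
    return x, y, z
```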
7.5 Recognition of the Hand Gesture

As shown in Fig. 7.7, we define several instructions using hand configurations and make the manipulator move in accordance with these hand gesture instructions. For example, when the operator opens the hand upward (Inst. 2), the manipulator hands the object to the operator.
Fig. 7.7. Instructions of hand gestures (Inst. 1: grasp; Inst. 2: deliver the object; Inst. 3: approach; Inst. 4: stand by)
Since recognizing the hand gesture by pattern matching of the whole hand configuration is a very time-consuming process, we use characteristic dimensions of the hand. In order to recognize the hand configuration rapidly, we divide the hand area into two blocks: the hand block and the finger block (Fig. 7.8).
Fig. 7.8. Block division of the hand area
The finger block is defined as the flesh-tint area that is more than 60 mm distant from the CP of the hand. As shown in Fig. 7.8, we define three characteristic dimensions (A, B, and C) of the hand in order to recognize the hand gesture rapidly. As shown in Fig. 7.9, hand gestures are classified by branching on these dimensions. The length A is the distance from the CP to the FP. If A is less than 60 mm, we consider that the operator has closed the hand, and the gesture means instruction 1. If A is more than 60 mm, we calculate the length B, the maximum width of the hand block. If B is less than 60 mm, we consider that the operator has opened the hand upward, and the gesture means instruction 2. If B is more than 60 mm, we calculate the length C, the maximum width of the finger block. If C is less than 30 mm,
we consider that the operator is pointing at the object with the forefinger, and the gesture means instruction 3; otherwise, the gesture means instruction 4. As the width of a single forefinger is less than 30 mm while several extended fingers are wider, we set the threshold at 30 mm to distinguish instruction 3 from instruction 4. Because we use only the three characteristic dimensions rather than the whole hand configuration, the hand gesture is determined rapidly; a compact code sketch of this decision flow is given after Fig. 7.9.
Fig. 7.9. General flow of the recognition of the hand gesture
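The decision flow of Fig. 7.9 reduces to three threshold tests; a direct transcription into code follows (instruction names as in Fig. 7.7):

```python
def classify_gesture(a, b, c):
    """Classify the hand gesture from the three characteristic dimensions
    (in mm), following the decision flow of Fig. 7.9.

    a: distance from the CP to the FP
    b: maximum width of the hand block
    c: maximum width of the finger block
    Returns the instruction number (1: grasp, 2: deliver the object,
    3: approach, 4: stand by).
    """
    if a < 60:
        return 1   # closed hand
    if b < 60:
        return 2   # hand opened upward
    if c < 30:
        return 3   # pointing with the forefinger
    return 4       # open hand: stand by
```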
7.6 Experimental Results

We conducted several experiments in order to clarify the effectiveness of our system. The sequence of hand gesture instructions is shown in Fig. 7.10. In our system, the operator gives gesture instructions to the manipulator conversationally: the manipulator waits for a gesture instruction and then acts on it. The experimental results are shown in Fig. 7.11. The manipulator waits for a hand gesture instruction at the initial position (Fig. 7.11a). After the operator points at the target object with the forefinger, the manipulator moves the mechanical hand over the object (Fig. 7.11b). As shown in Fig. 7.11c, the manipulator grasps the object according to instruction 1. Figure 7.11d shows the manipulator picking the object up and handing it to the operator. The manipulator can move according to hand gesture instructions in real time.
Fig. 7.10. Sequence of hand gesture instruction
Fig. 7.11. Experimental results of gesture control of the robot. a Initial position (instruction 4); b the operator points at the target object with the forefinger (instruction 3) and the robotic gripper moves over the object; c the posture of the operator's hand corresponds to instruction 1 and the robotic gripper grasps the object; d the manipulator delivers the object to the operator's hand, whose configuration corresponds to instruction 2
7.7 Conclusions

In this paper, we proposed a cooperative welfare robot system using hand gesture instructions. In our system, the hand gesture is recognized and the manipulator acts on it. The hand area is detected accurately based on the "RGB range" and "hue range" of the color image. To recognize the hand gesture, we use characteristic dimensions of the hand, and we proposed a method in which the hand area is divided into two blocks so that the hand gesture can be recognized rapidly. The operator gives gesture instructions to the manipulator conversationally. The effectiveness of our system was demonstrated by several experimental results. In future work, we will define further kinds of gesture instructions for practical applications of our system.
References

1. Kashiwagi N, Kawarazaki N, Nishihara K (1998) Manipulator work system using vision. Proc. of the IEEE Int. Workshop on Robot and Human Communication, pp 251–255
2. Agah A, Tanie K (1997) Human interaction with a service robot: mobile-manipulator handing over an object to a human. Proc. of the IEEE Int. Conf. on Robotics and Automation, pp 575–580
3. Yamasaki N, Anzai Y (1995) Active interface for human-robot interaction. Proc. of the IEEE Int. Conf. on Robotics and Automation, pp 3103–3109
4. Pavlovic VI, Sharma R, Huang TS (1997) Visual interpretation of hand gestures for human-computer interaction: a review. IEEE Trans. on Pattern Analysis and Machine Intelligence 19(7): 677–695
5. Kuniyoshi Y, Inaba M, Inoue H (1992) Seeing, understanding and doing human task. Proc. of the IEEE Int. Conf. on Robotics and Automation, pp 2–9
6. Wren CR, Azarbayejani A, Darrell T, Pentland AP (1997) Pfinder: real-time tracking of the human body. IEEE Trans. on Pattern Analysis and Machine Intelligence 19(7): 780–785
7. Nagaya S, Seki S, Oka R (1996) Pattern space trajectory for gesture spotting recognition. Proc. of the 3rd Japan-France Congress on Mechatronics, pp 208–211
8. Inamura T, Inaba M, Inoue H (1998) Finding human based on the interactive sensing. Proc. of Intelligent Autonomous Systems, pp 86–92
9. Mori T, Yokokawa T, Sato T (1998) Recognition of human pointing action based on color extraction and stereo tracking. Proc. of Intelligent Autonomous Systems, pp 93–100
10. Moribe H, Nakano M, Kuno T, Hasegawa J (1987) Image preprocessor of model-based vision system for assembly robots. Proc. of the IEEE Int. Conf. on Robotics and Automation, pp 366–371
11. Inoue H, Tachikawa T, Inaba M (1992) Robot vision system with a correlation chip for real-time tracking, optical flow and depth map generation. Proc. of the IEEE Int. Conf. on Robotics and Automation, pp 1621–1626
12. Morita T (1999) Tracking vision system for real-time motion analysis. Advanced Robotics 12(6): 609–617
13. Sonka M, Hlavac V, Boyle R (1999) Image processing, analysis, and machine vision. International Thomson Publishing
14. Russ JC (1999) The image processing handbook. CRC Press, in cooperation with IEEE Press
8 Selectable Operating Interfaces of the Meal-Assistance Device “My Spoon” Ryoji Soyama, Sumio Ishii, and Azuma Fukase
8.1 Introduction

Eating is a basic need for humans. In fact, it is so basic that most people are not aware of the motions that they use while eating. For some physically disabled people, eating can be laborious or a bit embarrassing, since they must rely on a caretaker to feed them. "My Spoon" is a meal-assistance device designed to assist these people during meals. Its basic concept is to allow a user to "eat a meal freely, at any pace and choosing favorite foods in any desired order." By using My Spoon, it is now possible to eat together with friends and family. After several trials and prototypes [3, 4, 5], My Spoon has been marketed in Japan since May 2002.
8.2 Meal-Assistance Device "My Spoon"

As shown in Fig. 8.1, My Spoon comprises an operating interface adjustable to the user's condition, a 5-DOF manipulator arm, a 1-DOF end-effector (spoon and fork), and a dedicated meal tray sectioned into four compartments [1, 2]. The small size of the robot system assumes domestic usage: the robot body weighs approximately 6 kg, measures 370 (L) × 280 (W) × 270 (H) mm, and consumes approximately 30 W. The user selects a body part that can be moved relatively freely and a control interface suitable for that part. Through the interface, the user can move the manipulator to any desired position; when in position, the spoon and fork grasp the food and relay it to the mouth of the user. Anticipated users are those who cannot move their hands freely and need help while eating. Current users include those who have spinal cord injuries or muscular dystrophy but can
• move their head freely and take in food brought to the mouth
• swallow normally
• understand how to operate the device
Fig. 8.1. Meal-Assistance Device “My Spoon”
8.3 Operating Interface

Different users have different physical disabilities. The parts of the body with which the user can control the machine vary greatly, as can the ability to understand the operation of the machine. Moreover, some users may become tired if frequent manipulations are required, while others may become weary of eating if a meal takes an extended amount of time. The operating interface must therefore be versatile under these conditions. In short, it is necessary to reduce the number of control operations while shortening the time required to eat. Although reducing the number of control operations deprives the user of some eating options, we believe that we have made the interface sufficiently flexible to serve the vast majority of potential users. As a result of the field trials, we adopted a chin-controlled joystick as the standard operating interface, with an optional reinforced joystick or push-button as a replacement for the chin-controlled joystick (Fig. 8.2).
8.4 Basic Operation

After initial setup, two types of commands, compartment selection and position adjustment, are needed to operate My Spoon. For general use, a user must be able to understand both commands: the user first selects the desired compartment and then adjusts the position of the manipulator within that compartment.
Fig. 8.2. Some examples of robot interface. a Standard chin-controlled joystick, b Optional reinforced joystick, c Optional push button
8.4.1 Setup

Initial setup is important to ensure safe operation. It also enables My Spoon to accommodate different users with different physical disabilities. Setup entails
• registering the "home" position of the manipulator in front of the mouth of the user
• adjusting the joystick or push-button sensitivity to one of three preset values
• selecting a spoon and fork from two available sizes (Fig. 8.3); these exclusive attachments are designed for quick and easy interchange
• adjusting the placement of the cup stand (Fig. 8.4) for easy access.
Fig. 8.3. Variable-size spoon and fork sets
Fig. 8.4. Cup stand
8.4.2 Compartment Selection Command Set

The included meal tray is partitioned into four rectangular compartments. The compartment selection command selects the compartment containing the item that the user would like to eat: pushing forward selects the upper-left compartment; pushing right, the upper-right; pulling back, the lower-right; and pushing left, the lower-left compartment. After the compartment is selected, the manipulator automatically moves to the left-most edge of the compartment while rotating so that the spoon and fork are positioned perpendicular to the tray. The manipulator then starts to descend into the tray (Fig. 8.5).

8.4.3 Position Adjustment Command Set

This command set becomes effective automatically after compartment selection finishes. The position of the spoon and fork can now be finely adjusted by moving the joystick in the desired direction. However, an input in the left direction signifies the end of adjustment, upon which the fork slides down and grasps the food (Fig. 8.6). After grasping the food, the manipulator ascends, rotates to a position parallel with the tray, and moves back toward the home position. A short sketch of this two-step command interpretation follows Fig. 8.6.
Fig. 8.5. Compartment selection sequence. a Compartment selection using joystick, b Manipulator movement with rotation, c Movement toward selected compartment, d Final position
Fig. 8.6. Position adjustment and grasping sequence. a Manipulator position adjustment, b fork descends, c food is grasped, d manipulator returns (with food)
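A minimal sketch of the two-step command interpretation described in Sects. 8.4.2 and 8.4.3. The direction names and data structures are hypothetical illustrations, not the actual My Spoon firmware interface.

```python
# First joystick deflection: select one of the four tray compartments.
COMPARTMENTS = {
    "forward": "upper-left",
    "right":   "upper-right",
    "back":    "lower-right",
    "left":    "lower-left",
}

def select_compartment(joystick_input: str) -> str:
    """Map the first joystick deflection to a tray compartment."""
    return COMPARTMENTS[joystick_input]

def adjust_position(joystick_input: str, position):
    """Finely adjust the spoon/fork position within the compartment.
    A 'left' input ends adjustment and triggers the grasp.
    Returns (new_position, grasp_now)."""
    if joystick_input == "left":
        return position, True
    moves = {"forward": (0, 1), "back": (0, -1), "right": (1, 0)}
    dx, dy = moves[joystick_input]
    return (position[0] + dx, position[1] + dy), False

# Example: select with a forward push, nudge right, then grasp.
comp = select_compartment("forward")
pos, grasp = adjust_position("right", (0, 0))
pos, grasp = adjust_position("left", pos)
print(comp, pos, grasp)
```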
8.5 Control Modes

The basic operation allows a user to eat using only a few joystick inputs. Still, some users may find it troublesome to repeat the same operations every time they eat, mentally impaired users may find it difficult to understand the operation of the system, and others may find it difficult to finely control the arm due to other disabilities. Therefore, there are two other, simpler control modes, each requiring less user input, for a total of three control modes. The first mode gives the user full control of timing, compartment, and food selection. The second gives full control of timing and compartment selection. The third mode allows control of timing only. In each mode, the user can freely select when to eat, although there will be restrictions on food selection.

8.5.1 Manual Mode

This is the default operating mode, requiring both compartment selection and position adjustment.
1. One joystick operation is required to select the tray compartment.
2. Several operations are required to move the spoon and fork to the desired position. One additional operation is necessary to grasp the food.
3. The manipulator automatically ascends, rotates to a position parallel with the tray, and moves toward the mouth.
This mode provides the most flexibility in eating. However, due to the number of operations required, the push-button cannot be used as the operating interface for this mode.

8.5.2 Semi-automatic Mode

In this mode, a tray compartment is selected with the joystick as in manual mode. However, position adjustment within the selected compartment is not performed, and the foods are grasped one after another in a predetermined sequence (Fig. 8.7). This mode of operation is suitable for users with a relatively low level of manual dexterity. Due to the number of operations required, a joystick must still be used as the operating interface.
1. One joystick operation is required to select the tray compartment.
2. Foods are grasped in a predetermined sequence.
3. The manipulator automatically ascends, rotates to a position parallel with the tray, and moves toward the mouth.
Fig. 8.7. Predetermined sequence and position for food selection within a compartment
Fig. 8.8. Predetermined sequence for compartment selection
8.5.3 Automatic Mode

In this mode, the user specifies only when to pick up food. As in semi-automatic mode, foods within a compartment are selected automatically; in addition, compartment selection is not available, and compartments are visited in a predetermined sequence (Fig. 8.8). A user operation consists only of a single joystick movement in any direction or a push of an optional button (Fig. 8.2). The user cannot select the desired food.
1. One joystick or push-button operation is required to initiate motion.
2. The manipulator moves to a compartment in a predetermined sequence.
3. Foods in the compartment are grasped in a predetermined sequence.
4. The manipulator automatically ascends, rotates to a position parallel with the tray, and moves toward the mouth.
In automatic and semi-automatic modes, the manipulator may return to the user without any food, since it moves in a predetermined sequence. Therefore, in these modes the spoon "glides" slightly above the tray surface for a few centimeters before grasping, to avoid returning to the user with an empty spoon.
8.6 Future Tasks

While the current My Spoon functions well, there are still several areas that can be improved, namely enhancing the accuracy of food grasping and preventing an empty spoon from returning to the user. To add this functionality, image-processing technology may be incorporated for sensing and automation.

8.6.1 Food Recognition Using Color Image Processing

The purpose of adding color image processing to My Spoon is to identify the position and distribution of food in the food tray. This will simplify the operation, making the device more convenient to use.

8.6.1.1 Fundamental Image Processing

Since the colors and shapes of food encompass a very wide range, accurately extracting food items is a difficult problem. Although My Spoon is mainly used indoors, there are still many different kinds of light sources, and the lighting may change during operation. By using the color of the bottom of the lower-right compartment of the meal tray as the default tray color, thresholds for further image processing can be set dynamically.

8.6.1.2 Color Sample Extraction and Threshold Calculation

Although there are many ways to derive thresholds from a color sample, we use hue and saturation from an HSV input for robustness against external environmental changes. Mapping hue on the x-axis and saturation on the y-axis, the hue-saturation distribution of the bottom of the meal tray is shown in Fig. 8.9.
Fig. 8.9. Hue - Saturation Distribution
Fig. 8.10. Meal tray image. a Input image, b the same image after processing
To determine which areas of the tray contain food, the hue and saturation of each pixel are computed. If the hue and saturation of a pixel are within the bounds of the hue-saturation distribution of the meal tray, the pixel is classified as meal tray; if they fall outside those bounds, the pixel is classified as food. An input image and the same image after processing are shown in Fig. 8.10.
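A short sketch of this classification (Python with OpenCV). Reducing the sampled hue-saturation distribution to its bounding box is a simplification for illustration; the actual system may use the full distribution of Fig. 8.9.

```python
import numpy as np
import cv2

def classify_food(image_bgr, tray_sample_region):
    """Classify each pixel as tray or food by testing its hue/saturation
    against the distribution sampled from the bottom of the lower-right
    compartment (the default tray color).

    tray_sample_region: (y0, y1, x0, x1) window over the empty tray bottom.
    Returns a boolean mask that is True where the pixel is classified as food.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    h, s = hsv[..., 0].astype(int), hsv[..., 1].astype(int)

    y0, y1, x0, x1 = tray_sample_region
    tray_h, tray_s = h[y0:y1, x0:x1], s[y0:y1, x0:x1]

    # Dynamic thresholds: the bounding box of the sample's
    # hue-saturation distribution.
    h_lo, h_hi = tray_h.min(), tray_h.max()
    s_lo, s_hi = tray_s.min(), tray_s.max()

    is_tray = (h >= h_lo) & (h <= h_hi) & (s >= s_lo) & (s <= s_hi)
    return ~is_tray
```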
8.6.2 Improvements in Operation

8.6.2.1 Preventing an Empty Spoon from Returning to the User

In automatic and semi-automatic modes, each compartment is subdivided into nine partitions, and My Spoon grasps food from each compartment in a predetermined sequence. By using image processing to determine which partitions are empty, new sequences can be generated dynamically to avoid unnecessary movements in those partitions. In Fig. 8.11, partitions 1, 6, and 8 of the compartment have been identified as containing food, so only three operations are required to grasp all of the food; a sketch of this sequence generation follows Fig. 8.11.

8.6.2.2 Improving the Usability of Manual Mode

In manual mode, the spoon and fork position must be accurately adjusted. By determining food locations using image processing, a directional input from the joystick will allow My Spoon to automatically find the nearest food item in that direction. By adding a pose calculation after the binarization routine, the probable food locations can be labeled, allowing calculation of each probable center of mass.
Fig. 8.11. Image of a single compartment. a Input image, b image after processing, c food locations (o: found, ×: not found)
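The dynamic sequence generation of Sect. 8.6.2.1 is a simple filter over the predetermined order once the occupancy of the nine partitions is known. A minimal sketch:

```python
def grasp_sequence(occupied, default_order=range(1, 10)):
    """Generate a grasping sequence that skips empty partitions.

    occupied: set of partition numbers (1-9) found to contain food by the
    image-processing step; default_order: the predetermined sequence.
    """
    return [p for p in default_order if p in occupied]

# For the compartment of Fig. 8.11 (food in partitions 1, 6, and 8),
# only three grasp operations remain.
print(grasp_sequence({1, 6, 8}))   # -> [1, 6, 8]
```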
If several food items are in close proximity, there is a chance that they may be mislabeled as one food item instead of several. We are considering edge detection and the color schemes within a label as ways to overcome this problem; however, this has not been implemented yet.
8.7 Conclusion

We have developed "My Spoon", a meal-assistance device commercially available in Japan. My Spoon is operated by combining several motions, and the system has three modes of operation, each with a different level of user control. Although the standard control interface is a chin-controlled joystick, optional control interfaces such as a reinforced joystick and a push button are available. By combining these options, the system can be customized to fit a wide variety of physical conditions. Future tasks include enhancing the accuracy of food grasping and preventing an empty spoon from returning to the user. With this in mind, we are exploring the use of image-processing technology for sensing and automation.
References

1. Ishii S et al. (2002) Development of the meal assistance robot. In: Proceedings of the 17th RESJA Annual Conference, pp 443–446
2. Ishii S et al. (2001) Safety of the meal assistance robot. In: The Journal of Japanese Society for Medical and Biological Engineering, p 200
3. Ishii S et al. (1996) A meal assistance robot for people with quadriplegia - 4th report. In: Proceedings of the 11th RESJA Annual Conference, pp 351–356
4. Tanaka S et al. (1995) A meal assistance robot for people with quadriplegia. In: Proceedings of the 10th RESJA Annual Conference, pp 311–314
5. Ishii S et al. (1992) Meal assistance robot as a device for people with quadriplegia. In: Proceedings of the 7th RESJA Annual Conference, pp 79–82
9 Enhancing the Usability of the MANUS Manipulator by Using Visual Servoing A.H.G. Versluis, B.J.F. Driessen, and J.A. van Woerden
Abstract

Controlling MANUS, a wheelchair-mounted manipulator, can put a high cognitive load on the end user. Visual servoing techniques can help reduce this load. This paper describes a visual servoing system that can assist the end user in carrying out activities of daily living. No a-priori information about the object to be manipulated is used (allowing operation in unstructured environments), and the system is designed to operate in direct collaboration with the end user (collaborative control).
9.1 Introduction

MANUS is a wheelchair-mounted manipulator meant to assist severely handicapped people in carrying out activities of daily living, such as eating, drinking, and scratching. The manipulator has six rotational degrees of freedom for positioning and orienting the gripper, one degree of freedom for opening and closing the gripper, and one (optional) degree of freedom for lifting the entire manipulator. MANUS is designed to operate in an unstructured environment, where the user is responsible for driving the robot to the required position (telemanipulation). Compared with industrial robots, MANUS has low accuracy and low repeatability¹. These deficiencies have to be compensated by the end user, who guides the system to the required position based on visual observations (dotted lines in Fig. 9.1). Although users are able to manipulate many objects using this control architecture, the cognitive load on the end user while accomplishing a task can be serious, especially when accurate motions are required. In this research, we investigate how a camera system mounted on the gripper of the MANUS (eye-in-hand configuration) can help the end user carry out a given task. The control of a robot by means of a camera, visual servoing, is a widely explored field of science. As MANUS has to operate in a very unstructured environment with little or no knowledge of the objects, building a fully autonomous system is very difficult.
¹ This results from the high friction and backlash caused by the design choice to put the motors and gearbox in the main shaft of the manipulator (keeping the size and weight of the MANUS low).
Fig. 9.1. Traditional MANUS control scheme
The MANUS philosophy, moreover, is that the end user should always be 'in control'. Especially for the latter reason, the intention is not to build a fully autonomous system, but rather to incorporate the user into the control loop. The research is therefore directed not only toward building a vision system that can produce useful control in unstructured environments, but also toward how the user and the visual servoing can work together to execute a task. The research field that combines user input with machine control is called "collaborative control" and was first used in the teleoperation of vehicles, for example in [1], which describes a robot vehicle that engages in an active dialogue with the user to determine the course of action. The application of collaborative control to MANUS is quite new; see [2], in which the user also has to interact with the robot through a dialogue. In our approach, the user can input commands to the robot directly, and the vision system merely assists the movement in an intuitive way. The control scheme needed for this is depicted in Fig. 9.2. In this paper, several approaches to visual servoing are considered and a solution for integrating visual servoing into the MANUS controller is proposed. The performance of the visual servo is demonstrated in a test case in which the objective is to pick up a colored beaker.
Fig. 9.2. Vision-based control scheme (blocks: User, Vision, Controller, Manus, Objects)
9.2 Visual Servoing

Two widely used architectures for visual servoing are image-based visual servoing (IBVS) and position-based visual servoing (PBVS) (see Fig. 9.3) [1–5]. In PBVS, a 3D world model is constructed by the vision system. This 3D world model is used by a path planner, which guides the arm to the desired position. Calculating a 3D world model can be carried out using different techniques (stereo vision, laser triangulation, usage of a priori knowledge about the objects, etc.). These techniques are not preferred, since they need extra hardware or explicit a priori information (which is in most cases not available in unstructured environments). In IBVS, the required position of the manipulator is defined in terms of image features (e.g. the required locations of the corners of a cube). The visual servo continuously calculates the actual values of these features and computes a correcting control action in feature space. Feature errors are translated to robot co-ordinates (defined in Cartesian space or joint space). This requires the calculation of the inverse of an image Jacobian (Ji), defined in the following equation:
$$df = J_i({}^c x_o)\, d\,{}^c x_o \qquad (9.1)$$

where $f$ represents the feature vector and ${}^c x_o$ the pose of the object with respect to the camera frame.
Fig. 9.3. Position-based visual servoing (PBVS, upper) vs. image-based visual servoing (IBVS, lower)
9.2.1 Vision Aspects of the Visual Servoing

As stated earlier, the MANUS has to be able to function in unstructured environments and with various objects, putting high constraints on the vision system [6]. To reduce the complexity of these constraints, the objects can be equipped with markers. Although this approach has proven to work well [7], it limits the versatility of the system. Other examples, in which MANUS cooperates with the user, can be found in [8]. Here, the MANUS functions in conjunction with an automated wheelchair. For each object that is to be manipulated, a 3D model is formed using several shots from different angles. An operator is asked to segment and classify the object; in addition, he is asked to supply a gripping strategy. The main differences with our approach are that the operator is not part of the control loop and that a separate learning step is necessary. Also, all DOF's are controlled, whereas the essence of our approach is that DOF's are distributed between vision and user.

Our visual servoing system is used in two manners. When MANUS is far away from an object, only generic image features are used (e.g. size of the object, position of the center of gravity, etc.). The visual servo is responsible for driving the gripper close to the object to be manipulated. When the manipulator is 'close' to the object, either the end-user takes over the control of the visually servoed degrees of freedom, or a second visual servoing algorithm is activated for fine-positioning the gripper. For this, a priori information can be used to identify the object and the position of the gripper with respect to the object. Only the first manner of use is described in this paper.
9.3 Control Architecture

The control system must support the combination of autonomous control (via visual servoing) and direct user control, which may be defined in different co-ordinate systems. Figure 9.4 shows the scheme used. Whenever a user wants to execute a certain task, a task scheduler selects a set of image features from a database. Different features may be selected during different stages of the task. The features are compared with the measured features, resulting in a feature error. The inverse image Jacobian Ji⁻¹ translates feature errors to Cartesian errors (with respect to the camera frame). These errors are multiplied by a selection matrix S, which is a diagonal matrix of ones and zeros. A one in element Sii implies that the i-th Cartesian DOF is actively used for visual servoing. In parallel, the end-user may drive the robot in his/her preferred co-ordinate system. The specified user input is transformed to the camera frame too, and added to the output of the visual controller. The controller uses the combined signals to control the robot. This architecture has the advantage that tasks need not be carried out completely by the visual controller. Instead, it is possible to merge user inputs with
9 Enhancing the Usability of the MANUS Manipulator by Using Visual Servoing
169
visual servo inputs, allowing some DOF's to be controlled by the user, and some DOF's to be controlled by the visual controller. Note that when the entire selection matrix S is set to 0, the traditional MANUS controller is realized.
Fig. 9.4. Visual control architecture
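To make the merging step concrete, the following sketch shows one control cycle of this architecture. It is our illustration, not code from the MANUS controller; the function names, dimensions and gain value are assumptions:

```python
import numpy as np

def control_cycle(J_inv, f_des, f_meas, user_cmd_cam, S, gain=0.5):
    """One cycle of the combined visual/user controller (illustrative sketch).

    J_inv        -- inverse image Jacobian (Cartesian DOFs x features)
    f_des/f_meas -- desired and measured feature vectors
    user_cmd_cam -- user command already transformed to the camera frame
    S            -- diagonal 0/1 selection matrix over the Cartesian DOFs
    """
    df = f_des - f_meas                # feature error
    dx_vision = gain * (J_inv @ df)    # Cartesian error in the camera frame
    # S picks the DOFs driven by vision; the user's command is added on top.
    # With S = 0 this degenerates to the traditional MANUS control scheme.
    return S @ dx_vision + user_cmd_cam
```

Setting individual diagonal entries of S then distributes the DOF's between vision and user exactly as described above.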
9.4 Vision System

9.4.1 Theory

In the first stage of task execution, we wish to rely on generic features, applicable to a wide range of objects. These features should supply servoing information, i.e. information that can be used to control the selected DOF's of the manipulator. Examples of generic features are the center of mass of an object, the size of an object, or the angle of the 'long axis' of an object. Since the system operates in unstructured environments with changing illumination, the quality of the extracted features will vary strongly. In order to cope with this, confidence measures are introduced, which measure the reliability of the calculated features. During control, the features with the highest confidence measures can be selected. Note that the number of features must be at least equal to the number of actively controlled DOF's. If no alternative features can be selected, the visual servo must stop, and the end-user will be informed (e.g. through an audio signal).
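A minimal sketch of this confidence-based selection rule is given below (our illustration; the threshold value and the data layout are assumptions, not taken from the chapter):

```python
def select_features(candidates, n_dof, conf_min=0.5):
    """Pick the most reliable features for servoing.

    candidates -- list of (name, value, confidence) tuples
    n_dof      -- number of actively controlled Cartesian DOFs
    Returns the chosen features, or None when too few features are
    trustworthy, in which case the servo must stop and the end-user
    be informed (e.g. through an audio signal).
    """
    usable = sorted((c for c in candidates if c[2] >= conf_min),
                    key=lambda c: c[2], reverse=True)
    if len(usable) < n_dof:
        return None
    return usable[:n_dof]
```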
9.4.2 Implementation

The vision system is equipped with a low-cost color camera. Wide-angle lenses were used, preventing failure of the feature calculation at close proximity. Occlusion is dealt with by using the confidence measures (Hu moments and the number of pixels on the border). For segmentation of an object, normalized color information is used, providing brightness-independent color segmentation. From the segmented image, several features are calculated. For servoing in the ${}^c x$ and ${}^c y$ directions, the center of mass of the pixels belonging to the object is calculated by taking the median of the pixel coordinates. This is more robust than the mean of the coordinates, as can be seen in Fig. 9.5. A feature for the roll is obtained by calculating the center of mass for both the upper and lower half of the object. The angle of the line through these two points with the y-axis is used as a measure of the roll. This way, the system tends towards a symmetric image in which long objects are aligned with the y-axis. More than one possibility exists to use the features for visual servoing. For example, the center of mass can be used for controlling the ${}^c x$ and ${}^c y$ of the gripper, or for controlling the yaw and pitch orientation angles. Naturally, this requires a proper selection matrix S and image Jacobian Ji.
Fig. 9.5. Calculated features: (a) the cup; (b) centers of mass using the median (light gray, more robust) and the mean (white); (c) centers of mass in case of a segmentation error (gray square); (d) roll feature, cup divided into two halves
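The segmentation and the robust center-of-mass feature can be sketched as follows (our reading of the description above; the tolerance value and chromaticity encoding are assumptions):

```python
import numpy as np

def object_center(rgb, target_rg, tol=0.08):
    """Brightness-independent segmentation and robust center of mass.

    rgb       -- HxWx3 image array
    target_rg -- normalized (r, g) chromaticity of the object color
    """
    s = rgb.sum(axis=2, keepdims=True).astype(float) + 1e-6
    rg = (rgb / s)[..., :2]                     # normalized color
    mask = np.linalg.norm(rg - np.asarray(target_rg), axis=2) < tol
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                             # segmentation failed
    # the median is more robust to segmentation errors than the mean
    return float(np.median(xs)), float(np.median(ys))
```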
9.5 Stability

Our setup has some special consequences for the stability of the system. Although the relationship between the position of the object with respect to the camera and the features in the image is linearized through the Jacobian, the Jacobian itself depends on ${}^c x_o$. This means that the 'gain' between one pose parameter and its feature is influenced by other pose parameters. For instance, the center of mass appears to move faster in the camera image when an object is close. This is caused by the perspective of the camera. According to the well-known pinhole model, the projection Jacobian has the following form:
$$\begin{pmatrix} \Delta {}^c x_i \\ \Delta {}^c y_i \end{pmatrix} = J_{xyz} \cdot \begin{pmatrix} \Delta {}^c x_o \\ \Delta {}^c y_o \end{pmatrix} = \begin{pmatrix} \dfrac{foc_x}{{}^c z_o} & 0 \\ 0 & \dfrac{foc_y}{{}^c z_o} \end{pmatrix} \begin{pmatrix} \Delta {}^c x_o \\ \Delta {}^c y_o \end{pmatrix} \qquad (9.2)$$

where $foc_x$ and $foc_y$ represent the focal distances in the x and y directions, ${}^c x_o$ and ${}^c y_o$ the position of the object with respect to the camera frame, and ${}^c x_i$ and ${}^c y_i$ the corresponding point (pixel location) in the camera image. The Jacobian has large diagonal values for smaller values of ${}^c z_o$, which may result in instability. Whilst this is a problem in all IBVS setups, here it cannot be compensated, because not all pose parameters, such as ${}^c z_o$, are known. We solved this problem by considering the possible cases of cross coupling and tuning the gain to a stable system in the worst case of this cross coupling. The system should then be stable in all other cases. Additionally, a constraint may be imposed on a DOF to put a limit on this worst case. For the aforementioned example of the center of mass, this means that the movement is constrained to a minimum Z, and the gain is tuned to a stable system for this Zmin.
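A sketch of this worst-case tuning rule, under our own simplifying assumption that the loop gain should stay below a fixed margin, could look like this:

```python
import numpy as np

def gain_for_worst_case(foc_x, foc_y, z_min, margin=0.5):
    """Servo gain tuned at the worst-case (closest) depth z_min.

    The pinhole Jacobian scales with 1/z, so its diagonal entries are
    largest at z_min; a gain that is stable there remains stable for
    all larger distances. `margin` is a hypothetical loop-gain bound,
    not a value from the chapter.
    """
    J_worst = np.diag([foc_x / z_min, foc_y / z_min])  # Eq. 9.2 at z_min
    return margin / np.max(np.diag(J_worst))
```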
9.6 Experiments

Several experiments were carried out using different selection matrices (and consequently different Jacobians). The task was to pick up a colored beaker. The experiments served two goals:
1. to investigate the stability of the proposed visual control architecture;
2. to investigate the usability issues of the combined visual/user control system.
The results of one typical experiment are shown in Figs. 9.6 and 9.7. In this experiment, the user controlled the Z-direction (i.e. the distance between the gripper and the beaker) manually, whilst the visual servo controlled the X, Y and roll DOF's. The yaw and pitch angles remained constant.
Figure 9.6 shows the pose of the MANUS during task execution, calculated from the motor angles. As explained earlier, the MANUS has a significant amount of backlash. This means that the position of the motors differs from the actual pose of the gripper; consequently, a moving motor does not automatically imply that the gripper is also moving. This is also shown in Fig. 9.7 (which applies to the same experiment). Three signals are given. The first signal is the Y co-ordinate of the gripper, calculated from motor co-ordinates. The second signal shows the Y co-ordinate measured in the camera image. The third signal shows the input of the end-user. It can be seen that though the motor is moving, the real gripper position is relatively constant. Another experiment was done concerning the effect of cross coupling on the stability. First, the arm is moved towards the cup, whilst being servoed in the ${}^c x$ and ${}^c y$ directions (Fig. 9.8). The gain is clearly too high, as the arm starts to oscillate in ${}^c x$ as the distance in ${}^c z$ gets smaller. The gain is then set to a stable value at this distance (Zmin, which is small enough to pick up the cup). As we move away from the cup, the servo remains stable throughout the trajectory, as would be expected. Other experiments confirm this outcome and show that the proposed architecture results in a stable control performance.
Fig. 9.6. Results of experiments
Fig. 9.7. Motor position vs. gripper position
Fig. 9.8. The user moves in ${}^c z$ and vision servos the ${}^c x$ direction. The black straight lines in the ${}^c z$ graph indicate the position of the cup
9.7 Conclusions and Future Work

We proposed a visual control architecture that allows semi-autonomous visual control of the MANUS manipulator. Using this control architecture, it is possible to select DOF's that will be controlled by visual servoing, whilst the other DOF's are directly controlled by the end-user. The experiments show that the total controller is stable. Differences between the motion of the motors and the motion of the gripper can be explained by the amount of backlash in the manipulator. More experiments will be done to investigate the best value of the selection matrix S for different tasks; this will be ascertained by user trials. Subsequently, the feature-tracking algorithm should be improved. Furthermore, the second stage of the visual servoing algorithm (close to the object) should be developed. In this stage, a flexible way of using a priori information will be introduced.
References

1. Fong T, Thorpe C, Baur C (1999) Collaborative control: A robot-centric model for vehicle teleoperation. AAAI 1999 Spring Symposium: Agents with Adjustable Autonomy, Stanford, CA, March 1999
2. Martens C, Ruchel N, Lang O, Ivlev O, Graser A (2001) A FRIEND for assisting handicapped people. IEEE Robotics & Automation Magazine, 7(1): 57–65
3. Hutchinson S, Hager G, Corke P (1996) A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12(5): 651–670
4. Espiau B, Chaumette F, Rives P (1992) A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3): 313–326
5. Hager G (1997) A modular system for robust hand-eye coordination using feedback from stereo vision. IEEE Transactions on Robotics and Automation, 13(4): 582–595
6. Weiss LE, Sanderson AC, Neuman CP (1987) Dynamic sensor-based control of robots with visual feedback. IEEE Journal on Robotics and Automation, RA-3: 404–417
7. Corke P, Hashimoto K (eds) (1993) Visual control of robot manipulators – A review. World Scientific, Singapore
8. Woodfill J, Zabih R (1997) Motion based tracking for dynamic, unstructured environments. Computer Science Department, Stanford University
9. Martens C (2001) Interactive controlled robotic system FRIEND to assist disabled people. In: Proceedings ICORR 2001, p 148
10. Matsikis A, Schmitt M, Rous M, Kraiss K. Ein Konzept für die mobile Manipulation von unbekannten Objekten mit Hilfe von 3D-Rekonstruktion und Visual Servoing. RWTH Aachen, http://www.techinfo.rwth-aachen.de/Forschung/MSR/Manus/
10 A Safety Strategy for Rehabilitation Robots Makoto Nokata and Noriyuki Tejima
10.1 Introduction

It was previously reported that many MANUS users found the MANUS useful, but not useful enough to gain a totally independent lifestyle, for several reasons: it moves too slowly, it cannot handle heavy goods, and its manipulator arm is not long enough. It would technically be easy to develop a faster, more powerful and bigger robot than the MANUS to meet these requests; however, such a high-performance robot would be too dangerous for everyday use. The consideration of safety aspects is especially important as rehabilitation robots with higher performance enter the market. The problem has been discussed in a special committee of the Japan Robot Association in accordance with ISO. In this paper, a safety strategy for rehabilitation robots is proposed as a result of that discussion.
10.2 Principles of Safety Standards for Robots

10.2.1 Framework of New Safety Standards for Robots

For industrial robots, ISO 10218 'Manipulating Industrial Robots – Safety' was established in 1992. The basic strategies for safety in that standard were that robots should be isolated from humans and that they must be turned off when they cannot be isolated. However, these strategies cannot be applied to rehabilitation robots, because such robots must work near or in contact with humans. We should therefore establish a new safety framework for rehabilitation robots. Recently, a safety standard system for machinery has been established, in which the safety standards for manipulating industrial robots are included. In Europe, a certification system has been implemented according to it. Manufacturers can affix the CE marking (CE is an abbreviation of 'Conformité Européenne', French for 'European Conformity') to their own products after the products are certified, either by themselves or by a notified body. When such a certification system is functioning, designers do not have to take responsibility for accidents: accidents that occur after sufficient risk reduction has been carried out should be tolerated according to the certification system. Accordingly, the process of certification is most important, and designers must properly perform the iterative process of risk assessment and risk reduction. Though there is no safety standard for rehabilitation robots yet, they should obey the safety standard system for machinery.
As stated above, the risk for rehabilitation robots cannot be reduced in the same manner as for industrial robots, and the residual risk may be difficult to tolerate. Therefore, it is necessary to develop rational protective measures for rehabilitation robots based both on the basic concepts of the safety standards and on the point of view of enhancing users' QOL (Quality of Life).

10.2.2 Safety Standard for Machinery

The safety standard system for machinery has been established in the pyramidal structure shown in Fig. 10.1. In this system, the standards at the top prescribe basic concepts of safety, the standards below them prescribe common technologies, and the standards at the bottom prescribe precise technologies for each type of machinery, such as manipulating industrial robots. The basic concepts of safety are protective measures according to risk assessment and the disclosure of residual risk. According to ISO/IEC Guide 51:1999, safety is defined as 'freedom from unacceptable risk' and risk is defined as the 'combination of the probability of occurrence of harm and the severity of that harm'. Tolerable risk is defined as 'risk which is accepted in a given context based on the current values of society'. A level of tolerable risk is not clearly stated in the standard and should be decided according to the current values of society, state-of-the-art technology, legal issues and so on. Safety is thus described relatively, in terms of risk probabilities. There can be no absolute safety: some risk will remain, defined as residual risk. Nobody can say that accidents or disasters can be avoided absolutely. For a guarantee of safety, there must be grounds for tolerating accidents after an adequate risk reduction process has been implemented.
Fig. 10.1. A pyramidal structure of the safety standard system for machinery
10.2.3 Risk Assessment Process and Risk Reduction

According to ISO 12100-1:1992, safety measures are a combination of the measures incorporated at the design stage and those measures required to be implemented by the user. Safety measures at the design stage should be performed as a combination of risk assessment and risk reduction, as listed below. The first four processes are iterated until the remaining risks become tolerable; finally, point 5 is applied.
1. Specify the limits of the machine;
2. identify the hazards and assess the risks;
3. remove the hazards or limit the risks as much as possible;
4. design guards and/or safety devices against any remaining risks;
5. inform and warn the user about any residual risks.
When specifying the limits, use limits, space limits and time limits should be determined. In this process, it is important to take reasonably foreseeable misuse into account, such as incorrect behavior resulting from normal carelessness and the reflex behavior of a person in case of malfunction, incident, failure and so on. Hazards are the origin or the nature of the expected harm. For identifying the various hazards, past data and experience will be useful. After hazard identification, risk estimation should be performed for each hazard. According to ISO 14121:1999, the risk is derived from a combination of the following elements (Fig. 10.2):
1. the severity of harm;
2. the probability of occurrence of that harm, which is a function of:
   o the frequency and duration of the exposure of persons to the hazards;
   o the probability of occurrence of a hazardous event;
   o the technical and human possibilities to avoid or limit the harm.
Previously, the risk was defined as the product obtained by multiplying these elements; now it can be estimated by various functions of them, and many methods of risk estimation exist. For risk reduction, an inherently safe design should be striven for first of all, by completely removing the hazards or limiting the risks, e.g. by removing sharp edges, minimizing or removing kinetic energy, using low voltage and observing ergonomic principles. Against hazards that cannot be avoided or sufficiently limited by an inherently safe design, safeguards should be applied, such as fixed guards, movable guards, interlocking guards and trip devices. After the risk reduction, the risks of the machine with the safety measures are assessed again. As risks cannot be eliminated completely, it should be considered which of the risks should preferentially be reduced and which measures will provide cost-effective performance. In carrying out this process, it is necessary to take account of:
• the safety of the machine,
• the ability of the machine to perform its function,
• the usability of the machine,
• the manufacturing and operational cost of the machine,
in that order of preference.
It is necessary to inform and warn the users about residual risks. The instructions and warnings should prescribe the procedures and operating modes intended to overcome the relevant hazards. If particular training is required, this must be indicated.
Fig. 10.2. Risk assessment and risk reduction of safety measures at the design stage (ISO 14121:1999, ISO 12100-1/-2)
10.2.4 Tolerable Risks for Robots

As mentioned above, a level of tolerable risk for machinery should be decided according to the standard that the risk assessor declares. This is suitable for industrial robots: because the workers whom they may injure do not directly benefit from their use, their safety should be certified objectively. On the contrary, users of rehabilitation robots directly benefit from them. Users may accept their use on account of these benefits even when the designer cannot reduce the associated risks sufficiently. Safety standards for medical devices can help us to consider safety for rehabilitation robots. For example, surgical robots are highly beneficial if the patient's outcome is successful; however, operative outcomes are not always a success. For such machinery, it is not necessary for designers to take responsibility for accidents if the following steps were taken prior to use: the device was in fact declared 'state-of-the-art', the residual risks were clearly detailed to the patients, and the patients consented to its use.
10.3 Case Study on the Safety of Rehabilitation Robots

In general, the risk assessment and risk reduction of machinery are carried out according to ISO/TR 12100-1 'Safety of machinery – Basic concepts, general principles for design' and ISO 14121:1999 'Safety of machinery – Principles of risk assessment'. In Japan, a special committee for standardizing rehabilitation robots was established by the Japan Robot Association in 2001. The committee members, who are researchers of medical and rehabilitation robots, carried out a case study assessing several medical and rehabilitation robots according to ISO/TR 12100-1:1992 and ISO 14121:1999. The aim of this case study was to clarify the key points of risk assessment and risk reduction for these robots. The risk assessment for the following medical and rehabilitation robots was carried out using the block chart shown in Fig. 10.3, which is Fig. 10.2 modified according to ISO 14971, 'Medical devices – Application of risk management to medical devices'.
Fig. 10.3. The iterative process to achieve safety (Fig. 10.2 modified according to ISO 14971)
• Medical robots
  o Neurosurgical robot
  o Laparoscopic surgery robot
  o Continuous passive motion (CPM) device
• Rehabilitation robots
  o Meal assistance robot
  o Mobile ceiling lift
  o Bed transfer
  o Pet robot / mental care robot
This section reports some of the results, organized as follows: (1) risk estimation, (2) risk reduction, (3) benefit estimation.

10.3.1 Risk Estimation

In this section, some of the formulas proposed for estimating the risk of machinery are commented on. The risk related to a considered hazard can be calculated by the following equation:
$$R = Q \cdot F \cdot C \cdot N \qquad (10.1)$$
where R is the risk related to the considered hazard, Q the probability of occurrence of harm, F the frequency and duration of exposure, C the severity of the possible harm that can result from the considered hazard, and N the number of exposed people.
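As a quick illustration of Eq. 10.1 (our sketch; the numbers are invented purely for the example):

```python
def risk(q, f, c, n):
    """Risk related to a considered hazard, R = Q * F * C * N (Eq. 10.1)."""
    return q * f * c * n

# hypothetical values: rare harm, frequent exposure, moderate severity, one user
print(risk(q=0.01, f=100.0, c=2.0, n=1))  # -> 2.0
```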
The same equation (Eq. 10.1) was used by the special committee for standardization to estimate the risk of marketed medical and rehabilitation robots. However, this approach shows several significant disadvantages, some of which are briefly commented on below.

(a) R, the risk related to the considered hazard, is influenced by differences between users and by the physical condition of the cared person. In the case of in-home care, a caretaker or a cared person has to operate a medical or rehabilitation robot by himself or herself. Most caretakers and cared persons are not familiar with such operation, so Q, the probability of occurrence of harm, becomes large owing to incorrect operation or misuse. Even with correct operation and movement, robots can injure a patient whose joints are stiff or whose bones are fragile (e.g. due to osteoporosis), so the Q value is high for such users. As a result, the risk of medical and rehabilitation robots is influenced by differences between users and by the physical condition of the cared person. This is far different from the risk of general machinery, which can be estimated on the assumption that the user is a specialist in its operation and a person with a normal healthy body.

(b) Differences in the state of the robot's workspace greatly influence R. Stretchers and lifters carry a residual risk of the user falling, and the damage depends on where the user falls: falling onto a bed causes little damage, but falling onto a rigid floor causes severe damage. A manipulator also exposes the user to a risk of collision accidents, whose probability depends on the room space and the user's position. Thus, differences in the state of the robot's workspace must be considered when estimating the risk of medical and rehabilitation robots.

(c) There is little material on which to base Q, the probability of occurrence of harm, and C, the severity of the possible harm. Compared with general machinery, there are few statistical data on accidents involving medical treatment and rehabilitation apparatus. The number of accidents, including slight injuries, is unknown, so it is extremely difficult to determine Q. In addition, there is no method to calculate C. Under the present circumstances, these values are estimated experimentally or subjectively by the risk assessor.

(d) The risk cannot be expressed correctly by the multiplication of risk elements as in Eq. 10.1. Equation 10.1 assumes that the risk factors are mutually independent and calculates the risk level by multiplying them. However, there is a certain correlation between F, the frequency and duration of exposure, and Q, the probability of occurrence of harm, for medical and rehabilitation robots. No good will come from using Eq. 10.1 alone.

10.3.2 Safety Measures of Risk Reduction

Some safety measures of risk reduction that are convenient for general machinery are not useful for rehabilitation robots. In the case of a continuous passive motion (CPM) device, an emergency stop device is not useful for persons with spinal cord injuries, because they feel no pain and cannot judge when to stop the robot. Generally speaking, the inherent safety of industrial robots is easily guaranteed by setting up a fence around the robot. Furthermore, several types of safety measures, such as interlocking devices, enabling devices, hold-to-run control devices and so on, are available to reduce the risk, so the risk can be made arbitrarily small. However, it is impossible to realize inherent safety for medical and rehabilitation robots, because these robots cannot work without touching the person, and few safety measures suitable for these robots have been developed.

10.3.3 Benefit Estimation

Several benefits of using rehabilitation robots were reported in the case study (Table 10.1). Most benefits should be quantified by use of QOL (Quality of Life), ROL (Respect of Living) and ADL (Activities of Daily Living) measures, but it is very difficult to quantify them objectively. For example, the benefit of 'mobility' changes according to the extent of the gait disorder, so it is necessary to consider the daily life of the targeted cared person subjectively.
Table 10.1. Benefits of using rehabilitation robots and the quantification factors
User (mostly carer)
  Benefit: improvement of working conditions (e.g. reduction of lumbago)
  Quantification factors: QOL, ROL, tiredness, working time, reduced medical expenses for lumbago
Cared person
  Benefit: acquisition of an independent life; expansion of life space; mental relief
  Quantification factors: QOL, ROL, ADL
10.4 Proposal of a Risk Assessment Guideline for Rehabilitation Robots

This section proposes a safety strategy for rehabilitation robots according to the results of the case study described above. The proposed guideline for risk assessment and risk reduction is shown in Fig. 10.4.
Fig. 10.4. Proposed guideline of safety strategy for rehabilitation robots
Figure 10.4 has a structure similar to that shown in Fig. 10.3, with the following additions:
• Determine limits: the user, the extent of the handicap, the condition of health, the ability of operation and so on.
• A third party, able to make an objective judgment on the basis of technical knowledge, evaluates the contents of the completed risk assessment.
• Whether the apparatus is introduced or not is judged by the carer, the cared person and the manager, in consideration of its benefits.
10.5 Conclusion

This paper presented a study of a safety strategy for rehabilitation robots. The principles of safety standards for robots were reviewed, in particular the framework of safety standards and the risk assessment and risk reduction processes for machinery. It became clear that a new safety strategy for rehabilitation robots must be formed based on the principles and processes of the safety standards for machinery. Accordingly, a case study assessing several medical and rehabilitation robots was carried out according to ISO/TR 12100-1:1992 and ISO 14121:1999. The problems of assessing these robots were discussed in a special committee of the Japan Robot Association, and a safety strategy for rehabilitation robots has been proposed. Determining the limits of special factors, judgments through a certification system, and the benefit of the robots for the user have been added to the strategy. However, the making of a safety standard system for rehabilitation robots has only just started, and the safety strategy proposed in this paper has not yet reached the stage of perfection. Future work includes: (1) development of risk reduction technology for rehabilitation robots, (2) study of estimation and evaluation methods of their risk, and (3) establishment of a certification system in order to guarantee the risk assessments carried out.
11 Safety Evaluation Method of Rehabilitation Robots Makoto Nokata, Koji Ikuta, and Hideki Ishii
11.1 Introduction

An aged society is coming soon, and human-care robots must be realized to nurse aged and disabled persons. Human-care robots will need to work around elderly people and touch them; therefore, conventional safety strategies for industrial robots cannot be applied to human-care robots. It is now necessary to make a new study of safety in spaces where a human and a machine exist together. In this chapter, we investigate the human injury caused by robots and machines, and then we classify safety design and control strategies for robots. Next, we propose an evaluation method of safety for human-care robots and define evaluation measures that describe the degree of safety. We then apply our method to evaluate several safety design and control strategies, proving the viability of our safety evaluation method. The proposed methods enable us to optimally distribute cost among several safety strategies and to derive a suitable approaching motion of a multi-link manipulator towards a human. The validity and effectiveness of these methods are demonstrated by numerical analysis. As a result, designs and controls that increase safety are successfully obtained.
11.2 Safety Strategy for Human-Care Robots

11.2.1 Injury to Humans from Human-Care Robots

We gave thorough consideration to the possibilities of injury to humans from human-care robots and machines. The causes of injury may be classified as follows:
1. mechanical injury – shock (internal bleeding, fracture of a bone), scar (bleeding, contagion);
2. electric injury – electric shock (death from shock, burn), electromagnetic waves (cancer, leukemia);
3. acoustic injury – boom (hardness of hearing), low-frequency sound (insomnia, neurosis).
In this research, we chose the safety strategy to prevent mechanical injury as the subject of study. Though protecting humans from electric and acoustic injury is possible by making use of insulators or soundproofing materials, it is very
difficult to isolate the mechanical damage in the workspace of the robot. To secure human-care robots that work around humans, many kinds of design and control strategies are indispensable, but many complex and difficult problems are confronted in the design and control of such robots.

11.2.2 Classification of Safety Strategies

We classify safety strategies as follows:
1. pre-contact safety strategies;
2. post-contact safety strategies.
A pre-contact safety strategy aims at minimizing human injury before a human–robot collision; a post-contact safety strategy aims at reducing the injury after the collision. In analogy to car safety, the former corresponds to avoiding a collision by means of an antilock brake system (ABS), the latter to absorbing the shock by means of an airbag or side door beam. The strategies can also be classified as follows, from the viewpoint of a human-care robot user or designer:
1. safety design strategies (minimizing injury by design);
2. safety control strategies (minimizing injury by control).
Table 11.1 shows the classification of safety strategies.

Table 11.1. Classification of safety strategies
                   | before collision            | after collision (minimize impact force)
control strategy   | avoid collision:            | attenuation, diffusion
                   | distance, speed, posture    |
design strategy    | moment of inertia,          | weight, cover, joint compliance,
                   | stiffness                   | surface, shape
To implement robot design safety strategies, we have already developed the cybernetic actuator [1] and a non-contact magnetic gear [2], which have force-limiting functions. Other strategies have been devised, such as force-limiting equipment using electrorheological fluid [3], force control, shock-absorbing covers [4], and chamfering. Little research has been carried out on safety evaluation methods, apart from studies on the dangerousness of actuator arrangements [5] and on safety under human control [6]. International safety standards have defined safety as 'freedom from unacceptable
risk of harm', and thus estimate only the risk of harm [7]. This estimation method lacks a quantitative basis because it relies on the use of insufficiently provable data. Furthermore, the estimation methods of safety vary from researcher to researcher, so the various strategies cannot be compared; each safety study has been a separate case study from beginning to end. The reasons are attributable to the vagueness of the concept of safety: it is widely considered difficult to calculate the degree of safety or dangerousness and the contribution of each safety design and control strategy to the overall safety performance of a robot, so nobody has attempted to do so.
11.3 Proposing Evaluation Measures of Safety

11.3.1 Necessity of Quantitative Safety Evaluation

It is necessary to define 'evaluation measures' for devising general safety strategies for human-care robots. Evaluation measures enable us to compare the effect of each safety strategy on the same scale and to optimize the design and control of human-care robots. In the field of information science, Shannon defined information as the degree of entropy, thereby advancing information theory remarkably [8]. In the robotics field, Uchiyama and Yoshikawa defined the measure of manipulability, which has enabled us to compare the manipulation performance of various kinds of robots uniformly [9]. The former definition does not fully express the quality of the information; the latter does not completely express the various kinds of control performance. But we cannot deny their contribution to science and engineering. If we overcome some differences of opinion and define general evaluation measures for human-care robots, we will be able to achieve similar effects.

11.3.2 Selection of Evaluation Measures

First, we examined in detail how collision accidents occur. In ISO 12100, some formulas for estimating the risk of machinery are proposed. A typical equation for the risk related to a considered hazard is the following:
$$R = Q \cdot F \cdot C \cdot N \qquad (11.1)$$

where R is the risk related to the considered hazard, Q the probability of occurrence of harm, F the frequency and duration of exposure, C the severity of the possible harm that can result from the considered hazard, and N the number of exposed people.
Many researchers have analyzed Q, the probability of occurrence of harm, caused by human error, manipulation and so on. Their main topics are how to reduce the probability of an accident and how to estimate it. The relation between the design and control of human-care robots and the dangerousness of injury has received little attention. In the event of a careless collision between a robot and a human, the degree of C, the severity of the possible harm, can be expressed as Eq. 11.2 using only the main factors, design and control:
$$C = f(\mathrm{design}) \cdot g(\mathrm{control}) \qquad (11.2)$$
In this research, our standpoint is to study what design or control can minimize human injury when an accident occurs. Put another way, our aim is to quantitatively evaluate the effectiveness of safety design and control measures, and to minimize the dangerousness under the condition that Q, the probability of occurrence, is 1. What should the evaluation measures be? A human-care robot works around humans who move irregularly. We consider an appropriate safety strategy while adopting the design/control classification of safety strategies mentioned previously. A safety design strategy is a means of reducing the injury to a human after an irregular collision; a safety control strategy is a means of minimizing the injury before a human–robot collision. It is important to estimate not the occurrence rate but the injury due to the collision. No matter what the cause of the collision accident may be, the shock of a mechanical injury depends on the impact force, and the scar depends on the impact stress. We therefore take impact force and stress as evaluation measures.
11.4 General Evaluation Method Using Evaluation Measures

In this section, we propose a general quantitative method of evaluation using these evaluation measures. First, we define the critical impact force Fc as the minimal impact force that causes injury to a human. Next, we define the danger-index α as the ratio of the producible impact force F of the robot to Fc, Eq. 11.3:
$$\alpha = \frac{F}{F_c} \qquad (\alpha \ge 0) \qquad (11.3)$$
Strictly speaking, the value of Fc varies according to age, sex and body part, but we use one representative value to preserve the generality of the safety evaluation. Exceptional cases such as the eyes, where Fc is very low, are treated as singular points; another evaluation is needed for such points. Next, we consider the overall danger-index provided by several safety strategies. We express the characteristic of safety strategies for minimizing the impact force
by using a block chart, which is popular in the control field. For example, the producible impact force is the input, a safety strategy is a factor, its danger-index is the transfer function, and the injury to a human is the output; the index depends on the transfer function. In this system, several factors are connected with each other in series, so the characteristic of the whole system can be expressed as the multiplication of the individual transfer functions. The total danger-index of the whole robot, αall, is expressed by the multiplication shown in Eq. 11.4. This equation enables us to quantify the effect of safety strategies on the same scale:
$$\alpha_{all} = \prod_{i=1}^{n} \alpha_i \qquad (11.4)$$
where n is the total number of safety strategies and i is the index of a safety strategy. As an example, we consider the case of reducing the impact force by a perfect shock-absorbing material. Even if the robot collides with a human, the impact force on the human is qualitatively 0, because the human is isolated from it by the material. The danger-index αj of the shock-absorbing material is accordingly 0 in the proposed evaluation method. The total danger-index, being the product of the individual indexes, then also becomes 0, which obviously agrees with intuition. Too many safety strategies, however, reduce the robot's ability to work or be operated. This problem can be solved by devising a safety strategy under the condition that the required working ability is satisfied, or by calculating the optimum trade-off between Eq. 11.4 and the efficiency of the robot's work. This is an advantage produced by a quantitative evaluation of dangerousness. Defining the impact force and the danger-index before improvement as F0 and α0 respectively, the improvement rate η can be calculated by Eq. 11.5:
$$\eta = \frac{\alpha_0}{\alpha} = \frac{F_0 / F_c}{F / F_c} = \frac{F_0}{F} \qquad (11.5)$$
Since Fc cancels in Eq. 11.5, we can simply compare the situations before and after applying safety strategies. The algorithm of our safety evaluation method is the following:
1. Investigate the factors of damage to a human as evaluation measures.
2. Calculate the impact force F for each safety strategy.
3. Calculate the danger-index α from Eq. 11.3.
4. Execute the general evaluation of safety by using the total danger-index.
5. Discuss the safety strategies on the basis of the result.
This method enables us to evaluate the effect of each safety strategy, or of all of them together.
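The core of this algorithm fits in a few lines of code; the following sketch is ours, with the Fc value of 490 N taken from the worked example later in the chapter:

```python
from math import prod

F_C = 490.0  # critical impact force [N], the representative value used below

def danger_index(impact_force):
    """Danger-index alpha = F / Fc (Eq. 11.3)."""
    return impact_force / F_C

def total_danger_index(alphas):
    """Total danger-index of strategies connected in series (Eq. 11.4)."""
    return prod(alphas)

def improvement_rate(f_before, f_after):
    """Improvement rate eta = F0 / F (Eq. 11.5); Fc cancels out."""
    return f_before / f_after
```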
11.5 Deriving Danger-Indexes of Safety Strategies

In this section, examples of safety design and control strategies are given to show the practical derivation of a danger-index.

11.5.1 Safety Design Strategy

First, we propose a linear approximate model of each safety strategy and solve it individually. The aim of the approximation is to extract only the effect of one safety factor and to remove the effects of the other factors as far as possible. Usually, one builds models and equations that satisfy all the boundary conditions at the same time; that approach requires reconsidering them whenever the conditions change, and considering more phenomena makes the equations complicated and increases the number of unknown variables. For evaluating and comparing safety strategies, it is necessary not only to consider all phenomena strictly but also to quantify safety with wide applicability in mind. We therefore work out the danger-index of each safety strategy individually, using a linear approximate model. This research supposes a collision accident between a human and a robot, and each safety strategy for reducing the damage from the collision is discussed. As an example of a safety design measure, reducing the robot's weight in order to minimize the impact force is treated as follows. The impact force F is derived from Newton's equation of motion, Eq. 11.6, and dividing this impact force by the critical one yields the danger-index α, Eq. 11.7:
$$F = ma \qquad (11.6)$$

$$\alpha = \frac{ma}{F_c} \qquad (11.7)$$
As an example, consider the danger-index when the robot material is changed from steel (density: 7.86×10³ kg/m³) to aluminum (density: 2.69×10³ kg/m³). When the robot accelerates at 1 m/s², the danger-index α is 0.34. If the steel is replaced with a plastic (density: 1.40×10³ kg/m³), the index α is 0.18. In short, if the weight is reduced by half, α is halved too. Similarly, it is possible to derive danger-indexes for several design strategies, such as absorbing impact force with a soft cover, safe joint compliance, minimizing impact stress through shape, reducing surface friction and so on. The equations of these danger-indexes are given in [10].
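Since the reported indexes coincide with the density ratios, the material change can be computed as below (our sketch, which treats the steel arm's index as the baseline):

```python
def alpha_after_material_change(rho_new, rho_old, alpha_old=1.0):
    """With F = m*a (Eq. 11.6) and fixed volume and acceleration, the
    danger-index (Eq. 11.7) scales with the material density."""
    return alpha_old * rho_new / rho_old

print(round(alpha_after_material_change(2.69e3, 7.86e3), 2))  # 0.34 (aluminum)
print(round(alpha_after_material_change(1.40e3, 7.86e3), 2))  # 0.18 (plastic)
```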
11.5.2 Safety Control Strategy

The danger-index equations for safety control strategies are derived in this research. Where a dynamical analysis or extra parameters would be needed, the safety is evaluated by using some assumptions. As an example of a safety control strategy, the 'effect of keeping distance' is treated as follows. Sufficient distance between a human and a robot provides enough time to reduce the impact force by braking, actions to avert the collision, and so on. Suppose the approaching speed of a robot (mass m) is reduced at deceleration a from distance l. The time until collision, Δt, is obtained from Eq. 11.8, when v > 0 and a > 0:
$$l = v\,\Delta t - \frac{a\,\Delta t^2}{2}, \qquad \Delta t = \frac{v}{a} - \sqrt{\frac{v^2}{a^2} - \frac{2l}{a}} \qquad (11.8)$$
The collision speed becomes $v - a\,\Delta t$, and the impact force F and danger-index α are expressed as in Eqs. 11.9 and 11.10. We assume that the impact force does not become negative.

$$F = m\,\frac{(v - a\,\Delta t) - v'}{dt} \qquad (11.9)$$

$$\alpha = \frac{F}{F_c} = \frac{m}{F_c}\cdot\frac{(v - a\,\Delta t) - v'}{dt} \qquad (11.10)$$
Here, we examine a nursing motion by a multi-joint manipulator. First, a 'normalization technique of impact force' is introduced in order to pick out the effect of distance. In Eqs. 11.8–11.10, the deceleration a has no influence on the effect of distance and differs between robots, and the velocity after collision, v', cannot be determined before the collision. These parameters are determined by the assumption that the impact force is 1 N (normalized impact force); the unknown parameters obtained from this technique are a = 1 m/s², v' = 0 m/s, dt = 1.0 s. That is the normalization technique. We consider a concrete example of a robot of mass 10 kg approaching a human from a distance of 0.5 m at a velocity of 2 m/s. The time until collision Δt, calculated from Eq. 11.8, is 0.27 s. The impact force F0, obtained from Eq. 11.9, is 64.65 N. The critical impact force Fc is 490 N, which is 10% of the force that the human head can withstand without injury; a safety factor of 10 on Fc is introduced on our own terms. Strictly speaking, Fc changes according to age, sex and body part, but we use 490 N as one representative value to preserve the generality of the danger evaluation. If another value of Fc is needed, the safety is evaluated by replacing just the impact force F in Eq. 11.3. Of course, exceptional cases exist, such as the eyes, where Fc is very low; these are treated as singular points, and another evaluation is needed for them. The danger-index α0
calculated from Eq. 11.10 is 0.13. When the robot is set up 1.0 m away from the human, Δt, F and α are 0.59 s, 24.15 N and 0.049, respectively. The improvement rate η is 3.01. The result reveals quantitatively that the danger was decreased to almost 30%. Similarly, it is possible to derive danger-indexes for several other control strategies, such as a safe approaching velocity, a safe posture and so on. The equations of these danger-indexes are given in [11].
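The distance example can be reproduced numerically as follows. This is our sketch; note that to obtain the chapter's force values (64.65 N and 24.15 N) we took the force-buildup time dt equal to the time-to-collision Δt, which is our reading of the normalization, not an explicit statement in the text:

```python
from math import sqrt

F_C = 490.0  # critical impact force [N]

def approach_danger(m, v, l, a=1.0, v_after=0.0):
    """Danger-index for approaching from distance l (Eqs. 11.8-11.10)."""
    dt_coll = v / a - sqrt((v / a) ** 2 - 2.0 * l / a)  # Eq. 11.8
    v_coll = max(v - a * dt_coll, 0.0)                  # collision speed
    force = m * (v_coll - v_after) / dt_coll            # Eq. 11.9, dt = dt_coll
    return dt_coll, force, force / F_C                  # Eq. 11.10

for l in (0.5, 1.0):
    dt, f, alpha = approach_danger(m=10.0, v=2.0, l=l)
    print(f"l={l} m: dt={dt:.2f} s, F={f:.2f} N, alpha={alpha:.3f}")
# l=0.5 m: dt=0.27 s, F=64.64 N, alpha=0.132
# l=1.0 m: dt=0.59 s, F=24.14 N, alpha=0.049
```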
11.6 Proposal of Design Optimization and Practical Examples

This section proposes a design and control optimization using our danger evaluation method.

11.6.1 Formulating the Design Optimization Method

First, we calculate the cost performance of the safety methods. When the cost of safety method $i$ ($i = 1, 2, \ldots, n$) is $\Delta y_i$ and the increase in the improvement rate is $\Delta\eta_i$, then the improvement rate per cost $\phi_i$ is expressed as Eq. 11.11:

$$\phi_i = \frac{\Delta\eta_i}{\Delta y_i} \qquad (11.11)$$
The improvement rate $\eta_i$ of safety method i is expressed as Eq. 11.12: the increase in the improvement rate (invested cost $y_i$ times $\phi_i$) plus 1, where 1 is the initial improvement rate before any improvement:

$$\eta_i = 1 + y_i \phi_i \qquad (11.12)$$
Practical examples of optimizing the cost distribution are maximizing safety under a fixed cost and minimizing the total cost under fixed safety. These examples use three safety methods: decreasing weight, modifying shape, and protective surfacing. The improvement rate per unit cost of each method is derived by our danger evaluation method. The weight-decreasing method, replacing the stainless steel of a robot arm (100×80×300 mm, $\rho_{sus} = 7.87$ g/cm³) by duralumin ($\rho_{dur} = 2.80$ g/cm³), is treated as follows. The danger-index can be expressed as Eq. 11.7, and the improvement rate is derived by Eq. 11.13:
$$\eta = \frac{\alpha_0}{\alpha} = \frac{\rho_{sus} V a / F_c}{\rho_{dur} V a / F_c} = \frac{7.87}{2.80} = 2.81 \qquad (11.13)$$
(11.13)
The cost comes to $364, consisting of a material expense of $64 plus wages of $300. The increase in the improvement rate is the value derived by Eq. 11.13 minus 1 (1 being the improvement rate before the improvement). As a result, the improvement rate per cost is expressed as Eq. 11.14:
$$\phi_{weight} = \frac{2.81 - 1}{364} = 0.005 \qquad (11.14)$$
By modifying the shape, planing off the four corners (R5), we obtain $\phi_{shape} = 0.0034$ ($\Delta\eta_{shape} = 0.67$, $\Delta y_{shape} = \$200$). A protective surfacing of soft material (thickness: 10 mm, E = 5.0 MPa, 4 sides) gives $\phi_{surface} = 0.0154$ ($\Delta\eta_{surface} = 2.16$, $\Delta y_{surface} = \$140$).
11.6.2 Maximizing Safety Under Fixed Cost

This section solves the problem of maximizing safety under a fixed cost. The optimized cost distribution is obtained by maximizing the total improvement rate $T_\eta$ (Eq. 11.15) subject to a constant total cost Y (Eq. 11.16):
$$T_\eta = (1 + y_1^* \phi_1)(1 + y_2^* \phi_2) \cdots (1 + y_n^* \phi_n) \rightarrow \max \qquad (11.15)$$

$$y_1^* + y_2^* + \cdots + y_n^* = Y = \mathrm{const} \qquad (11.16)$$
If the total cost of improving one robot arm is $500, each cost can be obtained by substituting the improvement rates per unit cost shown in the previous section into Eq. 11.15 and Y = $500 into Eq. 11.16. The safety can be improved 9.76 times by distributing the $500 as follows: decreasing weight $227.05, modifying shape $132.95 and protective surfacing $140.00; specifically, replacing 62% of the steel with duralumin, chamfering 66% of the corners and covering 100% of the surface with rubber. As a result, it is possible to quantitatively determine the degree to which each safety method should be applied. Other combinations, such as decreasing weight $360.00 (98%) plus protective surfacing $140.00 (100%), increase safety 8.85 times, and decreasing weight $300.00 (83%) plus modifying shape $140.00 (100%) increases it 3.91 times. These results clarify that the first combination is the best-optimized cost distribution. This method thus enables us to quantitatively optimize safety design methods while considering cost, and makes it easy to execute them efficiently.
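The optimum can also be found numerically, for example with the brute-force search below (our sketch; the spending caps are taken from the worked figures above, and the grid step is arbitrary):

```python
# improvement rate per dollar (phi) and maximum spend for each measure
phi = {"weight": 0.005, "shape": 0.0034, "surface": 0.0154}
cap = {"weight": 364.0, "shape": 200.0, "surface": 140.0}
BUDGET, STEP = 500.0, 5.0

def total_improvement(spend):
    """Total improvement rate T_eta (Eq. 11.15) for a given cost split."""
    t = 1.0
    for k, y in spend.items():
        t *= 1.0 + y * phi[k]          # Eq. 11.12 for each measure
    return t

best, best_t = None, 0.0
for y_w in [i * STEP for i in range(int(cap["weight"] / STEP) + 1)]:
    for y_s in [i * STEP for i in range(int(cap["shape"] / STEP) + 1)]:
        y_f = BUDGET - y_w - y_s       # remainder goes to surfacing
        if 0.0 <= y_f <= cap["surface"]:
            t = total_improvement({"weight": y_w, "shape": y_s, "surface": y_f})
            if t > best_t:
                best, best_t = (y_w, y_s, y_f), t

print(best, round(best_t, 2))  # ~(225.0, 135.0, 140.0): T_eta ~ 9.79
```

On this 5-dollar grid the search lands next to the analytic optimum of ($227.05, $132.95, $140.00) reported above.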
11.6.3 A New Method of Calculating a Safe Approach Motion

This section proposes a new method for calculating a safe approach motion. The new method minimizes the total amount of the danger index (Eq. 11.17) while respecting the tolerant danger index [12]:
$$\int_0^{T} \alpha(t)\, dt \rightarrow \min \qquad (11.17)$$

where the integral is taken over the duration T of the motion.
This method chooses a safe path along which all danger indexes stay below the tolerant danger index; the aim is to avoid any rise of the danger index. We optimize the whole motion of a multi-link manipulator using this new method, i.e. by minimizing the total amount of the danger index while respecting the tolerant danger index. Figure 11.1 shows the calculated safe motion when a human is positioned at 60 cm: first the tip joint moves, and then the whole arm approaches the human by stooping. The graph in Fig. 11.2 shows the danger index and the velocity of the optimized motion of Fig. 11.1. The maximum danger index is 0.251, below the tolerant danger index of 0.26. We thus obtain a safe approaching motion in which the relative velocity is small and the posture is kept away from the human.
Fig. 11.1. Optimized approaching motion of the multi-link manipulator; the human is at 60 cm. First the tip joint moves, and then the whole arm approaches the human by stooping
Fig. 11.2. Danger index and the velocity of the optimized motion shown in Fig. 11.1. The maximum danger index is 0.251; the relative velocity is small
Therefore, a safety-optimized motion can be achieved for any spatial relationship between a human and a robot. To make good use of the safety-optimizing method, we are now integrating it into our special robot simulator for danger evaluation (Fig. 11.3) [11]. The robot simulator evaluates the designs and controls of various robots three-dimensionally, so this integration enables us not only to optimize practical robots but also to obtain various safety-optimized human-care motions.
Fig. 11.3. Special robot simulator for danger evaluation
11.7 Conclusions

We undertook a new study of safety in the coexistence space of humans and machines, in order to realize human-care robots for nursing the aged and disabled. First, human injury from robots and machines was investigated thoroughly, and we found it important to treat safety strategies in the light of mechanical injury. We grouped them into safety design and safety control strategies according to their contents. In order to take every safety strategy into consideration, impact force and stress were chosen as evaluation measures for quantifying safety. We proposed an evaluation method of safety and defined the danger-index, the improvement rate, and the total evaluation index. Discussions of some general safety strategies proved the viability of our safety evaluation method.
A safety-optimizing method for human-care robot design and control was studied theoretically. A method of optimizing the safety design was proposed, and practical examples of optimizing the cost distribution were solved. We also proposed a method of optimizing robot control and optimized the whole motion of a multi-link manipulator by minimizing the total amount of the danger index while respecting the tolerant danger index. We expect our danger evaluation method to contribute to the overall safety performance of human-care robots.
References

1. Ikuta K, Kawahara A, Yamazumi S (1991) Miniature cybernetic actuators using piezoelectric device. Proc. of International Workshop on Micro Electromechanical Systems (MEMS'91), pp 131–135
2. Ikuta K, Makita S, Arimoto S (1991) Non-contact magnetic gear for micro transmission mechanism. Proc. of International Workshop on Micro Electromechanical Systems (MEMS'91), pp 125–130
3. Saito T, Sugimoto N (1997) A study on electro-rheological motion control using an antagonistic rotary actuator. Proc. of the 6th Int. Conf. on ER-Fluids: ERMR '97
4. Suita K, et al. (1995) A failure-to-safety 'Kyozon' system with simple contact detection and stop capabilities for safe human-robot coexistence. Proc. of IEEE Int. Conf. on Robotics and Automation, vol. 3, pp 3089–3096
5. Dohi T (1996) Classification and safety of medical and human care robots. Proc. of JSME Annual Conference on Robotics and Mechatronics, pp 1181–1182 (in Japanese)
6. Saito Y, et al. (1996) Research on safety operating of the assisting robot. Proc. of JSME Annual Conference on Robotics and Mechatronics, pp 1177–1180 (in Japanese)
7. Guidelines for the inclusion of safety aspects in standards, ISO/IEC Guide 51, 1990
8. Shannon CE (1948) A mathematical theory of communication. Bell System Tech. J. (27): 379–423
9. Yoshikawa T (1985) Manipulability of robotic mechanisms. The International Journal of Robotics Research 4: 3–9
10. Ikuta K, Nokata M (2001) Safety evaluation method of human-care robot design. Integration of Assistive Technology in the Information Age, IOS Press, pp 307–316
11. Ikuta K, Nokata M, Ishii H (2001) Safety evaluation method of human-care robot control and special robot simulator. Integration of Assistive Technology in the Information Age, IOS Press, pp 317–326
12. Nokata M, Ikuta K, Ishii H (2003) Optimizing method of human-care robot design and control for safety. Proc. Int. Conf. on Rehabilitation Robotics (ICORR'2003), pp 80–83
12 Risk Reduction Mechanisms for Safe Rehabilitation Robots Noriyuki Tejima
12.1 Introduction Safety is one of the most important features of rehabilitation robots. However, strategies for making rehabilitation robots safe have not yet been clarified. The basic strategy for safe industrial robots is isolation, which cannot be applied to rehabilitation robots because they must work near, or in contact with, humans. This problem was discussed in a special committee of the Japan Robot Association in accordance with ISO guidelines, and new safety strategies for rehabilitation robots were proposed: weighing tolerable risks against the benefits they enable, a commitment to apply "state-of-the-art" technologies, and informing users of the residual risks and obtaining their consent. According to this proposal, not only inherently safe rehabilitation robots but also highly beneficial robots with relatively low risk can be acceptable. As both the MANUS and the Handy-1 use low-power actuators for inherent safety, they can move only very slowly and handle only light loads. However, rehabilitation robots with higher performance can be developed under these new strategies. In this chapter, risk reduction mechanisms for rehabilitation robots with powerful actuators are discussed.
12.2 Tolerable Risk and Surface Injury Designing a rehabilitation robot that never contacts humans will prove impossible, because the devices currently employed to detect such contact, such as ultrasonic sensors, are inherently unreliable. Therefore, a safety strategy should be based on the assumption that accidental robot-human contacts are unavoidable. According to our proposal, tolerable risk should be weighed against the benefits afforded. This implies that a level of tolerable risk cannot be discussed without assuming a specific robot. However, I consider head or chest injury criteria, which are used for designing safe automobiles, too severe for ascertaining the tolerable risks of rehabilitation robots. In this chapter, I assume that surface soft tissue trauma is acceptable for users of rehabilitation robots if such accidents occur infrequently. In the following discussion, contacts of the robot with eyeballs or other weak points are ignored; for such weak points, special safeguards should be used.
When a robot contacts human skin, mechanical forces are brought to bear on the surface soft tissues; these forces can be divided into three types: shear, tension, and compression. When a rehabilitation robot has no sharp edges, shearing forces are negligible. Tension forces are also negligible in general. Compression forces are therefore the main source of surface injuries caused by rehabilitation robots. When the compression forces applied to the skin are static, the situation is easily analyzed; as the magnitude of the damage is related to the static force or the static stress, the force should be restricted below a specific threshold, as in inequality (12.1):

$$F < F_{st\_th} \tag{12.1}$$

where $F_{st\_th}$ is the threshold force against static contacts. On the contrary, when a blunt robot hits the skin dynamically, it is not readily evident what the index of soft tissue trauma should be. Cardany et al. [1] assumed that the magnitude of the damage is directly related to the amount of energy absorbed per unit area of soft tissue. Other theories state that the momentum or the power is related to the amount and severity of the damage [2]. In either case, the velocity of the robot movement is one of the most important mechanical variables for restricting risk, and should therefore be restricted below a specific threshold. In this study, the kinetic energy theory is assumed. In order to restrict the kinetic energy, a restriction of the velocity is the simplest and most straightforward solution. If the mass of the robot is doubled, the kinetic energy is doubled; however, if the velocity of the robot is doubled, the kinetic energy is quadrupled. Therefore, a slow robot poses less risk than a light robot. When the critical kinetic energy per unit area $K_{th}$ can be ascertained by experiments and the contact area $A$ can be estimated, the angular velocity of the robot $\omega$ should be restricted below the critical angular velocity $\omega_{th}$, as in inequality (12.2):

$$\omega < \omega_{th} = \sqrt{\frac{2 K_{th} A}{I}} \tag{12.2}$$

where $I$ is the moment of inertia of the robot. Strictly speaking, the relevant velocity is the resultant of the velocity of the robot and the velocity of the human; however, I suggest that human movement is negligibly slow. Low-power actuators and/or gears with a high reduction ratio can be employed to effectively restrict the robot's velocity. The maximum velocity of rehabilitation robots has usually been decided by the designers' experience; if the critical kinetic energy per unit area is known, however, the critical velocity can be calculated theoretically. It is also important to find a velocity that users find practical for performing their activities of daily living. If the calculated velocity is fast enough for a robot operator to use the robot satisfactorily for daily living activities, the restriction of the velocity is effective. If the resultant reduction in velocity proves insufficient for practical usage, then another or an additional method for restricting the robot's kinetic energy should be considered. Figure 12.1 shows a simple model of a robot-human collision.
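To make inequality (12.2) concrete, the short sketch below computes the critical angular velocity for assumed values of the critical energy density, contact area, and moment of inertia; all three numbers are placeholders for illustration, not data from this study.

```python
import math

# All three values are assumed for illustration; the chapter obtains K_th
# from experiments and A, I from the robot at hand.
K_th = 2.0e3   # critical kinetic energy per unit area [J/m^2] (assumed)
A = 1.0e-3     # estimated contact area [m^2] (assumed)
I = 0.5        # moment of inertia of the robot [kg*m^2] (assumed)

# Eq. (12.2): keep the angular velocity below the critical value.
omega_th = math.sqrt(2.0 * K_th * A / I)
print(f"critical angular velocity: {omega_th:.2f} rad/s")  # ~2.83 rad/s here
```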
Fig. 12.1. A simple model of a robot-human collision
When the 1-DOF robot strikes soft tissue, the peak force $F_{peak}$ applied to the soft tissue can be calculated by the following equation:

$$F_{peak} = \sqrt{2 E k} \tag{12.3}$$

where $E$ is the kinetic energy and $k$ is the modulus of elasticity of the soft tissue and clothes. This holds provided that the viscosity of the soft tissue is ignored, that the robot torque is switched off after the collision, and that the soft tissue does not move after the collision. Equation (12.3) indicates that surface soft tissue trauma can be minimized by restricting the peak force applied to the soft tissue. Consequently, restricting both the static and dynamic forces applied to humans will prevent surface soft tissue trauma.
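A one-line check of Eq. (12.3), with assumed values for the kinetic energy and the tissue stiffness (a minimal sketch, not measurement data):

```python
import math

E = 1.0     # kinetic energy of the robot at impact [J] (assumed)
k = 1.0e4   # modulus of elasticity of soft tissue and clothes [N/m] (assumed)

# Eq. (12.3): peak force on the soft tissue, viscosity ignored.
F_peak = math.sqrt(2.0 * E * k)
print(f"peak contact force: {F_peak:.0f} N")  # ~141 N for these values
```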
12.3 Force Limitation Methods For restricting the forces applied to soft tissue, impedance control based on force sensors, torque sensors, and monitoring of the actuator currents has been proposed. A control method using these sensors is easily applied; however, it cannot be fail-safe. It has a low level of reliability owing to the susceptibility of electronic devices to electromagnetic noise. Such devices can be used to reduce risk additionally, but not inherently. Soft structures, such as soft arms or soft joints, have also been proposed. They are only effective in preventing soft tissue trauma caused by a dynamic compression force. Moreover, if a soft structure resonates, it may become uncontrollable and generate a strong force. It is not evident how elastic and viscous these soft structures should be, nor is there a clear methodology for determining the appropriate values. Torque limitation mechanisms are useful for a simple system. A torque limitation mechanism is rigid against torques weaker than a set threshold, but yields to stronger torques. It can limit torque independently of the angular velocity. It works more reliably than control circuits and is commercially available. However, for an articulated robot it is difficult to decide the threshold value in practice because of the complex relationship between the torques of the individual actuators and an external force applied to the end-effector.
12.4 A Straight Movement-Type Force Limitation Mechanism To overcome this problem, a new straight movement-type force limitation mechanism is proposed. As an example, consider the 2-dimensional 2-DOF model shown in Fig. 12.2. A torque limitation mechanism that slips when the torque becomes larger than the threshold torque $T_{th}$ is attached to the joint, and a straight movement-type force limitation mechanism with the threshold force $F_{th}$ is installed in the middle of the forearm. The radial component $F_r$ of the external force can be restricted by the force limitation mechanism, and its rotational component $F_s$ can be restricted separately by the torque limitation mechanism. When the critical external force for soft tissue trauma is $F_c$, the threshold force $F_{th}$ and the threshold torque $T_{th}$ can be designed by the following equations:

$$F_{th} = \frac{F_c}{2}, \qquad T_{th} = \frac{F_c\, l}{2} \tag{12.4}$$

where $l$ is the length of the forearm. The torque limitation mechanism and the conventional force limitation mechanism differ in how they are restored after the force is unloaded. No actuator for restoration is installed on the force limitation mechanism, whereas the torque limitation mechanism attached to the joint can be restored by the actuators that drive the joint. The new force limitation mechanism, with a spring and a damper with anisotropic viscosity, can be restored automatically (Fig. 12.3). The spring restores the mechanism after the force is unloaded. The damper with anisotropic viscosity provides both a quick response to excessive forces and a slow restoration, and it prevents the mechanism from resonating. A prototype was developed. It is composed of four magnets, a spring, and a commercial damper with anisotropic viscosity. The damper can move freely in the direction of contraction, but it limits the maximum velocity to 0.017 m/s in the direction of expansion. When a force weaker than the holding force is applied to a steel plate held against the magnets, the plate remains rigidly fixed. However, when a stronger force is applied, the plate parts from the magnets and the force is lessened. The force limitation mechanism is realized on this principle. It is not readily evident how large the threshold should be. Fischer [3] reported that the threshold of human pain tolerance was 77–121 N, and Yamada [4] set the value at approximately 50 N. Pain varies readily with a subject's psychological condition, which is why both values can be regarded as reasonable. The prototype was therefore designed using the value of 50 N. Magnets, each with an ideal holding force of 9.8 N against steel, were used. A spring with a stiffness of 0.98 N/mm was fitted with a pre-load of 14.7 N. The prototype thus has a theoretical force threshold of 53.9 N as the sum of the magnetic force and the pre-load of the spring.
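The two calculations of this section can be sketched as follows; the critical force F_c and forearm length l in the first part are assumed illustration values, while the magnet and spring figures in the second part are those quoted above.

```python
# Eq. (12.4): split the critical external force F_c between the straight-
# movement force limiter and the joint torque limiter.
F_c = 100.0           # critical external force [N] (assumed)
l = 0.4               # forearm length [m] (assumed)
F_th = F_c / 2.0      # threshold of the force limitation mechanism [N]
T_th = F_c * l / 2.0  # threshold of the torque limitation mechanism [N*m]
print(F_th, T_th)     # 50.0 N and 20.0 N*m

# Prototype static threshold: four 9.8 N magnets plus a 14.7 N spring pre-load.
print(4 * 9.8 + 14.7)  # 53.9 N, the theoretical threshold quoted above
```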
Fig. 12.2. A 2-dimensional 2-degrees of freedom robot model with force limitation mechanism
Fig. 12.3. Prototype of the straight movement-type force limitation mechanism
Its basic features were experimentally evaluated. The threshold of a static force was 52.6 ± 2.3 N. When an excessive static force was applied, the mechanism moved 40 mm within 0.15 seconds. On the other hand, it was restored slowly after the force was unloaded. The time constant of the restoration was 2.9 seconds, which should be long enough to prevent accidents caused by recoil. The dynamic response of the mechanism differed from the static one; for example, when the force lasted only 0.1 ms, the threshold of the prototype was about 1000 N. When the mechanism was struck by a soft material, such as a human limb, the impulsive force lasted longer than 10 ms, and the mechanism behaved almost as it would against a static force. If the excess of the dynamic threshold over the static threshold is not very large, it may be acceptable, because the critical dynamic force is always larger than the critical static force. When the mechanism is mounted on an articulated robot, the payload and the weight of the end-effector influence the threshold force. A robot rarely contacts a human while pointing straight up or down; nevertheless, this suggests that the mechanism can only be applied to a light robot with a small payload, and that the mechanism should be mounted near the end-effector. If a force were loaded continuously even after activation, the mechanism would reach the end of its stroke, where an excessive force would finally be applied to it.
To avoid this, the robot system should be stopped by cutting off its power supply whenever the mechanism is activated.
12.5 A Three-Dimensional Force Limitation Mechanism For the 2-dimensional model in Fig. 12.2, the straight movement-type force limitation mechanism is effective; for a 3-dimensional robot, however, the component of the external force perpendicular to both the radial component $F_r$ and the rotational component $F_s$ must also be restricted. To solve this problem, a three-dimensional force limitation mechanism was developed, made up of three individual straight movement-type units linked together as shown in Fig. 12.4.
Fig. 12.4. Structure of the three-dimensional force limitation mechanism
The three-dimensional force limitation mechanism can restrict three-dimensional forces applied to the end-effector of a robot. When each straight movement-type mechanism has the threshold force $F_{th}$, the external force $F$ can theoretically be restricted by the following inequality:
$$F < F_{3th}(\theta, \phi) = \frac{3 r F_{th}}{r \sin\phi + 2 l \cos\phi \cos\theta}, \qquad \theta \le \frac{\pi}{6}, \quad 0 \le \phi \le \frac{\pi}{2} \tag{12.5}$$
The critical external force $F_{3th}$ depends on the angle of the force and on the human contact position. The maximum and minimum values of the critical external force $F_{3th}$ are as follows:

$$F_{3th}^{max} = \begin{cases} 3 F_{th} & (l \le r) \\[4pt] \dfrac{3 r F_{th}}{l} & (l > r) \end{cases}, \qquad F_{3th}^{min} = \frac{3 r F_{th}}{\sqrt{r^2 + 4 l^2}} \tag{12.6}$$

From this, the difference between the minimum and maximum values can be found; it becomes smallest when $l = r$. A prototype of the three-dimensional force limitation mechanism was developed (Fig. 12.5). The basic features of the three straight movement-type mechanisms were experimentally evaluated. The threshold of a static force for each mechanism is shown in Table 12.1. We tried to make the three mechanisms identical; however, their characteristics are scattered because of friction, the dead load, the unbalanced load, and the flatness of the magnets' surfaces.
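To illustrate how strongly the critical force of Eq. (12.5) depends on the contact direction, the sketch below evaluates it at a few angles; F_th, r, and l are assumed values, chosen only so that l/r matches the prototype's ratio of 3.8.

```python
import math

F_th = 38.0  # threshold of one straight-movement unit [N] (roughly Table 12.1)
r = 0.05     # mechanism radius [m] (assumed)
l = 0.19     # force-point length [m] (assumed; l/r = 3.8 as in the prototype)

def critical_force(theta, phi):
    """Critical external force F3th of Eq. (12.5) for contact angles (theta, phi)."""
    return 3.0 * r * F_th / (r * math.sin(phi) + 2.0 * l * math.cos(phi) * math.cos(theta))

for theta_deg, phi_deg in [(0, 0), (30, 0), (0, 45), (0, 90)]:
    f = critical_force(math.radians(theta_deg), math.radians(phi_deg))
    print(f"theta={theta_deg:3d} deg, phi={phi_deg:2d} deg -> F3th = {f:6.1f} N")
```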
Fig. 12.5. Prototype of the three-dimensional force limitation mechanism with a six-axis force and torque sensor
Table 12.1. Threshold force of the straight movement-type mechanisms

Mechanism   F_th (Mean ± S.D.) [N]
A           36.2 ± 1.7
B           41.3 ± 2.7
C           36.7 ± 1.5

S.D. – Standard Deviation
The static force threshold of the prototype was experimentally measured with a six-axis force and torque sensor. Typical results are shown in Fig. 12.6. The results agreed closely with the theoretical predictions based on Eq. (12.5). I believe that the scattered thresholds of the straight movement-type mechanisms and the uncertain force points on the center bar in the experiment caused the errors.
Fig. 12.6. Critical external force of the prototype
When such a mechanism is applied to practical rehabilitation robots, the force-point length normally becomes larger than the radius $r$. This means that it is difficult to design a mechanism with a small variation of the critical external force. As the prototype has an $l/r$ ratio of 3.8, the ratio of the maximum to the minimum critical external force becomes almost 10, which is not negligible. A solution to this problem must be found before a practical application can be designed.
12.6 Reflex Mechanism The three-dimensional force limitation mechanism is effective when the end-effector of the robot hits a human; however, it is not activated when an arm link of the robot makes the contact. Because any contact between a robot arm and its surroundings must be considered an accident, a safety mechanism should detect contact anywhere on the arm and stop the robot. However, the arm may still harm a human owing to its inertia during the delay between detecting the contact and stopping the arm. To solve this problem, a reflex mechanism was proposed, similar to the biological reflex: a human instinctively withdraws his hand from a hot stove immediately after touching it. Such a reflex is effective for avoiding major harm. The fundamentals of the mechanism are shown in Fig. 12.7. When the sensors covering the arm detect a contact, a pin connecting the actuator to the arm is withdrawn; the arm then springs back regardless of the movement of the actuator. As the mechanism is controlled by local electric circuits without a computer, it can be fail-safe and act quickly. The central controller of the robot, which is informed of the activation of the mechanism, can then take the necessary time to stop the actuator. At present, this mechanism cannot be restored automatically after activation or when switched on. A mechanism for automatic restoration should be developed for practical use.
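A back-of-the-envelope estimate of why a purely software stop is too slow, using assumed numbers for the tip speed and the detection-to-stop delay (neither is given in the chapter):

```python
tip_speed = 0.5  # arm tip speed at the moment of contact [m/s] (assumed)
for delay_ms in (50, 100, 300):
    # Distance the arm keeps travelling into the obstacle during the delay.
    overrun_cm = tip_speed * (delay_ms / 1000.0) * 100.0
    print(f"{delay_ms:3d} ms delay -> {overrun_cm:4.1f} cm of travel after contact")
```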
Fig. 12.7. Basic concept of the reflex mechanism
12.7 Conclusions Three types of risk reduction mechanisms for higher-performance rehabilitation robots were proposed. This research has only just begun, and several problems remain unsolved for each mechanism: reliability, an optimum design method, a faster response, a built-in structure, a reset mechanism, and a method for determining the critical force for human injury. Nevertheless, the ideas proposed here are encouraging, and improvements are in progress.
References

1. Cardany CR, Rodeheaver G, Thacker J, Edgerton MT, Edlich RF (1976) The crush injury: A high risk wound. JACEP 5: 965–970
2. Trott A (1988) Mechanisms of surface soft tissue trauma. Ann Emerg Med 17: 1279–1283
3. Fischer AA (1986) Pressure tolerance over muscles and bones in normal subjects. Arch Phys Med Rehabil 67: 406–409
4. Yamada Y, Hirasawa Y, Huang SY, Umetani Y (1997) Human-robot contact in the safeguarding space. IEEE/ASME Trans Mechatronics 2: 230–236
13 Usability of an Assistive Robot Manipulator: Toward a Quantitative User Evaluation Bessam Abdulrazak, Mounir Mokhtari, and Bernard Grandjean
Abstract This chapter describes our research on the integration of a robotic arm into the environment of persons with disabilities. People who have lost the ability to use their own arms to perform daily living tasks can use an adapted robot to compensate, at least partly, for the object-manipulation problems caused by their disability. Many robotized systems have been developed; however, the operation of such robots by disabled people is not always straightforward. In this chapter, we provide original results on the evaluation of the new Manus software architecture, called Commanus after the European Commanus project that ended last year. A quantitative evaluation method has been developed and used to provide accurate data on the usability of the Manus robot with the integration of new control modes. This work corresponds to 9 months of evaluation involving 19 quadriplegic patients, mainly with spinal cord injuries and muscular dystrophies, from the rehabilitation hospital of Garches. Preliminary results on the usability of the new software architecture of the robot are described.
13.1 Introduction The main objective of our research is to determine the different factors that influence human-machine interaction in assistive robotics for people with severe motor disabilities. Our contribution consists in providing quantitative evaluation methods that yield accurate data on the usability of a system such as the Manus robot. Assistive robotics for severely disabled people appeared in 1985 in France with the Spartacus project [3], which highlighted the problems of human-machine interaction when people with disabilities use robots. This work served as a reference for the robotics developments that followed [6], in particular the Afma-Master workstation and the Manus robot. The French Muscular Dystrophy Association (AFM), in close collaboration with our team, launched an operation intended to provide Manus robots to handicapped people living in their own homes. The aim was to verify whether a Manus robot could give substantial help to the users in their daily environment. The role of our team in this operation was to apply our quantitative evaluation method to obtain accurate information about the usability of Manus robots in real conditions at home. This experimentation brought out many points regarding the adaptability of this robot to handicapped people [4]. In other words, our contribution to designing an adaptable, configurable, and personalized robot system based on the Manus robot (the Commanus project) allowed us to build a new Manus control system that adapts to the user [2]. In this chapter we present preliminary evaluation results obtained with the latest version of the Commanus control system.
13.2 User Needs Analysis As noticed in a former evaluation performed by our team [4], users spend more than 50% of the task duration searching for a good strategy to reach the target or for the appropriate control action. Using the Manus arm in daily life with the commercial command architecture always requires the user to perform repetitive actions to realize the same tasks. For example, when the user targets an object, he manually performs a succession of actions in the "Cartesian" or "Joint" modes to move the robot toward the target; when this first phase is finished, he again manually performs the commands necessary to grasp the desired object (second phase). When the object is small or the task requires precision, for example inserting a floppy disk into a computer, the second phase can be complicated and needs many actions to adjust the position of the floppy. Our approach is to assist the user when using the Manus in an open environment. We developed a new control software architecture that allows the handicapped user to perform such a sequence of commands as a single automatic movement of the robot, stored in a gesture library. The proposed new control modes provide semi-autonomous control of Manus, which decreases the number of actions necessary to perform tasks. The new system makes it possible, on the one hand, to reduce the manipulation problems that users meet during complex tasks and, on the other hand, to solve the problems linked to the user interface. With these new functionalities, we observed a reduction in task duration and in the number of actions necessary for complex tasks.
13.3 Hardware and Software Organization Manus is a tele-manipulator robot mounted on an electric wheelchair. Its objective is to promote the independence of severely handicapped people who have lost their upper and lower limb mobility, by increasing their potential activity and by compensating for their prehension motor incapacity. Manus is a robot with six degrees of freedom, with a gripper at the extremity of the arm that permits grasping objects (payload of 1.5 kg) in all directions. It is controlled by a 4×4-button keypad, a joystick, a mouse in the latest version available in our lab, and also, in the near future, a touch screen. These input devices give the user the possibility of handling Manus, and the display unit gives the current status of the robot [1]. The main advantage of the Manus is that it can perform tasks in a non-structured environment, which corresponds, in general, to the real environment of the end-users. However, the operation of such an assistive device by disabled people is not always straightforward [4]. 13.3.1 Hardware Architecture The control box is based on a PC-104 computer card; this system was planned for the evaluation procedure. Owing to hardware integration delays with this portable computer box, we decided to proceed with the evaluation using a standard desktop computer that runs the Commanus control software (Fig. 13.1). This system supports the following input control devices: a 16-digit keypad, a 3D joystick, a trackball, and any type of mouse. A screen (visual feedback) displays a user-friendly graphical interface for mouse control or for scanning devices controlled by one switch. This screen acts as an input device that helps the user send commands to Manus. Users can also find there the Manus status and warning messages that are helpful during manipulation. Visual feedback was used during the whole evaluation period.
Fig. 13.1. System to collect data for quantitative evaluation
The configuration software tool "OT" allows the occupational therapist to easily configure input devices with different menus containing activities associated with Manus commands. During the evaluation, different parameters were recorded online. This method makes it possible to know how much time the user spent in each mode, how many actions were performed in each mode, and how many warning and error messages occurred during manipulation. Data analysis allowed us to compare the use of the Manus robot among different types of tasks and with several end-users. 13.3.2 Software Command Architecture The Manus software architecture lets us choose among several modes in order to offer different ways of controlling the arm [7]. As shown in Fig. 13.2, the basic software architecture, called the Manus modes, has three control modes: the Cartesian Mode, which allows the user to manually control the arm and gripper motion in Cartesian space; the Joint Mode, which allows direct and separate control of the six arm joints; and the Main Mode, which gives access to the above modes and allows the user to perform specific commands such as fold-in, fold-out, drink, etc. In response to earlier evaluation results, we developed a new command architecture, called the Commanus modes, and implemented several extra modes alongside the Manus modes to meet the users' needs. Four new control modes have been developed and integrated into the software architecture of the Manus robot.
Fig. 13.2. Commanus command architecture (basic Manus modes: Main with fold-in/fold-out, Cartesian, Joint; additional Commanus modes: Point-to-Point, Pilot, Record & Replay, Relative, Pointing & Doing)
The first additional mode is the "Record & Replay Mode", which allows the user to record specific positions and movements; when one of them must be reached later, an automatic movement generated from the recorded point is all that is required. When the gripper is in a specific position and configuration in a given space, the user can record the coordinates of the point and later return directly to it at any time using only one action on the input device [4]. The second is the "Pilot Mode", which allows steering the Manus robot in the direction of the gripper along its main axis.
This mode was developed mainly for controlling the robot with a 2D joystick when driving the robot forward toward the target, which is similar to the human reaching movement when gripping an object. The third mode is the Relative Mode, which allows the gripper, usually when it is near the target, to move in small defined steps relative to the target position and to the current position and orientation of the gripper. This mode is useful during tasks requiring high accuracy, such as inserting a videotape into a VCR [5]. During evaluation, we noticed that, even with these modes, the user has to perform the same sequence of command actions when processing repetitive tasks, such as eating or drinking with the Manus. This is natural for human movement, but burdensome when using a robot. Our strategy was to identify the repetitive movements and implement them as automatic gestures available to the user. A gesture library was integrated into the software architecture, and a path planner was developed to enable the Point-to-Point and Pointing-and-Doing modes.
13.4 Quantitative Evaluation Method The aim was to develop original methods based on quantitative evaluation to obtain accurate data on the usability of the Manus robot and particularly on the contribution of the newly added modes. The idea was to record all the actions performed by the users on the input devices. The generated log file contains several parameters, such as all commands performed by the user on the input device, the processing time of the robot, the corresponding mode, and the orientation and position coordinates of the robot gripper. This method allows us to determine several key usability parameters of Manus, such as the time spent in each control mode, the number of actions processed in each mode, and the number of warning and error messages generated. The evaluation process is decomposed into two phases: a learning phase and an evaluation phase in which Manus is used to perform specific tasks. During the learning phase, which lasted from 10 minutes to an hour depending on the user, the users learn how to control Manus, how to use the input device functionalities, and how to switch between the different control modes described above.
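The kind of post-processing described here can be sketched as follows; the log format (timestamp in ms, control mode, user event) is hypothetical, since the chapter does not specify the actual Commanus file layout.

```python
from collections import Counter, defaultdict

# Hypothetical log: one entry per state change, (timestamp_ms, mode, event).
log = [
    (0,     "Cartesian velocity", "move"),
    (1200,  "Cartesian velocity", "move"),
    (4000,  "NOP",                None),
    (9000,  "Relative",           "step"),
    (9800,  "NOP",                None),
    (12000, "end",                None),
]

time_per_mode = defaultdict(int)  # milliseconds spent in each control mode
actions_per_mode = Counter()      # number of user actions per mode

for (t0, mode, event), (t1, _, _) in zip(log, log[1:]):
    time_per_mode[mode] += t1 - t0
    if event is not None:
        actions_per_mode[mode] += 1

print(dict(time_per_mode))    # e.g. {'Cartesian velocity': 4000, 'NOP': 7200, ...}
print(dict(actions_per_mode))
```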
13.5 Preliminary Results These preliminary results represent 9 months of evaluation, mainly dedicated to the Commanus version, and correspond to more than 37 hours of effective use of the Manus robot inside the rehabilitation hospital of Garches. Evaluations outside the hospital and in the homes of people with disabilities have also been performed, but using only the commercial Manus version [1].
Nineteen persons with four-limb impairment, mainly with spinal cord injuries and muscular dystrophies, took part in our evaluation: 2 muscular dystrophy patients, 13 hospitalized spinal cord injury patients, 2 patients with cerebral palsy, and 2 patients with locked-in syndrome. One of the spinal cord injury patients had his accident more than 2 years ago; he lives in a specialized institution. This patient had taken part in the evaluation of the first version of Manus and has since expressed the wish to use the arm in real-life situations. One person with muscular dystrophy lives at home with his family and had taken part in a former evaluation of Manus; he has had a Manus robot mounted on his wheelchair for one year. The participation of these two persons allowed us to compare the new Commanus version with the commercial version of Manus. The average age of the users taking part in this series of evaluations is about 32 years; in addition, 2 people were more than 50 years old, and one spinal cord injury patient was 10 years old. 13.5.1 Modes and Time of Use The first graph (Fig. 13.3) shows how the time was distributed over the whole evaluation duration of 37 hours (134,360,392 ms):

Mode                 Execution time [ms]
NOP                  108,176,881
Cartesian velocity    10,837,469
Joint velocity         5,142,332
Cartesian position     3,497,029
Pilot                  3,124,476
Joint position         2,479,878
Cartesian relative       868,100
Joint relative           234,227

Fig. 13.3. Time repartition for control modes
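A quick sanity check on these figures: computing each mode's share of the total from the values above reproduces the percentages quoted in the text (a minimal sketch, not part of the original evaluation software).

```python
times_ms = {
    "NOP": 108_176_881, "Cartesian velocity": 10_837_469,
    "Joint velocity": 5_142_332, "Cartesian position": 3_497_029,
    "Pilot": 3_124_476, "Joint position": 2_479_878,
    "Cartesian relative": 868_100, "Joint relative": 234_227,
}
total = sum(times_ms.values())  # 134,360,392 ms, i.e. about 37 hours
for mode, t in sorted(times_ms.items(), key=lambda kv: -kv[1]):
    print(f"{mode:20s} {100 * t / total:5.1f} %")
# NOP comes out at about 80.5 % and Cartesian velocity at about 8.1 %.
```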
The "NOP" time (no actions, or rest time) is considerable and represents 80.5% of the total duration of the evaluation. We must, however, distinguish between four types of NOP time:
− no-action time: when the user takes a real rest without switching off the Manus;
− cognitive time: when the user is thinking about the sequence of actions he plans;
− physiological motor time: the physiological time necessary to execute a movement with the hand or a finger.
The "Cartesian velocity mode", corresponding to the Cartesian Mode, is the most frequently used after the NOP time (8.1% of the time) and can be operated at three speeds: slow (1), medium (2), and high (3). As shown in Fig. 13.5d, within the Cartesian mode, the Cartesian velocity mode at speed 2 is the most frequently used (53% of the time), whereas the Cartesian mode at speed 3 is hardly used (1.2%), except when users want to make large movements of the robot arm. When a user wants to perform complex tasks, or when the gripper is close to the target, he usually chooses the Cartesian mode at low speed. 13.5.2 Number of Actions The breakdown in terms of events, or actions performed on the input device, is shown below (Fig. 13.4). The whole recording corresponds to 9715 event actions sent to the robot: Cartesian velocity 51.9%, Joint velocity 23.7%, Pilot 15.0%, Cartesian relative 4.1%, Cartesian position 3.4%, Joint relative 1.1%, and Joint position 0.8%.

Fig. 13.4. Number of action repartition
Users' manipulation of the input device generated 633 events without any robot activity (keypad button or joystick events without a function). The robot generated 92 warning messages (robot in a deadlock configuration, limit of working space reached, etc.). To recover from a deadlock configuration, the user resorts to the joint mode to avoid restarting the system. During the evaluation, users had to perform the same tasks and follow the same scenarios. Different parts were performed with different sets of control modes:
− P-01: part using the basic Manus modes
− P-02: P-01 + Pilot mode
− P-03: P-01 + Point-to-Point mode
− P-04: P-01 + Relative mode
− P-09: free scenario (all modes)
We observed that in the parts using the new modes, users needed fewer actions and less time to perform the tasks, which means that the Pilot mode (Fig. 13.5a and Fig. 13.6a), the Relative mode (Fig. 13.5c and Fig. 13.6b), and the Point-to-Point mode (Fig. 13.5b and Fig. 13.6c) all contributed to reducing the number of actions and the task duration.
P-01 / P-02 U-01
U-01
U-02
U-02
P-01 P-02
P-01 P-04
P-01 P-03
U-09
U-09
U-14
U-14
U-15
U-15
a
b
P-01 / P-04
P-01 / P-09 -> Task T-04 U-01
U-02 P-01 P-09
U-14
U-02 U-14 U-15
c
d
Fig. 13.5. Mode contribution by number of actions. a Pilot mode; b Point To Point mode; c Relative mode; d Commanus mode
Using Manus with all the new modes available at once likewise required fewer actions and less time to perform the tasks (Fig. 13.5d and Fig. 13.6d), which confirms that the new modes contribute to reducing both the number of actions and the execution time.

Fig. 13.6. Mode contribution by execution time (same parts and users as Fig. 13.5). a Pilot mode; b Relative mode; c Point-to-Point mode; d Commanus mode
13.6 Discussion During the evaluations, we noticed the following:
• The standard relation between movement and mode showed that the Cartesian, Joint, and Pilot modes were generally used to carry out movements of large amplitude. The Relative mode remains suited to the final phase, for example to insert a floppy disk into a computer.
• The visual feedback helped simplify the training phase; this was reflected in a shorter training duration, with a single 20-minute session sufficient for the majority of the users, compared with several sessions previously.
• The use by the occupational therapists of the OT configuration tool, dedicated to defining the control mapping on the input device, was appreciated by the users. We noticed that a series of input device configurations was necessary, on average 2 to 5 iterations, to arrive at a personalized version for each user.
• The Pilot Mode remains difficult to learn because it forces the user to imagine a virtual reference frame fixed on the gripper. But for those who mastered this step, the expert users, it decreased the execution time of the tasks. Some users even found it more intuitive by comparison with the movement of the hand.
• These results confirmed our former evaluations regarding the favorable contribution of the Point-to-Point mode.
13.7 Conclusion The evaluation of the new architecture allowed us to make several improvements to the system. The first trials with disabled patients showed their interest in the newly added modes. The results obtained are only preliminary, and we cannot yet draw conclusions about the real contribution of the new command architecture modes to the daily use of Manus at home and outside. More evaluations in real-life conditions, with the help of disabled people, are necessary to test all the new functions offered by the proposed system. The new architecture allows plugging in different input devices, which can be selected according to each end-user. To support the personalization of input devices, we have developed the user-friendly "OT" tool, which allows users to select any appropriate device and to modify the mapping of actions for each robot mode. The evaluation of this online mapping and adaptation of input devices has already started, and results will be presented in the near future. The continuation of this research work is ensured through the new European project AMOR¹, which started in May this year and is a logical continuation of the Commanus project. The development realized during Commanus will lead to a new command architecture for Manus, which will be integrated into the AMOR project. The aim is to propose a new generation of the Manus robot that takes the users' requirements into account.
Acknowledgments The authors would like to thank the users from the rehabilitation hospital of Garches who participated actively in these experiments. Funds for this project were provided by the European Commission.
References

1. Abdulrazak B, Mokhtari M, Grandjean B (2003) Assistive robotics for severely disabled people: The Commanus project. AMSE, Journal of the Association for the Advancement of Modeling and Simulation Techniques in Enterprises, Special edition HANDICAP, Barcelona, Spain 63(4): 1–14
2. Abdulrazak B, Mokhtari M, Grandjean B, Dumas C (2002) La robotique d'aide aux personnes handicapées, le projet Commanus [Assistive robotics for people with disabilities: the Commanus project]. Proc Handicap 2002, la deuxième conférence pour l'essor des technologies d'assistance, Porte de Versailles, Paris, France, pp 89–94
3. Chatila R, Moutarlier P, Vigouroux N (1996) Robotics for the impaired and elderly persons. IARP Workshop on Medical Robots, Vienna, Austria, 1–2 Oct 1996
4. Heidmann J, Dumazeau C (1999) Evaluation du robot Manus par des personnes lourdement handicapées [Evaluation of the Manus robot by severely disabled persons]. Rapport interne Handicom
5. Mokhtari M, Abdulrazak B, Rodriguez R, Grandjean B (2003) Implementation of a path planner to improve the usability of a robot dedicated to severely disabled people. Proc IEEE Robotics and Automation International Conference (ICRA'2003), Taiwan
6. Mokhtari M, Didi N, Roby-Brami A (1999) A multidisciplinary approach in evaluating and facilitating the use of the Manus robot. Proc IEEE Robotics and Automation International Conference (ICRA'99), Detroit, Michigan, USA
7. Vertut J, Coiffet P (1984) Les Robots. Tome 3a: Téléopération, évaluation des technologies [Robots. Vol. 3a: Teleoperation, evaluation of the technologies]. Ed. Hermes, France
¹ AMOR project, EEC Growth program: Mechatronic upgrade & wheelchair integration of the Manus arm manipulator. Partners involved: Exact Dynamics, TNO-TPD, and Koningh in the Netherlands; Ideasis and ExpertCam in Greece; Lund University in Sweden; HMC in Belgium; and INT and AFM in France
14 Processes for Obtaining a “Manus” (ARM) Robot within the Netherlands Gert Willem Römer, Harry Stuyt, Geer Peters, and Koos van Woerden
14.1 Introduction This chapter describes both the current (anno 2003) and future processes of prescribing the Assistive Robotic Manipulator (ARM, also known as "Manus") to potential clients/users within the Dutch public health insurance system. In addition, the results of two studies conducted in the Netherlands relevant to the cost-effectiveness of the ARM, the indication criteria, and the targeted user groups are summarized and discussed.
Fig. 14.1. Two ARM users pouring a beer
14.2 Wheelchair-Mounted Service Manipulator ARM The wheelchair-mounted service manipulator ARM (also known as "Assistive Robotic Manipulator" and "Manus") is produced by Exact Dynamics, in the Netherlands, and assists disabled people with very limited or non-existent hand and/or arm function (see Figs. 14.1 and 14.2).
Fig. 14.2. The ARM and its components.
Table 14.1. Physical characteristics and properties of the ARM

Property                  Value
Degrees of freedom (DoF)  6 + gripper + lift unit (total 8)
Reach                     80 cm + 25 cm (lift unit, optional)
Weight                    13 kg (ARM only), 18 kg (incl. lift unit)
Lift capacity             up to 2 kg
Gripper                   2 fingers, with 3-point grasping finger tips
Gripper clamping force    20 N
Repeatability             ±1.5 mm
Velocity                  9.9 cm/s (max. joint velocity 30°/s)
Safety features           slip couplings; limited speed, acceleration, and gripper force; and more
Power supply              24 V DC, 3 A (max.)
Input devices             joystick, keypad, switches, sip & puff, UniScanner, EasyRider, etc.
Display                   5×7 LED matrix with buzzer
Control modes             Cartesian mode, joint mode
RoI                       1 to 2 years (see Sect. 14.5.2)
The typical ARM user may suffer from muscular dystrophy, multiple sclerosis (MS), cerebral palsy, rheumatism, or a spinal-cord lesion. The ARM allows a variety of Activities of Daily Living (ADL) tasks to be carried out in the home, at work, and outdoors. These tasks include drinking from a glass, removing an item from a desk, scratching one's head, discarding an item in a trash receptacle, handling a floppy disk, shopping, or posting a letter. The ARM can be operated using a wide range of input devices that include, but are not limited to, a keypad (sixteen buttons in a 4×4 grid) or a joystick (possibly the joystick of the wheelchair). Additionally, a headband- or spectacle-mounted laser pointer, or another specially adapted device, can be devised and constructed to be operated by a non-disabled body part, such as the chin. Table 14.1 lists some technical characteristics of the ARM. Since its commercial introduction more than ten years ago, the ARM has proved to be a safe, efficient, and highly appreciated assistive device. The time needed to complete an ADL task is one characteristic determining the performance of a rehabilitation robot [4]: a rehabilitation robot that can carry out tasks quickly is, in this respect, a good robot. Table 14.2 lists the typical times required by trained users to carry out some typical ADL tasks using the ARM.
Table 14.2. Typical times to complete ADL tasks using the ARM

Task                                                                          Time [min]
Pick up a water bottle from a table, fill a glass with water,
and place the bottle back on the table                                        1.25
Pick up a glass of water from a table, bring it to one's mouth,
and take a sip (no straw)                                                     1
Pick up an object from the floor and put it on a table                        1.6
Grab a chip from a plate, bring it to one's mouth, eat the chip,
and return the gripper to the plate                                           1
Pick up and answer a mobile phone                                             1
Training required to teach an inexperienced user how to pick up
an object from the floor                                                      5–10
14.3 The Current Process of Providing an ARM to a User The distributor of the ARM in the Netherlands is Revalidatietechniek hetDorp (known as RTD), currently the organization responsible for the process of prescribing an ARM for potential users, including:
• informing potential users of the benefits of the ARM,
• establishing the indications for each client based on the indication criteria,
• assessing the technical modifications required to attach the ARM to the wheelchair,
• filing the formal application for reimbursement for the ARM,
• installing the ARM on the wheelchair,
• training the user.
The manufacturer, Exact Dynamics, is responsible for the service and maintenance of each ARM.
14.3.1 Informing Users about the Benefits of the ARM Rehabilitation specialists at RTD are familiar with the features and benefits of the ARM for severely handicapped individuals. These specialists inform potential clients nationwide of the effectiveness and benefits of the ARM. Potential users have also contacted RTD directly after learning of the existence of the ARM, either through the network of current ARM users, the media, or a rehabilitation center. Once interested in the ARM, a potential client consults one of the specialists at RTD, who then guides them through the rest of the procurement process. Such a specialist is hereinafter referred to as an "ARM-therapist". 14.3.2 Indication Criteria The following medical, social, and physical requirements must be met for an ARM to be prescribed. The (potential) user must:
• have very limited or non-existent arm and/or hand function, and be unable to carry out ADL tasks independently (without the help of another aid),
• use an electric powered wheelchair,
• have cognitive skills sufficient to learn how to operate and control the ARM,
• have a strong will and determination to gain independence by using the ARM,
• have a social environment, including caregivers, friends, and/or relatives, that encourages the user to become more independent by using the ARM.
A formal indication is determined by an ARM-therapist, based on these criteria, to assess the maximum functional benefit of the ARM for each client. 14.3.3 Stand-Alone Test If the indication criteria (previous section) are met, the client is given the opportunity to explore and test the ARM in a so-called "stand-alone" setup, in which the ARM is mounted on a self-supporting fixture that the wheelchair can be positioned next to (see Fig. 14.3). This setup lets the client explore his or her individual ability to operate the ARM using different control devices. This not only helps the potential client decide whether or not he or she wants to use the ARM, but also allows the ARM-therapist to determine whether the user is capable of learning to operate it. In addition, the ARM-therapist determines which technical electrical and mechanical modifications to the wheelchair are required to optimize the operation of the ARM. For example, the mounting location of the ARM on the wheelchair must be chosen based on the physical characteristics of the client and the design of the individual wheelchair itself. The posture of each user and their field of view also affect the final positioning of an ARM installation. Generally, the ARM is mounted in front of the user on the left side (Fig. 14.1) or right side (Fig. 14.4) of the wheelchair.
Fig. 14.3. Using a stand-mounted ARM, this user was able to pick up, manipulate, and position a small object, after only 5 to 10 minutes of training
Fig. 14.4. An ARM mounted on the right side of a wheelchair
Also at this point, the most effective input control device for each client is determined and specified (e.g., wheelchair joystick, keypad, chin control, switches, buttons, etc.). Currently, users in the Netherlands control their ARM with either their original wheelchair joystick (about 80%) or the 4×4 keypad (about 20%). The keypad option is preferred by clients who still have some hand function because it offers them a slightly more convenient way to operate the ARM.
14.3.4 Formal Application and Funding of an ARM Once the previous phase is successfully completed, the ARM-therapist files a formal request for funding of the ARM. Currently, funding for the ARM is not yet provided by the Dutch social security system or public health insurance systems, but by a three-year governmental exploratory grant in the framework of the AWBZ (the Dutch Act on Particular Medical Expenses). This grant, ending in 2004, covers the purchase and supply of approximately 240 ARMs, including their five-year service contracts, the adaptation of the wheelchairs, and the training of the users (Sect. 14.3.6). Each ARM is purchased from and serviced by Exact Dynamics (Sect. 14.3.7). Part of the formal application process for each ARM requires that the user's local municipality, and the associated wheelchair service organization, grant permission for the modification of the user's wheelchair onto which the ARM is to be installed. This is because, according to the WVG (the Dutch Act on Supplies for the Handicapped), local municipalities fund and legally own each user's wheelchair. This formal bureaucratic process can take up to several months and obviously needs to be simplified in the future. 14.3.5 Mounting the ARM on the Wheelchair Once the funding of the ARM and the permission to fit the ARM to the wheelchair have been obtained, the wheelchair is shipped to RTD, where it is fitted with the ARM. This integration includes:
• mechanical adaptations, such as the quick-change mount used to easily attach and detach the ARM,
• electrical modifications, including the ARM's power supply, rewiring of the wheelchair's joystick if necessary, or the functional integration of the ARM with an on-chair Environmental Control Unit (ECU) or scanner.
Depending on the complexity of each installation, the modification can take one to two days. During the ARM installation period, the client may generally apply to receive a replacement wheelchair from the local municipality. 14.3.6 Training Each client is trained by an ARM-therapist, in his or her own home, to operate the ARM. Usually, only one session is required to familiarize the user with the operational characteristics of the ARM; safety issues are also discussed at this session. On average, it takes 5 to 10 minutes of training for an inexperienced user to pick up an object for the first time (see Fig. 14.3). Depending on the user, up to six additional training sessions, at two-week intervals, may be necessary to teach the tips and tricks needed to perform ADL tasks efficiently. After three months, the use of the ARM is evaluated, which may result in more training, additional technical modifications to the wheelchair, or the installation of another input control device.
14.3.7 Service and Maintenance The ARM is serviced by Exact Dynamics. Each robot funded by the exploratory grant (Sect. 14.3.4) is sold with an optional five-year service contract that includes annual maintenance; mechanical, electrical, and software updates; repair of (manufacturing) defects; and a helpdesk support line. A user reports a malfunctioning ARM to RTD. If the problem cannot be repaired on-site by RTD, the ARM is shipped to Exact Dynamics for repair; usually it is returned to RTD within 48 hours for reinstallation.
14.4 The Future Process of Prescribing the ARM As mentioned previously, the ARM is not yet funded by the Dutch social security or social health insurance systems, but by a governmental exploratory grant (Sect. 14.3.4), which explains the laborious and bureaucratic process of prescribing an ARM for a user. It is expected that the ARM will be on the so-called "prescription list for medical aids" from 2005, which implies that the reimbursement of all costs of the ARM (including wheelchair adaptation, etc.) will be the responsibility of a single type of organization, namely the social health insurance companies. This will greatly simplify the prescription process. RTD currently operates from a single location in the Netherlands. By the end of 2003, four rehabilitation centers, distributed evenly over the northern, southern, eastern, and western regions of the Netherlands, will take over tasks from RTD. These centers are independent of RTD and will be responsible for increasing public awareness of the ARM and informing potential users of the opportunities and possibilities it offers. If a potential user applies for an ARM at his or her social health insurance company, the insurance company will apply for a formal indication (Sect. 14.3.2) at a rehabilitation center. This indication will be carried out at the indication center by a team of specialists consisting of a medical doctor, an ergonomic engineer, a physiotherapist, and a rehabilitation technician from RTD. When a potential client meets the requirements, the stand-alone test (Sect. 14.3.3) will be carried out at a rehabilitation center. The team will determine whether the potential user has sufficient cognitive skills to control the ARM. Once a positive determination is made, the wheelchair will be shipped to RTD for integration of the ARM. It is still uncertain whether training will be offered in group settings at a rehabilitation center or individually at the users' homes. Each user will have the opportunity to test the ARM for three months and will then be evaluated. Once a definite prescription has been made, the regional rehabilitation center will be the intermediary between the user, RTD, and Exact Dynamics.
14.5 Summary of Two Recent Dutch ARM-User Evaluations This section summarizes and discusses two major Dutch ARM-user evaluations. 14.5.1 User Study Conducted by iRV In 1999, the Dutch Council of Health Care Insurance (CvZ) commissioned the Dutch Institute for Rehabilitation Issues (iRV) to carry out a study to analyze the target user-group, and the cost-effectiveness and indication criteria of the ARM [1, 2]. It was estimated that the size of the group of potential users in the Netherlands, which could benefit from the ARM, ranges from about 800 to 2000 individuals. This estimation was based on medical diagnosis, the availability of a powered wheelchair, user age, individual personal characteristics, intended use of the ARM, and the user’s environment. Therefore, given the population of the Netherlands (16 million), the number of potential users ranges from 0.005% to 0.0125% of the Dutch population. Indication criteria was formulated and evaluated, which resulted in the criteria as described in Sect. 14.3.2. Part of the study was to determine the effect of the ARM on the independence of specific ARM users, and their perception of the changes to their quality of life. The study compared the activities of 13 long term (> 4 years) ARM users, to 21 non-ARM users having a comparable level of impairment. The activities of both groups were analyzed with respect to individual levels of independence, required assistance, perceived quality of life, and more. The observations included eating, drinking, self-care activities like washing and brushing teeth, removing objects from the floor or out of a cupboard, feeding pets, and operating typical devices such as a VCR. Statistical evaluation showed that 10% of the users applied the ARM for more than 4 hours per day, 30% for 2 to 2½ hours per day, and 60% for less than 2 hours per day. So, about 2 hours per day on average. It was noted that ARM users carried out about 40% more ADL-tasks themselves, than did the non-ARM users. In addition, ARM users required about 30% less assistance to carry out those tasks, indicating greater independence. For the ARM users, assistance was mainly required to prepare the specific task, like uncorking a bottle of wine, while pouring and drinking the wine was then carried out by the ARM users themselves. Moreover, ARM users reported an increased feeling of independence and autonomy, which led to a higher level of satisfaction and pride when they accomplished these activities unassisted. Although the latter benefits of the ARM cannot be expressed in terms of money, they are of course of great value. Detailed results of this study can be found in de Witte et al. [1] and Gelderblom et al. [2]. Discussion Usually Dutch ARM users have the availability of numerous additional ADL aids, and their homes are furnished with a high degree of home automation. It is therefore likely that an ARM user which lacks these additional aids will (need to) use
the ARM for more than the reported 2 hours per day. The expenses on additional ADL aids could then be saved.
14.5.2 User Study Conducted by hetDorp
In 1998, almost parallel to the iRV study, the Siza Dorp Group (of which RTD is a subsidiary) started an independent user study [3]. The main focus of this study was to quantify the net cost savings on the labor costs of ADL caregivers due to the reduced ADL assistance required by ARM users. Eight non-ARM users ranging in age from 21 to 37 were selected based on the indication criteria (Sect. 14.2.2), provided with an ARM, and trained in its use. Next, one-week observations of the users took place at 3-month intervals, comprising a total of 4 weeks in 12 months. During these observations, the amount and duration of ARM usage, as well as the amount and duration of ADL assistance, were recorded (see Table 14.3).
Table 14.3. Average times (in hours per day) of ADL assistance and of ARM usage, with and without the use of an ARM [3]
              Without ARM          With ARM             Difference
              min   avg.  max      min   avg.  max      min   avg.  max
Assistance    2.9   3.7   5.8      1.4   2.8   4.8      0.7   1.2   1.8
ARM usage     0     0     0        0.6   1.5   3.7      0.6   1.5   3.7
A wide variation in ARM usage, and in the ADL assistance required, was noted. This variation was attributed to the cognitive and physical capabilities of each user, including any lack of desire to use the ARM. The results show that, due to the use of the ARM, at least 0.7 to 1.8 hours per day can be saved on the labor costs of ADL caregivers. With an average hourly rate of 28 euro for Dutch ADL assistance, this results in savings of 7,154 to 18,396 euro per year. Detailed results of this study can be found in van den Brand and van de Ven [3].
Discussion
It can be argued that the measured reduction of 0.7 to 1.8 hours per day of ADL assistance is conservative, for the following reasons. The group of ARM users that was tested had additional aids available, as well as a high degree of home automation. It is therefore likely that an ARM user lacking these additional aids will use the ARM more and, as a result, will save more on assistance than the reported 0.7 to 1.8 hours per day. Also, expenses for additional ADL aids are then saved. Unfortunately, this study does not report which cognitive and physical limitations impede the use of the ARM, or to what extent. Such information is relevant for improving or modifying the ARM (e.g. a different or optimized input device) so as to achieve
increased ARM usage and therefore increased savings on ADL assistance. The individual desire and level of determination to use the ARM are also important factors governing the degree of ADL assistance required. It is reasonable to expect that, in the future, once a user has an ARM available, he/she must use the ARM for a minimum number of hours each day to realize the cost benefit. These aspects indicate that a saving on ADL assistance of 2 to 3 hours per day could be achieved. It is expected that the trend of reduced ADL assistance will be amplified by the future introduction of a personal care budget. This method of financial support will offer handicapped individuals the opportunity to select and acquire the level of human or technical aid they desire. The cost of a standard ARM, including a 3-year warranty, is about €25,000, excluding local sales tax, wheelchair modification, and training. With the estimated savings on ADL assistance of 2 to 3 hours per day, this implies a return on investment of about 1 to 1.5 years. An even more favorable return on investment is obtained when the fact that the ARM user is able to work (again) is incorporated in the calculation. This is especially important for countries without social security or public health insurance systems.
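The cost arithmetic above is easy to reproduce. The following sketch recomputes the annual savings and the payback period from the reported hourly rate and the two estimates of saved assistance time (an illustration only, not code from the studies):

```python
HOURLY_RATE_EUR = 28.0      # average hourly rate for Dutch ADL assistance
ARM_COST_EUR = 25_000.0     # standard ARM incl. 3-year warranty (excl. tax etc.)

def annual_savings(hours_saved_per_day: float) -> float:
    """Labor-cost savings per year for a given daily reduction in assistance."""
    return hours_saved_per_day * HOURLY_RATE_EUR * 365.0

# Measured reduction (hetDorp study): 0.7-1.8 h/day -> 7,154-18,396 euro/year.
for h in (0.7, 1.8):
    print(f"{h} h/day saved -> {annual_savings(h):,.0f} euro/year")

# Projected reduction of 2-3 h/day -> payback of roughly 0.8-1.2 years on the
# bare ARM price; the chapter quotes about 1-1.5 years, presumably because
# wheelchair modification and training costs also have to be recovered.
for h in (2.0, 3.0):
    print(f"{h} h/day saved -> payback in {ARM_COST_EUR / annual_savings(h):.1f} years")
```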
References
1. Witte LP de et al. (2000) Manus een helpende hand. iRV report
2. Gelderblom GJ et al. (2001) Cost-effectiveness of the MANUS robot manipulator. In: Proc. of the International Conference on Rehabilitation Robotics (ICORR 2001), pp 340–345
3. Brand J vd, Ven A vd (2000) Onderzoeksrapport project Manus robot manipulator. Siza Dorp Group report
4. Römer GRBE, Johnson M, Driessen B (2003) Towards a performance benchmark for rehabilitation robots. In: Proc. 1st International Conference on Smart Homes and Health Telematics, September 24–26, 2003, Paris, France, pp 159–164
15 Experimental Analysis of the Proprioceptive and Exteroceptive Sensors of an Underactuated Prosthetic Hand M. Zecca, G. Cappiello, F. Sebastiani, S. Roccella, F. Vecchi, M.C. Carrozza, and P. Dario
Abstract The development of a prosthetic hand able to replicate as much as possible the grasping and sensory features of the natural hand represents an ambitious project for scientists. State-of-the-art technology is still far from providing engineers with components whose performance is similar to that of their natural models, and active prosthetic hands can be only a pale replication of the missing natural limb. This chapter presents current research efforts towards the development of a self-adaptive and anthropomorphic prosthetic hand. In particular, the chapter is focused on the problem of replicating the natural sensory system of the hand with an artificial proprioceptive and exteroceptive sensory system.
15.1 Introduction
The hand is the end effector of the upper limb, which in humans serves the important function of prehension, as well as being an important organ for sensation and communication [11]. The development of a prosthetic hand able to replicate as much as possible the grasping and sensory features of the natural hand represents an ambitious project for scientists because of its challenging characteristics and performance: a large number of degrees of freedom (22 DoFs), redundancy and complexity of proprioceptive and exteroceptive sensors, and advanced control [6]. State-of-the-art technology is still far from providing engineers with components whose performance is similar to that of their natural counterparts, and prosthetic hands can be only a pale replication [3]. Commercial hand prostheses have one or two degrees of freedom (DoFs) providing finger movements and thumb opposition. Due to this lack of DoFs, such devices are characterized by low grasping functionality [4]. In order to overcome these limitations, and to enhance the dexterity and usability of myoelectric hand prostheses, a self-adaptive and anthropomorphic prosthetic hand has been developed [9]. In particular, this chapter is focused on the problem of replicating the natural sensory system of the hand with an artificial sensory system designed and fabricated according to a biomechatronic approach [3].
15.2 Mechanical Structure
In general, cosmetic requirements force engineers to incorporate the prosthetic device in a glove and to keep the size and mass of the entire device compatible with those of the human hand. The combination of robust design goals, cosmetics, and limitations of available components can be met only with a drastic reduction of DoFs compared to those of the natural hand [2]. As a consequence, prostheses are characterized by low grasping functionality and do not allow adequate encirclement of objects in comparison to the human hand. The low flexibility and low adaptability of artificial fingers lead to instability of the grasp in the presence of an external perturbation, as illustrated in [10]. In order to enhance the dexterity of prosthetic hands while keeping an intrinsic actuation solution and a simple control algorithm, we adopted an innovative design approach based on underactuated mechanisms [5, 8]. The result of these efforts is a three-fingered anthropomorphic hand called the RTR II hand (Fig. 15.1) [9].
Fig. 15.1. The RTR II prosthetic hand (on the left) and its actuation and transmission system (on the right)
This hand weighs about 320 grams and has nine DoFs in total, but only two motors. The index and middle fingers are identical (both have three phalanges), while the thumb has two phalanges, as in the human hand. The hand is based on a tendon transmission system (Fig. 15.1). The tension of the tendons generates a torque around each joint by means of small pulleys and allows the flexion movement; this transmission structure acts in the same way as the flexor digitorum profundus [5, 7]. The extension movement is realized by torsion springs. The adduction and abduction movements of the thumb are realized by means of a four-bar link mechanism.
The actuation system consists of two DC motors with different characteristics and functions:
• the first motor (Minimotor S.A., mod. 1727 006 C, with 20/1 Minimotor S.A. gearhead) acts on a slider providing all the fingers with the flexion/extension movements for power grasp (a detailed view of the slider is shown in Fig. 15.4);
• the second motor (Minimotor S.A., mod. 1219 006 C, with 10/1 Minimotor S.A. gearhead) allows the adduction and abduction movements of the thumb (positioning grasp, with less power).
15.3 Sensory System
The hand sensory system is the core of the prosthetic device. It is necessary to enable automatic control of grasping tasks without requiring special attention and effort from the user. In addition, the sensory system is designed with the idea of providing the amputee with cognitive feedback about the grasping task being performed [1]. For these reasons, according to a biomechatronic approach [3], the artificial sensory system aims at replicating the natural sensory system by providing both proprioceptive and exteroceptive sensing abilities.
Fig. 15.2. Photograph of the prototype of the RTR II hand
In summary, the sensory system is composed of different sensors (Fig. 15.2):
• proprioceptive position sensors – the position of the slider actuating the tendon transmission is monitored by a Hall-effect sensor, which detects the position of the slider along its stroke during the flexion/extension movements of the fingers, like the physiological angular sensors in the joint capsules [6, 13];
• proprioceptive joint angular position sensors – the thumb angular displacement when performing adduction/abduction movements is measured by a Hall-effect sensor embedded in the joint structure, like the physiological angular joint sensors in the joint capsules [6, 13];
• proprioceptive tendon force sensors – a tension sensor has been fabricated in order to continuously monitor the cable tension applied by the motors, like the Golgi tendon organ in series with the muscle [6, 13];
• exteroceptive force sensors – an artificial mechanoreceptor is obtained by means of an FSR sensor embedded in a silicone cap at the thumb tip. This sensor behaves like the physiological skin mechanoreceptors [6, 13]. The force sensor has been applied only on the thumb tip, which is significantly involved in all the functional grasping tasks [12].
The following subsections describe the sensory system and its performance in detail.
15.4 Materials and Methods
The calibration of the tensiometer and of the FSR pressure sensor has been done using an INSTRON R4464 testing machine (Instron Corporation, Canton, Massachusetts, USA) with a static load. The calibration of the two position sensors has been done manually with a Rupac digital caliper 1165. The data have been pre-processed with custom-made electronic boards. All the signals have been acquired using an acquisition board (National Instruments™ DAQ Card 1200) and processed by a custom LabVIEW™ interface to visualize in real time the output (in volts) versus the applied load or displacement. All data have been saved on a PC for post-processing and further reference.
15.4.1 Slider Position Sensor
A qualitative measurement of the phalanges’ positions is obtained by detecting the displacement of the slider, on which a Hall-effect sensor (model SS496B, Honeywell Inc, Freeport, Il, USA) is mounted. Twelve magnets (model 103MG5, Honeywell Inc, Freeport, Il, USA) have been mounted in front of the slider in order to generate a monotonic magnetic field (Fig. 15.2). Thanks to a finite-element (FE) simulation, a configuration of magnets able to generate an appropriate distribution of the magnetic field has been established.
The experimental analysis (Fig. 15.3) confirmed the simulation, and the final calibration on board showed good linearity and repeatability (enhanced by reducing the machining and assembly tolerances) [9]. With a power supply of 5 V, the output of the sensor can be approximated by:

Vout = 0.0643·Xslider + 1.8371,   R² = 0.9901   (15.1)

where R² is defined as:

R² = 1 − Σj(yj − ŷj)² / (Σj yj² − Σj ŷj²).   (15.2)

R², the coefficient of determination, is a number from 0 to 1 that reveals how closely the estimated values of the trendline correspond to the actual data. A trendline is most reliable when its R² value is at or near 1.
Fig. 15.3. Hall-sensor output voltage versus the linear slider’s stroke
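As an illustration, the following sketch applies the calibration curve (15.1) to simulated slider readings and scores the fit. It uses the common trendline form R² = 1 − SSres/SStot, which may differ slightly from the normalization of Eq. (15.2); the stroke range and noise level below are assumptions made purely for the example:

```python
import numpy as np

def slider_voltage(x):
    """Hall-sensor output (V) versus slider stroke, per Eq. (15.1)."""
    return 0.0643 * x + 1.8371

def r_squared(y, y_hat):
    """Coefficient of determination in the common trendline form
    R^2 = 1 - SS_res / SS_tot."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical check: noisy readings along an assumed 30-unit slider stroke.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 30.0, 16)
v_measured = slider_voltage(x) + rng.normal(0.0, 0.02, x.size)
print(f"R^2 = {r_squared(v_measured, slider_voltage(x)):.4f}")  # close to 1 for a good fit
```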
15.4.2 Tendon Tensiometer
In the RTR II hand, the transmission cables are fixed at one end to the index and middle distal phalanges and, at the other end, are connected to the linear slider through the two compression springs of the differential mechanism (Fig. 15.4). The cables act directly on two mobile elements, which compress the springs during the adaptive grasp of an object of irregular shape. The force sensor is obtained by sensorizing a mechanical component that acts as a mechanical stop for the cable and strains under the tension of a grasping cable.
Fig. 15.4. Cross section of the linear slider
In order to obtain an elastic strain of the component and an appropriate mechanical strength, a classic structural analysis with FE methods (using two symmetry planes and a linearity assumption) has been used to optimize the dimensions of the cantilever in the design phase. The tendon tensiometer is based on two strain-gauge sensors (model ESU-0251000, Entran Devices Inc, Fairfield, NJ, USA). The micromechanical structure has been fabricated to obtain a deformable cantilever (Fig. 15.4), in order to continuously monitor the cable tension applied by the motors, like the Golgi tendon organ in series with the muscle [6, 13]. A cone-shaped tip, fixed to the load cell, has been used to apply the load. A Wheatstone bridge, followed by a signal amplifier and a low-pass RC filter with ft = 100 Hz, has been used to detect the variation of the resistance of the two strain gauges. The output of the tensiometer Vout is related to the applied tension Tcable by the following equation:

Vout = 26.349·Tcable − 0.3732,   R² = 0.9996   (15.3)
The sensing device has shown good dynamics, sensitivity, and repeatability (Fig. 15.5); slight hysteresis and a small time delay were detected, due to the differential mechanism of the hand (there is a spring under the strained component) [9].
Fig. 15.5. Output response of the tensiometer
15.4.3 Thumb Position Sensor
In order to sense the position of the thumb, a round-shaped cap with two magnets (model 103MG5, Honeywell Inc, Freeport, Il, USA) has been assembled at its base, at the center of rotation of the four-bar link mechanism providing abduction/adduction capabilities to the thumb (Fig. 15.2). A Hall-effect sensor (model SS496B, Honeywell Inc, Freeport, Il, USA), located in front of the cap, determines the angular displacement of the thumb metacarpus.
Fig. 15.6. Response of the thumb position sensor
The output of the position sensor Vout is related to the angular position of the thumb θthumb by the following equation:

Vout = 131.1·θthumb − 319.76,   R² = 0.9575   (15.4)
The sensor has an operative range of 30° and has shown good sensitivity and repeatability (Fig. 15.6).
15.4.4 Force Sensor
An FSR pressure sensor (part #400, Interlink Electronics, Camarillo, Ca, USA), 5 mm in diameter and 0.3 mm in nominal thickness, has been embedded at the thumb tip: the whole distal phalange, with the FSR at the volar side, has been immersed in a thumb-shaped shell containing melted silicone. When the silicone polymerization was complete, a force-sensitive thumb tip was obtained. The hand was locked with the force sensor facing upwards, and a cylinder (5 mm in diameter), fixed to the load cell of the testing machine, was used to apply the load. The output of the FSR force sensor Vout is related to the applied force FFSR by the following equation:

Vout = −0.2887·ln(FFSR) + 1.2867,   R² = 0.9754   (15.5)
Preliminary experiments have shown low hysteresis and high repeatability (Fig. 15.7). The sensor gives information on the static pressure over a large area (more than 5 mm), and it has shown good dynamic characteristics. As a consequence, the developed force sensor can be likened in some features to the FA II and SA II physiological mechanoreceptors [6, 13].
Fig. 15.7. Output response of the thumb force sensor
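Taken together, the calibration curves (15.1) and (15.3)–(15.5) let a controller recover physical quantities from raw sensor voltages by inverting each fit. The sketch below illustrates this; the coefficients are those reported above, while the unit conventions are assumed for illustration, since the chapter does not state them everywhere, and the inverses are valid only within each sensor’s operative range:

```python
import math

def slider_position(v):
    """Slider stroke from Hall-sensor voltage, inverting Eq. (15.1)."""
    return (v - 1.8371) / 0.0643

def tendon_tension(v):
    """Cable tension from tensiometer voltage, inverting Eq. (15.3)."""
    return (v + 0.3732) / 26.349

def thumb_angle(v):
    """Thumb angular position from Hall-sensor voltage, inverting Eq. (15.4)."""
    return (v + 319.76) / 131.1

def fingertip_force(v):
    """Thumb-tip force from FSR voltage, inverting Eq. (15.5)."""
    return math.exp((1.2867 - v) / 0.2887)

# Example: one hypothetical snapshot of the four sensory channels (volts in).
print(slider_position(2.5), tendon_tension(1.0), thumb_angle(2.5), fingertip_force(0.9))
```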
15.5 Conclusions
Commercial hand prostheses have just one or two degrees of freedom (DoFs) providing finger movements and thumb opposition. Due to this lack of DoFs, such devices are characterized by low grasping functionality. In order to overcome these limitations, and to enhance the dexterity and usability of myoelectric hand prostheses, a self-adaptive and anthropomorphic prosthetic hand has been developed. In this chapter the sensory system of this hand, called the RTR II hand, has been described. The proprioceptive and exteroceptive sensory structure has shown good performance in terms of operative range, repeatability, and linearity. At present, experiments on the control of the hand by means of electromyographic signals are being carried out, to exploit the force and position sensors for implementing a closed-loop control of the grasping hand, limiting the user’s involvement to identifying the initial parameters of the grasping task (the required grasping force level and the grasping type). The expected result is an increase in the usability of the prosthesis.
Acknowledgements
This work has been carried out at the RTR Research Centre in Rehabilitation Bioengineering (Viareggio, LU, Italy) of the INAIL Prosthetic Centre, funded by INAIL (National Institute for Insurance of Injured Workers) and originated by a joint initiative promoted by INAIL and by Scuola Superiore Sant’Anna. This work is supported in part also by funds of the CYBERHAND (“Development of a Cybernetic Hand”, IST-FET Project #2001-35094) Project.
References
1. ARTS, CRIM Labs (2001) The CYBERHAND Project, Development of a Cybernetic Hand. IST-FET Project (#2001-35094)
2. Carrozza MC, Massa B, Dario P, Lazzarini R, Zecca M, Micera S, Pastacaldi P (2002) A two DOF finger for a biomechatronic artificial hand. Technology & Health Care 10: 7–89
3. Carrozza MC, Massa B, Micera S, Lazzarini R, Zecca M, Dario P (2002) The development of a novel prosthetic hand – ongoing research and preliminary results. IEEE Trans Mechatronics 7: 108–114
4. Dechev N, Cleghorn WL, Naumann S (2001) Multiple finger, passive adaptive grasp prosthetic hand. Mechanism Machine Theory 36: 1157–1173
5. Hirose RS, Ma S (1999) Coupled tendon-driven multijoint manipulator. In: Proceedings of the 1999 IEEE Conf. on Robotics and Automation, pp 1268–1275
6. Kandel ER, Schwartz JH, Jessel TM (2000) Principles of Neural Science. McGraw-Hill/Appleton & Lange
7. Kapandji IA (1982) The Physiology of the Joints. Vol. 1: Upper Limb. Churchill Livingstone, Edinburgh
8. Laliberté T, Gosselin CM (1998) Simulation and design of underactuated mechanical hands. Mechanism Machine Theory 33: 39–57
9. Massa B, Roccella S, Carrozza MC, Dario P (2002) Design and development of an underactuated prosthetic hand. In: Proceedings of the 2002 IEEE Conf. on Robotics and Automation, pp 3374–3379
10. Ruthier F, Rancourt D, Gosselin CM (1995) Design of a hand prosthesis based on kinematic principles. In: Proceedings of the 1995 Myoelectric Controls Powered Prosthesis Symposium, pp 53–56
11. Tubiana R (1981) The Hand. W. B. Saunders Company, West Washington Square, Philadelphia
12. Vecchi F, Micera S, Zaccone F, Carrozza MC, Sabatini AM, Dario P (2001) A sensorized glove for applications in biomechanics and motor control. In: Proceedings of the 2001 Conference of the International FES Society
13. Webster JG (1988) Tactile Sensors for Robotics and Medicine. John Wiley & Sons
16 Design and Testing of WREX Tariq Rahman, Whitney Sample, and Rahamim Seliktar
Abstract A passive, gravity-balanced, 4-DoF arm orthosis was built for children with arm weakness, such as in muscular dystrophy. The orthosis is exactly gravity balanced in 3D with linear elastic elements. It has an exoskeletal configuration and is attached to the back of a wheelchair. Algorithms that yield an improved gravity-balancing scheme are derived, and a new mechanical structure of the arm is presented. Our experience with user trials is presented along with early results using the Jebsen test of hand function and other subjective user feedback.
16.1 Introduction
WREX (the Wilmington Robotic Exoskeleton) has been developed to assist people with muscular weakness in moving their arm in 3D. The orthosis is gravity balanced for the weight of the person’s arm and of WREX itself. The person’s arm is placed in WREX, which then allows him or her to navigate the arm in space with residual strength. WREX is primarily intended for people with muscular dystrophy and spinal muscular atrophy. People with these conditions lose the ability to place their arm in space due to weakness; the distal muscles are less affected, and sensation remains intact. The balanced forearm orthosis (BFO) is among the earliest devices to assist people with arm weakness [1]. The BFO, however, is for the most part a planar device that does not assist in elevation. The first computerized orthosis was developed at the Case Institute of Technology in the early 1960s [2]. The manipulator was configured as a floor-mounted, four-degree-of-freedom, externally powered exoskeleton. Control of this manipulator was achieved using a head-mounted light source to trigger light sensors in the environment. Rancho Los Amigos Hospital continued the Case work and developed a six-degree-of-freedom electrically driven “Golden Arm” [3]. The Rancho Golden Arm had a similar configuration to the Case arm but no computer control. It was significant, however, in that it was mounted on a wheelchair and was found to be useful by people with intact sensation whose disabilities resulted from polio or multiple sclerosis. The Rancho Golden Arm was controlled at the joint level by seven tongue-operated switches, which made operation very tedious. A number of other projects have developed arm orthoses, including the Burke Rehabilitation Center arm [4], the Hybrid Arm Orthosis [5], and the PODEUM
system [6]; however, to date the BFO remains the only affordable and realistic option. This project addresses some of the practical issues of design, affordability and user acceptance of such a device.
16.2 Design of WREX
WREX is a 2-link, 4-DoF exoskeletal arm that is attached to the wheelchair (Fig. 16.1). It is gravity balanced using linear elastic elements. Details of the original structure and equations are given in [7, 8]. The following design changes were proposed based on pilot trials with users [9]:
1. Make the lengths of the links adjustable.
2. Make a custom mount to the wheelchair.
Fig. 16.1. Earlier prototype of WREX mounted on a subject’s wheelchair
The feedback from user trials in the home was that the lengths of the links did not conform to the length of the natural limb. Since our design had three fixed sizes to accommodate all subjects, it was impossible to adjust for those who fall between sizes. A plan was therefore made to have link lengths that were adjustable. Since this was impossible with the existing design, a major redesign of WREX was called for. The original design used off-the-shelf bungee cords with specific stiffnesses. The cords were tied in a loop and mounted on the parallel linkage of the upper and lower links. The bungee passed over four points in order to yield exact balancing and to conform to the equations, as shown in Fig. 16.2. The four contact points added to
the friction in the movement and added mechanical complexity to the design. It was therefore decided to attempt to connect the bungees at only two points. A two-point connection would allow the links to be adjusted in length without compromising the exact-balancing constraint. However, a two-point connection requires that the resting length of the spring be zero, i.e. that the stiffness line pass through the origin. This requirement was impossible to meet, since none of the elastic elements tried conformed to these constraints. However, some elements had stiffness characteristics that came close to those described above.
Fig. 16.2. The bungee mounted in a loop, which circumvents the zero-resting-length requirement
16.3 Gravity Balancing with x0 ≠ 0
If the resting length of the elastic element is not required to be zero, then a number of available elements can be utilized.
Fig. 16.3. Schematic of WREX with forearm link and upper arm link
The equations of motion for the orthosis are derived as follows, referring to Fig. 16.3. The moment about the elbow for the lower arm link is given by:

Me = m2gl2 sin θ2 − K2 (1 − x20 / √(a2² + b2² + 2a2b2 cos θ2)) a2b2 sin θ2   (16.1)

and the moment about the shoulder for both links is given by:

Ms = m1gl1 sin θ1 − K1 (1 − x10 / √(a1² + b1² + 2a1b1 cos θ1)) a1b1 sin θ1 + 2m2gl1 sin θ1   (16.2)
The following values were chosen to match the dimensions and weights of the children to be tested: m1 = 3.2 kg, m2 = 2 kg, l1 = 252 mm, l2 = 101 mm, K1 = 0.15 Kgm, K2 = 0.046 Kgm, b1 = 229 mm, b2 = 152 mm, a1 = a2 = 25 mm, x10 = x20 = 38 mm. The values m1 and m2 are the weights of the child’s upper arm and forearm, respectively, added to the weights of the two links of the orthosis. The weights of the person’s upper arm and forearm are taken as 2.7% and 2.2% of body weight, respectively. For this illustration a person weighing 81.6 kg is chosen.
Fig. 16.4. Torque at the elbow required to extend the forearm through 180 degrees. The bottom curve is the torque with the orthosis; the top is the torque without the orthosis. These figures are for an 81.6 kg person
Fig. 16.5. Torque at the shoulder due to the weight of the whole arm. The bottom curve is the torque required with the orthosis. The top curve is the torque of the arm without the support of the orthosis
As can be seen from Figs. 16.4 and 16.5, the torques at the elbow and shoulder for these particular parameter values are approximately linear. The amount of non-linearity is insignificant compared to the sinusoidal curvature obtained when there is no orthosis compensation. These curves are presented for comparison on the same graphs. These dimensions were then implemented on the prototype orthosis, shown in Fig. 16.6.
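For readers who wish to reproduce the shape of these curves, Eqs. (16.1) and (16.2) can be evaluated numerically with the values listed above. The sketch below is an illustration only; it assumes SI-style units throughout (the stiffness units quoted above are ambiguous), so the shapes of the curves, rather than their absolute magnitudes, are the point:

```python
import numpy as np

g = 9.81                 # gravitational acceleration (m/s^2)
m1, m2 = 3.2, 2.0        # upper-arm and forearm masses incl. orthosis links (kg)
l1, l2 = 0.252, 0.101    # distances to the link mass centers (m)
K1, K2 = 0.15, 0.046     # elastic-element stiffness terms (chapter's values)
a1 = a2 = 0.025          # spring attachment offsets (m)
b1, b2 = 0.229, 0.152
x10 = x20 = 0.038        # nonzero spring resting lengths (m)

def residual_moment(m, l, K, a, b, x0, theta, extra=0.0):
    """Residual joint moment per Eqs. (16.1)-(16.2): the gravity term minus
    the spring term K*(1 - x0/s)*a*b*sin(theta), where s is the spring length
    given by the law of cosines."""
    s = np.sqrt(a**2 + b**2 + 2.0 * a * b * np.cos(theta))
    return m * g * l * np.sin(theta) - K * (1.0 - x0 / s) * a * b * np.sin(theta) + extra

theta = np.linspace(0.01, np.pi, 200)
M_e = residual_moment(m2, l2, K2, a2, b2, x20, theta)  # elbow, Eq. (16.1)
M_s = residual_moment(m1, l1, K1, a1, b1, x10, theta,
                      extra=2.0 * m2 * g * l1 * np.sin(theta))  # shoulder, Eq. (16.2)
```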
Fig. 16.6. WREX in use by a subject at home
The new prototype allows adjustment of the link lengths and changes in stiffness by adding or removing Therabands (Smith and Nephew, Germantown, Wisconsin, USA). Therabands are ideal because their elastic behavior is consistent, and they come in various sizes that are easily identifiable by color. The new unit is made from telescoping steel rods. These are in a parallelogram arrangement for the upper arm, with a single link for the forearm. The connection from the wheelchair to the origin of the shoulder link is custom made by bending steel tubing to suit the individual subject. The arm trough is also custom made for each subject: the subject’s arm is plaster cast; from this a positive mold is fabricated, and then a negative polyethylene brace is made. This brace is then attached to the forearm link.
16.4 Clinical Testing
Testing of WREX comprises user trials in the subject’s home along with controlled laboratory experiments. The subject is first fitted with a custom unit, comprising a wheelchair attachment, an arm brace, and length and stiffness adjustments. The subject is then tested without the orthosis on the Jebsen test of hand function. The Jebsen is a standardized instrument made up of a series of timed tasks indicative of activities of daily living, such as feeding, writing, and picking up objects [9]. The subjects are then asked to take WREX home for two weeks and are encouraged to use it as much as possible in a variety of settings, including school and home. They are then brought back into the lab and tested on the Jebsen with WREX. It is hypothesized that a period of two weeks is sufficient to overcome the learning curve. They are also tested on the WeeFIM, a functional independence measure for children. The WeeFIM is a comprehensive instrument that covers many aspects of disability; for this study only the questions related to grooming and feeding are included. Fifteen subjects have been recruited for the study. Inclusion criteria are MD or SMA, an arm rating of 2–3 on the manual muscle scale, and use of a wheelchair.
16.5 Results
The early indications presented here are based on feedback from three subjects. The Jebsen test has 7 tasks related to activities of daily living: writing, card turning, small object manipulation, simulated feeding, checker stacking, large light object manipulation, and large heavy object manipulation. The simulated feeding and small object manipulation tasks showed significant improvement in timing (Table 16.1).
Table 16.1. Scores from the Jebsen test of hand function for the three subjects
Subject   Object manipulation time (s)        Simulated feeding time (s)
          Arm        Arm with WREX            Arm        Arm with WREX
1         21         9                        32         14
2         27         13                       25         15
3         35         15                       38         19
Completion times for these two tasks were roughly halved with WREX. The small object manipulation task consists of picking up bottle tops, paper clips, and coins and placing them in a can. The simulated feeding task has the subject picking up dried beans with a spoon and putting them in the same can. Among the subjective comments were that the subjects could now raise their hand in school; that they could feed themselves various foods they were unable to eat before, such as spaghetti; and that they could perform activities such as building with Legos, playing swords, and throwing a baseball. The results presented here come from early in the testing process; however, they provide an indication of the potential success of this type of intervention. The clinical trials are ongoing and should yield statistically significant results on the use of WREX.
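As a quick check of the halving observation, the snippet below computes the percentage time reduction for each subject directly from Table 16.1 (an illustration, not part of the study protocol):

```python
# Task completion times (s) from Table 16.1: (arm alone, arm with WREX).
tasks = {
    "object manipulation": [(21, 9), (27, 13), (35, 15)],
    "simulated feeding": [(32, 14), (25, 15), (38, 19)],
}
for task, times in tasks.items():
    for subject, (arm, wrex) in enumerate(times, start=1):
        reduction = 100.0 * (arm - wrex) / arm
        print(f"{task}, subject {subject}: {reduction:.0f}% less time with WREX")
# Reductions range from 40% to 57%, i.e. times are roughly halved.
```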
Acknowledgments
This research has been funded by Nemours Biomedical Research and by the U.S. Department of Education, Field Initiated Grant #H133E30013. Thanks to Mena Scavina DO, Michael Alexander MD, and Alisa Clark RN for the clinical trials.
References
1. Lunsford TR, Wallace JM (1995) The orthotic prescription. In: Goldberg B, Hsu J (eds) Atlas of Orthotic and Assistive Devices – Biomechanical Principles and Applications, 3rd edn. Mosby
2. LeBlanc M, Leifer L (1982) Environmental control and robotic manipulation aids. Engineering in Medicine and Biology Magazine, December, pp 16–22
3. Allen JR, Karchak A Jr, Bontrager EL (1972) Final project report: design and fabricate a pair of Rancho anthropomorphic manipulators. Technical Report, The Rancho Los Amigos Hospital Inc., 12826 Hawthorn Street, Downey, CA 90242
4. Stern PH, Lauko T (1975) Modular designed, wheelchair based orthotic system for upper extremities. Paraplegia 12: 299–304
5. Benjuya N, Kenney SB (1990) Hybrid arm orthosis. Journal of Prosthetics and Orthotics 2(2): 155–163
6. Galway R, Naumann S, Sauter W, Somerville J (1991) The evaluation of a powered orthotic device for the enhancement of upper-limb movement (PODEUM). Final report to The National Health Research and Development Program of Health and Welfare Canada, Project #6606-3835-59
7. Rahman T, Ramanathan RR, Seliktar R, Harwin W (1995) A simple technique to passively gravity-balance articulated mechanisms. ASME Transactions on Mechanisms Design 117(4): 655–658
8. Rahman T, Sample W, Seliktar R, Alexander M, Scavina M (2000) A body-powered functional upper limb orthosis. VA Journal of Rehabilitation Research and Development 37(6): 675–680
9. Jebsen RH, Taylor N, Trieschmann RB, Trotter MJ, Howard LA (1969) An objective and standardized test of hand function. Arch Phys Med Rehab 50: 311–319
17 A Concept for Control of Indoor-Operated Autonomous Wheelchair Dimitar Stefanov, Alexander Avtanski, and Z. Zenn Bien
Abstract This chapter introduces a navigation algorithm for a wheelchair in a semi-structured home environment. The user sets the end position and orientation of the wheelchair, and the controller autonomously steers the wheelchair from the current position to the goal. The algorithm includes a procedure for automatic creation of an initial map of the wheelchair’s surroundings, which is used afterwards for composition of a collision-free path from the starting point to the target. Calculation of the present wheelchair position is based on the TV images from ceiling-mounted cameras that detect the wheelchair in all steps of its movement toward the goal. To prevent navigation failures in situations when the wheelchair markers cannot be clearly detected by the cameras, information about the angular rotation of the driving wheels is used for calculation of the current position. In order to set the movement task, the user refers to a wheelchair-mounted monitor where the images from all rooms are presented. After selecting the TV image of the room where the target is located, the user sets the end position and orientation of the wheelchair by pointing them directly on the image. The information collected by onboard range sensors during wheelchair operation is used for obstacle avoidance and map updates. The map can be additionally corrected and modified by the user. A special simulator, ROSI (RObotic SImulator), was designed for computer testing of the proposed algorithm. Finally, some design principles of the simulator and results from the initial evaluation of the algorithm with the simulator are remarked upon.
17.1 Introduction and Related Works
Autonomous guided wheelchairs (AGW) can be a promising solution for indoor transportation of older persons and people with severe dexterity limitations. Utilization of such wheelchairs offers the user easy and independent access to different home positions. Receiving the user’s instruction about the goal, the navigation system autonomously steers the wheelchair to the goal. In order to respond better to the user’s needs, most recent developments in this research area aim at the design of wheelchairs that not only can follow the path to the goal but also are able to compose the route toward the end position and
to modify the initially generated path automatically during task execution if obstacles appear on the originally intended route. AGWs dramatically reduce the number of commands required from the user and significantly decrease the user’s cognitive load. Despite various solutions for autonomous guidance of industrial vehicles (such as automatic guided carriers for flexible manufacturing systems, mobile industrial robots, etc.), only a small number of these ideas can be applied to the control of powered wheelchairs that operate in home or office environments, for several important reasons. First, wheelchair systems should be relatively cheap in order to be affordable to a great number of users. Second, wheelchair performance should be extremely safe and risk-free for the user, for nearby people and pets, and for home furniture and appliances. Third, wheelchair navigation systems should possess a certain level of intelligence in order to be able to perform successful avoidance of obstacles and operate in semi-structured or unknown environments. Fourth, such systems should be human-friendly and easily controlled by users who may have no technical education and have a certain level of movement disability.
17.1.1 Methods for Navigation
Most of the developments in this area concern wheelchairs that operate in structured or semi-structured home environments. Regarding their navigation algorithm, AGWs can be classified into two large groups: beacon-based AGWs and AGWs with natural-landmark navigation. The beacon-based navigation systems determine the current wheelchair position by detecting beacons that are strategically placed at pre-defined locations within the home environment, measuring the distances to these beacons, and calculating the angles of the beacon directions. Some recent wheelchair projects apply navigation by natural landmarks from the structured environment, such as furniture edges, doorframe edges, etc. The procedures for landmark identification are usually based on processing of the visual information from one or two cameras. The procedure includes identification of the reference edges, calculation of the length of each landmark and its orientation with regard to the floor plane, and estimation of the landmark’s elevation from the floor. The beacon-based systems can be further grouped into systems that refer to active beacons (light-emitting, sound-emitting, or radio-wave-emitting beacons installed on the walls or furniture) and systems that refer to passive beacons. The latter usually utilize specific images whose patterns, dimensions, reflection characteristics, and colors are unique in the environment. Such markers, usually of low cost, can be easily attached to the walls and simply rearranged within the house, but the techniques for their detection typically involve CCD visual sensors and require more complicated computing procedures. As an example of wheelchair navigation based on passive markers, we may mention the wheelchair systems developed at the University of Notre Dame [1, 2]. The wheelchair is navigated through special markers attached to the walls and to the furniture. The reference paths are physically “taught” to the system during the setup procedure. In the
“run” mode, the error between the current estimated position and the reference path is calculated and used to control the wheelchair steering in order to follow the reference path. The user selects the destination, and the wheelchair system then controls the chair to the desired location. In the “run” mode the navigation system not only follows the pre-composed path to the goal but also automatically modifies the route if obstacles on the initial path are detected. After the obstacle is overcome, the wheelchair returns to the original trajectory. The navigation systems with active beacons are usually more resistant to ambient-light artifacts and apply simpler hardware for detection of the current position than those with passive markers, but the rearrangement of the active beacons can be a problem, since each active beacon should be separately powered and controlled. The guidepath navigation systems can be considered a special class of beacon-based systems where the beacons are embedded in the floor. The guide track can be designed as a reflective or colored tape attached to the floor, as a cable that emits a high-frequency electromagnetic field, as a permanent-magnet array embedded in the floor, or as a magnetic tape with pre-recorded information tracks on it. Although the approach is widely used in many material-handling applications, its utilization in the design of home-operated wheelchairs is limited, due to the complexity of the wheelchair movement routes and the requirements for easy reconfiguration of the path. An additional limitation arises from the necessity of embedding the guidepath in the floor. A wheelchair navigated via a magnetic-tape guidepath was reported in [3]. The solution includes a strip of flexible magnetic material attached to the floor surface and a magnetic-stripe follower based on an array of fluxgate or Hall-effect sensors mounted on board the vehicle. The natural-landmark navigation schemes vary from detection of ceiling-mounted lamps and calculation of their positions [4] to detection of doorframes and furniture edges [5, 6]. In the VAHM project, the vertically located edges of the home furniture are used as natural landmarks for navigation [7, 8]. Each natural landmark is identified by the metrics of its edges and their distance to the ground. The TAO-1 wheelchair, developed at Applied AI Systems Inc., uses two CCD color cameras for landmark-based navigation and infrared sensors for obstacle detection and collision avoidance [9]. Mobile robot localization by representation of a path as sequential color strings is proposed in [10]. The code of each string includes information about the color and geometric characteristics of the vertical edges of the furniture. In general, the navigation systems based on passive beacons possess greater flexibility than those with active beacons. Since natural-landmark navigation does not require installation of any special beacons, the user can easily modify the existing travel routes and can add new travel paths without assistance from specialized technical staff. On the other hand, the solutions based on natural landmarks require more sophisticated sensors, involve complex algorithms for analysis of the visual scene, and require more powerful hardware.
Similar to the strategies adopted in the design of mobile robots, most algorithms for autonomous wheelchair guidance usually run two procedures for estimation of the current wheelchair position simultaneously: a landmark navigation procedure and a dead-reckoning navigation procedure, which allows calculation of the
current wheelchair position by memorizing the coordinates of a previously determined position and applying to it the direction and distance traveled since that point. Calculations are usually based on measurement of the rotation angles of the driving wheels, typically realized by wheel-embedded encoders. Despite the sensitivity of the dead-reckoning method to errors resulting from wheel slippage and tire deformation, the combined approach significantly improves wheelchair performance in situations when one or more beacons/landmarks are missing, malfunctioning, or temporarily hidden from the navigation sensors by other objects that lie between them. The odometry information plays a dominant role in the positioning of some autonomous wheelchairs [11, 12]. The wheelchair path can be described as the sequence of angular positions that the driving wheels have at each point. Such a path representation is quite simple, and its computation does not require large hardware resources. During task execution, the odometry module provides only rough position information, and the exact wheelchair position is calculated from the information from ultrasonic range sensors or eye-safe infrared scanners. When the wheelchair operates in semi-autonomous navigation mode, the wheelchair control is shared between the user and the automatic controller [13]. In that mode the user sets the general direction of the wheelchair, and the controller modifies the user’s commands in order to prevent possible collisions with obstacles. That mode facilitates successful doorway passage and can help users who are unable to give precise commands because of tremor, vision problems, etc. The algorithm is based on processing of the information from wheelchair-mounted proximity sensors. Since AGW designs usually include sensors for obstacle detection and apply algorithms for automatic avoidance, in most cases the same AGWs are used not only in fully autonomous mode but also in semi-autonomous mode. As an example, we may refer to the “NavChair”, a wheelchair developed at the Mobile Robotics Lab of the University of Michigan [14, 15]. The user sets the general direction of travel and the NavChair follows it; it automatically avoids obstacles while trying to maintain the user-specified direction as closely as possible. Twelve Polaroid ultrasonic sensors are used for both obstacle detection and wall following. The TinMan supplementary wheelchair controller, developed at the KISS Institute for Practical Robotics (KIPR), is a special control module that sits between the joystick and the existing wheelchair controller and modifies the joystick signals in order to avoid obstacles detected by its proximity sensors [16, 17]. The navigation system of the “Wheelesley” wheelchair employs infrared, sonar, and Hall-effect sensors [18]. The user sets the desired movement direction, and the controller generates commands for collision avoidance and for centering the chair in the hallway. The Drive Assistant system, developed at VTT Machine Automation, Finland, uses ultrasonic sensors for environmental perception and modifies the user’s commands in case of obstacle avoidance [19].
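The dead-reckoning calculation referred to above reduces to a short pose-update rule. The following sketch is a standard differential-drive formulation, given purely for illustration (the chapter does not prescribe a particular implementation):

```python
import math

def dead_reckon(x, y, heading, d_left, d_right, wheel_base):
    """Advance the pose (x, y, heading) given the distances d_left and d_right
    traveled by the two driving wheels (encoder angle times wheel radius).
    Errors from wheel slippage and tire deformation accumulate over time,
    which is why this estimate is fused with camera-based localization."""
    d_center = (d_left + d_right) / 2.0        # distance traveled by the chair's center
    d_theta = (d_right - d_left) / wheel_base  # change in heading (rad)
    x += d_center * math.cos(heading + d_theta / 2.0)
    y += d_center * math.sin(heading + d_theta / 2.0)
    return x, y, heading + d_theta

# Example: a chair with a 0.6 m wheel base creeping forward while turning left.
pose = (0.0, 0.0, 0.0)
pose = dead_reckon(*pose, d_left=0.10, d_right=0.12, wheel_base=0.6)
```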
17.1.2 Path Planning and Navigation to the Goal
Approaches to mobile robot navigation can be classified into three categories: model-based approaches, sensor-based approaches, and hybrid approaches [20]. Model-based approaches (also called “global planning methods” [21, 22] or “functional architectures”) rely on a priori knowledge about the environment. The general control strategy is to build a world model, plan actions with respect to goals, and then execute the planned path via the steering system. Techniques for model-based path generation include connectivity graphs [23], Voronoi diagrams, cell decomposition [24, 25], artificial potential fields [26–29], etc. Although these approaches can successfully generate a path from the initial point to the goal, their operation is limited to structured environments; they cannot meet the complexity of an environment that changes relatively quickly or cannot be precisely described. Sensor-based approaches (so-called “reactive planning systems” or “reactive control systems”) [30–33] are adaptive to unstructured environments, and their operation is based completely on sensor information. Sensor-based approaches apply a “behavioral architecture” [34], where multiple tasks run in parallel. Each task refers to the robot’s sensory input and forms a stimulus-response mechanism, or “behavior”. The behavior set typically includes goal attraction, wall following, and obstacle avoidance, which contribute to successful and safe navigation in a dynamic environment. A drawback of these approaches is that they do not always guarantee the success of the mission, and the robot may get lost in the environment. The hybrid approaches [35, 36] try to fuse the strengths of both model-based and sensor-based approaches in order to achieve larger capabilities. These methods rely on pre-acquired knowledge of the environment. A planner is used to generate a path from an incomplete global description of the environment and to give sub-goals to a navigator that realizes the local control. This approach is usually realized by a hierarchical control structure that has a set of three enclosing loops: the functional loop, the reactive loop, and the reflexive loop. The functional loop is responsible for path planning and refers to the map, which is permanently updated with the information from the proximity sensors. The reactive loop refers to the sensorial data of a set of range sensors and deals with local motion, path following, and localization issues. The reflexive loop includes a set of bumper sensors and deals with imminent collision detection; raw sensorial data is available within this loop. The layers have their priority in the decision-taking process: highest is the priority of the reflexive loop, followed by the reactive and functional loops. Depending on its priority, a command of a certain loop can disable or modify other commands that are in conflict with it. The navigation algorithm discussed in this work is based on the hybrid approach.
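The priority ordering of the three loops can be expressed as a simple arbitration rule. The sketch below is a hypothetical illustration of this idea; the loop interfaces and the (v, omega) command format are assumptions made for the example, not specifics given in the chapter:

```python
def arbitrate(reflexive_cmd, reactive_cmd, functional_cmd):
    """Return the (v, omega) command of the highest-priority active loop.
    Each argument is either None (loop has nothing to say) or a (v, omega)
    pair; the reflexive loop (bumpers) outranks the reactive loop (range
    sensors), which outranks the functional loop (path planner)."""
    for cmd in (reflexive_cmd, reactive_cmd, functional_cmd):
        if cmd is not None:
            return cmd
    return (0.0, 0.0)  # no loop active: stop

# Example: a bumper contact forces an immediate stop despite a planned motion.
cmd = arbitrate(reflexive_cmd=(0.0, 0.0), reactive_cmd=None, functional_cmd=(0.3, 0.1))
```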
The present chapter proposes a control strategy for an indoor-operated autonomous wheelchair. The wheelchair user can access different places in his/her home by designating only the position and orientation of the wheelchair at the end point. The wheelchair controller automatically plans the initial path from the current position to the goal and additionally modifies it during task execution, while building a collision-avoidance strategy. The approach also considers wheelchair operation in a semi-structured environment. The chapter is organized as follows. In Sect. 17.2, we introduce the navigation problem and the main considerations behind the proposed algorithm. In Sect. 17.3, we explain the proposed localization of the current wheelchair position. The scenario of wheelchair control and the way the operator sets the final position and orientation of the wheelchair are discussed in Sect. 17.4. A navigation system based on the algorithm is described in Sect. 17.5. A computer simulator based on the proposed approach is presented in Sect. 17.6, together with the main assumptions on the modeling of the home environment, the wheelchair kinematics, the sensor models, and the control of the wheelchair model. Some simulation results and comments on the wheelchair behavior in some interesting situations are given in Sect. 17.7. Future plans and conclusions appear in Sects. 17.8 and 17.9, respectively.
17.2 Conception of Wheelchair Navigation
17.2.1 Problem Statement
After delivery to the user’s home, the wheelchair should be adjusted in accordance with the user’s needs and the specifics of the home. The overall procedure includes initial data collection, creation of a database about the characteristics of the home environment, composition of the wheelchair routes, and fine-tuning of the autonomous wheelchair system. Since the adjustment process includes many special operations, further modification of the existing movement programs can usually be done only by highly qualified programmers and rehabilitation specialists; the user cannot do it by himself. Even in the case of a small re-adjustment of the existing programs, specialists should visit the user’s home to perform such work. The present research aims at the development of a flexible algorithm for autonomous wheelchair control that allows users with serious movement impairments to customize the initial wheelchair settings by themselves and to access home positions that have not been included in the initial set. The approach allows composition of new movement routes and modification of existing movement programs. Apart from that, the approach concentrates on the problem of automatically building a home map and automatically adapting to an unknown home environment.
17.2.2 Initial Assumptions
17.2.2.1 Assumptions Regarding the House Environment:
1. The user’s house consists of several rooms, and the user can move from one room to another by means of a wheelchair with a suitable interface. An appropriate lifting system is available for transferring from the bed to the wheelchair and vice versa.
2. The home environment is semi-structured, i.e. most of the objects are placed in known positions, but during wheelchair operation some of them may be removed or replaced. New objects can be added as well.
3. The wheelchair can freely enter any room, due to the fact that either all doors in the operating environment are permanently open or an adequate system manipulates the door when the wheelchair approaches it.
17.2.2.2 Assumption Regarding the User’s Interface
The interface to be used is customized in accordance with the user’s own movement abilities. Recently, some advanced head-tracking techniques involving facial detection [37, 38] and optoelectronic detection of light-reflective head-attached markers have been proposed [39, 40]. New technologies such as eye-movement control, brain control [41], and gesture recognition [42] give new opportunities for natural, human-friendly interaction and offer new perspectives for efficient human-wheelchair communication.
17.2.2.3 Four Modes of Wheelchair Operation:
1. Navigation to a new goal position – By means of a suitable interface, the user sets the position to which the wheelchair should go. Afterwards the wheelchair autonomously composes the route to the goal and transports him/her to that position.
2. Pre-programmed mode – In order to simplify the way of instructing the wheelchair, the user can build his/her own library of predefined goal positions. To be transported to one of these desired positions, the operator just selects it and initiates its automatic execution. This mode facilitates the user’s interaction with the wheelchair and reduces the time for setting instructions.
3. Initial data acquisition mode – After installation at the user’s home, the wheelchair should collect some initial information regarding the unknown environment. Such information about the rooms’ geometry and obstacle locations is needed for the initial path-planning procedures. We assume that the wheelchair is equipped with sensors that detect nearby obstacles and that a suitable algorithm is applied to process the sensor information and to represent it as a graphic image (map) where the contours of the rooms and the obstacles are specified. We consider two variants for gathering data about the home environment: (a) Following a specific algorithm, the wheelchair autonomously moves to different places in the user’s house in order to explore the unknown environment. The exploration maneuvers may be tiring and annoying for the user if he/she rides the wheelchair during the data acquisition process; therefore we assume that the exploratory maneuvers should be performed in autonomous wheelchair mode. (b) After the wheelchair’s delivery to the user’s home, a person from the service team directs the wheelchair to different places and “shows” it the new home environment. During this “human-guided introduction”
the wheelchair sensors collect information about the geometry and positions of different obstacles and walls. This approach requires human involvement but makes the wheelchair teaching process faster. The initial teaching gives only basic information about the home environment. During exploitation of the wheelchair, its sensors continuously monitor the home environment and update the existing database with the latest changes, making it more precise and detailed.
4. Semi-autonomous navigation – This mode is applied for successful doorway passage and can help a user who is not able to provide precise commands because of some motor or visual limitations. Based on the information from the proximity sensors, the algorithm modifies the user’s commands and prevents eventual collisions with obstacles.
17.3 Localization of the Wheelchair Position
Successful autonomous navigation can be realized if the coordinates of the wheelchair position, the position of the target, and the obstacle positions are represented in one and the same coordinate system. Knowing the wheelchair position is important for two main reasons:
1. The path-planning procedure requires the initial wheelchair position.
2. If an obstacle is detected by the wheelchair sensors, its coordinates are represented in conjunction with the coordinates and heading of the wheelchair.
In order to solve the localization problem, we suppose that a sufficient number of ceiling-mounted TV cameras are installed within the house. The sensing areas of these cameras cover the whole region where the wheelchair and its user may be located, so at least one camera detects the wheelchair at each moment of its operation (Fig. 17.1). Since the location of each camera is a priori known, the coordinates from each camera can easily be re-calculated into a common coordinate system.
Fig. 17.1. Home environment and the ceiling-mounted cameras
More precise and faster localization of the wheelchair can be achieved if special passive markers with specific shapes, patterns, and colors are attached to it, as illustrated in Fig. 17.2. The markers are arranged so that they can easily be detected by the TV cameras along all possible paths.
Fig. 17.2. A diagram of the applied wheelchair localization. a: ceiling-mounted TV cameras measure the cue positions and wheel-embedded encoders measure the rotation of the driving wheels; b: cue example
Fig. 17.3. Localization of the current wheelchair position
The wheelchair is considered a rigid body, and its position is then determined by the calculated positions of the attached markers. Since the approaches for calculating marker positions from TV images are well researched and applied in many areas (in human movement analysis, for example) [43–45], the calculation of the marker positions will not be discussed here. In order to prevent possible navigation failure at positions where the markers cannot be clearly seen by the cameras, dead reckoning can additionally be applied for more precise calculation of the current position. The overall block diagram of the proposed wheelchair navigation scheme is shown in Fig. 17.3. The images from all TV cameras are transferred to two monitors located on the wheelchair and near the bed, respectively. Apart from the navigation purposes, the installed TV system can be used for home surveillance.
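Since the platform is rigid, two distinguishable markers are already enough to recover its planar pose from a single processed camera frame. The sketch below assumes two markers mounted on the chair's longitudinal axis (an illustrative layout, not the chapter's exact arrangement), with their positions already expressed in the common house frame:

```python
import math

def pose_from_markers(front, rear):
    """Estimate the wheelchair pose (x, y, heading) from two marker
    positions given in the house frame."""
    (xf, yf), (xr, yr) = front, rear
    x = (xf + xr) / 2.0                     # platform centre
    y = (yf + yr) / 2.0
    theta = math.atan2(yf - yr, xf - xr)    # heading along rear-to-front axis
    return x, y, theta
```

When the markers are occluded, the navigation system can fall back on the encoder-based dead reckoning discussed in Sect. 17.6.1.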
17.4 Scenario of the Wheelchair Control
Setting of the target is organized in an easy and comprehensible manner. In order to specify the desired location, the user refers to the wheelchair-mounted monitor, where initially the images from all rooms are displayed. At first, using an appropriate interface, he/she selects the TV image of the room where the desired end location is situated. The same interface is then used for zooming the selected image (or part of it) up to the size of the whole screen, which gives additional convenience for precise setting of the goal position. By pointing at the TV image, the user sets the goal position. As a result, a color circle and a flashing arrow line passing through it appear on the image. The circle represents the goal position and the flashing arrow line corresponds to the wheelchair orientation at the goal position. By means of an appropriate command (for instance, by selecting the arrow end and dragging it), the user can rotate the flashing arrow line in order to set the desired wheelchair orientation at the goal position. After setting the desired end position and the final wheelchair orientation, the user applies a new command that initiates the automatic execution of the transportation task. The wheelchair starts to navigate autonomously toward the goal. The sequence of the above procedures for setting the goal position and orientation is illustrated in Fig. 17.4. In most cases, the two-dimensional TV image is not sufficient for precise setting of the desired end position (the cameras are usually oriented at a certain angle to the wheelchair movement plane, and some obstacles may be located between the TV camera and the desired location). The position specified by pointing at the TV image usually represents a place that is located near the desired position rather than the exact location. Therefore we assume that, after arriving near the final position, the user will set the exact wheelchair position by direct control. During the execution of the transportation task, the wheelchair sensors collect information about the home environment. In order to be verified by the user, this information is shown on the monitor screen as a plan that represents the contours
of all rooms and obstacles. The current wheelchair location is also specified on the map. The user and the service personnel can modify the map. Wheelchair movement in free-from-obstacle zones can be restricted by marking them on the plan as "banned" (for example, it is not desirable to route wheelchair passage through zones that may cause discomfort to the user, such as passages near heaters or powerful fans, or open areas during rain). Alternatively, a zone can be marked as "not free from obstacles" when multiple reflections of the emitted sensor signals cause false sensor readings.
Fig. 17.4. Sequence of setting the goal position and orientation
The automatically generated map can be used not only for verification of the collected information but also as an alternative way of selecting the goal location in the pre-programmed mode of operation. Instead of referring to a list of desired end positions, the user can set the desired location by pointing at it directly on the map (Fig. 17.5). In order to be easily recognized and selected, the pre-programmed targets can be marked on the screen with flashing symbols of varying shapes and colors.
Fig. 17.5. Map of the house and favorite places specified on it; 1, 2, 3, 4, 5 – predefined end positions
17.5 Navigation System
The proposed navigation algorithm can be realized by means of a hierarchical control structure with two layers, named here the subsystem for global navigation and the subsystem for local navigation, respectively (Fig. 17.6). Depending on the position of the switch SW, the wheelchair control system can operate in either fully autonomous mode or semi-autonomous mode. The user can choose the control mode by an appropriate command that activates the switch. When the fully autonomous mode is selected, the output of the human-machine interface (HMI) is connected to the subsystem for global navigation, and the user's instructions regarding the desired end position and orientation are transferred to it. If the semi-autonomous mode (the second state of the switch SW) is set, the signals from the HMI are transferred to the subsystem for local navigation and the user controls the wheelchair directly.
Fig. 17.6. Block diagram of the navigation system
When the wheelchair runs in the fully autonomous mode, the subsystem for global navigation builds the wheelchair's route to the goal and sends the subsystem for local navigation a sequence of instructions for the wheelchair movement toward the goal. The subsystem performs the following tasks:
1. Building a map of the home environment. – The map contains information about the positions of the walls, doorways, and obstacles. For its composition, the subsystem for global navigation refers to the data from the TV images, the encoders of the driving wheels, and the sensors for obstacle detection mounted on the wheelchair platform.
2. Calculation of the goal position. – Since the user sets the end position and orientation by pointing at the TV image, the subsystem should "translate" the user's instruction by calculating the position of the pointer (x, y, Θ) with respect to the coordinates of the map.
3. Calculation of the current wheelchair coordinates. – The wheelchair position and orientation should be defined in terms of the coordinate system of the map. In order to perform that task, the subsystem for global navigation refers to the information from the TV images.
After recognition of the wheelchair-installed markers in the visual scene and calculation of their positions, the wheelchair position and orientation are computed. In cases when the visual information is insufficient for successful calculation of the current wheelchair position, a dead reckoning procedure based on analysis of the angular rotation of the driving wheels can additionally be applied.
4. Path planning. – The system for global navigation computes the wheelchair's route to the goal by referring to the information about the goal position, the current wheelchair location, and the location of free-from-obstacle zones in the existing map. Apart from that, the calculations include a procedure for optimization of the composed path (removal of loops, finding the shortest way to the goal, choice of path segments with maximal width, reference to the user's preferences, etc.).
5. Generating instructions for the subsystem for local navigation. – During the execution of the movement task, the subsystem for global navigation refers to the planned path and sends the subsystem for local navigation a sequence of instructions regarding the position and direction that the wheelchair should take in the next step toward the goal. The instructions also contain information about the distance from the current wheelchair location to the goal, which is further used by the subsystem for local navigation to calculate the appropriate wheelchair speed for precise maneuvers and exact positioning at the goal.
6. Map update. – During the task execution, the wheelchair-mounted sensors scan the environment for obstacles. Newly detected obstacles are added to the map, and old obstacles whose presence is not confirmed are removed from it. In order to minimize the size of the onboard hardware, the image processing procedures can be distributed between two computing modules (one mounted onboard and one stationary) connected via a wireless data link (see Fig. 17.3).
The local navigation subsystem receives instructions from the upper hierarchical level and controls the driving wheels of the wheelchair. An array of wheelchair-mounted range sensors supplies the subsystem with information about nearby obstacles. The wheelchair executes the movement instructions exactly and follows the initially composed path if no obstacles are detected on the intended route. If the wheelchair sensors signal an unknown obstacle, the subsystem for local navigation starts a maneuver for safe obstacle avoidance and tries to keep the user-specified direction of movement as closely as possible. The path modification is based on the existing map and the information from the range sensors. The information about the location and geometry of the new obstacle is sent to the global navigation system for the subsequent map update. In semi-autonomous mode, the local navigation subsystem modifies only those user commands that are imprecise and would lead to collision with walls or obstacles.
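The instruction stream from the global layer to the local layer can be pictured as a small record carrying the next intermediate position, the desired direction, and the remaining distance used for speed selection. The field names below are illustrative only; the chapter does not define a concrete message format.

```python
from dataclasses import dataclass

@dataclass
class StepInstruction:
    """One instruction sent from the global to the local navigation layer
    (a hypothetical format for illustration)."""
    x: float               # next intermediate position, map frame
    y: float
    heading: float         # desired direction of travel [rad]
    dist_to_goal: float    # remaining distance to the goal [m]

def speed_for(instr: StepInstruction, v_max: float = 0.8) -> float:
    # Reduce speed near the goal to allow precise final positioning.
    return min(v_max, 0.5 * instr.dist_to_goal)
```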
17.6 Computer Simulation of the Control Algorithm
In order to test the proposed navigation algorithm, a computer simulator named RoSi (Robotic Simulator) was developed. The simulation is conducted for a semi-structured home environment. The user can choose the initial conditions for the planned task (structure of the home environment, obstacle shapes and locations, starting position and goal positions of the wheelchair, speed, mode of control, etc.). The task execution conforms to the proposed navigation algorithm. The simulator demonstrates the behavior that the real wheelchair would have during execution of the following control tasks:
1. Exploration. – We assume that the floor plan and the obstacle positions are unknown to the wheelchair control system. In order to obtain enough information about the modeled home environment and operate successfully in it, the simulator explores the surroundings in the process of task execution and creates a map that represents the locations of all obstacles, the available routes, the positions of the walls, etc.
2. Route planning. – The simulator tries to find a route to the goal. The initially generated route may be inaccurate if the map information is incomplete.
3. Tracking the planned route.
4. Obstacle avoidance. – In order to realize successful navigation to the goal position, the navigation algorithm modifies the initially planned route if any obstacles on it are detected.
5. Wall following. – Moving along a wall while keeping a certain lateral distance from it is a useful component of the obstacle-avoidance strategy. This regime can facilitate smooth passage through narrow corridors and doorways.
6. Control assistance (semi-autonomous mode). – The semi-autonomous control mode can be particularly helpful to certain categories of disabled users who experience difficulties operating an ordinary powered wheelchair in narrow corridors, through doorways, or among closely located obstacles. In order to allow the proposed control algorithm to be evaluated by people with such disabilities, we decided to include this mode in the computer simulation.
In order to explain the operation of the simulator and its structure, we first comment on some significant design issues, such as the algorithm for wheelchair movement, the representation of the home environment, and the mechanisms for building the control strategy adopted in the simulator.
17.6.1 Wheelchair Kinematics
The wheelchair movements represented by the simulator correspond to the movement of a four-wheeled wheelchair with two front castor wheels and two rear driving wheels. The scheme of the mobile platform is shown in Fig. 17.7, where the driving wheels are indicated as A and B and the front wheels are noted as C
and D. The speed ratio of the driving wheels determines the change of the wheelchair movement direction.
Fig. 17.7. Kinematic model of the wheelchair
Let us denote the delta-rotations of the driving wheels by $d\varphi_1$ and $d\varphi_2$, respectively. The delta-translations of the driving wheels can then be determined as:
$$dr_1 = R\,d\varphi_1 \qquad (17.1)$$
$$dr_2 = R\,d\varphi_2 \qquad (17.2)$$
where $R$ is the radius of the driving wheels. The wheelchair displacement $dX$ along the path of travel can be expressed by the equation:
$$dX = \frac{dr_1 + dr_2}{2} \qquad (17.3)$$
The change of the wheelchair heading $d\Theta$ is a function of the displacements of the left and right driving wheels:
$$d\Theta = \frac{dr_1 - dr_2}{b} \qquad (17.4)$$
where $b$ is the distance between the driving wheels (a code sketch of this odometry update is given after the assumption list below). The modeling of the wheelchair and its movement is based on the following assumptions:
1. The wheelchair is modeled as a platform with rectangular shape, as shown in Fig. 17.7.
2. The wheelchair can move forward and backward.
3. In contrast to most approaches for autonomous navigation, where the robot movement is represented as shifting from one grid cell to another, the present simulation considers a wheelchair model that can move in 5 discrete directions: strong left (LL), left (L), straight (S), right (R), and strong right (RR), as shown in Fig. 17.8. Wheelchair headings are counted with respect to the wheelchair platform, not with respect to the map coordinates.
4. The movement speed of the wheelchair model is constant and differs for each of the above directions. The straight movement speed is the highest. Movements to the left and right (L and R, respectively) are performed at medium speed. Low speed is chosen for the LL and RR movements. Backward motions are the slowest and equal for all directions.
5. During its movement toward the target, the wheelchair moves mainly forward. Backward movements are used for short periods of time, mainly for recovering from harmful situations.
6. Acceleration and deceleration of the wheelchair model are limited to predefined values. The wheelchair gradually accelerates to the nominal speed and stops smoothly.
7. The wheelchair moves on a flat floor.
8. The wheelchair moves without sliding.
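A minimal dead-reckoning sketch built directly on Eqs. (17.1)–(17.4); the simple Euler pose update is an assumption of this sketch, not a detail given in the chapter:

```python
import math

def odometry_step(x, y, theta, dphi1, dphi2, R, b):
    """Update the wheelchair pose from encoder delta-rotations.
    dphi1, dphi2: delta-rotations of the driving wheels [rad];
    R: driving-wheel radius; b: distance between the driving wheels.
    """
    dr1 = R * dphi1                 # (17.1)
    dr2 = R * dphi2                 # (17.2)
    dX = (dr1 + dr2) / 2.0          # (17.3) displacement along the path
    dTheta = (dr1 - dr2) / b        # (17.4) change of heading
    theta += dTheta
    x += dX * math.cos(theta)       # simple Euler update of the position
    y += dX * math.sin(theta)
    return x, y, theta
```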
Fig. 17.8. Movement directions of the wheelchair model
17.6.2 Modeling of the Sensors and Their Arrangement on the Wheelchair Platform
The simulation algorithm is based on the assumption that two types of sensors for obstacle detection are mounted on the wheelchair periphery: range sensors and contact sensors. Range sensors give an approximate distance to the nearest obstacle in the particular direction in which they are oriented. Since the range sensors are fixed to the wheelchair platform and their positions and orientations are known, a simple algorithm is applied for calculating the distance to each obstacle and its angular location with respect to the wheelchair platform. In certain situations, the range sensors may not provide full coverage of the surrounding area. The contact sensors are intended to detect obstacles that have not been revealed by the
distance sensors. Activation of one or more of these sensors stops further wheelchair movement in the direction of the obstacle.
Fig. 17.9. Representation of a range sensor in the simulator. a: sensor detection area and its zones; b: schematic representation of a sensor; c: an object in the sensing zone; d: schematic representation of the detected obstacle
Autonomous robots usually use ultrasonic, laser-scanning, or position-sensing detectors (PSD) for ranging nearby obstacles [46]. The detection zone of such range sensors is typically shaped as a cone or pyramid. As will be discussed in Sect. 17.6.3.1, the simulator considers a two-dimensional model of the home environment. That is why the detection zones of the wheelchair-mounted range sensors are also represented in the same model as two-dimensional areas shaped as triangles, as shown in Fig. 17.9. In this paper we do not refer to a concrete range sensor model. Instead, the computer simulation is based on the assumption that the range sensors have viewing angles of +30º to -30º. The distance to the obstacle is resolved into four sensing zones (Z1–Z4), and the state of each zone is represented by a sensing point. A point is considered to be in the "on" state if an obstacle is detected in that zone. In Fig. 17.9b, the activated points are marked as solid black circles and the non-activated ones as empty circles. Information from the two proximal sensing points (Z1 and Z2) is used for obstacle detection, while the signals from the distal sensing points (Z3 and Z4) are used for building a map of the home environment. The problem of selecting a minimum number of sensors and their optimal arrangement on autonomous guided vehicles (the so-called sensor modeling problem) has been explored for many years, and many methods for its solution have been proposed [47]. This problem is not an objective of the simulator at present. Instead, we assume that the sensor system consists of 7 range sensors and 7 contact sensors, all arranged on the wheelchair periphery in the way shown in Fig. 17.10. The dotted lines in the figure indicate the orientation of the sensors.
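The zone mechanism can be sketched as a simple classification of one range reading into the four sensing-point states. The zone radii below are invented for illustration; the chapter fixes only the number of zones, not their dimensions:

```python
def zone_states(measured_range, zone_limits=(0.3, 0.6, 1.2, 2.0)):
    """Map one range reading [m] onto the states of sensing points Z1..Z4.
    zone_limits are the assumed outer radii of zones Z1..Z4.
    Returns a tuple of booleans; True means the point is 'on'
    (an obstacle lies within that zone)."""
    states, inner = [], 0.0
    for outer in zone_limits:
        states.append(inner <= measured_range < outer)
        inner = outer
    return tuple(states)

# Example: an obstacle at 0.5 m activates only Z2.
assert zone_states(0.5) == (False, True, False, False)
```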
Fig. 17.10. Arrangement of the range sensors and contact sensors on the wheelchair platform
17.6.3 Navigation Algorithm of the Simulator
The navigation task presented by the simulator is to find a path from a start position to a target and to traverse it without collision. Navigation may be decomposed into three sub-tasks: (a) mapping and modeling the environment; (b) path planning and selection; and (c) path following and collision avoidance. The relationships between these tasks and their solution in the simulator are discussed in the next three sections.
17.6.3.1 Map of the Indoor Environment
Over the last two decades, various approaches to map building have been proposed, such as occupancy grids, free space maps, composite maps, etc. [48–52]. In the proposed approach, the initial map of the home environment is based upon the readings of the two distal sensing points of each range sensor (Fig. 17.9b, sensing points 3 and 4). This tactic allows collecting initial information about the obstacles without coming into dangerous proximity to them. The developed simulator adopts grid-based mapping and represents the home environment as a two-dimensional space divided into equal squares. Each square is represented by a pixel. In the first version of the simulator, for simplicity, we used a monochrome bitmap in which the obstacles are denoted by black color and associated with the value "1", and the free space is represented by white color and associated with the value "0". In this way one pixel is described by just one bit. The grid density is 65536 x 65536 lines, which is quite enough for precise simulation of office and home environments. In order to simplify the simulator algorithm, all three-dimensional obstacles are represented by their projections on the floor. On the occupancy grid, each obstacle is embodied by the set of grid cells that are completely or partially occupied by it (Fig. 17.11). The choice of the map density should be made very carefully: the smaller the grid cells, the more precisely the obstacles are modeled; on the other hand, operation with a large number of cells reduces the calculation speed. The simulator uses a large number of grid nodes, so snapping does not result in significant errors. For example, if a map with this density is applied to represent an apartment with dimensions of 12 x 12 meters (144 m² of floor area), the map resolution in both the X and Y directions will be about 0.18 mm, i.e. the snap error in this particular case will not exceed 0.1 mm. The initial map, composed from the data of the introductory exploration of the new environment, is relatively coarse, but despite its low accuracy it is still sufficient for a successful path planning process. During the wheelchair operation, the information from all sensing zones of the range sensors is analyzed and the resolution of the primary map quickly improves. As a result, the accuracy of the subsequent path planning procedures increases.
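A one-bit-per-cell grid of this kind can be sketched as a packed bit array. The class below is a minimal illustration; a full 65536 x 65536 grid would occupy 512 MB, so a practical implementation would add tiled or sparse storage, which is omitted here:

```python
class BitmapGrid:
    """Monochrome occupancy grid: one bit per cell, '1' = obstacle."""
    def __init__(self, width, height):
        self.width = width
        self.bits = bytearray((width * height + 7) // 8)

    def _locate(self, x, y):
        i = y * self.width + x
        return i // 8, 1 << (i % 8)        # byte index and bit mask

    def set(self, x, y, occupied=True):
        byte, mask = self._locate(x, y)
        if occupied:
            self.bits[byte] |= mask
        else:
            self.bits[byte] &= ~mask & 0xFF

    def get(self, x, y):
        byte, mask = self._locate(x, y)
        return bool(self.bits[byte] & mask)
```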
Fig. 17.11. Representation of obstacles on the map
As already discussed in Sect. 17.4, the proposed approach is based on the idea that the user specifies a goal position by pointing at it directly on the TV image. It is also supposed that the current wheelchair position is calculated from the TV images. In both tasks the point of interest is represented with respect to the coordinate system of a particular camera. Since the wheelchair operates in the whole indoor area, it is much more convenient if the navigation system refers to one unified map of the whole house rather than to a number of maps that represent only small parts of it. The common map significantly facilitates such important procedures as path planning and wheelchair position tracking. The map merging problem here is quite similar to the problem of building a map from data collected by different robots that explore different parts of the environment [53]. In contrast, in our case the map merging problem is simplified by the facts that all cameras are fixed to the ceiling (considered a solid two-dimensional rigid body), the cameras do not change their view angles, and their mutual positions remain the same for the whole period of exploitation of the wheelchair system.
17.6.3.2 Map Update
It is quite likely that some obstacles in the home environment change their positions, appear, or disappear. In order to provide correct functioning of the wheelchair control system, the current map should be updated with the information from the onboard-mounted range sensors. However, in some situations the range sensors can output false signals for obstacle existence. Some possible reasons are:
• Interference between sensors. – It may occur with specific configurations of obstacles or specific surface properties of the detected objects. The activation of one sensor by the emitted signals of another may result in an incorrectly determined distance to the obstacle.
• A false signal due to detection of a moving object, which is then considered a static obstacle.
• A sensor signal caused by random noise.
In order to make the control system more resistant to such sensor artifacts, the decision to modify the map by adding or removing an obstacle is made only if the change of the state of a map cell ("occupied" or "free of obstacle") is repeatedly confirmed. The approach is based on the assumption that the occupancy of a particular cell is not determined in a binary way but depends on a weight coefficient $\omega$, whose value is updated as follows:
$$\omega_i = \omega_{i-1} + a\,k \quad \text{if} \quad 0 < \omega_{i-1} + a\,k < 1 \qquad (17.5)$$
$$\omega_i = 1 \quad \text{if} \quad \omega_{i-1} + a\,k \ge 1 \qquad (17.6)$$
$$\omega_i = 0 \quad \text{if} \quad \omega_{i-1} + a\,k \le 0 \qquad (17.7)$$
where $\omega_i$ is the weight coefficient at the $i$-th scanning of the cell occupancy, $\omega_{i-1}$ is the weight coefficient at the $(i-1)$-th scanning, $k$ is the increment of the change of the weight coefficient, $a = +1$ if an obstacle is detected in the cell during the $i$-th scanning, and $a = -1$ if no obstacle is detected in the cell during the $i$-th scanning. Since the map is monochrome, each cell state $S$ can be described as "0" or "1" only. In order to conform to that rule, we make the following assumptions:
$$S_i = S_{i-1} \quad \text{if} \quad 0 < \omega_i < 1 \qquad (17.8)$$
$$S_i = 1 \quad \text{if} \quad \omega_i = 1 \qquad (17.9)$$
$$S_i = 0 \quad \text{if} \quad \omega_i = 0 \qquad (17.10)$$
where $S_i$ and $S_{i-1}$ are the states of the particular cell at the $i$-th and $(i-1)$-th scanning of the cell occupancy. As a result, the certainty of the existence of an obstacle increases every time the wheelchair sensors detect it. Accordingly, cells that are detected several times as "occupied" appear on the map as "certainly occupied", while cells where no obstacle has recently been found are soon considered "empty" again.
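The confirmation rule of Eqs. (17.5)–(17.10) amounts to a saturating counter per cell; a minimal sketch:

```python
def update_cell(omega_prev, state_prev, detected, k):
    """One scan update for one map cell.
    omega_prev: weight coefficient from the previous scan (0..1);
    state_prev: previous binary cell state (0 or 1);
    detected:   True if an obstacle was sensed in the cell this scan;
    k:          increment of the weight coefficient (may differ per sensor).
    """
    a = 1 if detected else -1
    omega = omega_prev + a * k             # (17.5)
    omega = min(1.0, max(0.0, omega))      # saturation, (17.6)-(17.7)
    if omega >= 1.0:
        state = 1                          # (17.9) confirmed occupied
    elif omega <= 0.0:
        state = 0                          # (17.10) confirmed free
    else:
        state = state_prev                 # (17.8) keep the previous state
    return omega, state
```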
The value of $k$ may vary with the sensor type. In the case when the wheelchair is equipped with range sensors that differ in brand or adjustment, $k$ can have a different value for each sensor. In other words, a cell state of the map is changed only when the wheelchair sensors have registered the same change several times. As already discussed in Sect. 17.2.2.3, after installation at the user's home the wheelchair collects some initial information about the unknown environment with its range sensors. The same information is used later for building the first map of the new environment. If the above approach for obstacle confirmation were applied in that mode, the initial exploration of the user's home and the composition of the initial map might take too much time. In order to accelerate the procedure, the weight coefficient $\omega$ during the period of initial data collection is always considered to be "1", i.e. once an obstacle is detected, it appears immediately on the initial map. Since we assume that the first map is quite coarse and will be further modified during the wheelchair operation, the fixed weight coefficient will not affect the system efficiency.
17.6.3.3 Path Generation and Task Execution
The control strategy of the simulator refers to a semi-structured home environment, where some objects such as walls, doors, and heavy fixed furniture have permanent positions, while many other objects have a priori unknown positions or positions that are not precisely mapped because of sensor errors. Efficient wheelchair steering in such a complex environment can be realized if the navigation algorithm effectively handles such important tasks as generating a path, following the route, detecting changes in the environment from the information of the wheelchair range sensors, and automatically updating the environment map. The navigation solution brings together two strategies: global planning, which results in the composition of the global path to the goal, and local control, which is applied for path following and obstacle avoidance. The adopted control structure is presented in Fig. 17.12. After setting the target, the system for global navigation identifies the original wheelchair position and composes an initial path to the goal based on the pre-acquired knowledge of the environment. At that stage, the approach is similar to those of path planning with complete information [23, 24, 29, 54]. Most of the proposed solutions to that problem are based on the assumption that the mobile robot is relatively small and occupies just one cell. Such an approach leads to a certain contradiction: if the map cells are made large enough to express the robot dimensions realistically, the map becomes quite coarse and a lot of free space among obstacles is represented on it as occupied. As a result, narrow passages that are still wide enough for successful passage of the robot are represented on the map either as non-existent or as too tight, and are automatically excluded from the path search procedure. In order to balance these two conflicting requirements, we use a map with fine grid cells that allows precise mapping of the obstacles, and we accept that the real wheelchair dimensions exceed one cell. Instead,
we take it for granted that the wheelchair center is located in one cell only. In order to find the allowed corridors, we first apply a procedure for checking the tightness of the passages between neighboring obstacles. For that purpose, we "grow" each obstacle by shifting its real boundaries by a distance of half the wheelchair width in the normal direction. If the operation leads to an overlap of two obstacles, the corridor between them is considered too narrow for the wheelchair. All impermissibly narrow corridors are marked and excluded from the search process. The path-planning algorithm of the simulator is based on the distance transform method [55, 56]. Referring to the map composed during the initial exploration of the home environment, the program calculates the distance to the goal for all free-from-obstacle cells of the occupancy grid, starting from the free cells that neighbor the goal cell. The weights increase away from the target. Computation of the distances uses the eight-connected transform [48]. Once the distance transform values are calculated, the path planning procedure starts. The search is based on the rule of finding a path with minimal cost. The generated path is a sequence of lines that pass through the free space of the map. Path segments can be perpendicular to each other or oriented at 45 degrees with respect to the grid axes. A well-known disadvantage of the approach is that the generated path sometimes passes too near to obstacles despite the existing free space. As a solution to this problem, some path relaxation algorithms have been proposed [48]. The present simulation does not apply any procedure to move the path away from nearby obstacles, because the initially composed path is used only to give the general route to the goal.
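A sketch of the two phases, wavefront propagation from the goal and steepest-descent path extraction, is given below. For brevity it uses a unit cost for every step, whereas a full eight-connected transform would weight diagonal steps differently; this simplification is ours, not the chapter's.

```python
from collections import deque

NBRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
        (0, 1), (1, -1), (1, 0), (1, 1)]

def distance_transform(grid, goal):
    """Wavefront distances from the goal over an occupancy grid
    (grid[y][x] == 1 marks an obstacle); unit cost per step."""
    h, w = len(grid), len(grid[0])
    INF = float('inf')
    dist = [[INF] * w for _ in range(h)]
    gx, gy = goal
    dist[gy][gx] = 0
    queue = deque([goal])
    while queue:
        x, y = queue.popleft()
        for dx, dy in NBRS:
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and not grid[ny][nx] \
                    and dist[ny][nx] == INF:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((nx, ny))
    return dist

def descend(dist, start):
    """Extract a minimal-cost path by steepest descent of the transform."""
    h, w = len(dist), len(dist[0])
    path = [start]
    x, y = start
    while dist[y][x] > 0:
        best = None
        for dx, dy in NBRS:
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and dist[ny][nx] < dist[y][x]:
                if best is None or dist[ny][nx] < dist[best[1]][best[0]]:
                    best = (nx, ny)
        if best is None:          # start not connected to the goal
            break
        x, y = best
        path.append((x, y))
    return path
```

Obstacle growing by half the wheelchair width can be performed on the same grid beforehand, so the planner may safely treat the wheelchair center as a single cell.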
Fig. 17.12. Control architecture of the navigation system in autonomous mode
During the path following phase, the global navigation layer converts the initial path into subgoals (also called "intermediate goals" or "intermediate rendezvous points") and sends them consecutively to the local-level task planner. A subgoal is chosen so that it is clearly visible from the current wheelchair position and leads the wheelchair closer to the predefined goal point. After setting a subgoal, the local-level task planner (also called the "reactive layer") steers the wheelchair to it, referring to the information from the onboard-mounted range sensors. Once the intermediate goal is reached, the global navigation system sets a new subgoal that guides the wheelchair to the next region. The reactive layer deals with only a small portion of the home map in the wheelchair's immediate surroundings and uses it for local path planning. We call that map the "local map". The sensory feedback is used to steer the wheelchair toward the middle between two obstacles, to smooth the initial path, to shift the trajectory away from obstacles, and to build a strategy for overcoming new obstacles. The architecture of the system for local navigation is shown in Fig. 17.12. The path following module computes the headings (Θ) that the wheelchair should take in order to reach the intermediate goal (the specified headings refer to the wheelchair platform, not to directions regarding the map cells). Navigation instructions include only forward movements in the five discrete movement directions LL, L, S, R, and RR (see Sect. 17.6.2). Navigation to one intermediate goal may require a sequence of headings. The norm module (NM) converts the headings (Θ) into traveling queries V and sends them to the decision-making module (DMM). The instructions of the path following module are sufficient for successful navigation toward the subgoal if no obstacles are detected on the intended way. If a new obstacle is detected by the sensor system during the navigation to the intermediate goal, its location is immediately specified on the map. If the barrier is relatively far from the wheelchair and is detected by sensing zones Z3 or Z4 only (as specified in Fig. 17.9), the wheelchair keeps its initial trajectory and continues to the subgoal. However, if the new obstacle is near the wheelchair body and is detected by Z1 or Z2, a maneuver for collision avoidance should be performed. The obstacle avoidance module (OAM) analyses the signals of the activated range sensors and sends to the DMM a set of recommended changes of the movement direction. The correction instructions (W) are based on predefined rules regarding the wheelchair behavior in various collision situations. Based on the signals of the path following module and the suggested movement corrections of the obstacle avoidance module, the DMM takes a decision for modification of the current wheelchair direction. The bumper sensors and the module for emergency collision avoidance (ECAM) form a so-called "reflexive loop" that deals with imminent collision detection. Activation of one or more bumper sensors results in sending warning signals ($\sum \Psi_i$) to the DMM, and the wheelchair immediately reverses its movement direction. If meanwhile one of the range sensors detects the same obstacle, the wheelchair stops and the DMM applies the collision avoidance maneuver relevant to the information of the activated range sensor. If still no range sensor can
detect the obstacle that caused the alarm signal, then after moving away a certain distance the wheelchair stops, and the DMM uses the same collision avoidance strategy that would be applied if the corresponding range sensor had been activated in zone Z1. The collision avoidance module uses rule-based behavior to generate control signals. It refers to a set of control action recommendations that determine the degree to which each movement direction (S, R, RR, L, and LL) is considered allowed when a specific sensor is activated. The advisability of each movement direction is embodied in an empirically chosen weight coefficient. Tables 17.1 and 17.2 present the rule base applied in the simulator. Each of them has two segments: the upper segment of each table corresponds to activation of sensing point 1 of every sensor, while the lower segment concerns activation of sensing point 2 of the same sensors (see Fig. 17.9). The history of the sensor activation is also considered in the tables: Table 17.1 relates to forward motion of the wheelchair before activation of the sensors, while Table 17.2 concerns backward motion. Sensing zones 3 and 4 are not represented in the tables, since their outputs are not considered for collision avoidance maneuvers. A higher value of the weight coefficient means a higher preference for that command. The command query can be expressed as $W_{ij}$:
$$W_{ij} = [k_{LL}, k_L, k_0, k_R, k_{RR}] \qquad (17.11)$$
where $k_{LL}$ to $k_{RR}$ are the weight coefficients for the advisability of each movement direction in the given collision situation, $i$ is the number of the activated sensor ($i = 1 \div 7$), and $j$ is the number of the activated sensing zone ($j = 1 \div 2$). For example, the row for sensor 2 in the upper segment of Table 17.1, (0, -5, -20, -30, -40), expresses the recommended wheelchair behavior when sensor 2 is activated in sensing point 1 during forward wheelchair motion (see Fig. 17.9). The most recommended collision-avoidance action in that situation is a change of the existing movement direction to a strong left turn (LL); less recommended are a left turn (L), straight motion (S), and a right turn (R); a strong right turn (RR) is the least suggested. The output signal of the collision avoidance module combines the queries of the weight coefficients of all activated sensors:
$$W = \sum_{i \in A} W_i \qquad (17.12)$$
where $A$ is the set of the numbers of the activated sensors, and $W_i$ is the correcting query for the $i$-th sensor.
Table 17.1. Rules applied to the collision-avoidance module (CAM) when the wheelchair moves forward

Sensing zone 1 (forward motion):
Sensor    LL     L      S      R     RR
  1        0    -10    -20    -10     0
  2        0     -5    -20    -30   -40
  3        0     -5    -10    -20   -30
  4        0      0      0      0     0
  5        0      0      0      0     0
  6      -30    -20    -10     -5     0
  7      -40    -30    -20     -5     0

Sensing zone 2 (forward motion):
Sensor    LL     L      S      R     RR
  1        0     -1     -5     -1     0
  2        0      0      0      0    -5
  3        0      0      0     -3    -5
  4        0      0      0      0     0
  5        0      0      0      0     0
  6       -5     -3      0      0     0
  7       -5      0      0      0     0
Table 17.2. Rules applied to the collision-avoidance module (CAM) when the wheelchair moves backward

Sensing zone 1 (backward motion):
Sensor    LL     L      S      R     RR
  1        0      0      0      0     0
  2      -20      0      0      0     0
  3        0      0      0     -5   -10
  4      -10      0    -10    -20   -30
  5      -30    -20    -10      0   -10
  6      -10     -5      0      0     0
  7        0      0      0      0   -20

Sensing zone 2 (backward motion):
Sensor    LL     L      S      R     RR
  1        0      0      0      0     0
  2        0      0      0      0     0
  3        0      0      0      0     0
  4        0      0      0    -10   -20
  5      -20    -10      0      0     0
  6        0      0      0      0     0
  7        0      0      0      0     0
In order to be easily analyzed by the DMM, the initial headings Θ of the PFM are also converted into queries similar to those output by the OAM. The components of these queries represent the relevance of the input heading to each possible direction (S, R, RR, L, and LL). The conversion is done by the norm module (NM) and follows a simple rule of correspondence, as shown in Table 17.3. The weight coefficients are empirically chosen.
Table 17.3. Rules applied to the norm module (NM)

Input heading (Θ)     Output query
                      LL     L     S     R    RR
LL                    10     0     0     0     0
L                      0    10     0     0     0
S                      0     0    10     0     0
R                      0     0     0    10     0
RR                     0     0     0     0    10
The goal-following behavior of the PFM generates a conclusion about the desired turning direction, while the obstacle avoidance module generates a rule-based conclusion about the disallowed turning directions and the priority of the allowed ones. These two conclusions are combined by the DMM. The decision on the movement direction to be taken is determined by the source that has the strongest opinion about it. The approach for command fusion involves two steps: at first, a conjunction operator between the outputs of the PFM and the OAM is applied; then the max operator is applied to the conjunction in order to find the direction that should be taken after detection of the obstacle. Let us consider the following example. The wheelchair moves forward and keeps a straight direction (Θ = S). In the next moment, sensing point 1 of the front right sensor (sensor 2) detects an obstacle, as shown in Fig. 17.13. The collision avoiding actions are as follows. Referring to Table 17.3, the initial heading Θ = S is converted into the control query $V = [0, 0, 10, 0, 0]$. Activation of sensing point 1 of sensor 2 causes the following correction query (see Table 17.1, the row for sensor 2 of the upper segment): $W_2 = [0, -5, -20, -30, -40]$. The conjunction of both queries gives:
$$V' = V + \sum_{i \in A} W_i = V + W_2 = [0, -5, -10, -30, -40].$$
The most recommended action for collision prevention is a change of the current direction (S) to a strong left turn (LL), since its weight coefficient is the highest. The ranking order of the remaining four commands is: L, S, R, and RR. If the most recommended
command (LL) is not possible, the DMM may substitute it with the L or S command. The RR command is the least recommended in that situation.
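The two-step fusion (addition of the correction queries, then selection of the maximal weight) is compact enough to express directly; the sketch below reproduces the worked example above:

```python
DIRECTIONS = ("LL", "L", "S", "R", "RR")

def fuse_commands(v_query, correction_queries):
    """DMM command fusion: add all OAM correction queries to the NM
    query (conjunction step), then pick the direction with the maximal
    resulting weight (max step)."""
    fused = list(v_query)
    for w in correction_queries:
        fused = [f + c for f, c in zip(fused, w)]
    best = max(range(len(fused)), key=lambda i: fused[i])
    return DIRECTIONS[best], fused

# The example above: heading S, sensor 2 activated in sensing point 1.
V = [0, 0, 10, 0, 0]
W2 = [0, -5, -20, -30, -40]
direction, fused = fuse_commands(V, [W2])
assert fused == [0, -5, -10, -30, -40] and direction == "LL"
```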
Fig. 17.13. Sensor activation. In this example, sensor 2 is activated in its zone 1.
17.6.4 Graphic User Interface (GUI) of the Wheelchair Simulator
The GUI of the simulator is Windows™-based. It contains a main window (upper right part of the screen), a setting palette (upper left windows), and a status palette (lower part of the screen). Fig. 17.14 displays the simulator control screen. The main window of the simulator represents the obstacles, the goal position, and the wheelchair. Selected parts of the scene can additionally be zoomed, which allows precise visualization of the performed wheelchair maneuvers. The buttons for zoom control are part of the setting palette. The status palette is located at the lower part of the screen and consists of four windows:
• The panel in the upper left corner is labeled "A-M-K" and indicates the current control mode. The following notations are adopted: A – autonomous mode, M – control via mouse, and K – control via keyboard. The selected control regime is indicated by a highlighted letter. When the wheelchair is controlled via keyboard or mouse, the user's controls are visualized in the same window by a green dot that changes its position. The dot stays at the cross centre when the user's command is zero. If the user sends a command for wheelchair motion, the dot displacement with respect to the cross centre corresponds to the wheelchair speed, and the dot quadrant represents the direction that the user sets.
• The second panel is labeled "Sensors" and represents the state of the range sensors. Each sensor is represented by four points. The dot colors correspond to the distance to the nearest obstacle in the particular direction. All obstacle-free sensing points are indicated as green dots, while all points that indicate obstacles are shown as red dots.
• The third panel is named "Locator" and shows the state of the global navigation system. The remaining distance to the goal is represented by the deviation of a single point from the origin of the coordinate system, and the position of the same dot corresponds to the direction of the goal.
• The fourth panel is named "Map" and shows the map of the home environment that the wheelchair control system builds. Detected obstacles are shown in red, and the current wheelchair position is marked as a green point.
Fig. 17.14. A screenshot of the simulator screen
The setting palette includes 5 control buttons for the creation of a test environment. In order to test the algorithm, the user can choose the positions and shapes of obstacles, set the initial and end positions of the wheelchair, and choose the wheelchair orientation at these positions. Designed test configurations can be executed immediately or saved for future use. The algorithm of the developed simulator considers static obstacles only. It is assumed that the obstacles do not change their positions during the execution of the transportation task. In the first variant of the simulator, all obstacles are represented as polygons. Walls are usually represented as combinations of rectangles, and obstacles whose projections are composed of smooth curves appear on the map as n-sided polygons. The sides of the rectangles are parallel to the coordinate system of the map. The use of only two obstacle classes limits the obstacle representation; on the other hand, such an approach simplifies the calculations and increases the simulation speed.
The orientation of the wheelchair in the final position is an essential requirement for certain classes of movement tasks, such as positioning near a table, window, bed, or wall. The simulator allows the user to set not only the goal position but also the wheelchair orientation.
17.7 Evaluation of the Control Algorithm
This section comments on some results from testing the proposed control algorithm with the developed simulator. Some movement tasks and the wheelchair behavior during their execution are discussed below.
17.7.1 Navigation to Multiple Goals
The user can also set more than one goal position. This task corresponds to the execution of an automatic inspection of the user's home. During the completion of such multi-goal tasks, the wheelchair moves consecutively from one goal position to the next in the specified order, taking at each position the orientation that the user has set. Fig. 17.15 presents a sample configuration of obstacles and 7 numbered goal points. The wheelchair task is to reach each goal position.
Fig. 17.15. Multi-goal task
The wheelchair trajectory after the task execution is shown in Fig. 17.16. The detected obstacles can be seen in the map window.
Fig. 17.16. Trajectory after execution of the multi-goal task
17.7.2 Obstacle Avoidance
Our initial scenario is given in Fig. 17.17. The wheelchair task is to avoid the obstacle (marked as a rectangle) and reach the goal (marked with "o") that is located behind the obstacle. At the beginning, the control system does not have any knowledge of the home environment; the map window is empty (only the start and end positions appear on it). The initially generated path lies on the straight line to the goal. As can be seen in Fig. 17.18, the wheelchair follows the initially intended trajectory until its sensors detect the obstacle. To pass the obstacle, the wheelchair applies the wall following strategy. Simultaneously, the map is updated and the detected obstacle appears on it. Once the obstacle is passed and the wheelchair sensors no longer indicate obstacle presence, the wheelchair model turns toward the goal and moves to it on a straight line. The second part of the experiment is aimed at determining how the wheelchair uses the information obtained during its first pass. The wheelchair model starts execution of the same task from the same start position. Based on the gained information, the navigation algorithm generates a path that goes directly to the end of the obstacle, continues for a certain distance along the short side of the rectangle-
shaped obstacle and, after passing the obstacle, continues to the goal on a straight line (Fig. 17.19). The new trajectory is smoother and shorter.
Fig. 17.17. Scenario of the obstacle-avoidance task
Fig. 17.18. Execution of the obstacle-avoidance task (first run)
Fig. 17.19. Execution of the obstacle-avoidance task (second run)
17.7.3 Avoiding a "Trap"
When the wheelchair operates in an unknown environment, some trap states may occur. Specific configurations of obstacles can cause some algorithms to guide the wheelchair into a closed loop, trying endlessly to escape with the same unsuccessful maneuver. A trap classification is given in [30]. The developed simulator contains a trap detection module that checks whether the wheelchair has walked into a trap and determines what kind of trap it has fallen into. If a trap state is detected, the trap-recovering program is executed to guide the wheelchair out of the trap. In the case of a "C"-shaped trap, the wheelchair walks in a loop: it moves away from the goal and returns near the goal but, restricted by the trap wall, moves away and returns again, and the process continues over and over. In order to recover from such a state, after detection of the trap situation and the trap borders, the wheelchair should move back to its initial position (going away from the goal) and then turn left or right in order to avoid the trap entrance. In order to test the ability of the navigation algorithm to recover from such trap situations, we designed a scenario with a C-shaped trap, as shown in Fig. 17.20. In this first test, the wheelchair controller has no preliminary information about the environment or about the trap in it (the map window is empty). The wheelchair trajectory during the test execution is shown in Fig. 17.21.
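The chapter does not specify the internal test used by the trap detection module; one plausible heuristic, given purely as an assumption, is to flag a trap when recently visited cells repeat while the distance to the goal shows no net improvement:

```python
def looks_trapped(trail, window=200, min_progress=0.5):
    """Heuristic trap test (an illustrative assumption, not the
    chapter's rule).  trail: list of (cell, distance_to_goal) samples
    along the trajectory.  Flags a trap when many recent cells repeat
    while the distance to the goal has not improved by at least
    min_progress [m]."""
    if len(trail) < window:
        return False
    recent = trail[-window:]
    cells = [cell for cell, _ in recent]
    revisits = len(cells) - len(set(cells))
    progress = recent[0][1] - recent[-1][1]
    return revisits > window // 4 and progress < min_progress
```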
Fig. 17.20. Scenario with a "C"-shaped trap
Fig. 17.21. Wheelchair trajectory during recovery from the "C"-trap situation. The algorithm has no preliminary information about the trap.
Initially the wheelchair moves along the shortest way from the start position to the goal until it enters the trap. When the range sensors detect the trap wall W1, the wheelchair turns and starts to move along it for a certain distance. At the same time, based on the information from the range sensors, the navigation program builds a map of the wheelchair surroundings on which the trap appears. The control algorithm analyzes the map and the wheelchair movement trajectory. When the trap situation is discovered, an alternative route is generated. The wheelchair leaves the trap and starts to go around it. The modified path strategy allows the wheelchair to pass near the trap-shaped obstacle without entering it again. When the trap walls are no longer detected by the range sensors, the wheelchair heads toward the goal and continues its way on a straight line. Figure 17.22 shows the wheelchair trajectory when the same task is executed again. Referring to the already-existing map, the navigation algorithm generates a smooth trajectory that passes outside the trap. The task execution time is significantly shortened.
Fig. 17.22. Wheelchair trajectory during recovery from the "C"-trap situation. The generated path is based on the information collected during the first test.
17.7.4 Navigation in a Complex Environment Navigation in a complex environment is a task that the wheelchair should perform frequently in its indoor exploitation. In such situations, the wheelchair not only
should find a collision-free path among closely located obstacles and walls but should also detect possible traps and be able to recover from them successfully, while discovering the shortest way to the goal. In order to explore the algorithm performance in such situations, we used the test scenario shown in Fig. 17.23.
Fig. 17.23. Test scenario of a complex environment
The goal position is located in the labyrinth corridor C and marked with "s". In order to test the ability of the proposed algorithm to collect information and to use it, the same movement task is repeated three times. The next three figures (Fig. 17.24, Fig. 17.25, and Fig. 17.26) show the wheelchair performance in the three sequential executions of the same task. During the first test (see Fig. 17.24), the program does not refer to a map of the environment (the map window is empty; only the start and goal positions appear in it). After the start, the wheelchair moves on the straight line that connects the initial position to the end one, trying to follow the shortest way to the goal. That direction is kept until the wheelchair sensors encounter an obstacle (wall W1). Since the shape of the obstacle is unknown, the navigation system initially decides to turn the wheelchair to the right (point a). After the turn, the front sensors of the wheelchair detect a new obstacle (wall W2), and the navigation system turns the wheelchair through 180 degrees (a U-turn), trying to find a path around the opposite end of wall W1. The wheelchair moves along W1 until it reaches the position where wall W1 is no longer detected by the wheelchair sensors (point b). At that point the wheelchair turns and continues its movement on the straight
line to the goal, but its sensors detect a new obstacle (wall W3). The wheelchair turns to the left and starts to follow W3. When the front sensors of the wheelchair detect the new wall (wall W4), the wheelchair turns through 180 degrees (point c) and passes wall W3 from its opposite end. After passing the wall, the wheelchair turns to the left (point d) and continues toward the goal. As can be seen in the map window, the sensor information collected during the task execution has been processed and represented as a map of the home environment.
Fig. 17.24. Trajectory of the wheelchair movement during the first test (without preliminary information about the home environment)
During the second trial (Fig. 17.25), the algorithm attempts to minimize the path length. After the task start, the algorithm tries to check for the existence of a passage on the right side of wall W1, and the wheelchair initially turns to the right. This attempt is unsuccessful because the front wheelchair sensors detect wall W2. The wheelchair turns to the left and goes directly to the goal, excluding the loop (point c in Fig. 17.24) from its new route. During the third test (Fig. 17.26), the algorithm uses the information collected during the previous two tests and composes the most successful strategy. The trajectory is smooth and does not contain obsolete loops.
Fig. 17.25. Trajectory of the wheelchair movement during the second test
Fig. 17.26. Trajectory of the wheelchair movement during the third test
17.7.5 Route Generation in a Partially Known Environment
The experiments aim to test the algorithm's ability to use collected information about the environment (positions and shapes of the obstacles) in the path planning procedures of new movement tasks. The trials are based on the same scenario as the one used in the previous section (see Fig. 17.23). The difference is that the algorithm can refer to the sensor information already collected during the execution of the previous tests (see Sect. 17.7.4). The experiments include two movement tasks. In the first test, the initial wheelchair position and orientation coincide with the end position/orientation adopted in the previous test (point o in Fig. 17.23); it is assumed that the wheelchair starts from the same position where it stopped after the execution of the last task. The new wheelchair task is to reach a goal located in corridor M. Figure 17.27 illustrates the wheelchair behavior during the task execution. In that figure, the initial position of the wheelchair is marked "s" and the goal position is marked "q". Since the initial data does not include information about the existence of wall W5, the wheelchair keeps its initial orientation after the start, expecting to find a corridor toward the goal. After detection of W5, the wheelchair tries to reverse its movement direction. Because of the limited space in the corridor, a U-turn cannot be performed. As can be seen in the picture, the wheelchair succeeds in turning in the narrow space after a sequence of maneuvers and finds the route to the goal.
Fig. 17.27. Navigation in partially known environment. The scenario and the initial position and orientation are the same as the final position/orientation in the previous task. The collected sensor information from the previous test (commented in Sect. 17.7.4) is used
Figure 17.28 presents the movement trajectory during the execution of the second route generation task. In the new task, the wheelchair should start its movement from the position reached in the previous experiment (from point q) in order to reach position f, located in corridor N. Initially the algorithm tries to keep the initial orientation and searches for a corridor in the same direction. Instead, the wheelchair sensors detect wall W6, which is not marked on the map. The wide space around the wheelchair allows its direction to be reversed via a U-turn. Since the trajectory after the turn lies quite far from wall W7, the wheelchair sensors detect the end of W7 just before approaching the correct passage. However, the current wheelchair speed is too high and does not allow entering the narrow corridor. In order to attain suitable dynamic parameters for the turn, the wheelchair initially bypasses the correct passage and performs a broad turn before entering the corridor. The next part of the task execution does not cause any problems to the control algorithm. If the same task is repeated, the new trajectory will not contain a loop and the wheelchair will find the direct way to the goal. As can be seen from the last figure, during the execution of the last two tasks a significant portion of the environment becomes correctly mapped.
Fig. 17.28. Navigation in partially known environment. The initial position and orientation are the same as the final position/orientation in the previous task (shown in Fig. 17.27). The collected sensor information from the previous tests is used
17.8 Future Plans and Concluding Remarks

The current version of the simulator represents wheelchair operation in the main mode only. A new version is under development; it will include the other three modes of the proposed algorithm: semiautonomous navigation, initial collection of information, and the pre-programmed mode (see Sect. 17.2.2.3). In addition, the new simulator will accept control inputs from a variety of special user interfaces, such as a head-controlled interface or a joystick (currently only a computer mouse and a keyboard are used for setting the goal position). Tests with such interfaces will help to further validate the proposed algorithm and provide an initial assessment of its suitability for people with disabilities. The current simulator represents obstacles as polygons only; the new version will include an expanded set of shapes for obstacle representation. Zooming into a part of the performed trajectory after task execution will allow a more precise analysis of the wheelchair's behavior in specific situations, enabling further improvement of the control strategy.

In this study, we have proposed an algorithm for controlling an intelligent wheelchair based on information from ceiling-mounted cameras. The user can set the end position and orientation of the wheelchair by pointing at them directly on the video image. During task execution, the current wheelchair location and orientation are calculated from the visual information provided by the cameras, and the sensor information is used for automatic creation of a precise map of the indoor environment. Apart from navigation in autonomous and semi-autonomous modes, the approach also includes two modes for initial exploration of the home environment. For an initial evaluation of the proposed algorithm, we designed a computer simulator and discussed some of the tests. The results show that the control algorithm is quite insensitive to errors and inaccuracies in the sensor data, and it quickly finds a smooth and short path toward the goal (usually after one or two repetitions of the same task).
18 Design of an Intelligent Wheelchair for the Motor Disabled

Chong Hui Kim, Jik Han Jung, and Byung Kook Kim
Abstract

As rehabilitation technologies such as robotics, artificial intelligence, and computer science have developed rapidly, the dream of overcoming handicaps is coming true. This article presents the design of an intelligent wheelchair system that provides mobility aid to disabled people who have difficulty driving a conventional powered wheelchair. First we conducted a survey to identify the requirements of potential users and to define the missions users want served. After establishing the requirements through the survey results, we present our hardware design, including a sensory system based on a 2D Laser Range Finder (LRF); a software architecture providing real-time capability for the hard real-time tasks of this safety-critical system; and a hierarchical control architecture. We also present initial experimental results obtained at the Intelligent Sweet Home and in a corridor, both at KAIST.
18.1 Introduction

As society ages and industrial accidents increase, the social demands and responsibilities for improving the quality of life of the elderly and the disabled have been growing, and innovations in rehabilitation technology are bringing their desire to overcome handicaps closer to reality. The aim of rehabilitation is to improve their functions [1] and to enable handicapped persons to make a living without the assistance of another person as far as possible. Mobile robotics can contribute significantly to rehabilitation technology that supports the disabled in their daily life. The powered wheelchair is an important locomotion system for the motor disabled as well as for the elderly with motor impairments, and its use has become quite popular. However, the conventional powered wheelchair remains difficult to use for people suffering from severe motor impairments such as spinal cord injury at the cervical level, quadriplegia, tremors, and so on. Hence, they have been expecting the development of advanced wheelchairs that can meet their needs and improve their quality of life. We have designed and developed an autonomous intelligent wheelchair system for the motor disabled. In the first stage, we focused on identifying the needs of potential users and developing a prototype system using a commercial wheelchair.
Section 18.2 summarizes related work on the development of intelligent wheelchairs. Section 18.3 describes the requirements based on questionnaires given to potential users. In Sect. 18.4, we describe the hardware configuration, including a sensory system capable of 3D perception, and the software design, which can process hard real-time tasks. Section 18.5 describes localization using map matching and the hierarchical control architecture. Section 18.6 shows the preliminary experimental results. Finally, a conclusion is given in Sect. 18.7.
18.2 Related Works

Impaired mobility is one of the most severe barriers in daily life. Increasing mobility is an essential desire of the disabled, and a rehabilitation robot such as an intelligent wheelchair can augment that mobility. In recent years, many intelligent wheelchair projects have emerged to provide the motor disabled with more independent mobility than they had before. These projects are concerned with functionality, safety, sensor equipment, and/or the human-machine interface (HMI). In order to support mobility, many research groups have focused not on how to substitute for the user's handicap but on how to complement the user's remaining abilities. Consequently, many ongoing projects have developed semiautonomous systems rather than fully autonomous systems like mobile robots. Also, most of them are based on commercially available powered wheelchairs to reduce cost and development time.

The NavChair [2], developed at the University of Michigan, is a semiautonomous wheelchair that shares vehicle control decisions with the human operator via joystick or voice. It carries an array of 12 ultrasonic transducers and wheel encoders. For safe navigation, there are three operating modes: General Obstacle Avoidance, Door Passage, and Automatic Wall Following. The vector field histogram method [3], relying on sonar, is implemented; it is an efficient method for obstacle avoidance with minimum speed reduction.

The Rolland [4], developed at the University of Bremen, is a shared-control system jointly controlled by the control module and the human operator. It is equipped with 27 sonar sensors and the QNX real-time operating system, and it focuses especially on the safety aspect. A rehabilitation robot has to be considered a safety-critical system [5], because its malfunction can place the operator in a dangerous situation. In order to prove the fulfillment of safety requirements during operation and to handle the mode confusion problem in shared control systems [6], this project applied formal methods such as hazard analysis [7] and model checking to define the safety requirements of the system.

The goal of the VAHM project [8] is to take full advantage of the operator's abilities without burdening him with too much workload, rather than to make the robot as autonomous as possible. It consists of 16 ultrasonic sensors, two incremental encoders for localization, and an LCD display for the graphical user interface. This project implemented various navigation skills such as free space search, direction
following, wall following, motion control, obstacle avoidance, and autonomous navigation. These navigation skills are applied to the wheelchair selectively, according to the operator's physical and cognitive capacities.

The robotic wheelchair MAid [9] was developed to support and transport people with limited motion skills. MAid concentrates on navigation strategies for two particular situations that are difficult and tiresome for the disabled: narrow, cluttered environments on the one hand, and wide, crowded areas on the other. MAid runs the QNX real-time operating system and is equipped with a variety of sensors for environment perception, collision detection, and position estimation: wheel encoders, an optical-fiber gyroscope, 24 ultrasonic transducer modules controlled by a microcontroller, two infrared scanners for short-range sensing, and a 2D LRF for long-range sensing. MAid supports semi- or fully autonomous navigation.

Many intelligent wheelchairs are equipped simply with a joystick as input device and have no specific output device. The SIAMO [10], however, puts a lot of effort into an HMI with user-specific adaptability. There are five guidance alternatives: breath-expulsion driving, user-dependent isolated word recognition, head movements, electro-oculographic signals, and joystick. A modular architecture makes the system easy to adapt to a specific user's needs. SIAMO is equipped with sonars, passive and active vision, infrared sensors, and bumpers for environment perception. It also uses coded landmarks for absolute positioning.

The semiautonomous robotic system FRIEND [11] was developed to handle objects placed at known positions using a robotic arm, for people with upper-limb impairments.
18.3 Requirements

As mentioned above, many intelligent wheelchairs have been developed in recent years, but few systems are commercially available [12]. For commercial use, an intelligent wheelchair must meet the various requirements of potential users. We carried out a survey to identify these requirements; it covered 62 wheelchair users (12 powered and 50 manual). In response to the question, "Why do you use a powered wheelchair?", there were 16 responses from the 12 powered wheelchair users. The most common response (8 of 16) was convenience of locomotion; the others were saving physical effort (5/16) and serious physical handicap (3/16). Despite its usefulness, however, many potential users do not adopt the powered wheelchair because of cost, the difficulty of transferring between bed and wheelchair, unfriendly appearance, maintenance, safety, and so on. Hence, we set design requirements covering cost, functionality, friendly appearance, ease of use, and safety. Different handicaps require different functions, so we also asked which function would be most necessary in an advanced wheelchair. Although the functions do not attract equal interest and importance, most participants wished that the wheelchair
would support autonomous navigation, door passage, automatic battery charging, and home appliance control. In the first stage, based on the opinions of potential users, we designed and implemented an autonomous indoor navigation function for structured spaces such as a corridor and a residential room occupied by a bed, electric home appliances, furniture, and so on.
18.4 System Architecture

18.4.1 Hardware Configuration

In order to keep the cost low, our prototype is based on the low-cost commercial powered wheelchair PW2003, manufactured by Daese Medical Care, which is powered by two 12 V batteries. A Pentium III 550 MHz PC, directly powered by the batteries and mounted behind the wheelchair, is added and connected to a host computer via wireless Ethernet [13]. The sensory system consists of a SICK 2D LRF and incremental encoders for environment perception and localization. Figure 18.1 shows an illustration of our prototype.
Fig. 18.1. Illustration of prototype system
The laser range finder, mounted on a custom-designed aluminum support, is tilted in its sensing plane as shown in Fig. 18.1 so that it provides richer environment information, and it is linked to the PC via an RS232 serial link at 38.4 kbps. The height from the floor to the LRF is 1.56 m, and the inclination of the LRF is fixed at an angle of 26.56 degrees so that the environment is perceived from the operator's feet out to two meters ahead.
In the prototype system, we scan and drive simultaneously, so that we acquire 3D data of the environment and overcome the shortcoming of constructing 2D maps from 2D proximity information, which can cause problems with objects below the height of the LRF.

18.4.2 Software Design for Real-Time System

Our wheelchair runs various tasks such as motor control, sensing, localization, path planning, and emergency command handling. Each task has a different period, priority, and computational load. For example, motor control is a periodic task with a low computational load that must not miss its deadline; hence it is a hard real-time task. Localization is also a periodic task, with a heavier computational load than the motor control task. The emergency command task is sporadic. Since there are hard real-time requirements with regard to sensors and actuators, an intelligent wheelchair, as a safety-critical system, requires a software design with real-time capabilities. Many research groups [2, 10, 11] use DOS or Windows, which provide an easy development interface, a GUI (Graphical User Interface), abundant device drivers, and multitasking support; however, these systems cannot process hard real-time tasks, and many resources are spent maintaining the GUI. Some groups [4, 9] use commercially available real-time operating systems such as LynxOS and QNX. While these systems have good features, they are somewhat expensive, offer fewer device drivers, and lack expandability. Other groups [15] use real-time Linux, which is based on Linux, is free and open source, and has many device drivers. A real-time Linux has two special mechanisms: a real-time scheduler and a two-level interrupt handler. It implements a small real-time kernel under which the non-real-time Linux kernel runs as a fully preemptable, low-priority task. Under real-time Linux, application programs must be split into real-time and non-real-time tasks; data transmission between real-time and non-real-time tasks is supported by lock-free queues and shared memory. In our system, we adopted RedHat Linux 7.1 with the Real-Time Application Interface (RTAI) 24.1.10, a solid implementation of real-time Linux. RTAI has useful features such as LXRT (Linux Real-Time), which allows use of the real-time Linux API (Application Program Interface) from within standard user space, aiding development and debugging; a fully portable POSIX (Portable Operating System Interface) layer; and comprehensive information regarding the real-time services and processes [14]. Under this configuration, the low-level motor control task is executed periodically every 0.1 ms without any deadline misses, and the LRF sensing task is executed every 200 ms. Data generated in real time by the low-level motor control task, such as encoder readings, are transferred through shared memory; other data such as commands, status, and trajectory profiles are delivered to the real-time control tasks using real-time FIFO (First-In, First-Out) queues.
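To make the data flow concrete, the sketch below mimics this real-time/non-real-time split with ordinary Python threads: a periodic control loop publishes encoder readings through a shared structure, while sporadic commands arrive through a FIFO queue. This is only a conceptual stand-in under assumed names (read_encoders and the relaxed 10 ms period are hypothetical); the actual system uses RTAI kernel-space tasks, lock-free queues, and far shorter periods than user-space threads can honor.

import threading, queue, time

command_fifo = queue.Queue()        # stands in for an RTAI real-time FIFO
shared = {"encoder": (0, 0)}        # stands in for a shared-memory region
lock = threading.Lock()

def read_encoders():
    # Hypothetical hardware access; here we just return a dummy tuple.
    return (0, 0)

def motor_control_loop(period_s=0.01, steps=100):
    """Periodic control task: read encoders, apply the latest command."""
    next_release = time.monotonic()
    velocity_cmd = (0.0, 0.0)
    for _ in range(steps):
        try:
            velocity_cmd = command_fifo.get_nowait()   # consume newest command
        except queue.Empty:
            pass
        with lock:
            shared["encoder"] = read_encoders()        # publish encoder state
        # ... drive the motors toward velocity_cmd here ...
        next_release += period_s
        time.sleep(max(0.0, next_release - time.monotonic()))

def planner():
    """Non-real-time task: issues sporadic velocity commands."""
    for v, w in [(0.3, 0.0), (0.0, 0.5), (0.0, 0.0)]:
        command_fifo.put((v, w))
        time.sleep(0.2)

t1 = threading.Thread(target=motor_control_loop)
t2 = threading.Thread(target=planner)
t1.start(); t2.start(); t1.join(); t2.join()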
18.5 Navigation

18.5.1 Localization

In order to navigate autonomously, self-localization is an essential requirement. We use an LRF and two incremental encoders to determine the position of the wheelchair. The measurements obtained from the encoders provide the relative position by means of dead-reckoning. This localization result becomes inaccurate as errors accumulate over the traveled distance, but it is available at all times with a light computational load. Scan data from the LRF provide accurate and reliable localization information as well as environment perception. Localization is performed in four steps.

First, we extract line segments from the scan data using the recursive line splitting method [16], which proceeds as follows (a code sketch of this step is given below):
1. Project the whole laser scan onto the ground plane.
2. Form a line from the start point to the end point.
3. Find the point with the largest distance to this line.
4. If this distance is small, a line segment is found.
5. If not, split the point group at this point and repeat from step 2.

The next step is fitting each segment group to a line with least-square error and projecting the start and end points of the group onto the corresponding line in the given map. The third step is estimating the position of the wheelchair in world coordinates by identifying matched pairs. Since we know the relative displacement from the previous position, we can easily estimate (Xk, Yk, Φk), the position and orientation of the wheelchair on the given map, and the extracted line segments are then located in the global map. Finally, we have to find the matching pairs between the given map and the sensed data.

In order to determine segment correspondence, we use two criteria. The first criterion is that the orientations of a map line segment and an extracted line segment be sufficiently close, since the wheelchair's orientation changes slowly and smoothly within one scan period of the LRF. That is, the orientation difference Δφ = φ^Mj − φ^Si between map line segment Mj and extracted line segment Si must be sufficiently small. Using this criterion, we group the extracted line segments that can possibly correspond to map line segment Mj into a group G_Mj.

The second criterion is that a distance measure between an extracted line segment in G_Mj and the corresponding map line segment be small. We define the distance measure as the sum of real and imaginary distances. The real distance is the sum of the two distances from the end points of the extracted line segment to the map line segment. If the foot of the perpendicular from an end point falls not on the map line segment itself but on its extension, the distance between that foot of the perpendicular and the nearer end point of the map line segment is added to the distance measure as an imaginary distance.
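A minimal sketch of the recursive line-splitting step described above is given below (Python with NumPy). The 5 cm distance threshold and the ordered N x 2 point-array format are assumptions made for illustration, not values from the chapter.

import numpy as np

def split_into_segments(points, threshold=0.05):
    """Recursively split an ordered 2D point array (N x 2) into line segments.

    A segment is accepted when every interior point lies within `threshold`
    (meters, an assumed value) of the chord joining its first and last points.
    Assumes the first and last points of each group are distinct.
    Returns a list of (start_index, end_index) pairs into `points`.
    """
    segments = []

    def recurse(lo, hi):
        if hi - lo < 2:                       # too few points to split further
            segments.append((lo, hi))
            return
        p0, p1 = points[lo], points[hi]
        chord = p1 - p0
        rel = points[lo:hi + 1] - p0
        # Perpendicular distance of each point to the chord (steps 2 and 3).
        d = np.abs(chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / np.linalg.norm(chord)
        k = int(np.argmax(d))
        if d[k] <= threshold:                 # step 4: all points close enough
            segments.append((lo, hi))
        else:                                 # step 5: split at farthest point
            recurse(lo, lo + k)
            recurse(lo + k, hi)

    recurse(0, len(points) - 1)
    return segments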
Fig. 18.2. Two criteria for determining segment correspondence. S0, S1, and S2 are extracted line segments and M0 is a map line segment; φ^Mj and φ^Si denote the orientations of the j-th map line segment and the i-th extracted line segment, and v0–v4 are the real and imaginary distances between segment end points and M0
For example, suppose there are three extracted line segments and one map line segment, as in Fig. 18.2. The orientations of S0 and S1 are similar to that of M0, but the orientation of S2 is not. Hence S0 and S1 are grouped into G_M0, the group of possible correspondences with map line segment M0. Next we calculate the distance measure. Since the foot of the perpendicular from each end point of S0 falls on the map line segment, the distance measure for S0 is equal to the sum of the real distances v0 and v1. Since the foot of the perpendicular from one end point of S1 falls on the extended map line segment, however, the distance measure for S1 is the sum of the real distances (v2 and v3) and the imaginary distance (v4) as a penalty. By the two criteria, we find that the extracted line segment S0 is the matching pair of map line segment M0. From all the matching pairs found by the two criteria, the orientation is given by

Φ̂k = Φk + Σj Σi ls (φ^Mj − φ^Si) / Σ ls    (18.1)

where ls is the length of the matched line segment from the scan data. To obtain the position of the wheelchair, suppose two matching pairs (L1^S, L1^M) and (L2^S, L2^M) are found. Let (xc^S, yc^S) be the intersection of L1^S and L2^S, and (xc^M, yc^M) be the intersection of L1^M and L2^M. Then, using all the matched pairs, the position is obtained from

X̂k = Xk + Σ l1^s l2^s (xc^M − xc^S) / Σ l1^s l2^s    (18.2)

Ŷk = Yk + Σ l1^s l2^s (yc^M − yc^S) / Σ l1^s l2^s    (18.3)

where l1^s and l2^s are the lengths of the two line segments extracted from the scan data.
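To make the correction step concrete, Eqs. 18.1 to 18.3 can be written as the following sketch (Python). The layout of the matched-pair lists is an assumption made for illustration.

def update_pose(pose, orient_pairs, position_pairs):
    """Correct a dead-reckoned pose (x, y, phi) using matched line segments.

    orient_pairs:   list of (l_s, phi_map, phi_scan) per matched segment (Eq. 18.1)
    position_pairs: list of (l1, l2, xc_map, yc_map, xc_scan, yc_scan) per pair
                    of matched segments, where the (xc, yc) values are the
                    intersections of the two map / two scan lines (Eqs. 18.2-18.3)
    """
    x, y, phi = pose
    w = sum(l for l, _, _ in orient_pairs)
    if w > 0:
        # Length-weighted average of the orientation differences (Eq. 18.1).
        phi += sum(l * (pm - ps) for l, pm, ps in orient_pairs) / w
    w2 = sum(l1 * l2 for l1, l2, *_ in position_pairs)
    if w2 > 0:
        # Weighted average of intersection-point offsets (Eqs. 18.2 and 18.3).
        x += sum(l1 * l2 * (xm - xs) for l1, l2, xm, ym, xs, ys in position_pairs) / w2
        y += sum(l1 * l2 * (ym - ys) for l1, l2, xm, ym, xs, ys in position_pairs) / w2
    return (x, y, phi)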
18.5.2 Hierarchical Control Architecture

Figure 18.3 shows the hierarchical navigation architecture of our system.
Fig. 18.3. Hierarchical navigation architecture

Hierarchical control is divided into three levels with respect to the tasks' execution periods. The first is the low-level motor control and position feedback with encoders, executed every TMOTOR (0.1 ms). The second comprises trajectory following, obstacle avoidance, LRF sensing, and localization, executed every TLRF (200 ms). The path planning and trajectory planning tasks are executed sporadically. For given start and goal positions, the path planner splits the path into a combination of pure translation and turning sections, and the trajectory planner then converts the path into a trajectory containing velocity and time, as shown in Fig. 18.4. A translation section is a trapezoid in velocity (composed of acceleration, cruise, and deceleration). A turning section is composed of a forward trajectory transitioning from translation to turning and a backward trajectory transitioning from turning to translation.
Fig. 18.4. Path and trajectory planning. a Path planning (the path is split into a combination of translation and turning sections). b Trajectory planning (two translation sections are divided into acceleration-cruise and cruise-deceleration; the turning section is divided into forward and backward trajectories). c Velocity profile output (translation and turning sections are trapezoids in linear and angular velocity)
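The trapezoidal velocity profile of a translation section can be sketched as follows (Python); the acceleration and cruise-speed values are illustrative assumptions, not parameters reported for the wheelchair.

def trapezoid_profile(distance, v_max=0.5, accel=0.25, dt=0.01):
    """Sample a trapezoidal velocity profile (accelerate, cruise, decelerate)
    covering `distance` meters. Falls back to a triangular profile when the
    distance is too short to reach v_max. Returns velocities at dt-spaced
    instants; v_max and accel are assumed values."""
    d_ramp = v_max ** 2 / (2 * accel)       # distance consumed by one ramp
    if 2 * d_ramp > distance:               # triangular profile: never cruises
        v_peak = (distance * accel) ** 0.5
    else:
        v_peak = v_max
    t_ramp = v_peak / accel
    d_cruise = distance - v_peak ** 2 / accel
    t_cruise = d_cruise / v_peak if v_peak > 0 else 0.0
    total = 2 * t_ramp + t_cruise
    profile, t = [], 0.0
    while t < total:
        if t < t_ramp:                      # acceleration phase
            v = accel * t
        elif t < t_ramp + t_cruise:         # cruise phase
            v = v_peak
        else:                               # deceleration phase
            v = max(0.0, accel * (total - t))
        profile.append(v)
        t += dt
    return profile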
18.6 Experiments

Figure 18.5 shows the prototype system that we developed. The design applies an LRF to detect nearby objects, and the wheelchair controller is based on Linux with RTAI.
Fig. 18.5. Prototype system
In order to verify our self-localization method, we constructed a map of the Intelligent Sweet Home [17] at KAIST using the localization results and the line segments extracted at every scan period.
Fig. 18.6. Self-localization result for the Intelligent Sweet Home in KAIST
As can be seen in Fig. 18.6, the line segments from the LRF were properly superimposed on the given map. The given map of the Intelligent Sweet Home is
represented as a solid line. The text labels on the map indicate the furniture (bed, table, desk) and home appliances (refrigerator, television) in the Intelligent Sweet Home. The path of the wheelchair runs from start point A to end point B. The line segments used in localization are superimposed as dotted lines on the given map. Another experiment was conducted in a corridor whose course consists of translation, right turn, and translation, as in Fig. 18.4a. The overall path was composed of two translation sections and a turning section, and the total traveling distance was about 50 m. Our wheelchair successfully navigated through the designated via points.
18.7 Conclusion

In this paper we described our hardware and software design for an autonomous wheelchair system. Self-localization using the LRF and encoders is composed of feature extraction, position estimation on the map, matching, and localization, and it was validated by successful navigation under the hierarchical control architecture in the Intelligent Sweet Home and in a corridor. Further work will focus on additional user-required functions. For obstacle avoidance, a 3D obstacle detection and avoidance mechanism will be developed; an automatic battery charging station and docking mechanism will also be studied. Since the wheelchair is a highly interactive system, we are building a simple and easy-to-use human-machine interface, with a touch screen and voice recognition as input devices and a graphic display and speech synthesis as output devices.
Acknowledgement

This work is supported by KOSEF through HWRS-ERC at KAIST.
References

1. Fioretti S, Leo T, Longhi S (2000) A navigation system for increasing autonomy and the security of powered wheelchairs. IEEE Transactions on Rehabilitation Engineering 8: 490–498
2. Levine SP, Bell DA, Jaros LA, Simpson RC, Koren Y, Borenstein J (1999) The NavChair assistive wheelchair navigation system. IEEE Transactions on Rehabilitation Engineering 7: 443–451
3. Borenstein J, Koren Y (1991) The vector field histogram – fast obstacle avoidance for mobile robots. IEEE Transactions on Robotics & Automation 7: 278–288
4. Lankenau A, Rofer T (2001) A versatile and safe mobility assistant. IEEE Robotics & Automation Magazine 8: 29–37
5. Storey N (1996) Safety-critical computer systems. Addison-Wesley
6. Lankenau A, Meyer O (1999) Formal methods in robotics: Fault tree based verification. In: Proceedings of Quality Week Europe. Brussels
7. Lankenau A et al. (1998) Safety in robotics: The Bremen autonomous wheelchair. In: Proceedings of AMC'98, 5th International Workshop on Advanced Motion Control. Coimbra, pp 524–529
8. Bourhis G, Horn O, Habert O, Pruski A (2001) An autonomous vehicle for people with motor disabilities. IEEE Robotics & Automation Magazine 8: 57–65
9. Prassler E, Scholz J, Fiorini P (2001) A robotics wheelchair for crowded public environments. IEEE Robotics & Automation Magazine 8: 38–45
10. Mazo M et al. (2001) An integral system for assisted mobility. IEEE Robotics & Automation Magazine 8: 46–56
11. Martens C, Ruchel N, Lang O, Ivlev O, Graser A (2001) A FRIEND for assisting handicapped people. IEEE Robotics & Automation Magazine 8: 57–65
12. Nisbet PD (2002) Who's intelligent? Wheelchair, driver or both? In: Proceedings of the 2002 IEEE International Conference on Control Applications, pp 760–765
13. Kim CH (2001) Implementation of distributed mobile robot control system using COTS systems (in Korean). Master's thesis, KAIST
14. Lineo Inc (2000) DIAPM RTAI programming guide 1.0. Developer guide
15. Kim CH, Jung JH, Kim BK (2003) Design of intelligent wheelchair for the motor disabled. In: The 8th International Conference on Rehabilitation Robotics, pp 92–95
16. Zhang L, Ghosh BK (2000) Line segment based map building and localization using 2D laser range finder. In: IEEE International Conference on Robotics & Automation 3, pp 2538–2543
17. Park KH, Bien ZZ (2003) Intelligent Sweet Home for assisting the elderly and the handicapped. In: Proceedings of the 1st International Conference on Smart Homes and Health Telematics, France, pp 151–158
19 Electrically Assisted Walker with Supporter-Embedded Force-Sensing Device

Saku Egawa, Ikuo Takeuchi, Atsushi Koseki, and Takeshi Ishii
Abstract

An electrically assisted walking aid with a force-sensing device embedded in its supporting arm has been developed. The force sensor consists of a pair of U-shaped members joined by four rubber springs, with four gap sensors that detect the relative displacements between the members. From these displacements it calculates the forward and vertical forces and the torque about the vertical axis applied by the user. Test results showed that the sensor has sufficient precision for an electric walker.
19.1 Introduction

In an aging society, where the elderly population is rapidly increasing, the elderly need to be able to live self-supported lives. By keeping their independence, they can enjoy a high quality of life and also reduce their own expenses and health-care costs. Walking is a key physical function for independence; it is required not only for mobility but also for maintaining physical and mental health. Some people who have difficulty in walking use conventional walkers. A walker is an assistive device with a supporting arm or grips, which the user can lean on or grasp, and legs, which may have wheels. Conventional walkers are commonly seen in hospitals and nursing homes. However, since conventional walkers can easily fall over, only people with fairly good motor functions can use them. Walkers should thus be more adaptive to the various needs of users. Several studies have been conducted on walkers with enhanced functional capabilities. Lacey and MacNamara developed robotic walkers for frail visually impaired people [1, 2]. Miyawaki reported a motorized walker that keeps a constant distance from its user [3]. Dubowsky et al. developed a robotic aid for mobility assistance and health monitoring [4]. Graf adapted an intelligent walking-aid function to a home-care robot system [5]. Lee et al. developed a power-assisted gait rehabilitation robot comprising a robotic manipulator and a mobile base [6]. The authors have been developing an electrically assisted walking aid that provides physical support for people who can hardly walk without assistance [7–9]. This walker-type device assists the user's gait by means of a motor-drive system controlled by the force input from the user. It uses a force-sensing device to detect
forces and torques applied by the user. The walkers formerly developed by the authors employed a conventional multiple-axis force sensor for industrial robots. However, because its design was not suited to an electric-walker system, the sensing system needed very high precision to meet the requirements of a walker. Consequently, its cost was too high. To solve this problem, the authors have developed a new force-sensing system for the electric walker. Its design and test results are described in the following sections.
19.2 Electrically Assisted Walker

The electrically assisted walker has four wheels and a supporting arm that holds the user (Fig. 19.1). Two motors drive its right and left rear wheels independently, while its front casters can rotate and turn freely. It also has an electric lift mechanism to control the height of the supporting arm. A force sensor inserted between the supporter and the lifting device detects the forces and torques applied by the user to the supporting arm.
Fig. 19.1. Electrically assisted walker
Fig. 19.2. Control system (block diagram: the force sensor outputs Fy, Fz, and Frz and the inclinometer angle θ are converted through gains Ky and Kz into the speed commands for the right and left motors)
Figure 19.2 illustrates the control system of the walker. The system drives the motors so that the walker moves forward at a speed proportional to the forward component of the detected force (Fy) and turns likewise according to the torque component about the vertical axis (Frz). It can compensate for the effect of gravity on slopes by means of an inclinometer and the vertical force sensor output (Fz) [7]. It also has an imbalance compensation function for hemiplegic patients [8] and an obstacle avoidance system using infrared range sensors [7]. With this control system, the user can intuitively manipulate the walker at his or her own pace: if the user pushes or twists the supporter, the walker moves forward or turns accordingly, and the user can walk backward by pulling the supporter. By modifying certain control parameters through a wireless hand-held terminal, the user or a therapist can easily adjust the dynamic characteristics of the walker in order to improve the user's gait stability. The variable parameters include the viscous resistances and virtual inertias in the forward, backward, and rotational directions and the imbalance compensation. The electric-walker system has been tested at hospitals and elderly care facilities and approved as an effective tool for walking rehabilitation [8].
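A minimal sketch of this force-proportional control, including the adjustable viscous resistance and virtual inertia mentioned above, is given below (Python). The parameter values and the exact form of the slope compensation are assumptions made for illustration; the chapter does not state the control law explicitly.

import math

def assist_step(v, w, Fy, Frz, Fz, theta,
                M=30.0, B=20.0, J=2.0, Bw=3.0, dt=0.02):
    """One control cycle of a force-guided walker (illustrative only).

    v, w      : current forward speed [m/s] and turning rate [rad/s]
    Fy, Frz   : user's forward force [N] and turning torque [Nm]
    Fz, theta : vertical force [N] and slope angle [rad] from the inclinometer
    M, B      : virtual mass [kg] and viscous resistance [Ns/m] (assumed)
    J, Bw     : virtual inertia and rotational viscosity (assumed)
    """
    F_slope = Fz * math.sin(theta)         # assumed slope-compensation term
    v += dt * (Fy + F_slope - B * v) / M   # virtual mass-damper, translation
    w += dt * (Frz - Bw * w) / J           # virtual inertia-damper, rotation
    return v, w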
19.3 Supporter-Embedded Force Sensor

19.3.1 Requirements for the Force Sensor

The electric walker needs a multiple-axis force sensor that detects the forward force, the vertical force, and the turning torque, that is, the torque about the vertical axis. The control system uses the forward force and the torque for forward- and turning-speed control and the vertical force for slope compensation. The force sensor for the electric walker has to fulfill the following requirements:
• detect the forward force with a fine resolution down to 1 N for smooth control, in order to provide comfortable assistance to the user;
• bear a large vertical force up to 1 kN, which may be exerted by the user leaning on the supporting arm while walking;
• maintain the fine forward force resolution under a large vertical load.
In the former walkers developed by the authors, a commercially available force sensor for industrial robots was used. Such a conventional force sensor comprises a precisely machined metal beam and strain gauges bonded to it. This type of sensor has to be installed under the front end of the U-shaped supporting arm so that all the forces and torques act on the force-sensing beam. This configuration is unfavorable because the vertical load produces a large torque about the horizontal axis on the force sensor in addition to the load itself, and this additional torque lowers the sensing precision of the forward force and the turning torque. It is possible to find a high-grade product that avoids these problems, but its cost is high. To solve this dilemma, the authors have developed a new force-sensing system.

19.3.2 Sensor Structure

Figure 19.3 illustrates the structure of the developed force sensor, which is embedded in the U-shaped supporting arm. It comprises upper and lower members connected by four elastic joints at their corners, and four gap sensors that detect the vertical and horizontal (forward/backward) displacements between the members. The elastic joints are designed so that their vertical stiffness is larger than their horizontal stiffness. The upper member is covered with a soft pad, and the lower member is attached to the walker body. The user walks with the hands and arms placed on the pad. The sensor detects the forward force (Fy), the vertical force (Fz), and the turning torque (Frz). It can also detect the torque about the forward axis (Fry), although this is not used for motion control.
Fig. 19.3. Structure of force sensor (upper and lower members connected by four elastic joints A–D at the corners, with gap sensors a–d between the members)
Fig. 19.4. Definition of forces and displacements (forces Fx, Fy, Fz and torques Fry, Frz with displacements Dx, Dy, Dz, Dry, Drz; joint spacings L1 and L2; loading points A–F and (i)–(v))
19.3.3 Sensing Method

Figure 19.4 shows the definition of the axes and variables used in the sensor. Since the origin of the coordinate system is defined to coincide with the center of the four elastic joints, each force/torque component about each axis depends only on the displacement of the same axis. That is,

Fx = 4 Csx Dx,    (19.1)

Fy = 4 Csx Dy,    (19.2)

Fz = 4 Csz Dz,    (19.3)

Frx = 2 L2² Csz Drx,    (19.4)

Fry = 2 L1² Csz Dry,    (19.5)

Frz = (L1² + L2²) Csx Drz,    (19.6)
where Csx and Csz are the horizontal and vertical stiffnesses, and L1 and L2 are the distances between joints. On the other hand, the relation between the gap-sensor outputs Sa, Sb, Sc, and Sd and the displacements Dy, Dz, Drz, and Dry becomes

Sa = Dy + (L1 / 2) Drz,    (19.7)

Sb = Dy − (L1 / 2) Drz,    (19.8)

Sc = Dz − (L1 / 2) Dry,    (19.9)

Sd = Dz + (L1 / 2) Dry.    (19.10)
Since the gap sensors are placed at the midpoints between the joints, the four gap-sensor outputs depend only on the four displacements Dy, Dz, Drz, and Dry. This means that the gap-sensor outputs correlate directly with the force components Fy, Fz, Frz, and Fry. Eqs. 19.1 to 19.10 give the following relations:

Fy = 2 Csx (Sa + Sb),    (19.11)

Fz = 2 Csz (Sc + Sd),    (19.12)

Fry = 2 L1 Csz (Sd − Sc),    (19.13)

Frz = (L1 + L2² / L1) Csx (Sa − Sb).    (19.14)
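For illustration, Eqs. 19.11 to 19.14 translate directly into code. The sketch below (Python) uses the stiffness and spacing values quoted in Sect. 19.4 as defaults.

def forces_from_gaps(Sa, Sb, Sc, Sd, Csx=105e3, Csz=490e3, L1=0.5, L2=0.355):
    """Convert the four gap-sensor readings [m] into forces and torques.

    Csx, Csz: horizontal and vertical joint stiffness [N/m] (Sect. 19.4)
    L1, L2:   distances between the elastic joints [m] (Sect. 19.4)
    """
    Fy = 2 * Csx * (Sa + Sb)                   # forward force, Eq. 19.11
    Fz = 2 * Csz * (Sc + Sd)                   # vertical force, Eq. 19.12
    Fry = 2 * L1 * Csz * (Sd - Sc)             # torque about forward axis, Eq. 19.13
    Frz = (L1 + L2**2 / L1) * Csx * (Sa - Sb)  # turning torque, Eq. 19.14
    return Fy, Fz, Fry, Frz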
The force/torque components can thus be obtained from the gap-sensor outputs.

19.3.4 Advantages

The supporter-embedded force sensor has the following advantages:
• It can detect the forward force component with high accuracy even if a large vertical force is applied, because the horizontal stiffness is lower than the vertical one.
• It is free from the large torque due to vertical load, because the vertical load is supported at the four elastic joints located at the corners.
• It is hardly affected by electromagnetic noise, because the displacements that the gap sensors detect are relatively large compared with the minute strains measured by the strain gauges in conventional force sensors.
• It is inexpensive because it can use low-cost rubber springs for the elastic joints.
• The embedded structure, in which the whole sensing device is combined with the supporting arm, facilitates a simple and compact walker design.
19.4 Experiments

An electrically assisted walker equipped with the new supporter-embedded force sensor was fabricated and tested. A low-cost rubber spring intended for vibration isolation was used for the elastic joints, and an eddy-current type detector with a 1-mm sensing range was used for the gap sensors. The horizontal and vertical stiffnesses of the rubber springs, Csx and Csz, were 105 kN/m and 490 kN/m, respectively, and the distances between the springs, L1 and L2, were 500 mm and 355 mm, respectively. Forces and torques were measured using the sensor under various load conditions. Test loads were applied by placing weights on the upper member or pushing it with a force gauge at the points marked (A) to (D) and (i) to (v) in Fig. 19.4.

Figure 19.5 shows the dynamic characteristics of the sensor, determined by applying and removing a vertical load at the front end of the sensor. The sensor output showed delay and hysteresis behaviors: the output drifted slowly for more than 10 seconds after the load changed, and a small amount of output remained after the load was removed.

Figures 19.6, 19.7, and 19.8 show the test results for the vertical force, the forward force, and the turning torque, respectively, plotting the sensor outputs against the applied loads; the data for different loading positions are indicated with different marks. In all the charts, the sensor outputs tend to rise more steeply as the input loads increase. The variation in the outputs for different loading positions was less than 20%, and the overall error was 30%.
Fig. 19.5. Dynamic characteristics of sensor (sensor output [N] versus time [s] while a vertical load is applied and removed)
Fig. 19.6. Sensing test result (vertical force): detected versus applied vertical force [N] for loading positions A–F
Fig. 19.7. Sensing test result (forward force): detected versus applied forward force [N] for loading points (i)–(iii)
Fig. 19.8. Sensing test result (turning torque): detected versus applied turning torque [Nm] for loading points (ii)–(v)
19.5 Discussion

Since this sensor uses rubber springs, it exhibits delay, hysteresis, and nonlinear characteristics, which lower its overall precision. In an electric-walker system, the offset for zero input has to be small: if there is a large offset in the force sensor, the walker may move arbitrarily without being manipulated by the user. High accuracy, on the other hand, is not essential for providing assistance to users, and a 30% error is acceptable. The offset problem can be solved by periodically calibrating the sensor to output zero while the user is not touching the supporting arm. Field trials showed that the electric walker using the new sensor can be used in the same manner as an older walker using a conventional force sensor.
19.6 Summary

A force-sensing device that uses elastic joints and gap sensors was developed, and an electrically assisted walker using the sensor was fabricated. Test results show that the force sensor has sufficient accuracy for an electric-walker system. This sensor will thus facilitate the design of compact and low-cost electric-walker systems.
Acknowledgement

This work was performed partly under entrustment by the New Energy and Industrial Technology Development Organization (NEDO).
References

1. Lacey G, MacNamara S, Petrie H, Hunter H, Karlsson M, Katevas N, Rundenschöld J (1999) Adaptive control of a mobile robot for the frail visually impaired. Proc. 6th International Conference on Rehabilitation Robotics, pp 60–66
2. MacNamara S, Lacey G (1999) A robotic mobility aid for frail visually impaired people. Proc. 6th International Conference on Rehabilitation Robotics, pp 163–169
3. Miyawaki K, Kutuzawa K, Nishimura S, Iwami T, Obinata G, Ito S (1999) Effect of using assisting walker on gait of elderly people – in case of walking on a slope – (in Japanese). Proc. JSME Symposium, No. 99–41, pp 68–72
4. Dubowsky S, Genot F, Godding S, Kozono H, Skwersky A, Yu H, Yu LS (2000) PAMM – a robotic aid to the elderly for mobility assistance and monitoring: a "helping-hand" for the elderly. Proc. 2000 IEEE International Conference on Robotics & Automation, pp 570–576
5. Graf B (2001) Reactive navigation of an intelligent robotic walking aid. Proc. 10th IEEE International Workshop on Robot and Human Interactive Communication, pp 353–358
6. Lee CY, Seo KH, Kim CH, Oh SK, Lee JJ (2002) A system for gait rehabilitation: mobile manipulator approach. Proc. 2002 IEEE International Conference on Robotics and Automation, pp 3254–3259
7. Nemoto Y, Egawa S, Fujie M (1999) Power assist control for walking support system. Journal of Robotics and Mechatronics 11(6): 473–476
8. Egawa S, Nemoto Y, Fujie M, Koseki A, Hattori S, Ishii T (1999) Power-assisted walking support system with imbalance compensation control for hemiplegics. Proc. First Joint BMES/EMBS Conference, p 635
9. Egawa S, Nemoto Y, Koseki A, Ishii T, Fujie M (2001) Gait improvement by power-assisted walking support device. In: Arai E, Arai T, Takano M (eds) Human Friendly Mechatronics. Elsevier Science, Amsterdam, pp 117–122
20 Human-Friendly Care Robot System for the Elderly

Dong Hyun Yoo, Hyun Seok Hong, Han Jo Kwon, and Myung Jin Chung
20.1 Introduction

In this paper, "Do-u-mi", a robotic care system, is introduced (Do-u-mi means "helper" in Korean) (see Fig. 20.1). It is aimed particularly at elderly people who cannot walk without assistance. The main function of the Do-u-mi robot is to provide walking support; the robot also offers many entertainment functions. The robot has a human-friendly design and an intelligent human-robot interface. A CCD camera and a number of ultrasonic and infrared sensors are employed to provide sound localization, face tracking, and autonomous navigation with obstacle avoidance.

20.1.1 The Functions of the Do-u-mi Robot

The main task of the Do-u-mi robot is to provide walking support for elderly people who cannot walk without assistance. The following scenario illustrates typical use of the robot. An elderly man is in his room in a senior citizens' home with the Do-u-mi robot. Because his legs are weak, the man cannot walk without assistance. He is far from the robot and wants to go to another room. To request the robot's assistance, he calls it by clapping his hands. Upon detecting the call, the robot estimates the position of the sound source by sound localization and turns its head in the direction of the sound source. The camera on the robot then captures images in an effort to locate a human face. If the face tracker finds a face in the viewed images, it tracks the face continuously and the robot moves to the caller, avoiding obstacles by autonomous navigation. The robot stops in front of the caller and waits for further commands. The user sends commands to the robot through its touch screen, and the robot carries out walking support or various entertainment services according to the commands. For this scenario, the functions of the Do-u-mi robot are as follows.

Walking Support
This is the most important function of the robot. The user can walk by leaning on the robot's back panel.
Fig. 20.1. Do-u-mi robot: walking support system
Intelligent Human-Robot Interface
Generally, a complex robot cannot be controlled easily. To overcome this problem, the Do-u-mi robot has an intelligent human-robot interface. To provide an easy-to-use interface, sound localization, which estimates the direction of a sound source, and a face tracking method are employed. Sound localization is used because the viewing angle of the camera is limited.

Autonomous Navigation
The robot has an autonomous navigation function because it has to approach the user to provide its services. The robot can move without collisions using infrared and ultrasonic sensors.

Entertainment Function
Do-u-mi has numerous entertainment functions. It can read e-mail to users with weak eyesight, and it can also play music and display movies. The user can also make calls via a videophone integrated into the robot.
Fig. 20.2. The inside of the Do-u-mi robot
20.2 Overall System of the Do-u-mi Robot

Figure 20.2 shows the inside of the Do-u-mi robot. There are a CCD camera and a pan-tilt device. Twelve ultrasonic sensors and twelve infrared sensors are attached to the bottom of the robot so that it can avoid obstacles. For a convenient interface, a touch screen is installed on the front of the robot. On the back of the robot, there is a supporting panel, which the user leans on while walking. On the panel, there are buttons to control the robot and an emergency switch to stop it in an emergency. To detect a user's call, there are two microphones. The robot is connected to the Internet through a wireless LAN. The ultrasonic sensors, the infrared sensors, and the motors are controlled by a microprocessor; the sensor data and the commands from the main computer are exchanged through serial communication between the main computer and the microprocessor. The main computer also processes the images captured by the CCD camera and decides how to react to a given situation. The main computer obtains data from the image and the sensors, and then sends commands over the serial links to the pan-tilt unit, to control the direction of the camera, and to the microprocessor, to move the robot. The system of the robot is divided into four sub-blocks: sound localization, face tracking, autonomous navigation, and entertainment.
20.3 Sound Localization

When the robot is far from the user, the user summons it by clapping his or her hands. Using two microphones, the robot can detect the direction of the sound source in much the same way humans do. There are two methods to estimate the direction: comparing the amplitudes of the sound data from the two microphones, and comparing their phases [2, 3]. The phase comparison method is preferable because it is less sensitive to noise in the sound and to the distance between the microphones and the sound source; consequently, it gives more precise results. Figure 20.3 shows the two microphones, separated by a distance d. Figure 20.4 shows the sound signal waves from the microphones. The two signal waves are very similar; there is only a time difference between them. From this time delay (t_d), we can determine the direction (θ) from which the sound came.
l = v·t_d    (20.1)

θ = sin⁻¹(l / d)    (20.2)
Fig. 20.3. Two-microphone setup for sound localization
In experiments in which the direction of the sound source was varied at intervals of ten degrees, our sound localization algorithm achieved an average absolute error of 2.4235 degrees with a standard deviation of the absolute error of 2.6176 degrees.
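As a concrete illustration of Eqs. 20.1 and 20.2, the sketch below estimates the direction from the inter-microphone time delay obtained by cross-correlation. The function and parameter names are illustrative, not from the chapter; the sign of the resulting angle depends on the microphone arrangement.

```python
import numpy as np

def sound_direction(left, right, fs, d, v=343.0):
    """Estimate the sound-source direction from two microphone signals.

    left, right: equal-length sample arrays; fs: sampling rate [Hz];
    d: microphone spacing [m]; v: assumed speed of sound [m/s].
    """
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)  # delay in samples
    td = lag / fs                                  # time delay t_d
    l = v * td                                     # path difference (Eq. 20.1)
    return np.degrees(np.arcsin(np.clip(l / d, -1.0, 1.0)))  # Eq. 20.2
```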
Fig. 20.4. Sound wave data from two microphones
20.4 Face Tracking

Our face tracking method comprises color segmentation, region grouping, face candidate extraction, preprocessing, and face pattern matching.
Fig. 20.5. Face tracking algorithm
20.4.1 Face Candidate Extraction

20.4.1.1 Color Segmentation

The pixels of an image are compared with a specific color model and segmented by thresholds in a specific color space [5]. We use the HSI color coordinate system for skin color detection because hue and saturation values are invariant to intensity change [6]. The skin color model is acquired in the form of a Gaussian function in the hue-saturation plane with respect to each intensity value. We use a look-up table for fast color segmentation. Furthermore, to cope with illumination change, we use a finite number of look-up tables indexed by the quantized intensity value of the estimated face region; the skin color model is thus adapted according to the illumination of the face region.

20.4.1.2 Face Candidate Extraction

Region grouping distinguishes each of the connected components in the binary image obtained by color segmentation. Each region is regarded as a face candidate region. Intuitively, a human face can be modeled by an ellipse: the length of the minor axis is proportional to the face width, and the direction of the major axis indicates the planar rotation angle of the frontal face. The ellipse fit of each region can be calculated from the second-order moments:
M = [ m_20  m_11 ]
    [ m_11  m_02 ]    (20.3)

where x_c = mean(x), y_c = mean(y), m_20 = Σ_x Σ_y (x − x_c)², m_11 = Σ_x Σ_y (x − x_c)(y − y_c), and m_02 = Σ_x Σ_y (y − y_c)². The larger (smaller) eigenvalue of M and its corresponding eigenvector give the length and the direction of the major (minor) axis, respectively. Normally, the ratio of the major-axis length to the minor-axis length of a human face varies within a narrow range; therefore, we can exclude from the face candidates any blobs that exceed a specific ratio value.
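A small sketch of this moment-based ellipse fit is given below, assuming each blob is supplied as an array of pixel coordinates; names are illustrative, and the axis lengths follow from the eigenvalues only up to a common scale factor.

```python
import numpy as np

def fit_ellipse(region_xy):
    """Major/minor axis lengths and orientation of a blob via Eq. 20.3.

    region_xy: (N, 2) array of (x, y) pixel coordinates of one
    connected component.
    """
    centered = region_xy - region_xy.mean(axis=0)   # subtract (xc, yc)
    m20 = np.sum(centered[:, 0] ** 2)
    m02 = np.sum(centered[:, 1] ** 2)
    m11 = np.sum(centered[:, 0] * centered[:, 1])
    M = np.array([[m20, m11], [m11, m02]])
    evals, evecs = np.linalg.eigh(M)                # ascending eigenvalues
    major, minor = np.sqrt(evals[1]), np.sqrt(evals[0])
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))  # major-axis direction
    return major, minor, angle
```

Candidates whose major-to-minor ratio falls outside the narrow human-face range can then be discarded, as described above.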
20.4.1.3 Face Pattern Matching

A face tracker that does not use any facial features cannot guarantee that it is tracking a human face. However, the feature-matching process generally requires too much calculation time. To reduce the calculation time, we fix the face template size at 20 × 20 pixels. Each face candidate carries information about its region: the center position (x_c_i, y_c_i), the planar rotation angle (θ_i), and the width of the face region to be matched (w), where i = 1, 2, …, N and N is the number of face candidates. By observing human faces, we can fix the size of the face region to be matched with the face template at a ratio related to the length of the minor axis of each face candidate.
Fig. 20.6. Finding the orientation and the width of face candidate for face template matching
For each face candidate, preprocessing and face pattern matching are performed in a small neighborhood of the center position, in the direction of the planar rotation angle, with the face candidate's width. In this way, the search area, the number of matches, and the calculation time are remarkably reduced. The preprocessing for face pattern matching is composed of two stages. First, uneven lighting is compensated for by 2D linearization. Then, the gray-level image of each face candidate is histogram-equalized [4]. These preprocessing steps make the pattern matching result more reliable. Figure 20.7 shows how this preprocessing works in unevenly and evenly illuminated environments.
Fig. 20.7. Preprocess for uneven lighting compensation. a selected region in face candidate, b sampled image, c computed illumination by 2D linear fitting, d uneven illumination compensated image, e histogram equalized image of d
Pattern matching is carried out to determine whether the face candidates are true faces. The matching between the face template and a face candidate is performed by inspecting the normalized correlation coefficient.
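For illustration, the normalized correlation coefficient between the 20 × 20 template and an equally sized candidate patch can be computed as follows; this is a sketch, not the authors' code.

```python
import numpy as np

def ncc(patch, template):
    """Normalized correlation coefficient of two equally sized gray images."""
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0
```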
Fig. 20.8. Face tracking results. a tracking only face, b illumination changes, c rotated small face
Table 20.1. Elapsed time

Stage                                 Elapsed Time
Color Segmentation                    1~2 ms
Region Grouping                       << 1 ms
Face Candidate Extraction             << 1 ms
Preprocess + Face Pattern Matching    12~56 ms
Total                                 13~60 ms
We ran our face tracker on a Pentium III computer with a 600 MHz CPU. Figure 20.8 and Table 20.1 show the face detection and tracking results. In the pattern matching, we set the width of the face region to be matched as w = 3/4·|2a_i|, a value chosen by observing human faces.
20.5 Autonomous Navigation

A human face can be detected by the face tracker, and its direction can be estimated. The positions of obstacles such as walls, boxes, chairs, and beds can be found with the twelve ultrasonic sensors and twelve infrared sensors. The robot approaches a person while avoiding the obstacles using the vector field method [1], since the target direction and all relative positions of the obstacles can be measured by the sensors. Suppose the current position of the robot is (x, y) and the current position of the face is (x_goal, y_goal). The N obstacles' positions are (x_0(i), y_0(i)) (i = 1, 2, …, N), so the relative positions of the obstacles are (x_0(i) − x, y_0(i) − y) (i = 1, 2, …, N). The robot is first driven by the attractive force of the goal point. The target direction is acquired by the vision system, and the attractive force is expressed as
f_goal = (1 / ((x_goal − x)² + (y_goal − y)²)) · [ x_goal − x,  y_goal − y ]ᵀ    (20.4)

The repulsive force exerted by the obstacles is

f_obs = − Σ_{i=1}^{N} (1 / ((x_0(i) − x)² + (y_0(i) − y)²)) · [ x_0(i) − x,  y_0(i) − y ]ᵀ    (20.5)

The repulsive force of the obstacles is in inverse proportion to the square of the distance from the robot to the obstacles (Eq. 20.5). We can neglect the repulsive forces of obstacles farther from the robot than a critical radius R_c, because those forces are very small. Therefore the sum of the repulsive forces produced by the M obstacles within the critical radius R_c becomes

f′_obs = − Σ_{i=1}^{M} (1 / ((x′_0(i) − x)² + (y′_0(i) − y)²)) · [ x′_0(i) − x,  y′_0(i) − y ]ᵀ    (20.6)

where (x′_0(m), y′_0(m)) (m = 1, 2, …, M) are the positions of the M obstacles within the critical radius R_c. These forces act on the robot, and the velocity (v_x, v_y) is given by

[ v_x,  v_y ]ᵀ = K_1 f_goal + K_2 f′_obs    (20.7)

where K_1 and K_2 are constants. If K_1 is bigger than K_2, the robot approaches the goal point.
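A minimal sketch of this control law, directly instantiating Eqs. 20.4, 20.6, and 20.7, is shown below; the function name, loop structure, and gain values are illustrative assumptions.

```python
import numpy as np

def velocity_command(robot, goal, obstacles, Rc, K1=1.0, K2=0.5):
    """Velocity (vx, vy) from the vector field method of Eqs. 20.4-20.7.

    robot, goal: (x, y) positions; obstacles: (N, 2) array of obstacle
    positions; Rc: critical radius; K1 > K2 so the goal attraction
    dominates (illustrative gains).
    """
    robot = np.asarray(robot, dtype=float)
    d_goal = np.asarray(goal, dtype=float) - robot
    f_goal = d_goal / np.dot(d_goal, d_goal)     # attractive force (Eq. 20.4)

    f_obs = np.zeros(2)
    for obs in np.asarray(obstacles, dtype=float):
        d = obs - robot
        dist2 = float(np.dot(d, d))
        if dist2 < Rc ** 2:                      # neglect obstacles beyond Rc
            f_obs -= d / dist2                   # repulsive term (Eq. 20.6)

    return K1 * f_goal + K2 * f_obs              # velocity command (Eq. 20.7)
```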
20.6 Conclusion

Do-u-mi is a care robot for elderly persons who are unable to walk without assistance. It provides walking support and many kinds of entertainment functions. It has been developed with a human-friendly design and has an intelligent man-machine interface for user convenience.
Acknowledgement

This research was supported by the Human-friendly Welfare Robot System Engineering Research Center (sponsored by KOSEF) of KAIST.
References

1. Borenstein J, Koren Y (1991) The vector field histogram – fast obstacle avoidance for mobile robots. IEEE Trans. on Robotics and Automation 7(3): 278–288
2. Chiang-Jung P, Harris GJ, Principe JC (1997) A neuromorphic microphone for sound localization. IEEE Conf. on Systems, Man, and Cybernetics, pp 1469–1474
3. Huang J, Supaongprapa T, Terakura I, Ohnishi N, Sugie N (1997) Mobile robot and sound localization. IEEE Conf. on Intelligent Robots and Systems, pp 683–689
4. Rowley HA, Baluja S, Kanade T (1998) Neural network-based face detection. IEEE Trans. on Pattern Analysis and Machine Intelligence 20(1): 23–38
5. Yang J, Waibel A (1996) A real-time face tracker. IEEE Workshop on Applications of Computer Vision, pp 142–147
6. Zarit BD, Super BJ, Quek FKH (1999) Comparison of five color models in skin pixel classification. Proc. International Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, pp 58–63
21 Newly Designed Rehabilitation Robot System for Walking-Aid

Choon-Young Lee, Kap-Ho Seo, Changmok Oh, and Ju-Jang Lee
21.1 Introduction

The increase in the life span of many elderly people means that there is growing demand for aids that support them in daily life [11]. Previous efforts to improve the ability of bedridden people to enjoy daily life have led to devices with special functions, such as a food-serving robotic manipulator equipped with vision and devices for delivering materials to the user [13]. Among daily activities, walking is an essential function for the elderly and the disabled, and walking exercise is also needed to prevent contracture of the lower limbs [24]. The main causes of gait disorder in hemiplegic patients are poor muscular activation [14], poor weight-bearing capability [3, 6, 7], hyperactive stretch reflex, and poor balance [25]. In medical centers and rehabilitation facilities, nurses and physiotherapists assist gait training using conventional weight-support devices, e.g., canes, crutches, and parallel bars, in static conditions to improve muscular activation capability and the sense of balance. Although these efforts improve the general condition of patients, gait deviation from the normal pattern remains after treatment [8].

Nowadays, instead of these traditional treatments, walking rehabilitation systems with body weight support (BWS) operating in dynamic conditions have been proposed [1, 4, 9, 10, 12, 16, 18-21, 23]. A typical method of this kind was proposed by Finch and Barbeau, who had subjects walk on a treadmill at a comfortable speed with their body weight partially unloaded [9]. They investigated the method by examining healthy subjects walking on a treadmill with up to 70% of their body weight supported. Patients who received this therapy significantly increased their independent walking ability and walking speed.

The gait rehabilitation systems proposed so far may be classified into groups according to the technology applied to exert the relief force [10]. From another point of view, we may classify the systems according to their mobility, i.e., systems that move on the ground according to the motion of the subject, and systems that give the subject the effect of walking in place. The former have a mobile base driven by electrical or manual power; in the latter, the patient is mechanically supported in a harness over a treadmill. Most gait rehabilitation systems with BWS train the subject over a treadmill while relieving part of the body weight using an electric motor [1, 21], a pneumatic
device [10], or a hydraulic device [19]. These systems focus on the mechanism of weight relief. The weight-relieving device REHABOT allows subjects to walk on a circular path and has been used for subjects who had difficulties with gait training at parallel bars. However, prolonged training on a circular walking path may cause an abnormal gait pattern, since the force distribution on the two legs is unbalanced. Nemoto et al. developed a power-assisted walking support system to help the elderly stand up, sit down, and walk [18]. Lee developed a prototype gait training system with a pneumatic device for body weight support [16]. Most of these gait training systems focused on the mechanism of BWS and neglected the real mobility of the system, which could increase the effect of rehabilitation. These considerations drove us to develop a new gait training system with a mobile base and a body weight support mechanism. Here we describe two systems for gait rehabilitation: one uses an electric motor to support body weight, and the other adopts a pneumatic actuator. Detailed descriptions are given in the following sections.
21.2 Electric Motor Based Gait Rehabilitation System

21.2.1 System Description

The developed gait rehabilitation system is shown in Fig. 21.1. It consists of three subsystems: a robotic manipulator, a mobile platform, and an ultrasonic sensor system. The robotic manipulator controls the amount of body weight support during training. The mobile platform moves the whole system according to the subject's motion, which is measured with two linear potentiometers. Any objects in the direction of movement are detected by the ultrasonic sensors attached around the front of the system. Detailed descriptions are given in the following subsections.

21.2.1.1 Mobile Base

There are many choices for the driving method of the mobile platform: a car-like design, a mobile base with synchronous driving and steering wheels, and a differential-drive mobile base [26]. We examined these types of driving mechanism to decide which is most suitable for following a human walking path and easiest for implementing a control algorithm [2]. The front-wheel-drive car-like model has a complex mechanical design for the driving and steering gearbox, and its cost of implementation is rather high, although many control algorithms have been developed for car-like driving systems [15]. A synchronous driving and steering mechanism also has a complex structure inside the wheel, but it may approximate a human walking path well, especially the zigzag walking path of patients [27]. However, the total system size with synchronous driving and steering wheels is much bigger than the others, since it requires a mechanical
linkage between the wheels so that they operate synchronously. The noise generated by the mechanical chain during operation is also a serious problem in a gait training system. A differential-drive mechanism uses two independently driven wheels; the moving direction is changed through the difference between the wheels' velocities. Its mechanical architecture is simple and practical to implement, and abundant control algorithms exist for the differential-drive type [22]. The trade-off among maintenance, development cost, and functional issues led us to choose the differential-drive mechanism for the current gait training system, as was also done in Nemoto's system [18]. The current mobile platform has two driving wheels and two castors for stability. The driving wheels are driven by two 165 W AC motors through a reduction gearbox. The speed ranges from 0.0 to 10.0 km/h.
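As a brief illustration of the differential-drive kinematics just described, the following sketch maps wheel speeds to body motion; the symbols are illustrative and not from the chapter.

```python
def body_velocity(v_left, v_right, wheel_base):
    """Forward speed and yaw rate of a differential-drive base.

    v_left, v_right: linear speeds of the two driving wheels [m/s];
    wheel_base: distance between the driving wheels [m].
    """
    v = 0.5 * (v_right + v_left)              # forward speed
    omega = (v_right - v_left) / wheel_base   # turning by the speed difference
    return v, omega
```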
Fig. 21.1. The developed gait rehabilitation system
21.2.1.2 Body Weight Support Mechanism

A one-link robotic manipulator with a rotational joint is mounted on the mobile platform. It supports the user's weight and at the same time stabilizes the user while walking. It is also used to adjust the height of the system for different users during initial operation. The rotational joint incorporates a worm gear that provides a mechanical lock against the large force from the user in case of a swoon. A 150 W DC motor actuates the rotational joint, with the torque enhanced by a reducer with a ratio of 150:1. The robotic manipulator has several sensors to detect the current status of the user. Two linear potentiometers with a 500 mm stroke length are attached to its top; their signals are used to find the user's walking direction and velocity. After low-pass filtering the raw potentiometer signals, we obtain information on the subject's intention of walking. A load-cell force transducer with a maximum capacity of 500 kg gives force feedback to the force control unit.
21.2.1.3 Safety Issue

As a system cooperating with humans, especially patients, a special effort was made to ensure operational safety. Since the main users would be elderly or disabled people, range sensors were needed to handle adverse situations. In this application, ultrasonic sensors (POLAROID 6500) are used to command a "stop-and-go" operation. The simplest collision avoidance strategy was used: the system stops when confronted by an object within a range threshold proportional to the speed, and continues to move when the object is cleared. There is a high risk of falling due to a sudden lack of leg support and loss of balance, since this kind of system is used by patients who usually cannot control their posture well. If a fall is detected from the time history of the load-cell value, the force control algorithm is deactivated and the motion of the manipulator stops while a warning sound is given. On the cover of the system, two emergency stop switches are located on both sides; they may be pushed by the supervisor in a dangerous situation that is not recognized by the system itself. The user can also stop the system during operation by pressing the STOP button in the GUI on the LCD monitor.

21.2.1.4 Operation

There are two operational modes: the training mode and the following mode. In the training mode, the main computer generates the reference trajectories and moving velocity, which are set by the supervisor or the medical doctor; in this mode, the user follows the motion of the system. In the following mode, the mobile base responds to the human motion sensed by the two linear potentiometers. The operation of body weight support is common to both modes. We used an industrial PC with a Celeron 500 MHz CPU as the main controller of the system. It receives sensor values, controls the mobile platform and the robotic manipulator, and stores experimental data during operation. For the control of the mobile base, we implemented a kinematic controller with a PD control algorithm [25].

21.2.2 Experiments

21.2.2.1 Experimental Protocol

To validate the developed system and to gather information for its possible application to patients, we carried out tests on 10 healthy subjects (six males and four females) with a mean age of 37 years, applying the following protocol. All procedures were explained to the subjects, and each subject was habituated to the experimental protocol by walking for 10 minutes with the developed system before data collection. At that time, each subject determined his or her own comfortable walking speed; the mean walking speed was 41.75 m/min (SD = 12.97). In the actual experiment, all subjects walked using the developed system with
0%, 10%, 20%, 30%, and 40% of their body weight supported, at their own comfortable walking speed. Throughout all trials, temporal gait data were collected using a set of foot switches. Heart rate was measured with a COSMED K2 (Italy). All subjects took a 5-minute rest after each stage.
Fig. 21.2. Percentage of double limb support time and single limb support time versus body weight support
21.2.2.2 Results

Experimental results are shown in Figs. 21.2 and 21.3. Figure 21.2 shows the percentages of double limb support (DLS) time and single limb support (SLS) time at different levels of BWS. The percentage of DLS time decreased as the BWS ratio increased, while the percentage of SLS time increased from 34.5% at 0% BWS to 42.5% at 40% BWS. Figure 21.3 shows the trend of heart rate versus the percentage of BWS: as BWS increases, heart rate tends to decrease, which indicates lower energy consumption at higher BWS. These results support the possibility of applying the system to patients as a therapeutic approach for retraining gait.
Fig. 21.3. Heart rate versus body weight support
21.3 Newly Developed Gait Rehabilitation System

21.3.1 System Description

This system consists of two subsystems: a pneumatic actuator for body weight support and a mobile platform. The pneumatic actuator controls the amount of body weight support during training. The mobile platform moves the whole system according to the subject's motion, which is measured by a compass sensor and an accelerometer attached to the user. Detailed descriptions are given in the following subsections.

21.3.1.1 Body Weight Support

The body weight support mechanism consists of a mechanical gear driven by a DC motor and two pneumatic actuators, as shown in Fig. 21.5. The subject is mechanically supported in a harness over the system. The harness is pulled upwards by a steel frame curved around the subject's head. When pressurized air is sent to the cylinders, the frame raises the harness; the air pressure inside the cylinders determines the pull-up force. The pull-up force F is
F = p_0·A + p_1·A    (21.1)
where p_0 is the air pressure of the left cylinder, p_1 is the air pressure of the right cylinder, and A is the piston section area. The total admissible vertical stroke is 20 cm. The adaptation of the system height to the height of the subject is accomplished by the rotational link.
Fig. 21.4. The developed gait rehabilitation system
Fig. 21.5. Pneumatic actuator
21.3.1.2 Electronic Interface

The mobile base and the body weight support are controlled by an industrial computer through an ISA bus. The analog and digital interface is based on
a commercial unit (Advantech PCL-818H). The main computer determines the control signals by sensing the full status of the system. The compressed air is produced by a remote compressor, and the pressure inside the cylinder is regulated by a separate control circuit that senses the output of a pressure transducer and drives the associated valves. The DC motors are driven through commercial PWM amplifiers (Advanced Motion Controls, U.S.A., Model 10A8); analog voltage command signals are fed into each amplifier.

21.3.1.3 A Harness

The harness has been designed to distribute the support force over as large a body surface as possible. It consists of a specially designed corset and auxiliary reinforcements, suspended by means of two straps. As shown in Fig. 21.4, the harness has a corset and adjustable belts.

21.3.2 Control Method

As far as vertical forces are concerned, a walking person may be modeled as a simple body of mass M, subjected to the action of acceleration, gravity, and a vertical supporting force F expressed as a fraction S of body weight [5, 10, 17]:
R_S = M·g·(1 − S) + M·∂²x/∂t²    (21.2)
where R_S is the resulting ground reaction and x is the vertical coordinate, oriented upwards. In practice, it is necessary to add an error term R_e to Eq. 21.2 to account for the effects of the masses, frictions, and strains specific to the technical solution adopted to realize the supporting device:
R_e = ±F_f + K·x + B·∂x/∂t + M_i·∂²x/∂t²    (21.3)
where F_f is the pure friction, K is the stiffness, B is the viscous drag, and M_i is the moving mass. The coefficients K and B are not constant but vary as a function of F. As a quality indicator of the constancy of the support force, it is convenient to use the ratio
E = R_e / (M·g·(1 − S))    (21.4)
A small value of E indicates that the disturbing forces may be neglected compared with the mean value of the residual weight sustained by the subject. Before implementing force control, we analyzed the characteristics of the pneumatic actuator and its amplifier. The static behaviour is well approximated by the linear fit
F = a·V + b    (21.5)
where F is the supporting force, a and b are the coefficients of the linear fit, and V is the command voltage to the amplifier.
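For illustration, the fit of Eq. 21.5 and its inversion can be written as follows; this is a minimal sketch, and the variable names are not from the chapter.

```python
import numpy as np

def fit_static_map(voltages, forces):
    """Least-squares fit of the static response F = a*V + b (Eq. 21.5).

    voltages, forces: measured test points, e.g. the upward and downward
    sweeps of Fig. 21.6 pooled for an averaged fit.
    """
    a, b = np.polyfit(voltages, forces, 1)   # returns slope a, intercept b
    return a, b

def command_for_force(f_desired, a, b):
    """Invert the fit to obtain the amplifier voltage for a desired force."""
    return (f_desired - b) / a
```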
Fig. 21.6. Static response of the developed pneumatic actuator: unloaded force [kgf] versus command voltage [V], showing the test points and linear fits for the upward and downward directions together with the averaged linear fit
Figure 21.6 shows the static response of the pneumatic actuator for upward and downward forces. We also analyzed the frequency response characteristics of the pneumatic actuator by applying a sinusoidal input (Fig. 21.7).
Fig. 21.7. Frequency response of the developed pneumatic actuator
From this result, we concluded that a feed-forward control term must be constructed in addition to the feedback control law to compensate for the delay in the response. Figure 21.8 shows the performance of the force control when the desired force profile varies with a period
of 1 s. Under normal conditions, human walking takes place at a rate of about 1 Hz, so there are load fluctuations at the same rate. The response to the sinusoidal input shows that the mentioned control algorithm achieves a significant improvement in performance.
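One way to realise such a controller is sketched below, assuming the feed-forward term inverts the static map of Eq. 21.5 for the upcoming reference and a proportional term corrects the residual error; the gain and structure are assumptions for illustration, not the authors' published control law.

```python
def force_control_command(f_ref_next, f_ref, f_measured, a, b, kp=0.02):
    """Feed-forward plus proportional feedback voltage command (sketch).

    f_ref_next: desired force one step ahead, used by the feed-forward
    term to pre-compensate the actuator's response delay; a, b come from
    the static fit of Eq. 21.5; kp is an illustrative feedback gain.
    """
    v_ff = (f_ref_next - b) / a           # feed-forward: inverse static map
    v_fb = kp * (f_ref - f_measured)      # feedback: correct remaining error
    return v_ff + v_fb
```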
Fig. 21.8. Response of the pneumatic actuator with the feed-forward controller implemented: force [kgf] versus time [sec], showing the reference input and the force output
21.4 Conclusion

Mobile gait training systems with body weight support have been described in detail. Each system relieves the patient of a prescribed amount of body weight by supporting the subject at the trunk through a specially designed harness and actuators. We used an electric motor in one system and a pneumatic actuator in the other for comparison. These systems have simple architectures, are easy to construct, and are practical for gait rehabilitation. The developed systems move according to the subject's motion while maintaining a constant supporting force on the body. The mobility of the system appears to be important for gait rehabilitation, because the subjects enjoyed using it for real walking.
References

1. Barbeau H, Wainberg M, Finch L (1987) Description of a system for locomotor rehabilitation. Med Biol Eng Comp 25: 341–344
2. Campion G, Bastin G, Dandrea-Novel B (1996) Structural properties and classification of kinematic and dynamic models of wheeled mobile robots. IEEE Transactions on Robotics and Automation 12(1): 47–62
3. Carlsoo S, Dahllof AG, Holm J (1974) Kinematic analysis of the gait in patients with hemiparesis and in patients with intermittent claudication. Scand J Rehabil Med 6: 166–179
4. Chun K, Cho K, Kim B (1999) Influence of body weight unloading on hemiplegic gait. J of Korean Acad of Rehab Med 23(2): 371–376
5. Cavagna GA, Heglund NC, Taylor CR (1977) Mechanical work in terrestrial locomotion: Two basic mechanisms for minimizing energy expenditure. Amer J Physiol 233(5): 379–399
6. Dickstein R, Nissan M, Pillar T, Scheer D (1984) Foot-ground pressure pattern of standing hemiplegic subjects: major characteristics and patterns of improvement. Phys Ther 64: 19–23
7. Eke-Okoro ST, Larsson L (1984) Comparison of the gait of paretic patients with the gait of control subjects carrying a load. Scand J Rehabil Med 16: 151–158
8. Finch L, Barbeau H (1985) Hemiplegic gait: New treatment strategies. Physiother Can 38: 36–41
9. Finch L, Barbeau H, Arsenault B (1991) Influence of body weight support on normal human gait: the development of a gait retraining strategy. Phys Ther 71: 842–856
10. Gazzani F, Fadda A, Torre M, Macellari V (2000) WARD: A pneumatic system for body weight relief in gait rehabilitation. IEEE Transactions on Rehabilitation Engineering 8(4): 506–513
11. Hashino S (1996) Daily life support robot. J of Robot Soc of Japan 14(5): 2–6
12. Hesse S, Bertelt C, Schaffrin A, Malezic M, Mauritz KH (1994) Restoration of gait in non ambulatory hemiparetic patients by treadmill training with partial body-weight support. Arch Phys Med Rehabil 75: 1087–1093
13. Ide T, Siddiqi N, Akamatsu N (1993) Expectations for medical and healthcare robotics. Adv Robot 7(2): 189–200
14. Knutsson E, Richards C (1979) Different types of disturbed motor control in gait of hemiparetic patients. Brain 102: 405–430
15. Lafferriere G, Sussmann H (1991) Motion planning for controllable systems without drift. Proc IEEE Int Conf Robotics and Automation, pp 1148–1153
16. Lee C, Lee J (2000) Walking-support robot system for walking rehabilitation: design and control. J of Artificial Life and Robotics 4(4): 206–211
17. Lewis FL, Abdallah CT, Dawson DM (1993) Control of robot manipulators. Macmillan, New York
18. Nemoto Y, Egawa S, Koseki A, Hattori S, Ishii T, Fujie M (1998) Power-assisted walking support system for the elderly. 20th Ann Int Conf of the IEEE Eng in Med and Biol Soc 10(5), pp 2693–2695
19. Norman KE, Pepin A, Ladouceur M, Barbeau H (1995) A treadmill apparatus and harness support for evaluation and rehabilitation of gait. Arch Phys Med Rehab 76: 772–778
20. Pillar T, Dickstein R, Smolinski Z (1991) Walking reeducation with partial relief of body weight in rehabilitation of patients with locomotor disabilities. J Reh Res and Dev 28(4): 47–52
21. Tani T, Sakai A, Fujimoto T, Fujie M (1997) Walk training system: Improvement of the ability of postural control. Proc IEEE/RSJ Int Conf on Intelligent Robots and Systems, pp 627–631
22. Umeda Y, Yakoh T (2002) Configuration and readhesion control for a mobile robot with external sensors. IEEE Tr. on Industrial Electronics 49(1): 241–247
23. Visintin M, Barbeau H (1989) The effects of body weight support on the locomotor pattern of spastic paretic patients. Can J Neurol Sci 16: 315–325
24. Walfson L, Whipple R, Amerman P, Kleinberg A (1986) Stressing the postural response: A quantitative method for testing balance. J Amer Geriat Soc 34: 845–850
25. Winstein CJ, Gardner ER, McNeal DR (1989) Standing balance training: effect on balance and locomotion in hemiparetic adults. Arch Phys Med Rehabil 70: 755–762
26. Zhao Y, BeMent SL (1992) Kinematics, dynamics and control of wheeled mobile robots. Proc IEEE Int Conf on Robotics and Automation, pp 91–96
27. Zhao Y, BeMent SL (1991) Experimental and mathematical studies of obstacle avoidance in mobile-robot navigation through unknown environment. Robot System Division, CRIM, The University of Michigan, Tech. Rep. RSD-TR-1-91
22 A Gentle/S Approach to Robot Assisted Neuro-Rehabilitation

Rui Loureiro, Farshid Amirabdollahian, and William Harwin
Abstract

Movement disorders (MD) comprise a group of neurological disorders that involve the neuromotor systems. MD can result in several abnormalities, ranging from an inability to move to severe constant and excessive movements. Stroke is a leading cause of disability, largely affecting older people worldwide. Traditional treatments rely on physiotherapy that is partially based on theory and heavily reliant on the therapist's training and past experience. The lack of evidence proving that one treatment is more effective than another makes the rehabilitation of stroke patients a difficult task. Upper limb (UL) motor re-learning and recovery levels tend to improve with intensive physiotherapy delivery. The need for conclusive evidence supporting one method over another, and the need to stimulate the stroke patient, point to the shortcomings of traditional methods: they lack high motivational content as well as objective, standardised analytical methods for evaluating a patient's performance and assessing therapy effectiveness. Despite all the advances in machine-mediated therapies, there is still a need to improve therapy tools. This chapter describes a new approach to robot assisted neuro-rehabilitation for upper limb rehabilitation. Gentle/S introduces a new approach to integrating appropriate haptic technologies with high-quality virtual environments, so as to deliver challenging and meaningful therapies to people with upper limb impairment as a consequence of stroke. The described approach can enhance traditional therapy tools, provide therapy "on demand", and present accurate, objective measurements of a patient's progress. Our recent studies suggest that the use of tele-presence and VR-based systems can potentially motivate patients to exercise for longer periods of time. Two identical prototypes have undergone extended clinical trials in the UK and Ireland with a cohort of 30 stroke subjects. From the lessons learnt with the Gentle/S approach, it is also clear that high-quality therapy devices of this nature have a role in the future delivery of stroke rehabilitation, and that machine-mediated therapies should be available to the patient and his/her clinical team from initial hospital admission through to long-term placement in the patient's home following hospital discharge.
22.1 Background to Stroke

Stroke is a leading cause of disability in the UK. The UK incidence rate is between 1.25 and 1.8 per 1000 per annum, with the rate higher in Scotland and higher for men [1, 2]. The incidence rate in Germany is comparable to the UK, and in France it is slightly lower. Current assumptions are that two thirds of individuals will survive a stroke; of these, about a third will have moderate disabilities and a third will have severe disabilities. Better management of acute strokes, however, means that survival is now closer to 80%. Total costs are £2.3 billion per year [3], split between hospitalisation and long-term care costs. In the USA, the cost of the first 90 days of care per patient following a stroke is estimated at $15,000 [4]. Given the costs of stroke rehabilitation, any intervention that either reduces the hospitalisation costs or improves the outcome for the patient, and hence the long-term costs of the disability, will have a beneficial impact. Machine-mediated physiotherapies have the potential to do both, but they are novel, relatively underdeveloped, and untested.

Functional imaging has revealed neural plasticity and neural regeneration during stroke recovery and has led to a search for treatments that can enrich sensorimotor experience in a way that promotes more recovery after stroke. Training has traditionally been therapist dependent and may not allow many movement repetitions due to time or physical constraints. Provision of physiotherapy improves function in stroke patients, and more intensive physiotherapy is thought to reduce fatality and enhance recovery [5]. Likewise, providing high motivation is seen as a prerequisite for relearning motor skills [6]. Again, there is a strong logic for using machine-mediated therapies. Although they will not replace existing physiotherapy services, technologies that assist the therapist in planning and delivering a patient's rehabilitation will revolutionise current practices.

Most work on machine-mediated therapies is ongoing in the USA. Several projects are based at the Veterans Administration Research & Development centre in Palo Alto, California, including a therapy principle termed MIME (Mirror Image Motion Enabler), where activities of the sound upper limb are used to pattern the affected limb. One example is the SEAT project, where a car steering wheel is instrumented and controlled so that its operation can require any amount of assistance from the affected arm, from none to all. In the latter case all forces must come from the paretic arm; the good arm provides no assistance in steering. High motivation is provided by giving the subject a driving task, a particularly important skill for American subjects. This project has yet to publish clinical results [7].

On the East Coast of the USA, the principal group is based at the Massachusetts Institute of Technology (MIT) and the Burke Rehabilitation Hospital. The therapies are based on a robot made at MIT that allows the subject to exercise against programmable stiffness and damping in two degrees of freedom, that is, movements along the surface of a table. A randomised control trial (RCT) was conducted with 96 patients between 1995 and 1999 and showed that the Fugl-Meyer elbow and shoulder scores improved in the treatment group. The exposure to the placebo was significantly less than the exposure to the robot, and it is therefore not
possible to attribute the recovery to robot mediation rather than to the treatment subjects simply having received more therapy. Likewise, movements were restricted to the plane, had a limited visual component, and made minimal provision for engaging the subject in the treatment; nevertheless, it is encouraging that therapies delivered by robot are beneficial to the patient [8]. In 2000 the European Community funded the three-year Gentle/S project to develop machine-mediated therapies for the neurorehabilitation of people with stroke. This project resulted in three prototype machines that are currently at Trinity College Dublin, the University of Reading/Battle Hospital, and the University of Ljubljana. The primary purpose of the Gentle/S research project was to develop the technology and demonstrate it on stroke subjects who were considered clinically stable (between 3 months and 4 years post stroke).
22.2 Gentle/S

Gentle/S is an innovative method to both improve the quality of treatment and reduce costs. Measuring the quality of recovery requires objective measures; like many other clinical measures, functional recovery is presently measured using subjective scales. Because the therapies can be machine delivered, Gentle/S allows a large quantity of data to be collected, and from this data objective measures should be developed. These measures must, however, have clinical relevance, be based on fundamental science, and be able to predict progress. A large part of the work in Gentle/S is researching these issues. The impact of any novel treatment must consider both the social cost and the economic cost, and early work within this research will identify the economic implications of robot-delivered physiotherapies as well as the social consequences.

For people with upper limb hemiplegia it is possible to consider unilateral therapies in which intelligent assistance from a robot provides varying degrees of assisted movement for the affected side. That the technology has advanced to a state where a robot can be considered opens up some exciting possibilities. Robotic devices are capable of faster reaction times than a human, which broadens the range of possible treatments. Furthermore, the sensing that already exists within the robot can provide a wealth of information about the underlying pathology. Evidence indicates that where a patient is motivated and premeditates their movements, the recovery is more effective, and intelligent machines allow broad scope to investigate these conditions. Finally, a robot can be infinitely patient, is available on demand, can be used at home or in rural clinics, and allows far greater flexibility and intensity of treatment than is currently possible. By basing the therapy on haptic interfaces, the Gentle/S prototypes ensure that the underlying technologies have been designed to be safe and to match the forces and ranges of movement appropriate to humans.
22.2.1 Assumptions

Some studies have shown that repetitive task-oriented movements are of therapeutic benefit. With the use of haptics and VR technology, patient attention and motivation can be enhanced by means of 'Active Feedback', which further facilitates motor recovery through brain plasticity [9, 10]. Four different levels of 'Active Feedback' have been identified: visual, haptic, auditory, and performance cues. The creation of active agents and biofeedback can be a way of implementing and integrating active feedback in a neuro-rehabilitation robotic system.

• Visual cues - In some cases following a stroke, hemiplegic subjects tend to be confused about what they see [11]. The brain needs to be re-educated to associate (for example) colours and objects. Because of this need for cognitive relearning, it is important that visual cues be simple yet stimulating. Visual cues can range from real tasks, based on the ones used in occupational therapy sessions, to realistic and accurate goal-oriented 3D computer environments. This can be anything from a virtual room (for example a virtual kitchen or museum) to an interactive game.
• Haptic cues - Kinaesthetic feedback can help to discriminate physical properties of virtual objects, such as geometry. It can also be used to deliver physical therapy to a human subject through haptic interfaces. The force delivered in this way can be very therapeutic, depending on how it is applied to the human muscular and skeletal systems. It undoubtedly plays an important role when manipulating objects, either virtual or real. In conjunction with interactive virtual and augmented tasks, it can simulate the shape of a virtual pen or bingo card, or the friction/drag when writing on the virtual bingo card.
• Auditory cues - In some cases it may be appropriate to give encouraging words and sounds while the person is trying to perform a task, and congratulatory or consolatory words on task completion.
• Performance cues - In a haptic stroke rehabilitation system, the results of previous tasks can be displayed, indicating the errors committed and the level of help given in completing the task. These performance cues should be designed to give constructive feedback.

A robotic/haptic rehabilitation system should be ergonomically comfortable, the therapy should be enjoyable, and the system should be considered trustworthy by the patient. Such a concept can be achieved by introducing a personality into the system, such as a character (wizard) that interacts with the patient using the different cues identified above. Different wizards can be implemented for different personalities; these can be defined and assigned to the patient by the therapist. As an example, consider a patient performing a simple exercise such as reaching for an object in a virtual world. With the aid of a good sensor system, analytical measures can be obtained to identify whether the patient is struggling to reach the target. Taking this into account, the wizard can offer encouragement to the patient to finish the movement, or offer a score at the end of the session.
22.3 Clinical Prototype for Machine Mediated Neurorehabilitation

As a result of an extensive user needs assessment, we produced two clinical prototypes for evaluation. Several design variants were considered, and a working prototype was installed in the Adelaide and Meath Hospital in Dublin in March 2001. An early decision was to use haptic interface technologies, rather than industrial robotics technologies, as the cornerstone of the work. An investigation of the market showed the HapticMaster arm (from FCS Robotics)¹ to be the most suitable baseline technology. This haptic interface has only three active degrees of freedom and is designed for grasping; it was therefore necessary to design three additional passive degrees of freedom into the mechanism and to design an arm support mechanism that allows the patient to attach to the haptic device.

An early system analysis showed that it is of prime importance to allow for the possibility of shoulder subluxation. A variety of solutions were proposed to provide a shoulder support unit that could reduce the risk of the shoulder dislocating. A functional model was developed by the Mechatronics group at TNO, showing the principle of a suitable mechanism. This was evaluated by the North Staffordshire Young Stroke group, resulting in several modifications. The University of Newcastle Centre for Rehabilitation Engineering Research continued the development of a prototype to the point where it could be incorporated into the Rapid Clinical Prototype (Fig. 22.3). Several mechanical frames were developed by the UK company Rehab Robotics, allowing several configurations to be evaluated. The evolution of the design is apparent when comparing the initial sketches of the prototype undergoing evaluation at the Adelaide and Meath Hospital (AMH), Dublin, with the prototypes installed at Reading/Battle Hospital (RBH) and the University of Ljubljana (UL), shown in Fig. 22.1. The Dublin version has two chairs to allow both left- and right-sided hemiplegic patients to use the prototype without requiring substantial reorganization of the workspace. In the other versions (Fig. 22.1), a wheelchair can be used; in this case left/right-side configurations are made possible by a table rotation mechanism. In all versions, the patient is seated with the arm positioned in an elbow orthosis suspended from the overhead frame. This eliminates the effects of gravity and addresses the problem of shoulder subluxation. The wrist is placed in a wrist orthosis connected to the robot arm using a quick-release mechanism.

Software for the rapid clinical prototype was developed with easy human-computer interaction in mind. This resulted in a distributed architecture in which it is possible to specify new exercises without too much effort; furthermore, exercises can be designed in simple as well as complex virtual worlds. One disadvantage of the HapticMaster was that it was equipped with a stylus end-effector: interaction between the user and the haptic interface was only possible by grasping the stylus, whereas hemiplegic patients with severe stroke normally have difficulty grasping as well as with other arm functions. Thus it was
¹ http://www.fcs-cs.com/robotics/
necessary to design a mechanism that would allow patients with flaccid hands to interact with the robotic arm. Different ideas were presented, and after brainstorming meetings a gimbal with three passive rotational degrees of freedom was designed. The gimbal rotates about its X, Y and Z axes (Fig. 22.2). To describe the orientation resulting from these rotations, they are termed the conventional roll, pitch and yaw angles, respectively [12]. The gimbal's coordinate frame is set with the same orientation as the HapticMaster (HM) coordinate frame, allowing for fewer coordinate frame conversions.
Fig. 22.1. The Gentle/S prototype system, showing the overall frame, an elbow orthosis suspended from the overhead frame, the HapticMaster with gimbals, the exercise table, computer screen and the wheelchair
Fig. 22.2. Gimbal end-effector, a Subject’s hand attached to the gimbal via wrist orthosis and magnetic connector, b three passive degrees of freedom provided by the gimbal - roll, pitch and yaw
The gimbal end-effector improved the performance of the HM system in several ways. First, it provided a means of connecting the patient's arm to the haptic interface without grasping: the connection is made using an orthosis at the wrist, which mates with the gimbal via a magnetic quick-attach/quick-release mechanism. This provided the second advantage of the gimbal, since releasing the patient quickly was one of the safety issues considered. The third advantage was that the gimbal allowed free movement of the hand, providing two degrees of freedom for hand movement at the wrist (roll and yaw); it also allowed for pronation and supination of the elbow (pitch). These all added to the capabilities of the interface. The final advantage of the gimbal was that it was equipped with a position sensor on each degree of freedom; the data from these position transducers allow the exact orientation of the arm to be calculated.

22.3.1 Antigravity Mechanism for the Shoulder and Elbow

The shoulder support mechanism used for the prototype consists of two connected cuffs (for the upper arm and forearm), which are hooked up to the shoulder support mechanism overhead. Adjustable constant-force springs (tensators) compensate for the weight of the arm: the mechanism applies two vertical compensating forces (F_comp1 and F_comp2) near the centres of gravity, equal to the weights of the respective arm segments (Fig. 22.3). The two cuffs are connected with a hinge that transfers forces from one cuff to the other; the hinge is also equipped with a sensor to measure elbow flexion/extension. The tensators on the overhead frame are free to slide in the transverse (horizontal) plane using a passive 2-DOF mechanism. This allows the arm to move freely while the effects of gravity are almost eliminated.
Fig. 22.3. Shoulder support mechanism. (Figure adapted with permission from Michield Oderwald, TNO-TPD, The Netherlands)
22.3.2 Exercises & Movement Guidance

In the current software versions, three different virtual environments can be used (Fig. 22.4):

1. Empty room – A simple environment that represents the haptic interface workspace and is intended to give early post-stroke subjects an awareness of physical space and movement (Fig. 22.4a).
2. Real room – An environment that resembles as closely as possible what the patient sees on the table in the real world. The mat with four different shapes on the table (Figs. 22.5 and 22.6) is represented in the 3D graphical environment (Fig. 22.4b). This environment was developed to help the user discriminate the third dimension, which is represented on the monitor's 2D screen.
3. Detail room – A highly detailed 3D environment of a room comprising a table, several objects (a book, a can of soft drink), a portrait of a baby, a window, curtains, etc. (Fig. 22.4c).

To allow the user to navigate and interact with a virtual/real task, several mathematical models have been implemented [13, 14, 15] as a control strategy capable of correcting the patient's movement. An operation button on the keypad must be pressed continuously by the user for the duration of the movement. Since movement control was defined to be between two points, a new
concept was introduced. The 'Bead Pathway' concept assumes that movement takes place between a start point and an end point, and that its behaviour is similar to that of a bead on a wire: it can only move along its pathway. To achieve this behaviour, the end-effector is connected by a virtual spring and damper (Fig. 22.7) to a bead that is constrained to move along a 'wire' pathway defining both the path and the velocity profile of the movement. Deviations from the movement profile are permitted but constrained, depending on the restoring force of the spring and the associated damper. Different levels of guidance and correction can be programmed for subjects at different recovery levels. For a patient in the early days after stroke, more help is needed to move along the pathway; this behaviour can be achieved by implementing a velocity profile for the bead on the pathway and a suitable spring-damper combination for more assistance. For a patient who has recovered more motor function, a different velocity profile along the pathway and a different spring-damper setting may be needed.
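As a rough illustration of this guidance scheme, the sketch below pairs a minimum-jerk bead trajectory (the polynomial family mentioned in Sect. 22.3.3) with a spring-damper restoring force. All names, gain values, and the specific formulation are assumptions for illustration, not the Gentle/S implementation.

```python
import numpy as np

def bead_position(p_start, p_end, t, T):
    """Minimum-jerk position of the bead between two points (0 <= t <= T).

    Assumes the standard minimum-jerk blend; the actual velocity profiles
    programmed per patient may differ.
    """
    tau = np.clip(t / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5   # smooth 0-to-1 blend
    return p_start + (p_end - p_start) * s

def guidance_force(x, v, bead_x, bead_v, k=200.0, b=20.0):
    """Virtual spring-damper force pulling the end-effector toward the bead.

    k and b are per-patient gains (illustrative values): stiffer settings
    give more assistance and allow less deviation from the pathway.
    """
    return k * (bead_x - x) + b * (bead_v - v)
```

Softer gains let a recovering patient deviate more from the pathway, while a high-gain setting with the bead driven along its velocity profile approximates fully guided movement.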
Fig. 22.4. Three different virtual environments can be used. a Empty room, b real room, c detail room
Fig. 22.5. Subject using the Gentle/S system. Arm is positioned in an elbow orthosis suspended from the overhead frame and connected to the robot using a wrist-orthosis that is secured using a quick release magnetic mechanism.
Fig. 22.6. User’s view of a reaching exercise
Fig. 22.7. Spring and damper combination–Bead Pathway
Figure 22.8 shows the representation of the pathway. The left part of the figure shows the yellow start point (light shaded), the blue end point (dark shaded), and the desired trajectory between these two points (the pathway). The choice of these colours addresses the problem of colour blindness with some primary colours, such as green and red, that was noticed in some subjects in earlier studies [16]. In the right part of Fig. 22.8, shadows and lines connecting each shadow to its object help the user perceive the depth and height of the positioned points with respect to the table on the screen.
Fig. 22.8. Representation of a movement trajectory in the virtual environment
22.3.3 Different Therapy Modes

Using the minimum jerk polynomials, three different therapy modes are implemented on the Gentle/S system:

22.3.3.1 Patient Passive Mode

The Patient Passive mode was the first therapy mode implemented. As the patient lacks the power to initiate the movement and remains passive, the haptic interface moves the arm along the pre-defined path. When the patient's arm reaches the target, depending on the exercise selected, the movement can be reversed back to the start position or continued towards the next defined position.

22.3.3.2 Patient Active-Assisted Mode

The second mode is the Patient Active-Assisted mode. In this mode, the haptic interface starts moving as soon as the patient initiates a movement in the direction of the pathway: it initiates the movement when the user applies sufficient force towards the goal. After the initiation, the haptic interface assists the user in reaching the end point.

22.3.3.3 Patient Active Mode

The third mode is the Bead-Pathway (ratchet) mode, or Active mode. The user has unlimited time to finish the task. This mode provides a unidirectional movement in which the amount of deviation can be controlled by changing the spring-damper coefficients. As in the previous mode, the user initiates the movement. The haptic interface stays passive until the user deviates from the predefined path, in which case the spring-damper combination encourages the patient to return to the pathway. The operation ends on reaching the end point or on releasing the operation button. Upon arrival at the end point, it is up to the user to continue the same movement back to the start point or to a new point, or to end the whole session.
22.4 Clinical Trials

A pilot study was carried out in the spring of 2001, and a principal study was conducted from autumn 2001 to autumn 2002. The choice of sites in both the UK and Ireland gave the study access to a greater number of subjects (30 in total) for inclusion in the clinical trials. (In Dublin the studies were conducted at the Adelaide & Meath Hospital, a teaching hospital of Trinity College, Dublin, and in the UK at the Battle Hospital in Reading.) Some of the initial results of the pilot study are published in [16], with more detailed results of the principal clinical trial published in [17, 18]. Results from the initial pilot studies showed that the majority of the subjects were enthusiastic
about the use of visual and haptic cues. The trial suggested that the system as a whole was able to motivate people with hemiplegia resulting from stroke and to encourage them to participate and exercise more. The pilot study was used to evaluate the level of forces that should be imposed by the virtual springs and dampers and the subjects' difficulties in interacting with those forces; it also assessed the level of interaction with the three virtual rooms and the consequent motivation. Encouragingly, 7 of the 8 patients in the pilot stopped exercising because of fatigue rather than boredom, confirming the design objective of providing motivating therapies.

The principal study was conducted at the Adelaide & Meath Hospital (AMH) centre (19 subjects) and the Battle Hospital (BH) centre (11 subjects). In both centres (Table 22.1), subjects were divided into two randomised groups, ABC (AMH = 10, BH = 6) and ACB (AMH = 9, BH = 5), and each group went through three phases, each phase lasting 9 trial sessions over three weeks. In phase A the subject was assessed using validated outcome measures in order to establish the underlying baseline. At the start of each of the next two phases, the subject's paretic limb was first assessed using selected outcome measures. Phase B was Robot Mediated Therapy (RMT) customised for each individual. During phase C the subject's paretic limb was suspended using sling suspension (SS) techniques.
Table 22.1. Gentle/S randomized clinical trial

Weeks        1  2  3              4  5  6           7  8  9           --
Sessions     1 ... 9              10 ... 18         19 ... 27         28
ABC Group    Baseline Phase A     RMT Phase B       SS Phase C        Assessment
ACB Group    Baseline Phase A     SS Phase C        RMT Phase B       Assessment
22.4.1 Outcome Measures

Validated outcome measures were used at both centres. The measures were either performed before each trial session (TS), measuring the impairment of the paretic arm, or performed every three weeks at the start of each phase (PH), to monitor the impairment and disability of the patient. The TS measurements were the upper limb section of the Fugl-Meyer (FM), the Motor Assessment Scale (MAS), and goniometry for elbow and shoulder (G). A total of 27 measures were made over the 9-week trial period for each assessment protocol. From these measures, as they were produced in each session, the slope of the regression line (SRL) was calculated using a least-squares estimate. The slope was calculated for each phase of the trial and for each of the three outcome measures. The sum of the scores was also calculated and used within the data analysis.
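As a sketch of the SRL computation (the per-session scores below are invented for illustration, not trial data):

```python
import numpy as np

# Hypothetical Fugl-Meyer scores for one subject over the 9 sessions of a phase
sessions = np.arange(1, 10)
fm_scores = np.array([21, 22, 22, 24, 25, 25, 27, 28, 29])

# Slope of the regression line (SRL): least-squares fit of score vs. session
slope, intercept = np.polyfit(sessions, fm_scores, deg=1)
print(f"SRL = {slope:.2f} score points per session")
```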
The PH measurement consisted of a total of 9 outcome measures to evaluate the patient's quality of life and disability. Each patient was assessed at the beginning of each trial phase (i.e. session 1 for the first phase, session 10 for the second phase, session 19 for the third phase). All the outcome measures, TS and PH, were also measured at a final assessment session (session 28). In this session the patient did not receive any treatment.

22.4.2 Data Analysis and Statistical Methodology

Three different statistical tests, the two-sample t-test, the paired t-test and the general linear model, were used to analyse the clinical data from Battle Hospital, Reading. These tests were selected to investigate the differences between the baselines of the two intervention groups (ABC and ACB), as well as the differences between the phases within each individual group. The Dublin centre used an alternative analysis, and work is in progress to draw the two methods together to allow both comparison and statistical analysis of a larger number of subjects. To investigate the differences between the ABC and ACB approaches, a two-sample t-test was used, with the grouping (ABC or ACB) as the grouping variable. The test was mainly used to identify differences between the baselines (phase A) of the ABC and ACB groups. Because such differences existed, the second test (the paired t-test) was used to look into each group individually. This test compares the differences between phases of the trial. The paired t-test was used to compare the baseline (phase A) with the subsequent phase (phase B for the ABC group and phase C for the ACB group). This allowed us to monitor the effects of the intervention and distinguish them from any late recovery. It was also used to assess the effect of the intervention phase (RMT, or phase B) by comparing it with the control phase (SS, or phase C) for each group. This identifies whether RMT was of any benefit to our patients. The use of the paired t-test has its disadvantages: when the paired t-test compares two groups of data, the calculation is based on the differences between the mean values of the samples in each group, which is valid only when the data within each group are similar enough to be represented by their mean. In our case, different patients in each group had different clinical outcomes, so there was a possibility that their samples could not be represented by their mean values. To address this and to support the results obtained from the paired t-test, a third model, a general linear model (GLM), was used, with the sum of each score (FM and MAS) as input. The advantage of the general linear model is that the differences between patients are considered in the model, as well as the difference between the intervention phase and the control phase. In conjunction with these tests, different graphs were plotted to allow easy comparison between the two groups and their different phases of the trial.
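A minimal sketch of the two t-test comparisons described above, using SciPy (the slope values below are hypothetical, not trial data):

```python
import numpy as np
from scipy import stats

# Hypothetical phase-A baseline SRL values for the two groups
abc_baseline = np.array([0.10, 0.25, 0.05, 0.30, 0.15, 0.20])
acb_baseline = np.array([0.40, 0.35, 0.50, 0.20, 0.45])

# Two-sample t-test: do the ABC and ACB baselines differ?
t2, p2 = stats.ttest_ind(abc_baseline, acb_baseline)

# Paired t-test: within the ABC group, does the RMT phase differ from baseline?
abc_rmt = np.array([0.30, 0.45, 0.20, 0.50, 0.40, 0.35])
tp, pp = stats.ttest_rel(abc_rmt, abc_baseline)
print(f"between-group p = {p2:.3f}, within-group (A vs B) p = {pp:.3f}")
```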
22.4.3 Results

Further analysis of the results is necessary since, up to this point, only a subset of the measures has been considered and no attempt has been made to compare results from the two centres. A non-parametric analysis is also still needed, as the preliminary analysis assumed all data were parametric, which is clearly not the case. A valid criticism of the study is that it lacks statistical power, but this might be expected since it was not possible to calculate the effect size a priori. Based on our preliminary analysis, a subsequent RCT study should plan for group sizes of about 30 subjects. It was expected that both the treatment (robot mediated therapy) and the control (sling suspension) would have a positive effect on the measures. The analysis therefore looked at the trends in recovery, in particular the gradients of the data over the treatment (RMT) or control (SS) phases of the study. Both centres showed that there was recovery during the baseline, RMT and SS phases of the study. The data collected in Dublin showed, for the ABC group of patients, a result at the 95% significance level that RMT affected the Fugl-Meyer upper extremity scores and the active range of motion for shoulder flexion and elbow extension. Results at the same significance level were found for the ACB group, showing that RMT made a considerable difference to the shoulder range of movement in abduction and to the Motor Assessment Scale (MAS) scores [17]. Results for the smaller study in Reading were inconclusive with regard to the measures for the ABC group, but for the ACB group there were results significant at the 95% level for the motricity index and the Fugl-Meyer scores. It is evident that with such small numbers there is a masking effect in the data, and the next stage in the analysis (following a non-parametric analysis) is to carry out a series of case studies, since the early evidence is that patients who already have good functional arm movements benefit less from RMT than patients who have poor initial arm movements (patients in the former category appear to benefit equally from SS and RMT). The primary purpose of the Gentle/S research project was to develop the technology and demonstrate it on stroke subjects who were considered clinically stable (between 3 months and 4 years post stroke); however, the results of the clinical trial are sufficiently encouraging and show that machine mediated therapies appear to be effective, in particular for more severely disabled subjects. Further work is needed both on developing the technologies and on developing the therapies. When interpreting the initial clinical results it should be remembered that the subjects were clinically stable, so no further spontaneous recovery was to be expected, and that the treatment was severely limited, amounting to only 9 hours of exposure to robot mediated therapies.
22.5 Conclusions

In this chapter we have described the Gentle/S approach to upper limb neurorehabilitation for patients affected by stroke. The system is based on VR and haptic technologies, and particular attention during its development was given to human-computer (system) interaction. The system provides an easy way to engage patients in attractive, challenging and motivating task-oriented therapy sessions, and has demonstrated that it can do so for long periods of time without patients becoming bored. The clinical trials, with a total of 30 hemiplegic stroke subjects (3 months to 4 years post stroke), revealed that RMT appears to be effective for more disabled subjects. It is clear that this type of technology has the potential to revolutionise the way hospitals operate, and hence reduce the cost of stroke rehabilitation, by allowing therapists and clinicians to manage a larger number of patients in the same amount of time. We are confident that the recovery time can be reduced if such therapies are administered right from the start, in the acute phase of stroke. Further work is still needed to evaluate this approach by increasing the number of subjects exposed to such therapies, and to make the technology widely available, from its usage in hospitals to the patient's home.
Acknowledgements

The work presented in this paper has been carried out with financial support from the Commission of the European Union, Framework 5, specific RTD programme "Quality of Life and Management of Living Resources", QLK6-1999-02282, "GENTLE/S – Robotic assistance in neuro and motor rehabilitation". It does not necessarily reflect the Commission's views and in no way anticipates the Commission's future policy in this area. We are grateful to all our colleagues in the Gentle/S consortium (University of Reading, UK; Rehab Robotics, UK; Zenon, Greece; Virgo, Greece; University of Staffordshire, UK; University of Ljubljana, Slovenia; Trinity College Dublin, Ireland; TNO-TPD, Netherlands; University of Newcastle, UK) for their ongoing commitment to this work. Many thanks to Mr. Paul Hawkins for providing the drawing used in Fig. 22.1, and to the Young Stroke Association at Stoke-on-Trent in the UK for their invaluable input to this project. Last but not least, a note of appreciation to the Battle Hospital for their clinical input. Gentle/S website: http://www.gentle.rdg.ac.uk
References

1. Stewart JA, Dundas R, Howard RS, Rudd AG, Wolfe CDA (1999) Ethnic differences in incidence of stroke: prospective study with stroke register. British Medical Journal (BMJ) 318: 967–971
2. (2002) Section 9: Stroke. In: Scottish Health Statistics 2000, ISD Scotland national statistics release. Web resource: http://www.show.scot.nhs.uk/isd/Scottish_Health_Statistics/SHS2000/C9.pdf (Accessed on 06-02-04)
3. (2002) Stroke – facts and figures statistics. UK National Stroke Association. Web resource: http://www.stroke.org.uk/noticeboard/facts.htm (Accessed on 06-02-04)
4. All about stroke – National Stroke Association, USA. Web resource: http://209.107.44.93/NationalStroke/AllAboutStroke/default.htm (Accessed on 06-02-04)
5. Langhorne PB, Williams B, Gilchrist W, Howie K (1993) Do stroke units save lives? Lancet 342: 395–398
6. Carr JH, Shepherd RB (1987) A motor relearning programme for stroke, 2nd edition. Butterworth Heinemann, Oxford
7. Johnson MJ, Van der Loos HFM, Burgar CG, Leifer LJ (1999) Driver's SEAT: simulation environment for arm therapy. In: Proc. 6th Int. Conf. Rehab. Robotics, ICORR'99, Stanford, CA, USA, pp 227–234
8. Volpe BT, Krebs HI, Hogan N (2001) Is robot-aided sensorimotor training in stroke rehabilitation a realistic option? Current Opinion in Neurology 14(6): 745–752
9. Butefisch C, Hummelsheim H, Denzler P, Mauritz KH (1995) Repetitive training of isolated movements improves the outcome of motor rehabilitation of the centrally paretic hand. Journal of the Neurological Sciences 130: 59–68
10. Hesse S, Bertelt C, Jahnke MT, Schaffrin A, Baake P, Malezic M, Mauritz KH (1995) Treadmill training with partial body weight support compared with physiotherapy in non-ambulatory hemiparetic patients. Arch Phys Med Rehabil 75(10): 1087–1093
11. Westcott P (2000) Stroke – questions and answers. The Stroke Association, Stroke House, Whitecross Street, London
12. Craig JJ (1989) Introduction to robotics: mechanics and control. Addison-Wesley Publishing Company
13. Amirabdollahian F, Loureiro R, Driessen B, Harwin W (2001) Error correction movement for machine assisted stroke rehabilitation. In: M. Mokhtari (Ed.) Proceedings 7th International Conference on Rehabilitation Robotics (ICORR 2001), Integration of Assistive Technology in the Information Age, Assistive Technology Research Series, vol 9. IOS Press, pp 60–65
14. Loureiro R, Amirabdollahian F, Driessen B, Harwin W (2001) A novel method for computing natural path for robot assisted movements in synthetic worlds. In: C. Marincek, C. Buhler, H. Knops, R. Andrich (Eds.) Proceedings Association for the Advancement of Assistive Technology in Europe (AAATE 2001), Assistive Technology – Added Value to the Quality of Life, Assistive Technology Research Series, vol 10. IOS Press, pp 262–267
15. Amirabdollahian F, Loureiro R, Harwin W (2002) Minimum jerk trajectory control for rehabilitation and haptic applications. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA 2002), Washington D.C., USA. IEEE, pp 3380–3385
16. Loureiro R, Amirabdollahian F, Coote S, Stokes E, Harwin W (2001) Using haptics technology to deliver motivational therapies in stroke patients: concepts and initial pilot studies. In: Proc. 1st European Conference on Haptics, EuroHaptics 2001, Educational Technology Research Paper Series. University of Birmingham, UK, pp 1–6
17. Coote S, Stokes E, Murphy B, Harwin W (2003) The effect of Gentle/S robot mediated therapy on upper extremity dysfunction post stroke. In: Proceedings of 8th International Conference on Rehabilitation Robotics (ICORR 2003). HWRS-ERC Human-friendly Welfare Robot System Engineering Research Center, KAIST, Republic of Korea, pp 59–61
18. Amirabdollahian F, Gradwell E, Loureiro R, Collin C, Harwin W (2003) Effects of the Gentle/S robot mediated therapy on the outcome of upper limb rehabilitation post-stroke: analysis of the Battle Hospital data. In: Proceedings of 8th International Conference on Rehabilitation Robotics (ICORR 2003). HWRS-ERC Human-friendly Welfare Robot System Engineering Research Center, KAIST, Republic of Korea, pp 55–58
19. Krebs HI, Hogan N, Volpe BT, Aisen ML, Edelstein L, Diels C (1999) Robot-aided neuro-rehabilitation in stroke: three-year follow-up. In: Proc. 6th Int. Conf. Rehab. Robotics, ICORR'99, Stanford, CA, USA
23 Wire Driven Robots for Rehabilitation

Paolo Gallina
23.1 Introduction

In recent decades many attempts have been made to employ robots or automatic mechanisms in the field of rehabilitation. In particular, orthopaedic rehabilitation and neurorehabilitation have been taken into account. Orthopaedic rehabilitation encompasses a wide range of injuries involving joints, muscles and ligaments. The therapy aims to resolve loss of function related to limited motion and weakness. To facilitate the work of the therapist, a number of automatic or semiautomatic commercial devices exist. These mechanisms are meant for working in passive mode: the patient is asked to relax while the robot simply moves the limb along a given trajectory. More challenging is the problem of neurorehabilitation. Post-stroke neurorehabilitation programmes try to develop compensatory strategies in the patient that can decrease the degree of permanent disability. In this regard, the goal of a robot-aided neurological rehabilitation treatment is to improve functional outcome by both passive and robot-assisted therapy. In robot-assisted mode, the patient is asked to cooperate with the robot in order to achieve a given task. In many cases the task consists of a simple action, such as moving a target from a starting point to an ending one, mainly exploiting virtual reality techniques. For this reason, haptic devices (robots capable of providing the patient with virtual forces) are widely employed in neurorehabilitation. Among others, robot-aided therapy has two advantages with respect to traditional therapy:
• the therapy is programmed by the therapist and autonomously carried out by the robot; in this way, the therapy can last longer than a traditional session;
• the force the patient exerts on the end-effector of the robot can be measured, as well as the end-effector position; in this way patient improvements can be recorded.
As shown by many researchers, the integration of robotic therapy into current practice can increase the efficiency and effectiveness of the therapist by alleviating the labor-intensive aspects of physical rehabilitation [7, 11]. There is increasing evidence that repetitive practice of movements (passive and active) can strongly affect the recovery from brain injury [6]. Moreover, as far as upper limb rehabilitation is concerned, several authors relate encouraging results to the use of increased intensity of standard physical therapy treatment. Other studies have reported that repetitive practice of hand and
finger movements against loads resulted in greater improvements in motor performance and function scales than Bobath-based treatments [1]. Traditional commercial robots are not suited for directly interacting with a patient. In fact, in case of failure, the high forces they are able to exert on the patient's limb could hurt the patient as well as the therapist. Moreover, in most cases [2] they are too heavy and not dextrous enough to be employed in a clinical environment. To address these problems, wire driven robots have been proposed in the literature [10]. They consist of a low-mass end-effector, to which the human hand or fingertip is fixed, and a set of wires. The wires are direct-driven, and the combination of their tensions provides the proper force to the end-effector. The advantages of such a configuration are low inertia, low cost and maximum safety. Lindemann and Tesar [5] were pioneers in this field with the Texas 9-string. More recently, Ishii and Sato developed the Spider system. This haptic device is made up of four strings providing force (without moment) to a single operator's fingertip; it has been extended to eight strings to include thumb feedback. Ishii et al. [3, 4] proposed and developed a force display device which can determine the position and posture of a gripper by measuring the lengths of 8 strings attached to it. In this way 3D virtual objects can be manipulated with 6 DoF using the gripper in the presence of force feedback. After a brief introduction to wire-driven robots and their advantages in rehabilitation, a new 3 DoF wire-driven robot called NeRebot is presented in Section 23.3.

23.1.1 Advantages of Wire Driven Robots

As far as rehabilitation is concerned, the advantages of wire driven robots are:
1. The robot is intrinsically safe, since the wires form a light parallel structure. The wire cross-section can be chosen in such a way that the wire snaps in case the motor torque overcomes a dangerous threshold (because of an unexpected failure). Moreover, the wire flexibility prevents the therapist or the patient from being hurt in case of accidental collision.
2. With respect to robots made up of rigid links, a wire driven robot does not give the patient the unpleasant feeling of being restrained by a machine.
3. The mechanical frame is simple and few mechanical elements are employed. Each wire is usually tensioned by a small winch mechanism made up of a drum (around which the wire is wound) operated by an electric motor. As a consequence, less maintenance is required.
4. The kinematic structure of a wire driven robot is parallel; as a consequence the structure compliance is reduced.
23.1.2 Problems Related to Wire Driven Robots

Although wire driven robots are well qualified candidates for human interaction tasks, they have some drawbacks that need to be taken into account:
1. Since wires can only pull, the number of wires has to be greater than the number of DoFs of the end-effector. Consider, for example, a simple 3 DoF wire driven robot made up of a small point-like end-effector and only three wires: the vector sum of the three wire tensions cannot produce a force oriented in every direction.
2. Like all parallel robots, wire driven robots have a small workspace.
3. Wire compliance can introduce errors in the end-effector position estimation.
We believe that the first problem can be solved by exploiting the force of gravity acting on the patient's arm. The force of gravity can be thought of as a "useful" force, which is always directed downward; the forces provided by the wires can be properly combined with it in order to produce the desired motion of the limb (fixed to the end-effector). The latter problems can be addressed by rearranging the configuration of the robot, as explained in Sect. 23.3.
23.2 Manipulability and Wire Tension Computation

In the literature, manipulability, as originally defined by Yoshikawa [8], is a measure of the 'performance' of a robotic structure, normally given in the force domain by means of manipulability ellipsoids or polytopes. This definition has been partially revised for wire driven systems, since wire actuation is unilateral, in the sense that a wire can only pull. Figure 23.1 represents the end-effector of a wire driven robot. Usually the limb of the human operator is fixed to the end-effector. The end-effector is moved by a set of wires tensioned by means of small winches; $\tau_i$ is the wire tension produced by the motor on the $i$-th wire. When an $n$ DoF object is manipulated by $m$ wires, the relationship between the wire tension vector $\mathbf{f} = \{\tau_1\ \tau_2\ \cdots\ \tau_m\}^T \in \mathbb{R}^m$ and the general force vector $\mathbf{F} \in \mathbb{R}^n$ acting on the end-effector, in static conditions, is given by
$$\mathbf{A}\,\mathbf{f} = \mathbf{F} \qquad (23.1)$$
where the matrix $\mathbf{A} \in \mathbb{R}^{n \times m}$ is a function of the geometrical configuration of the object and of the wires. For example, in the case of a planar wire driven robot whose end-effector has 3 DoF, the components of $\mathbf{F}$ are two forces and a torque: $\mathbf{F} = \{F_x, F_y, M_z\}^T$. Again, for a spatial wire driven robot with all the wires
attached to the end-effector at the same point (point-like robot), $\mathbf{F} = \{F_x, F_y, F_z\}^T$ and the columns of $\mathbf{A}$ are the unit vectors of the wire directions.
Fig. 23.1. Sketch of a wire driven system
A wire driven system in a given configuration, described by a matrix $\mathbf{A}$, is said to be manipulable if, for any $\mathbf{F}$, there exists a vector $\mathbf{f} = \{\tau_1\ \tau_2\ \cdots\ \tau_m\}^T$ with $\tau_i \geq 0$ which satisfies the system $\mathbf{A}\mathbf{f} = \mathbf{F}$. This definition expresses the necessary condition for a wire driven system to keep all wire tensions non-negative. It can be proven that a necessary condition for a wire driven robot with $n$ DoFs to be manipulable is that the end-effector is operated by at least $n+1$ wires. Moreover, in this case the system is manipulable if the components of the vector $\boldsymbol{\alpha} = \ker(\mathbf{A})$ are either all negative or all positive [9]. A wire driven robot has to produce the general force vector $\mathbf{F}$ acting on the end-effector in order to give the human operator the feeling of a force. Wire tension computation aims to calculate the set of tensions $\mathbf{f}$ with $\tau_i \geq 0$ which satisfies Eq. 23.1. Moreover, it is required that the tensions be as low as possible; as a consequence, one tension will be null, while the others will be greater than zero [2]. It can be proven that, if the number of wires is $m = n + 1$, and if the system is manipulable, the wire tensions are given by

$$\mathbf{f} = \mathbf{f}^+ + l\,\boldsymbol{\alpha} \qquad (23.2)$$

where $\mathbf{f}^+ = \mathbf{A}^{\dagger}\mathbf{F}$ is the minimum-norm particular solution obtained with the pseudoinverse $\mathbf{A}^{\dagger}$ of $\mathbf{A}$, and $\boldsymbol{\alpha} = \ker(\mathbf{A})$ has to be chosen in such a way that all its components are positive. The scalar $l = \max_i\left(-\{\mathbf{f}^+\}_i / \{\boldsymbol{\alpha}\}_i\right)$, where the symbol $\{\mathbf{v}\}_i$ indicates the $i$-th component of $\mathbf{v}$. In some cases the gravitational force acting on the end-effector can be exploited in order to reduce the number of required wires. In fact, the gravitational force can be thought of as an external force produced by a virtual wire that pulls downward. This solution is employed in NeRebot.
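A numerical sketch of this tension computation for a point-like end-effector (our example, not from the chapter: the tetrahedral wire geometry is invented, and SciPy's null_space stands in for ker(A)):

```python
import numpy as np
from scipy.linalg import null_space

# Columns of A are unit vectors along the m = n + 1 = 4 wires of a
# point-like end-effector (n = 3 DoF); tetrahedral geometry, invented here.
A = np.array([[ 1.0,  1.0, -1.0, -1.0],
              [ 1.0, -1.0,  1.0, -1.0],
              [ 1.0, -1.0, -1.0,  1.0]]) / np.sqrt(3.0)

F = np.array([1.0, 0.5, 2.0])            # desired end-effector force

alpha = null_space(A)[:, 0]              # ker(A) is one-dimensional when m = n + 1
if not (np.all(alpha > 0) or np.all(alpha < 0)):
    raise ValueError("configuration is not manipulable")
if alpha[0] < 0:
    alpha = -alpha                       # pick the all-positive representative

f_plus = np.linalg.pinv(A) @ F           # minimum-norm particular solution of A f = F
l = np.max(-f_plus / alpha)              # smallest l making every tension >= 0
f = f_plus + l * alpha                   # Eq. (23.2): one tension is exactly zero
assert np.allclose(A @ f, F) and f.min() >= -1e-9
```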
23.3 NeRebot: An Example of Wire Driven Robot for Rehabilitation

NeRebot (Neurorehabilitation Robot) is a 3 DoF wire driven robot realized at the University of Padova, Italy. Patients who could benefit from NeRebot are people with movement deficits from neurological injury and/or disease. Particular care is required by the therapist during the flaccid phase, where only passive therapy is possible since the patient cannot exert any force. The design of NeRebot takes into consideration problems related to patient treatment during the flaccid phase. Therefore, in our study, we aim to compare robot-assisted movement training with conventional techniques for neurorehabilitation of the upper limb after stroke, especially during the flaccid phase. We believe that, according to these studies, NeRebot could benefit patients by delivering properly intensive robot-aided therapy beginning from the flaccid phase. Nevertheless, NeRebot, thanks to its versatility, could be employed in other therapy fields such as orthopaedic rehabilitation. The scheme of the robot structure is shown in Fig. 23.2. The frame of the robot consists of a C-shaped base, provided with omnidirectional wheels, and a central column of square cross-section. At the top of the column three arms support the wires. One end of each wire is fastened to the patient's arm at the point $P_f$. The other end enters the corresponding robot arm at the point $P_i$ (where $i = 1, 2, 3$ indicates the wire) and runs through the robot arm and down along the column by means of suitable guide elements. Eventually, this end of the wire is wound around a drum located at the base of the robot. Each drum is direct-driven by a DC motor. In the scheme of Fig. 23.2, for the sake of clarity, the motors, the drums and part of the wires (from point $P_i$ to the drum) are not represented. Figure 23.3 shows a picture of NeRebot, while a detail of the 3 motors located around the base of the column is shown in Fig. 23.4. The patient sits in a wheelchair during the rehabilitation therapy. His forearm (left or right) is fixed to the wires by means of a rigid splint. The wire length $l_i = \|P_i - P_f\|$ is controlled by each motor. By controlling the length of each wire, it is possible to change the spatial configuration of the patient's arm.
Fig. 23.2. NeRebot Scheme
As a general principle, it is well known that, in order to move a rigid body in space without any other external force acting on it, at least 7 wires are required to produce 3 forces and 3 torques, since each wire-drum-motor system can only exert a force oriented toward the drum [9]. Moreover, in order to move a point in space without any other external force acting on it, at least 4 wires are required to produce 3 forces. On the contrary, the patient's arm connected to the three wires of NeRebot represents a different system, since the force of gravity acting on the patient's arm can be thought of as an external useful force. The robot can also be set up for different therapies and patients: the central column of NeRebot can be adjusted vertically over a range of 0.3 m, and each robot arm can be manually rotated about the column axis before each therapy session. In this way, we believe that changing the robot configuration according to the specific motion required by the therapy can compensate for the disadvantage of having few DoFs. In this specific case, the kinematic chain of the patient's arm is made up of the shoulder and the forearm. The patient's arm has 5 DoF (3 DoF for the shoulder, 1 DoF for the flexion-extension of the arm and 1 DoF for the adduction-abduction of the forearm). The external forces acting on this system are the three wire tensions and the force of gravity of the arm itself. By controlling the torques of the 3 motors, the three wires can generate a generic force

$$\mathbf{F}_g = \sum_{i=1}^{3} \tau_i\,\frac{P_i - P_f}{\|P_i - P_f\|}$$

which belongs to the pyramid $P_f, P_1, P_2, P_3$ ($\tau_i$ is the wire tension produced by the
motors). Despite the fact that $\mathbf{F}_g$ can never be directed downward, the force of gravity of the patient's arm helps to keep the wires taut during the therapy.
Fig. 23.3. NeRebot
It is clear that in this way it is not possible to control all 5 DoF of the arm, but just 3 DoF. In any case, we believe that the variety of movements that NeRebot can provide is sufficient to deliver a proper therapy. Moreover, in order to compensate for this limitation (without resorting to a more complicated robot with more DoFs), the possibility of rearranging the structure of NeRebot (between one therapy session and another) was taken into account during the design stage. Each robot arm's angular position ($\vartheta_i$) and the distance of the point $P_i$ from the column ($s_i$) can be set before each therapy. In this way, the therapy can be optimized with respect to the arm treated (left or right), the size of the patient and the kind of therapy. Even if NeRebot has been designed mainly for upper-limb neurorehabilitation, it can be employed for the lower limb as well: the lower limb could be moved by the three controlled wires connected to the patient's leg by a suitable orthosis.
Fig. 23.4. Motors and drums
23.3.1 Software and Control

The control software of NeRebot has been developed in standard C in the Windows® environment. It is made up of two separate modules: a high-level module for managing the user interface, and a low-level module for multi-axis control. The former is non-real-time software. Basically, it takes care of the graphics and handles the interaction with the user. For instance, it is possible to insert and view data about the patient as well as to select the kinds of data (force, arm displacement, etc.) that are going to be collected during the treatment session. During the therapy, the software can display the actual position of the patient's arm and/or the forces exerted by the motors by means of 2D plots. For the graphics, the GTK+ library has been employed. All the patient data and treatment session data are saved in an embedded database. The second module, which is real-time, is responsible for the low-level control of each axis. It implements an impedance controller and, at the same time, it monitors the forces exerted by the motors. Several other real-time components have been implemented in order to predict failures which could hurt the patient and/or the therapist. The real-time module has been implemented using the development library software distributed by VCI (VentureCom), written for the Windows 2000 kernel; by means of this software, the Windows kernel, which is normally non-real-time, is turned into a real-time one. The two modules (real-time and non-real-time) share data by means of IPC (Inter-Process Communication) mechanisms; in particular, semaphores and shared-memory tools have been employed. A PCI MultiQ terminal board (produced by Quanser) has been used for controlling the motors and for I/O data transmission. The board receives the 3 differential
encoder input signals; it receives 3 differential analog input signals proportional to each wire tension; and it provides the 3 differential analog output signals that drive the motors. The graphics are updated at a frequency of 50 Hz without any appreciable slowdown. The software has been tested on an Intel Pentium 4 PC (1.7 GHz, 256 MB RAM) running Microsoft Windows 2000. The final program is used by the therapist, who follows these standard steps:
1. Personal data about the patient are saved in the database.
2. The therapist fastens the splint to the patient's arm and the wires to the splint.
3. A small amount of torque is exerted by the motors in order to keep the wires taut.
4. The therapist moves the patient's arm to a desired location and saves the location data (x, y, z) by simply clicking the third button of the wireless mouse.
5. The therapist then moves the arm to another location and the saving process is repeated.
6. When the therapist believes that an adequate number of locations (up to 10) has been reached, he stops the saving process.
7. From now on, the robot will move the patient's arm through a spatial path formed by the saved locations.
During the therapy, the therapist follows the actual position of the arm (as well as the forces exerted by the patient) on the display. It is possible to change some parameters characterizing the therapy, such as the number of times the motion is going to be repeated or the maximum speed of the end-effector (3–5 cm/s). It is always possible to stop the therapy by means of a software stop and/or a safety switch.

23.3.2 Treatment Protocol

Our protocol requires intensive repetitive exercise of the affected limb with early initiation after ischemic or haemorrhagic ictus (within the first week). In this phase, the robot training will supplement the standard post-stroke multidisciplinary rehabilitation programme. The current study aims to test whether additional sensorimotor exercise, delivered by a robotic device, can improve the motor outcome. In fact, we expect to measure a decreased impairment specific to the exercised limb after the treatment. We will use robot-assisted movement in passive mode, i.e. the subject relaxes as the robot moves the limb toward a target with a predetermined trajectory. This treatment will be administered by rehabilitation therapists, who will devote on average 20 to 30 minutes to each of two or more sessions a day. NeRebot allows the execution of the treatment in a flexible way. We now briefly summarize the long-term goals and the tools of this research.
Objective: to compare the effects of robot-assisted movement training in the precocious phase of the illness (flaccid phase) with conventional techniques for the rehabilitation of upper-limb motor function.
Participants: a consecutive sample of twenty patients with upper and lower extremity hemiplegia after ischemic or haemorrhagic ictus, in the muscular flaccid phase, admitted to our Stroke Center and randomly allocated to two groups.
Inclusion criteria: age less than 80 years, stroke within 1 week; patients with cognitive impairment are included provided they are able to understand and follow instructions; written informed consent has to be obtained from all patients.
Intervention: all the patients are admitted to a similar standard post-stroke rehabilitation protocol (mobilisation, sensory stimulation, preventive work on contractures, etc.). One of the two patient groups admitted to our study additionally undergoes the training delivered by NeRebot for 1 hour per day, 5 days a week.
Assessment (main outcome measures): at the beginning and at the end of the trial (after 3 weeks) we measure motor impairment with the Medical Research Council (MRC) scale, the Fugl-Meyer assessment of motor impairment, the FIM™-motor instrument, and biomechanical measures of strength and reaching kinematics. Clinical evaluations are performed by the same physiatrist, blinded to group assignments.
23.4 Conclusions and Future Research

The use of wire driven robots in rehabilitation, along with their problems and advantages, has been presented. In particular, a wire-based robot for neurorehabilitation, called NeRebot, has been described in detail. Although the robot has only 3 DoFs, it is capable of moving the patient's arm along non-trivial spatial paths according to the required therapy. NeRebot is meant to be employed by the therapist on hemiplegic patients after ischemic or haemorrhagic ictus during the muscular flaccid phase. At this stage, patients undergo passive exercises.
References

1. Butefisch C, Hummelsheim H, Denzler P, Mauritz KH (1995) Repetitive training of isolated movements improves the outcome of motor rehabilitation of the centrally paretic hand. J Neurol Sci 130: 59–68
2. Gallina P, Rosati G (2002) Manipulability of a planar wire driven haptic device. Mechanism and Machine Theory 37: 215–228
3. Ishii M, Sato M (1998) Six degree of freedom master using eight tensed strings. ISMCR '98, Prague, pp 251–255
4. Kim S, Ishii M, Koike Y, Sato M (2000) Development of a SPIDAR-G and possibility of its application to virtual reality. VRST 2000, pp 22–25
5. Lindemann R, Tesar D (1989) Construction and demonstration of a 9-string 6-DOF force reflecting joystick for telerobotics. NASA International Conference on Space Telerobotics, pp 55–63
6. Liepert J, Bauder H, Wolfgang HR, Miltner WH, Taub E, Weiller C (2000) Treatment-induced cortical reorganization after stroke in humans. Stroke 31: 1210–1216
7. Lum PS, Burgar CG, Shor PC, Majmundar M, Van der Loos M (2002) Robot-assisted movement training compared with conventional therapy techniques for the rehabilitation of upper-limb motor function after stroke. Arch Phys Med Rehabil 83: 952–959
8. Sciavicco L, Siciliano B (2000) Modelling and control of robot manipulators. Springer-Verlag, London
9. Shen Y, Osumi H, Arai T (1994) Set of manipulating forces in wire driven systems. Proc. IEEE/RSJ/GI Int. Conf. on Intelligent Robots and Systems, 3: 1626–1631
10. Takahashi Y, Kobayashi T (1999) Upper limb motion assist robot. ICORR, Int. Conf. on Rehab. Robotics
11. Volpe BT, Krebs HI, Hogan N, Edelstein L, Diels C, Aisen M (2000) A novel approach to stroke rehabilitation: robot-aided sensorimotor stimulation. Neurology 54: 1938–1944
24 A Wrist Extension for MIT-MANUS

Hermano Igo Krebs, James Celestino, Dustin Williams, Mark Ferraro, Bruce Volpe, and Neville Hogan
Abstract In 1991, a novel robot named MIT-MANUS was introduced as a test bed to study the potential of using robots to assist in and quantify the neuro-rehabilitation of motor function. It introduced a new brand of therapy, offering a highly backdrivable mechanism with a soft and stable feel for the user. MIT-MANUS proved an excellent fit for the rehabilitation of the shoulder and elbow of stroke patients, with results in clinical trials showing a reduction of impairment in these joints; the greater reduction in impairment was, however, limited to the muscle groups exercised. This suggests a need for additional robots to rehabilitate other target areas of the body. The focus here is the development and implementation of a robot for wrist rehabilitation, designed to provide three rotational degrees of freedom. This paper covers the basic system design and characteristics along with a description of the therapy. We are presently conducting clinical trials at the Burke Rehabilitation Hospital (White Plains, NY). If improvements comparable to those seen for the shoulder and elbow are seen with the wrist robot, then rehabilitation therapists will have a pair of powerful robotic tools (MIT-MANUS and the wrist robot) at their disposal to promote both impairment reduction and functional independence.
24.1 Introduction

Each year, about 700,000 Americans have a stroke [1], making it the third largest cause of death and the leading cause of disability in the country. Stroke can cause immediate deficits of motor control, including loss of volitional movement on the affected side (hemiparesis) and inappropriately timed or graded muscle activations. With time, other impairments might appear, including hyperactive stretch reflexes, increased resistance to passive movement due to changes in the passive mechanical properties of muscle (spasticity), and hypo-extensibility of the muscle-tendon complex (contracture). Depending on the severity of the stroke, survivors may lose their pre-stroke levels of ability in tasks that rely on cognition and motor control. Stroke rehabilitation is a restorative process that seeks to hasten and manage recovery by treating the disability, largely through physical or occupational therapy [6]. The main goal of physical and occupational rehabilitation is to maximize motor performance and minimize functional deficits within the constraint of the
neurological deficit. Present therapeutic approaches make a "leap of faith" that the damaged human brain is capable of an extraordinary degree of self-reorganization, or plasticity, enabling learning and memory and leaving open the possibility of motor recovery. Their tenets include the belief that afferent limb stimulation can lead to the re-establishment of the neural pathways that control volitional movement, so that neurological rehabilitation can be derived from physical or occupational therapy [4]. A more cynical, but equally appropriate, view suggests that nature (e.g., lesion size and foci) is the overwhelming factor determining the stroke patient's outcome and that nurture (e.g., physical or occupational therapy) bestows no significant effect on the final outcome. If the latter were true, the application of robotics to assist in neuro-rehabilitation¹ would be fundamentally flawed. Therefore, from the onset we had to address this fundamental question of whether exercise therapy influences brain recovery. To that end, we deployed our first robot, MIT-MANUS (see Fig. 24.1), at the Burke Rehabilitation Hospital and commenced extensive clinical trials [10]. MIT-MANUS, developed at the Newman Laboratory for Biomechanics and Human Rehabilitation at MIT with support from the National Science Foundation, provides a platform for the study of human motor control and recovery as well as a tool for the administration of physical and occupational therapy. It is a planar, two-degree-of-freedom robot providing exercise for the upper extremity as the patient completes a series of "video games" that involve positioning the robot end-effector. The design of this robot, completed in 1991, is based on a five-bar, parallel-drive Selective Compliance Assembly Robot Arm (SCARA). By minimizing the endpoint impedance of the robot, the feel of the robot can be easily modulated through control, allowing safe patient interaction without excessively interfering with the patient's natural arm dynamics. The controller sets up a virtual spring and damper between the task-defined, time-dependent equilibrium point and the position of the end-effector. MIT-MANUS has been in daily operation for almost 10 years, delivering therapy to over 250 stroke patients, both inpatients and outpatients. Its commercial version, InMotion2², has been deployed at several hospitals and universities. Six of those facilities, namely Burke Rehabilitation Hospital (White Plains, NY), Spaulding Rehabilitation Hospital (Boston, MA), Helen Hayes Rehabilitation Hospital (West Haverstraw, NY), Rhode Island Rehabilitation Hospital (North Smithfield, RI), and the Baltimore and Cleveland VA Medical Centers (MD, OH), are close MIT collaborators. Our results endow present therapeutic approaches with scientific evidence that there is a significant role for brain reorganization and plasticity following a stroke. They suggest that goal-oriented exercise of a hemiparetic limb appears to harness and promote neuromotor recovery following a stroke [2, 7, 8, 11, 16, 17, 18]. For example, ninety-six stroke inpatients exhibiting a unilateral lesion were enrolled in clinical trials at the Burke Rehabilitation Hospital. Patients were typically enrolled 4 weeks following the onset of their stroke and were randomly assigned to an experimental and a control group. The experimental group received an hour per day of robot-aided therapy exercising the shoulder and elbow. If the patient could not perform the exercise routine, the robot guided or assisted the patient's movements. The control group received an hour per week of "sham" robot-aided therapy with the same video games. If the patient could not perform the exercise routine, he/she used the unimpaired arm to guide the impaired one to complete the game's goal. The clinical team and the patients were kept blinded to the group assignment. Both groups were evaluated by the same blinded clinician (double-blind study).

¹ Rather than using robotics as an assistive technology, our research focuses on the development of robotics as a tool to enhance the productivity of clinicians in their efforts to facilitate a disabled individual's recovery.
² Produced by Interactive Motion Technologies, Inc., Cambridge, Massachusetts.
Fig. 24.1. Photograph of the prototype MIT-MANUS, in daily operation with stroke patients at the Burke Rehabilitation Hospital (White Plains, NY) since 1994
Table 24.1 summarizes the results of the initial study with inpatients as measured by standard clinical impairment instruments: the Motor Power for shoulder and elbow (MP), the Motor Status Score for shoulder and elbow (MS-se), and the Motor Status Score for wrist and fingers (MS-wh). Note that the MS-se for shoulder and elbow (the focus of the exercise routines) of the experimental group shows a statistically significant difference over the control group in a comparable period. Note also that we observed no difference between groups in the MS-wh for wrist and fingers (not exercised). This result suggests a local effect with limited generalization of the benefits to the unexercised limb or muscle groups. According to the notion of task specificity, improvements due to physical rehabilitation are localized to the targeted area, so that in order for a patient to relearn a given task, each required limb segment for that task must be rehabilitated.
This result, along with the clinical success of MIT-MANUS, has motivated the development of new modules to work with the shoulder and elbow in a three-dimensional workspace, the wrist, the fingers, and the legs. Wrist and forearm articulation play an important role in enhancing the usefulness of the hand by allowing it to take up a variety of orientations with respect to the elbow. This paper focuses on the development of a robot for wrist rehabilitation in which two degrees of wrist rotation and one degree of forearm rotation are targeted.

Table 24.1. Change during acute rehabilitation (96 inpatients): experimental vs. control group, admission to discharge of rehabilitation hospital; p < 0.05 for statistical significance. Between-group comparisons: final evaluation minus initial evaluation

Measure                               Experimental (N = 55)   Control (N = 41)   P-Value
Motor Power (MP)                      4.1 ± 0.4               2.2 ± 0.3          <0.01
Motor Status shoulder/elbow (MS-se)   8.6 ± 0.8               3.8 ± 0.5          <0.01
Motor Status wrist/hand (MS-wh)       4.1 ± 1.1               2.6 ± 0.8          NS
24.2 Specification for a New Wrist Device

In 1991, we completed an initial version of a wrist module [10]. While this initial wrist module could achieve the design targets, it suffered from a significant handicap: it took too long to secure the spastic hand of stroke patients onto the device. Hence it was deemed inadequate to treat the stroke population. It is of paramount importance that the wrist device be easy for the therapist and the patient to use. To prevent daily use from becoming a chore for the patient and the therapist, only a minimal amount of time and effort must be required to attach and remove the patient from the wrist device. The target setup time was estimated at 2 minutes maximum. Another key aspect is low endpoint impedance. That is, when a patient attempts to backdrive the robot, the effective friction, inertia and stiffness should ideally be low enough to feel as if no robot were connected. In this case, the robot hardware is termed "backdrivable". The maximum reflected inertia for backdrivability for each wrist degree of freedom was estimated to be 30 to 45 × 10⁻⁴ kg·m². The maximum reflected friction for backdrivability was estimated to be 2,160 gm-cm. The wrist device should also have the ranges of motion of a normal wrist in everyday tasks, i.e., flexion/extension 70°/65°, abduction/adduction 15°/30°, pronation/supination 90°/90°. The torque output from the device must be capable of lifting the patient's hand against gravity, accelerating the inertia, and overcoming any tone. The estimated value for flexion/extension and abduction/adduction was 12,240 gm-cm and for pronation/supination 17,280 gm-cm [14, 19].
24.2.1 Kinematic Selection

A curved slider was found to suffice for the robot's pronation/supination axis. A curved rail sits between four guide wheels, which allow it to rotate (see Fig. 24.2). Several different options were considered for the remaining kinematics. These options must allow the patient to move in flexion/extension and abduction/adduction and must also allow the robot to apply torques to the patient's hand. After reviewing each of these kinematic options, a Cardan joint was found to be the most appropriate (see Fig. 24.3). A mockup is shown in Fig. 24.4.
Fig. 24.2. Curved Slider
Fig. 24.3. Cardan Joint Kinematics
Fig. 24.4. Mockup in Flexion/Extension, Abduction/Adduction, and Pronation/Supination
24.2.2 Actuator Placement and Transmission Selection

Three major sub-categories emerged from the various actuator/transmission packages considered. The first option placed all actuators on the ground frame (see link 1 in Fig. 24.3). The second option placed an actuator on the ground and two actuators on link 2 (differential configuration). The last option placed an actuator on the ground frame, an actuator on link 2, and an actuator on link 3 (serial configuration) [15]. In comparing these options, the differential configuration clearly held the advantage. For the same actuators, the range of output torque was up to two times larger. Another advantage is that the actuators can more effectively counterbalance each other in the differential configuration, because both actuators are placed on a single link (link 2).

24.2.3 Actuator Selection

We limited our search to Ultimag rotary actuators, servo-disc, DC brushed and brushless motors. Of these, we selected the brushless motors, which deliver high torque and run smoothly at low speeds, a requirement in this application [3]. The brushless motors also allow for better heat dissipation because the windings are on the stator. In order to select from the many available brushless motors, the reflected output impedances for each axis of rotation were compared. The following brushless motors were deemed acceptable (< 0.5 kg): Parker series SM160A and SM161A, Pittman series 34x1,2 and 44x1,2,3, and the Kollmorgen series 512, 513, 711-714 actuators³. Figure 24.5 shows a sample of how the actuators were compared; in this case, we are comparing the added inertia in abduction/adduction and flexion/extension due to the motor. The abscissa shows the reduction ratio required for each motor to achieve the specified maximum output torque of 12,240 gm-cm. The ordinate shows the added inertia in both flexion/extension and abduction/adduction. This number was found by taking 2·Im·R², where Im is the inertia of the given motor armature, R is the reduction ratio, and the factor of 2 is due to the fact that both motors on link 2 will be backdriven. In a similar fashion, we estimated the added friction in abduction/adduction and flexion/extension, and a similar approach yielded estimates for pronation/supination. We opted for Kollmorgen's RBE 711 motors for the abduction/adduction and flexion/extension actuators and the RBE 712 motor for pronation/supination. Because the flexion/extension and abduction/adduction motors will be in close proximity to the patient, a finite element analysis was performed to ensure that the motor temperature would not rise to uncomfortable levels.
³ Williams' redesign of the commercial version of the wrist robot at Interactive Motion Technologies, Inc. uses Maxon motors.
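The comparison underlying Fig. 24.5 can be sketched in a few lines (the candidate motor data below are placeholders, not the catalogue values used in the design):

```python
def added_inertia(motor_inertia, stall_torque, required_torque):
    """Reflected inertia at the joint for a motor geared to meet the required
    output torque: R = required/stall, inertia = 2*Im*R^2 (the factor 2
    reflects that both link-2 motors are backdriven)."""
    R = required_torque / stall_torque         # reduction ratio needed
    return 2.0 * motor_inertia * R**2, R

# Hypothetical candidates: (name, armature inertia kg*m^2, continuous torque N*m)
candidates = [("motor-A", 1.2e-5, 0.35), ("motor-B", 3.0e-5, 0.80)]
for name, Im, tau in candidates:
    # ~12,240 gm-cm is roughly 1.2 N*m
    J, R = added_inertia(Im, tau, required_torque=1.2)
    print(f"{name}: reduction {R:.1f}, added inertia {J*1e4:.1f} x 1e-4 kg*m^2")
```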
Fig. 24.5. Added Inertial Impedance in Flexion/Extension and Abduction/Adduction
24.2.4 Sensor Selection

The determining factors in selecting the type of position feedback device were its size, servo-amplifier compatibility, and insensitivity to noise. The smallest system found was an incremental encoder from Gurley Precision, the R119. It is a high-resolution mini-encoder with 10,240 cycles per revolution; after quadrature, this gives a resolution of 40,960 counts per revolution. Its size was well suited to the wrist device, allowing for its placement inside the transmission housing.
24.3 Alpha-Prototype Overview⁴

⁴ Part of this section resembles [5].

Figure 24.6 offers a depiction of the wrist robot [20]. The two side-mounted actuators are connected to two spur gear trains that meet at a differential mechanism. The differential mechanism allows for the wrist rotations. Wrist rotations about the axis BB' shown in Fig. 24.7a are known as flexion and extension; wrist rotations about the axis AA' shown in Fig. 24.7a are known as abduction and adduction (or radial and ulnar deviation). The entire differential mechanism is mounted onto a curved rack so that it can be actuated from beneath the forearm, thus accounting for the pronation and supination of the forearm shown in Fig. 24.7b. The operation of the differential mechanism is detailed in Fig. 24.8. Two Kollmorgen RBE 711 brushless motors are used for actuation. A two-stage reduction between the motor pinion and the differential shaft gives an overall reduction of 8.14. The pinions on the motors, gears A, mesh with a compound intermediate gear stage, gears B and C. Gears C then mesh with gears D, the differential end gears, which are rotationally fixed to the differential end bevel gears controlling
384
H.I. Krebs et al.
the spider gear. Arbitrary joint torque production on the differential is a result of the proper combination of motor torques. Basically, each motor contributes equal components of vertical and horizontal motion of the handle when actuated. When the two motors cooperate, the motion is purely abduction/adduction; when the motions generated by the two motors oppose each other, the resulting motion is flexion/extension. These ideas are reflected in the expressions relating motor angles and torques to the orientation of and torque on the robot arm.
Fig. 24.6. CAD representation of initial wrist robot design
Fig. 24.7. Wrist and forearm rotations [9]
Fig. 24.8. Detail of differential mechanism [20]
$$\theta_{long} = \frac{\tilde{\theta}_R + \tilde{\theta}_L}{2} \qquad (24.1)$$

$$\theta_{lat} = \frac{\tilde{\theta}_R - \tilde{\theta}_L}{2} \qquad (24.2)$$

$$\tau_{long} = \tilde{\tau}_R + \tilde{\tau}_L \qquad (24.3)$$

$$\tau_{lat} = \tilde{\tau}_R - \tilde{\tau}_L \qquad (24.4)$$

where $\theta_{long}$ and $\theta_{lat}$ are the longitude and latitude of the robot arm, $\tilde{\theta}_R$ and $\tilde{\theta}_L$ are the rotations of the right and left differential end gears referenced to a neutral handle position⁵, and $\tau_{long}$, $\tau_{lat}$, $\tilde{\tau}_R$ and $\tilde{\tau}_L$ are the corresponding torques. During therapy, the patient's wrist is positioned over the spider gear, so that $\theta_{long}$ is equal to the angle of wrist flexion. Abduction/adduction is accommodated through the handle kinematics; the handle is attached to the robot arm through a linear ball slide guide whose rack can pivot. The entire handle mechanism can be viewed as a planar four-bar linkage that can pivot about one orthogonal axis. This results in a one-to-one mapping between $\theta_{lat}$ and wrist abduction/adduction, with the precise relationship determined by the geometry of the patient.

⁵ Sign convention holds that clockwise rotation of the motors is positive.

The differential housing and its motors are carried by a Kollmorgen RBE 712 motor through a Bishop Wisecarver 180° geared slide ring so that forearm rotations can be actuated. The gear ratio between the motor pinion and the ring gear is 10.5. We have characterized the alpha-prototype robot's subsystems and compared the results to the expected models. From this testing, it is apparent that the mechanism is dominated by the effects of gravity, motor cogging, and backlash in the gear train. Properly compensating for these effects gives the robot a smooth feel. The maximum torque the alpha-prototype robot can produce about each of the differential axes is 1.7 Nm (flexion/extension and abduction/adduction); the robot is also capable of producing 1.5 Nm in pronation/supination. Figure 24.9 shows the alpha-prototype wrist robot workstation at the Burke Rehabilitation Hospital during pilot trials. The patient is seated with the robot to his side and is secured at the hand, wrist, and above the elbow. The workstation is meant to hold the patient comfortably with around 20° of shoulder abduction and 30° of shoulder flexion. The monitor in front of the patient conveys the orientation of the robot and the desired motions as described below. Sensing consists of three
Gurley R119 incremental encoders, one located on each actuator. This provides 40,960 counts per revolution of position information to the control algorithm. Velocity information is then derived from the position signals, with the system sampling at a rate of 1 kHz. The Kollmorgen CE06 servo-amplifiers are operated in analog torque mode through a UEI PD2DAO 16-bit resolution D/A board. Steps have been taken to guarantee the patient's safety. Mechanical and software limits prevent over-rotation of any of the axes. A programmable logic controller disables all of the servo-amplifiers if a fault is detected in any one of them. Games can be ended prematurely by pressing a key or using one of the three emergency stop buttons provided. In the event of motor saturation, the commands are scaled so that the desired force direction is preserved.
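A sketch of the differential mapping of Eqs. 24.1–24.4, together with the direction-preserving saturation scaling just described (function names are ours, not the controller's):

```python
import numpy as np

def motor_to_joint(theta_r, theta_l):
    """Eqs. 24.1-24.2: differential end-gear angles -> arm longitude/latitude."""
    return (theta_r + theta_l) / 2.0, (theta_r - theta_l) / 2.0

def joint_to_motor_torque(tau_long, tau_lat):
    """Invert Eqs. 24.3-24.4: joint torques -> right/left motor-side torques."""
    return (tau_long + tau_lat) / 2.0, (tau_long - tau_lat) / 2.0

def scale_to_limits(commands, tau_max):
    """On saturation, scale all commands by a common factor so that the
    commanded torque direction is preserved."""
    peak = np.max(np.abs(commands))
    return commands if peak <= tau_max else commands * (tau_max / peak)

# Example: command 1.0 N*m of pure longitude (flexion) torque, then clip
tau_r, tau_l = joint_to_motor_torque(tau_long=1.0, tau_lat=0.0)
tau_cmd = scale_to_limits(np.array([tau_r, tau_l]), tau_max=1.7)
```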
Fig. 24.9. Photograph of wrist robot workstation at the Burke Rehabilitation Hospital
24.4 Robotic Therapy

Our clinical collaborators at Burke were satisfied with the range of achievable forces and stiffness during the test phase with normal subjects, hence we commenced a full-scale pilot therapy trial with stroke patients using the alpha-prototype wrist robot in standalone mode (the wrist robot can be operated as a standalone device or as a module mounted at the tip of MIT-MANUS). During this on-going pilot trial, robotic therapy is being administered through interactive video games, much like those used with MIT-MANUS. The three degrees of freedom provided by the robot are mapped onto the two-dimensional video screen using the tools available with software written for the QNX operating system (to be migrated to RT-Linux at the conclusion of this pilot trial). System state variables are stored in shared memory, from where they can be accessed by both the control algorithm and the monitor program. Figure 24.10 shows the motions controlled by the differential (flexion/extension and abduction/adduction movements), represented by a cursor that moves on the screen as the projection of the handle's deviation from a neutral position. The patient is prompted to move from target to target by color changes. Target placement accounts for the normal wrist's range of motion in each direction. The line on the cursor represents the angle the wrist sagittal plane makes with the vertical (pronation and supination). Games that focus exclusively on forearm rotation use the alternate display shown in Fig. 24.11. These games include moving to specified targets, as well as tracking tasks in which a sinusoidal path is traced out by the target line. The video games currently available with the robot attempt to mimic basic therapy protocols by providing strength and sensorimotor training. During therapy, an impedance controller with constant stiffness and damping is used to guide the patient's arm. (Conceived in the early 1980s by one of the co-authors [13], impedance control has been applied successfully in numerous robot applications including human-motor interaction; see, e.g., the February 1997 issue of IEEE Control Systems, Special Issue on Robotics, which contains several articles on impedance control.) Games that involve movements between targets are specified by the control algorithm to have minimum-jerk velocity profiles, consistent with the idea that humans tend to make smooth, bell-shaped movements in task space [12]. The extent to which this paradigm applies to pure rotational movements is an aspect of human motor control that is being investigated with this device. Figure 24.12 shows a sample run of a video game by a normal subject and by a stroke subject attempting to move along with the robot. The results shown are a cross-plot of the latitude versus the longitude of the robot arm, which corresponds to the wrist orientation. The time history of position, (derived) velocity, command torques, and current information (motor torques) is also available in the data file stored after each game.
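As a concrete illustration, the sketch below pairs a minimum-jerk reference trajectory with a constant-stiffness impedance law of the kind described above. This is a minimal reconstruction under stated assumptions: the gains, duration, inertia, and function names are invented for the example and are not the controller parameters used on the robot.

import numpy as np

def min_jerk(theta0, thetaf, T, t):
    """Minimum-jerk position profile between theta0 and thetaf over duration T.
    Yields the smooth, bell-shaped speed profile of [12]."""
    s = np.clip(t / T, 0.0, 1.0)
    return theta0 + (thetaf - theta0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def impedance_torque(theta_d, theta, theta_dot, K=2.0, B=0.1):
    """Constant stiffness/damping impedance law: the robot behaves like a
    spring-damper pulling the wrist toward the reference angle theta_d.
    K [Nm/rad] and B [Nm s/rad] are assumed example gains."""
    return K * (theta_d - theta) - B * theta_dot

# 1 kHz control loop over a 1.5 s target-to-target movement (simulated plant).
dt, T = 1e-3, 1.5
theta, theta_dot, inertia = 0.0, 0.0, 0.01   # assumed wrist+handle inertia
for k in range(int(T / dt)):
    theta_d = min_jerk(0.0, 0.6, T, k * dt)  # 0.6 rad flexion target
    tau = impedance_torque(theta_d, theta, theta_dot)
    theta_dot += (tau / inertia) * dt        # crude forward-Euler plant
    theta += theta_dot * dt
print(f"final angle: {theta:.3f} rad")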
Fig. 24.10. Basic display mode

Fig. 24.11. Pronation-Supination display mode
Fig. 24.12. Results from unassisted movement toward targets for a stroke patient (top row) and a normal subject (lower row). The top row shows the stroke patient's performance at admission (left plot) and at discharge (right plot). At discharge, the stroke patient is able to hit all the targets and the patient's movement resembles that of a normal subject
24.5 Conclusions

Clinical results to date suggest that robot-aided neuro-rehabilitation can have a positive influence on neuro-recovery following a stroke. Our pioneering clinical results are consistent with a prominent theme of current neuroscience research into the sequelae of brain injury, which posits that activity-dependent plasticity underlies neuro-recovery. Furthermore, our results with more than 250 stroke inpatients and outpatients open up a number of opportunities. We envision the rehabilitation clinic of the future as a gym of "rehabilitators" working with different limb segments, muscle groups, and functional tasks. At this gym, the therapist tailors an exercise routine to the particular patient's needs to optimize recovery, and increases the clinic's productivity by overseeing several patients at the same time. The productivity of the overall rehabilitation system may be further improved by the objective and precise measurements afforded by robotics, with the potential to automate the assessment and documentation of recovery. We also envision further improvements by extending treatment with robot-aids to patients' homes.
From the realm of science fiction to the substance of humbling reality, the novel module for wrist rehabilitation is another marker along the trail. It follows the same design guidelines as MIT-MANUS, which include back-drivability; our experience has shown that this is an important feature of any successful interactive rehabilitation robot. This paper provided an overview of the alpha-prototype wrist robot, from its design and implementation to its pilot clinical trials. The device has proven capable of providing continuous passive motion, strength, sensory, sensorimotor, and adaptive training for the wrist. In its final form, this robot will offer insights into human motor control and human learning, as well as the potential for customizable, adaptive, and rigorously quantified therapy in solo operation or mounted at the tip of the planar MIT-MANUS. On a final note, while very little technology presently exists to support the recovery phase of rehabilitation, we believe the landscape will change quickly in the near future.
Acknowledgment

This work was supported in part by The Burke Medical Research Institute, the Langeloth Foundation, and NIH Grants #1 R01-HD37397-0 and #1 R01-HD36827-02. James Celestino was supported by an NSF fellowship.
References

1. About Stroke | Internet Stroke Center: http://www.strokecenter.org/pat/about.htm
2. Aisen ML, Krebs HI, McDowell F, Hogan N, Volpe BT (1997) The effect of robot assisted therapy and rehabilitative training on motor recovery following stroke. Archives of Neurology 54: 443–446
3. Asada H, Youcef-Toumi K (1987) Direct Drive Robots. MIT Press, Cambridge, MA
4. Carr J, Shepherd R (1998) Neurological Rehabilitation: Optimizing Motor Performance. Butterworth-Heinemann, Boston
5. Celestino J, Krebs HI, Hogan N (2003) A Robot for Wrist Rehabilitation: Characterization and Initial Results. Proceedings of the ICORR 2003, pp 27–30
6. Duncan P, Badke MB (1987) Stroke Rehabilitation: The Recovery of Motor Control. Year Book Medical Publications
7. Fasoli SD, Krebs HI, Stein J, Frontera WR, Hogan N (2002) Effects of Robotic Therapy on Motor Impairment and Recovery in Chronic Stroke. Archives of Physical Medicine and Rehabilitation 84: 477–482
8. Ferraro M, Palazzolo JJ, Krol J, Krebs HI, Hogan N, Volpe BT (2003) Robot Aided Sensorimotor Arm Training Improves Outcome in Patients with Chronic Stroke. Neurology 61: 1604–1607
9. Kapandji IA (1970) The Physiology of the Joints: Annotated diagrams of the mechanics of the human joints. E & S Livingstone, London
10. Krebs HI, Hogan N, Aisen ML, Volpe BT (1998) Robot Aided Neuro-rehabilitation. IEEE Transactions on Rehabilitation Engineering 5: 75–87
11. Krebs HI, Volpe BT, Aisen ML, Hogan N (2000) Increasing Productivity and Quality of Care: Robot-Aided Neurorehabilitation. Journal of Rehabilitation Research and Development 37(6): 639–652
12. Hogan N (1984) An Organizing Principle for a Class of Voluntary Movements. Journal of Neuroscience 4(11): 2745–2754
13. Hogan N (1985) Impedance Control: An Approach to Manipulation: Part I – Theory, Part II – Implementation, Part III – Applications. Journal of Dynamic Systems Measurement and Control 107: 1–23
14. Palastanga N, Field D, Soames R (2002) Anatomy and Human Movement: Structure and Function, 4th Edition. Butterworth Heinemann, Oxford
15. Slocum AH (1992) Precision Machine Design. Prentice Hall, Englewood Cliffs, NJ
16. Volpe BT, Krebs HI, Hogan N, Edelstein L, Diels CM, Aisen ML (1999) Robot Training Enhanced Motor Outcome in Patients With Stroke Maintained over Three Years. Neurology 53: 1874–1876
17. Volpe BT, Krebs HI, Hogan N, Edelstein L, Diels CM, Aisen ML (2000) A Novel Approach to Stroke Rehabilitation: Robot Aided Sensorimotor Stimulation. Neurology 54: 1938–1944
18. Volpe BT, Krebs HI, Hogan N (2001) Is robot-aided sensorimotor training in stroke rehabilitation a realistic option? Current Opinion in Neurology 14: 745–752
19. Webb Associates (1978) Anthropometric Source Book Volume I: Anthropometry for Designers. Scientific and Technical Information Office, Yellow Springs, Ohio
20. Williams DJ, Krebs HI, Hogan N (2001) A Robot for Wrist Rehabilitation. IEEE 23rd Annual International Conference of the IEEE-EMBS, session 3:6, paper #2
25 Post Stroke Shoulder–Elbow Physiotherapy with Industrial Robots András Tóth, Gusztáv Arz, Gábor Fazekas, Daniel Bratanov, and Nikolay Zlatov
Abstract Robotic motor rehabilitation is a promising approach to the rehabilitation of post-stroke, traumatic brain injury, and spinal cord injury neuromotor impairments. We have observed that manual physiotherapy of spastic hemiparetic patients includes 45 different types of three-dimensional repetitive shoulder-elbow motion exercises. The physical capabilities as well as the availability of the physiotherapist limit the realization of effective physiotherapy in the daily routine. The paper describes the REHAROB Therapeutic System, which uses two industrial robot arms for exercising the patient's upper limb. Special devices and measures make the system safe both for the patient and for the physiotherapist. The current REHAROB system performs individual, three-dimensional, anti-spastic passive motion therapy over the full range of concerted shoulder girdle (5) and elbow (2) motions, but owing to its versatility it is going to be expanded towards active-assisted and active-against-resistance therapy, and towards the physiotherapy of spastic hemiparetic lower limbs. The REHAROB Therapeutic System was designed to integrate commercially available robot, safety, control, and sensor components.
25.1 Introduction

Stroke is one of the most common major neurological disorders, comprising half of all patients admitted to hospital for a neurological disease. The annual incidence of stroke is between 150 and 400 cases per 100,000 population in the European Union, while it is 214 annual cases in the United States of America and 400 cases in Hungary [1]. 80 percent of stroke survivors have significant neurological impairment; 69 percent of them can be rehabilitated successfully, while the rest of the survivors need help in everyday activities. A characteristic neurological impairment of stroke patients is spastic hemiparesis of the limbs. Evidence has shown that early and intensive motion therapy positively affects the restoration of motor function after stroke [2]. Antispastic physiotherapy is mediated exclusively by physiotherapists, because permanent sensation of, and instant reaction to, the patient's kinesthetic and mental status is required. Budget constraints, however, limit the realization of labor-intensive, one-to-one, twice-daily physiotherapy in rehabilitation practice. Widely available Continuous Passive Motion
(CPM) exercising machines used for post-surgical rehabilitation are not suitable for antispastic physiotherapy. Taking up the challenge, research groups are attempting to develop robotic systems that would assist physiotherapists in the gait, trunk, balance, arm, hand, and finger rehabilitation of spastic hemiparetic patients. The ARM Guide [3], the MIME [4], the MIT-MANUS [5], the ArmTrainer, and the GENTLE/S [6] are the best-known spastic arm physiotherapy systems. The MIT-MANUS has gained commercial success with a few installations, whilst the others remain operational in the developers' rehabilitation organizations. Learning from its predecessors, the REHAROB Therapeutic System was designed to bring advances in three fundamental features [7]:
1. To use two robotic manipulators for the controlled moving of the upper arm and the lower arm of the patient
2. To perform complex full anatomic Range of Motion (ROM) exercises on all possible shoulder girdle and elbow motions: shoulder protraction-retraction, shoulder elevation-depression, shoulder flexion-extension, shoulder abduction-adduction, shoulder external-internal rotation, elbow flexion-extension, and lower arm pronation-supination
3. To build the system from mass-produced commercial components – like industrial robots – in order to cut product costs and to achieve critical mass for viable production
25.2 Analysis of Spastic Upper Limb Physiotherapy

To answer the question of whether an upper limb motor rehabilitation system can be based on industrial robots, spastic hemiparetic upper limb exercises were analyzed. Standard spastic hemiparetic physiotherapy methods are the Bobath, PNF, Kabat, Brunnström, Rood, Vojta, Fay-Doman, Carr-Shepherd, and Pető methods [8]. Based on observation, a catalogue of 45 upper limb physiotherapy exercises was prepared: http://reharob.manuf.bme.hu/research/exercises. The start position and the end position of each exercise are described by text and illustrated by photographs (Fig. 25.1). Execution of the movement is also described by text and illustrated with a short movie clip. Start and end positions of all exercises are shown by a skeleton in Fig. 25.2. To define the engineering parameters of the therapeutic system, not only qualitative information about the physiotherapy exercises but also biomechanical data, such as the real range of motion of the arm segments and the force/torque exerted by the physiotherapist on the exercised arm, were collected. Biomechanical measurements were performed on 5 healthy volunteers and on 15 spastic hemiparetic patients. Figure 25.3 shows how thermoplastic upper- and lower-arm orthoses, instrumented with 6-DOF Force/Torque sensors and ultrasound marker triplets, were used for the biomechanical measurements.
Fig. 25.1. Sample of the catalogue description of a spastic hemiparetic upper limb exercise
Fig. 25.2. The set of anti-spastic upper limb exercises
Fig. 25.3. Biomechanical measurements of spastic exercises
Results of the qualitative and quantitative analysis are:

• Overall exercising time of one patient is 20–30 minutes
• The therapy program includes 5–10 exercises from the catalogue
• One exercise is repeated 10–20 times on average
• The therapy runs through the full ranges of the shoulder girdle and elbow joint motions (ROM)
• The physiotherapist always grasps the patient's arm with two hands to achieve full control over the five shoulder girdle and the two elbow motions
• Speed of the orthosis reference frames is less than 0.25 m/s
• Maximum loads at the reference frames – during spastic reactions – are 123 N force and 65 Ncm torque
• Resistance and ROM of the upper limb change during exercising due to relaxation
• Every patient receives individual therapy.
25.3 System Design and Development

The REHAROB Therapeutic System is designed to deliver a complex therapy program based on the 45 spastic upper limb motion exercises to lying or sitting, right-arm or left-arm patients. Programming of the system is realized by demonstration of the exercises, with the robots acting as haptic devices.

25.3.1 Mechanical Design

Based on the specification of requirements, the initial system design was completed [9]. For the delivery of exercises to the patient two ABB industrial robots were selected: the wall-mounted 0.8 m reach IRB 140 industrial robot is connected to the upper arm and the inverted 1.4 m reach IRB 1400H industrial robot is connected to
the lower arm (Fig. 25.5). It is not self-evident why these robots in the shown layout are optimal for upper limb physiotherapy. Upper limb exercising machines – be they simple Continuous Passive Motion machines (CPMs) or robotic systems – follow two alternative kinematical structures [10]. The first type is the exoskeleton kinematical structure, an out-of-the-body copy of the exercised bone-joint structure. From the CPMs the Kinetec [11], and from the robotic systems the MULOS [12], belong to this group. The second type, the external grasping kinematical structure, mimics traditional human physiotherapy, which includes external grasping/supporting of the exercised limb segment. From the CPMs the Fisiotek [13], and from the robotic systems the ARM Guide [3], the MIT-MANUS [5], the MIME [4], and the GENTLE/S [6], belong to this group. All of the known systems are simple 1-6 DOF systems, which limits their exercising capabilities. Exoskeleton-type machines drive only 1-3 joints of the upper limb, while external grasping systems produce multi-joint motions restricted to planar, unilateral, or line paths. The REHAROB Therapeutic System also belongs to the external grasping type, which gives an adequate response to the requirements formulated in Section 25.1. The main objective of the mechanical design is to select appropriate industrial robot arms for two-hand external-grasp exercising and to locate them in 3D space for lying and sitting, and for left-handed and right-handed, patients. Preliminary analysis revealed that a purely biomechanical-analytical approach to the modelling of upper limb physiotherapy could hardly overcome the following difficulties:
Fig. 25.4. Motion capturing: a) Active markers: 8-16 and virtual markers: 1-7, 17-28 b) Measurement of a tall sitting subject by an ultrasound-based motion analyzer, c) Motion trajectories of the upper and lower arms are visualized by co-ordinate frames in the IGRIP® robot simulation tool
• Solid modelling and collision detection of the moving elements (the two robots, the patient, the physiotherapist) and of the fixed elements (the frame and the couch).
• The real upper limb is a composite structure of bones, ligaments, tissues, and skin. As the orthoses cannot be fixed to the bones, unknown deformation will necessarily occur between the orthoses and the rigid bones during exercising, which makes modelling unreliable.

The approach we used for the mechanical design is therefore a combination of experimental and simulation methods. 3D motion trajectories of 25 selected exercises out of the 45 were recorded in lying and sitting positions with a 190 cm tall male volunteer and two 165 cm tall female volunteers. The spatial positions of 28 markers were recorded with an ultrasound-based motion analyzer at a 25 Hz sampling rate (Fig. 25.4), which provided a sufficient number of landmark points for the reconstruction of the patient body and of the motion trajectories of the upper arm and lower arm. Motion records were then exported from the 3D motion analyzer in the form of ASCII data and – after modulation – fed to IGRIP®, a leading 3D robot simulation tool.

Table 25.1. List of executable exercises for the lying-short patient (the parent exercises, which were measured, are typed in bold-face in the original)

Exercise        | α_lower ort = 0° | Couch position | α_lower ort = 45°
01, 02, 03      | OK               | z 690 y 600    | NO
04, 05          | OK               | z 790 y 100    | OK
07, 08          | OK               | z 690 y 100    | OK
09              | OK               | z 640 y 300    | OK
11              | OK               | z 640 y 200    | NO
12              | –                | –              | –
10, 13, 14      | OK               | z 740 y 0      | NO
15              | OK               | z 740 y 0      | NO
17              | –                | –              | –
18, 19          | OK               | z 740 y 0      | OK
20              | OK               | z 740 y 0      | OK
21              | OK               | z 690 y 0      | NO
22, 23          | OK               | z 840 y -200   | OK
16, 24, 25      | NO               | –              | NO
26, 27          | NO               | –              | NO
28, 29          | NO               | –              | NO
30              | OK               | z 840 y 150    | OK
32, 33, 41      | NO               | –              | NO
31, 34, 35, 36  | OK               | z 890 y 200    | OK
37, 38          | OK               | z 540 y 100    | OK
39              | OK               | z 840 y 0      | OK
40              | OK               | z 840 y 220    | NO
06, 42, 43      | OK               | z 840 y 220    | NO
44              | –                | –              | –
45              | OK               | z 940 y 0      | OK
Σ of OK         | 32               |                | 19

Nominal couch position: z 690 y 0 (the couch reference point is the top contact surface point below the shoulder centre). Exercises No. 12, No. 17, and No. 44 were not simulated due to technical reasons.

The exercises were tested with approx. 40 library robots of IGRIP®, a number which truly represents the product assortment of the leading industrial robot manufacturers. To find the optimal system layout with virtual exercising we used the "trial and error" method, so the role of the design engineer was central in the research. Advanced parallel search techniques could unfortunately not be applied, because the IGRIP® license was dedicated to a single PIII 750 PC. The following system parameters were scanned (an illustrative sketch of such a scan is given after the lists below):

• Pose of the IRB 140 robot: x_base, y_base, z_base, Y_base, P_base, R_base (Yaw, Pitch, Roll)
• Pose of the IRB 1400H robot: x^H_base, y^H_base, z^H_base, Y^H_base, P^H_base, R^H_base
• Connector angle options at the lower arm (0°, ±45°, ±90°) and at the upper arm (0°, ±90°)

Scanning of the parameter ranges concluded that the highest share of the exercises can be played back with an inverted long-reach 6-DOF jointed-type industrial robot and a wall-mounted short-reach 6-DOF jointed-type industrial robot [9]. In an international tender two ABB robots were selected, though we have proved that none of the current industrial robots can deliver all 45 exercises in a single fixed patient position (Table 25.1). Coordinate frames are shown in Fig. 25.5. Considering all four cases, on average 10–15 out of the 45 exercises cannot be played back. The dominant reasons for non-executability of exercises are:

• Joint limits at Axis 5 of the industrial robots
• Reach problems due to the long instrumented orthoses
• Collision between the robots, the patient, and the couch.
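The layout search described above was performed manually in IGRIP, but its logic is easy to express in outline. The sketch below is a purely illustrative reconstruction of such a scan: the parameter grids are invented example values, and is_executable is a hypothetical stand-in for the simulator's joint-limit, reach, and collision checks.

from itertools import product

# Hypothetical coarse grids over the scanned layout parameters (units: mm, deg).
irb140_z = [800, 1000, 1200]             # wall-mount height of the IRB 140
irb1400_x = [-400, 0, 400]               # floor position of the inverted IRB 1400H
lower_connector = [0, 45, -45, 90, -90]  # connector angle options, lower arm
upper_connector = [0, 90, -90]           # connector angle options, upper arm

def is_executable(exercise, layout):
    """Dummy stand-in for the IGRIP verdict (Axis 5 joint limits, orthosis
    reach, robot/patient/couch collision). Here it just fakes a deterministic
    answer so the scan runs end-to-end."""
    return hash((exercise, layout)) % 3 != 0

def scan(exercises):
    """Exhaustive scan: keep the layout that plays back the most exercises."""
    best_layout, best_count = None, -1
    for layout in product(irb140_z, irb1400_x, lower_connector, upper_connector):
        count = sum(1 for ex in exercises if is_executable(ex, layout))
        if count > best_count:
            best_layout, best_count = layout, count
    return best_layout, best_count

print(scan(range(1, 46)))   # 45 catalogued exercises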
Fig. 25.5. Final positions of the robots
Fig. 25.6. Couch adjustment ranges in lying and in sitting treatment positions
Collision or joint limit errors cannot be compensated for, so we recommend using redundant (more than 6-DOF) robot arms in future robotic physiotherapeutic systems whenever they become common among industrial robots. (We note that in manual physiotherapy the physiotherapist can change the grasping location of one or both hands, e.g. at E29, or move his/her trunk or even step closer to the patient; fixed industrial robots cannot imitate these behaviors.) Moreover, the ideal industrial robot should have a longer upper arm and a longer lower arm, and an increased joint angle range at Axis 5. Reach problems can be partly compensated for by changing the position of the patient with respect to the robots. We have also found that all the executable exercises together require 13 different discrete couch settings. As a final step in the mechanical design, the spatial location of the two industrial robots (Fig. 25.5), as well as the vertical and horizontal motion ranges of the powered couch (Fig. 25.6), were defined. Mechanical design of the frame and the couch was completed in the Pro/Engineer CAD system so that REHAROB is fully symmetrical for left-arm and right-arm therapy.

25.3.2 The Instrumented Orthoses

To connect the patient's upper and lower arms to the robots, two instrumented orthoses were developed. Each device is 250 mm long and includes 7 elements (Fig. 25.7).

25.3.3 Control Design

The control system of REHAROB involves the two industrial robot controllers, the four sets of 6-axis force/torque measurement systems, the so-called watchdog PC, and the high-level controller PC. In this chapter we give details only of the implemented robot control method. The physiotherapy robot must work as a haptic device that responds to the kinetic status of the patient and the physiotherapist (Fig. 25.8). Industrial robot controllers are produced with fixed control parameters for fast and accurate positioning, which badly limits the implementation of custom-designed control strategies.
Fig. 25.7. The elements of the REHAROB instrumented orthosis: 1. Thermoplastic orthosis, 2. Quick changer #1, 3. Six-DOF F/T sensor for force monitoring, 4. Safeball® with a turnable clip, 5. Quick changer #2, 6. Six-DOF F/T sensor for force control, 7. Safety release mechanism
Alteration of the robot controller is not an option here, since for safety and medical certification reasons the integrity of the robot controller has to be retained. We have developed a novel outer-loop indirect force control method for the programming of the robots [14]. This is called teaching in, during which the physiotherapist freely exercises the patient by leading the orthoses through the required trajectory while grasping the Safeballs®, and the robots follow and learn the trajectories (Fig. 25.8). Figure 25.9 shows the implemented inner-loop/outer-loop control architecture. The inner loop represents the robot's internal position control, while the outer loop represents the DSP-based PI force controller [14]. The UniForce® SPU measures the signal of the force-control F/T sensor (position 6 in Fig. 25.7) and processes the force control law – F_d ≡ 0, M_d ≡ 0 – directly on the measured forces and torques. In other words, the force error input signal F_e of the indirect force controller is identical to the measured force signal F_s. The force controller commands the incremental position/orientation displacement of the orthosis to the S4C+ robot controller, relative to the previously commanded position x_d(k−1) of the robot arm, at a frequency of 40 Hz. Force errors of the digital indirect force control are inevitable but must be radically minimized for safety and comfort.
Fig. 25.8. Teaching in an exercise to the robotic physiotherapy system
Fig. 25.9. Block diagram of the implemented indirect force control
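The outer loop of Fig. 25.9 can be sketched compactly in code. The following Python fragment is an illustrative reconstruction only: the PI gains and the class interface are assumptions made for the example, not the REHAROB parameters; the real controller runs on the UniForce® SPU and commands the S4C+ controller.

import numpy as np

class IndirectForceController:
    """Outer-loop PI force controller: with Fd = 0 and Md = 0, the force error
    equals the measured force/torque, and the output is an incremental Cartesian
    displacement commanded relative to the previous setpoint (40 Hz loop)."""
    def __init__(self, kp, ki, dt=1.0 / 40.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = np.zeros(6)

    def step(self, f_measured, x_prev_cmd):
        f_error = f_measured          # Fd = 0, Md = 0  =>  Fe = Fs
        self.integral += f_error * self.dt
        dx = self.kp * f_error + self.ki * self.integral
        return x_prev_cmd + dx        # new pose setpoint for the robot

# Example: the therapist pushes with 5 N along x; the robot yields.
# The proportional gain is kept deliberately small, mirroring the stability
# measure described in the text.
ctrl = IndirectForceController(kp=1e-4, ki=2e-4)   # assumed gains
x_cmd = np.zeros(6)                                # [x, y, z, roll, pitch, yaw]
for _ in range(40):                                # one second of teaching in
    f_s = np.array([5.0, 0, 0, 0, 0, 0])           # measured wrench
    x_cmd = ctrl.step(f_s, x_cmd)
print(x_cmd[0])   # accumulated displacement along the push direction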
Force errors twice as big are caused by the so-called "look-ahead" delay of the S4C+ robot controllers. Regular industrial robots are not real-time systems; their usual applications are not affected by the considerable deadtime of path planning caused by the time-consuming, complex trajectory calculations. The nominal value of this deadtime is 400 ms, independently of the complexity or the speed of the motion. For specific external-sensor-controlled robotic applications, however, the S4C+ robot controller does not have enough system resources to execute real-time calculations and sophisticated robot programs. To overcome this deficiency, i.e. to achieve an acceptable sampling time for stable and reliable indirect force control, the robot is commanded by a very simple procedure during force-controlled movements. Furthermore, the proportional factor of the PI controller is hugely reduced to provide stability, which unfortunately increases the force error again. The cumulative lateral and axial plays present in the multi-component instrumented orthoses fortunately counteract the force errors and help the physiotherapist to stay within the stable control range of the robots during teaching in. This is in compliance with the theory of indirect force control, which requires an elastic element in the teaching device for stability and comfort. The final parameters of the simple indirect force control algorithm were determined experimentally during the technical tests (Sect. 25.4) and fine-tuned during the clinical trial (Sect. 25.5).

25.3.4 User Interface and Programming

The operating devices and the user interface of the control system were designed not only for safety but also for the maximum comfort of the physiotherapist (PT). In addition to the control devices assembled on the orthoses and on the couch, there is an operating panel on the frame. The panel includes only a touch-screen display, a keyboard, 3 buttons, and 3 switches (Fig. 25.10).
Fig. 25.10. The control panel of the REHAROB Therapeutic System
Fig. 25.11. Editing the therapy program using the high level control software
A carefully designed and tested program leads the physiotherapist through the use of the therapeutic system. This program starts just after the mains power of the therapeutic system is turned on and displays all the information connected with the operation of the system: the process of changing operation modes, the error messages, and also the error-handling instructions. It is very important to underline that the PT should neither program the robots directly nor need extensive technical knowledge of industrial robotics. He/she only has to compose a physiotherapeutic program by editing and setting the parameters of the exercises that automatically appear in the "Exercise" window of the high-level controller software (Fig. 25.11) during teaching in.

25.3.5 Safety Measures and Devices

Industrial robots, as defined by the industrial robot safety standard EN ISO 8373:1994, must not be used in applications where contact with the human body can happen; furthermore, operators are allowed to enter the workspace of industrial robots in manual mode only. The IRB 140 and the IRB 1400H industrial robots meet the requirements of another 18 safety and harmonized standards, not listed here due to space limitations, which is a great advantage but does not yet make the system eligible for robotic physiotherapy. The REHAROB Therapeutic System is a medical device, so it has to meet fully the requirements of the relevant European directive, the Medical Device Directive [15]. Safety of the system was considered first in the system design, i.e. the system
components are mass-produced products. In addition to the technical reliability of the components, safety equipment and measures ensure the safety of the patient and the physiotherapist:

• The principal safety rule of REHAROB is that the robot(s) can only move under permanent patient enabling. In preparation mode, when the patient is not yet there, the physiotherapist must enable robot motions by pressing a certified three-state enabling device located on the control panel (Fig. 25.10). During teaching in, the PT also has to use the Safeballs® (Figs. 25.7 and 25.8) for enabling the robot motions. As long as the patient is on the couch, he/she needs to press a certified mobile three-state enabling device with the unaffected hand (the patients hold the device in the right hand in Figs. 25.8 and 25.14).
• Measurement-based motion analysis has proved that physiotherapy exercises require low speed. To reduce the risk of high-speed collisions, the robot manufacturer has limited the speed of both robots to only 250 mm/s.
• Emergency stop buttons can be found at two places: on the couch and on the operating panel.
• To compensate for the braking distances of the robots in emergency situations, the rigid patient-robot connection is softened by custom-made safety release mechanisms (Fig. 25.12).
• As a final safety measure, a software program called Watchdog monitors the playback of therapy. If excessive deviation occurs in the kinematical or F/T data compared to the reference data recorded during teaching in, the Watchdog generates an emergency signal (a minimal sketch of such a check follows Fig. 25.12). Data of the physiotherapy exercises, the motion trajectories, and the supervision and safety parameters are stored for safety, research, and simulation purposes.

Off-line programming of REHAROB is not allowed, because even if the position of the spastic hemiparetic patient were secured for a series of therapy sessions, an identical motor status of the patient cannot be assumed.
Fig. 25.12. The safety release mechanism, after opening, can adapt safely to external loads
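As referenced in the safety list above, the Watchdog concept reduces to comparing live playback signals against the taught-in reference. The fragment below is a minimal illustrative sketch assuming simple per-channel thresholds; the actual REHAROB supervision parameters and interfaces are not published here.

import numpy as np

def watchdog_ok(reference, measured, pos_tol=0.05, force_tol=30.0):
    """Compare the current sample against the taught-in reference sample.
    reference/measured: dicts with 'pose' (6-vector, m/rad) and 'wrench'
    (6-vector, N/Nm). The tolerances are assumed example values."""
    pose_dev = np.abs(measured["pose"] - reference["pose"]).max()
    wrench_dev = np.abs(measured["wrench"] - reference["wrench"]).max()
    return pose_dev <= pos_tol and wrench_dev <= force_tol

def supervise(ref_stream, meas_stream, estop):
    """During playback, raise the emergency signal on the first excessive
    deviation; estop() would disable the servo-amplifiers via the safety chain."""
    for ref, meas in zip(ref_stream, meas_stream):
        if not watchdog_ok(ref, meas):
            estop()
            return False
    return True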
25.4 Testing and Calibration

The robotic physiotherapy system, which uses two standard synchronized industrial robot arms, is the first system of its kind in the world. To reduce the risk of the first human test, a self-operable artificial dummy limb was designed and developed. The dummy limb satisfies a complex set of requirements:
Fig. 25.13. Testing REHAROB with the artificial limb set as right arm
• Perfect emulation of the kinematical anatomy and joint ranges of the shoulder girdle (2 DOF), the shoulder (3 DOF), and the elbow (2 DOF)
• Adjustable upper arm and lower arm lengths
• Emulation of spasticity (joint resistance) in the shoulder and in the elbow during exercising
• Recording of the joint movements and forces/torques during exercising.

Modeling the spasticity of the 7-DOF human upper limb is still an open problem from the biomechanical point of view, so an experimental approach was used to define spasticity in the control model [16]. In the first step, the relationships between the joint resistance torques and the joint angles, T_zi–θ_i, were extracted from the database of 15 spastic patients recorded during non-robotic exercising (Sect. 25.2 and Fig. 25.3). In the second step, during the tests with the robotic system, arbitrary exercises were taught in with the REHAROB system to the dummy limb in its resistance-free state; we call this "kinematic" teaching in. Finally, the selected spasticity patterns were modulated onto the kinematically taught-in exercise. For the realization of controlled resistance reactions (spastic patterns), each joint of the dummy limb is equipped with a DC motor coupled with a planetary gearbox, except for the joints performing pronation-supination and external-internal rotation, where electromagnetic brakes are used. Joint angles are measured by potentiometric sensors, whilst joint forces are measured by 6-axis Force/Torque sensors built into the upper arm and lower arm segments. The information from all sensors is fed to a Programmable Logic Controller (PLC). The PLC controls the resistance torques of the joints, based on the modulation performed by a Matlab® program run on a personal computer (PC); a sketch of this resistance modulation follows the test-result list below. By completing this procedure the dummy limb is set for the test with the robotic physiotherapy system. The REHAROB Therapeutic System was tested and calibrated (Fig. 25.13) with the 8-DOF spastic hemiparetic artificial limb for 5 days. The tests have proved that:
• Execution of the desired rehabilitation procedure is possible
• The robots play back the edited and taught-in therapy program accurately
• The load on the dummy arm joints is adequate during exercising
• The system is reliable both during normal operation and in emergency situations (e.g. triggering the enabling device, collision, spasm, etc.).
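The resistance modulation mentioned above lends itself to a compact sketch. The code below is an assumed illustration of the idea – interpolating a recorded torque-angle spasticity pattern and commanding it as joint resistance opposing the motion – and not the actual PLC/Matlab® implementation; all numbers are invented example values.

import numpy as np

# Assumed example spasticity pattern for one joint: resistance torque [Nm]
# sampled against joint angle [rad], as extracted from patient recordings.
pattern_angle = np.array([0.0, 0.3, 0.6, 0.9, 1.2])
pattern_torque = np.array([0.5, 1.0, 2.5, 4.0, 5.0])

def resistance_torque(theta, theta_dot):
    """Resistance opposing the motion: the magnitude follows the recorded
    torque-angle pattern, the sign opposes the current joint velocity."""
    magnitude = np.interp(theta, pattern_angle, pattern_torque)
    return -np.sign(theta_dot) * magnitude

# Modulate the pattern onto a kinematically taught-in elbow trajectory.
t = np.linspace(0.0, 4.0, 401)
theta_traj = 0.6 + 0.6 * np.sin(0.5 * np.pi * t)      # taught-in motion
theta_dot = np.gradient(theta_traj, t)
tau_cmd = [resistance_torque(th, thd) for th, thd in zip(theta_traj, theta_dot)]
print(tau_cmd[:5])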
25.5 Clinical Results

According to the European Medical Device Directive [15], the REHAROB Therapeutic System has to be certified before normal medical use. As the first step of this procedure it was put into clinical investigation. It is very important to note that the objective of the clinical trial part of the medical certification procedure is not the study of clinical efficiency but, as quoted:

• to verify that, under normal conditions of use, the performance of the devices conforms to that intended by the manufacturer, as ordered by Section 3 of Annex I: Essential Requirements of the Medical Device Directive [15], and
• to determine any undesirable side-effects under normal conditions of use, and to assess whether they constitute risks when weighed against the intended performance of the device.

The clinical trial of the REHAROB Therapeutic System was designed in compliance with the MDD and the relevant European Standard on clinical trials [17]. The clinical trial started on the 6th of April 2003 with four healthy volunteers. The second and third groups of subjects included patients. Each subject received 30 minutes of net robotic physiotherapy (Fig. 25.14) – excluding sitting in and out and teaching in of the exercises – on 20 consecutive working days. After 7200 minutes of total robotic physiotherapy, the clinical trial ended on the 8th of July 2003.

25.5.1 Subjects of the Clinical Trial

Table 25.2 gives basic information on the subjects included in the clinical trial. Table 25.3 gives brief information on the patients' clinical records.
Table 25.2. Subjects of the clinical trial

Subject         | Sex: Male | Sex: Female | Age: Min | Age: Max | Age: Average
Healthy subject | 1         | 3           | 28       | 44       | 32.5
Patient         | 6         | 2           | 20       | 71       | 46.75
Fig. 25.14. Hemiparetic patient is exercised with the REHAROB Therapeutic System
Table 25.3. Patients' clinical records

Patient # | Sex    | Age | Diagnosis                   | Onset         | Affected hand
R1        | male   | 71  | Ischaemic stroke            | 9 years ago   | right
R2        | male   | 64  | Ischaemic stroke            | 9 years ago   | right
R3        | male   | 26  | Subdural haemorrhage        | 5 years ago   | right
R4        | female | 56  | Subarachnoideal haemorrhage | 3 years ago   | right
R5        | female | 66  | Ischaemic stroke            | 5 weeks ago   | left
R6        | male   | 20  | Epidural haemorrhage        | 9 months ago  | left
R7        | male   | 46  | Basilar artery thrombosis   | 22 months ago | right
R8        | male   | 25  | Cavernoma pontis            | 8 weeks ago   | right
25.5.2 Assessment Results

Each patient was assessed at entry to and at discharge from the robotic therapy sessions. Follow-up assessments will also be made; however, their results are not yet available. Traditional scales were used for the assessment of the impairment status and the disability of the patients. The traditional semi-quantitative scales were the Modified Ashworth Score, the FIM score (the total FIM score and, separately, the self-care score), and the Barthel index. Assessment of the motion ranges at two anatomic joints was made by biomechanical measurements: the CMS HS type Motion Analyzing System of zebris Medizintechnik Ltd was used [18]. Tables 25.4, 25.5, 25.6, and 25.7 show the assessment results.

Table 25.4. Modified Ashworth score of shoulder adductors and elbow flexors of the affected side

Patient # | Shoulder adductors: Admission | Shoulder adductors: Discharge | Elbow flexors: Admission | Elbow flexors: Discharge
R1        | 1 | 1 | 2 | 1
R2        | 2 | 2 | 3 | 2
R3        | 0 | 0 | 1 | 0
R4        | 1 | 1 | 2 | 2
R5        | 2 | 2 | 3 | 2
R6        | 1 | 0 | 2 | 1
R7        | 3 | 2 | 3 | 3
R8        | 0 | 1 | 0 | 1
The average score of the shoulder adductors at admission was 1.25, while at discharge it was 1.125, which means a 10% improvement. The average score of the elbow flexors at admission was 1.75, while at discharge it was 1.375, which means a 21.4% improvement.

Comment: The Ashworth score of patient #R8 increased. When starting the therapy he had flaccid hemiparesis, as is usual during the first weeks after the brain damage. Spasticity appeared later, as is frequent in such cases. We suppose that, without the robot-mediated physiotherapy, the increase in Ashworth score could have been higher.

Table 25.5. Range of movement of elbow flexion-extension and pronation-supination of the affected side

Patient # | Elbow flexion-extension [degrees]: Admission | Discharge | Pronation-supination [degrees]: Admission | Discharge
R1        | 80  | 87  | 133 | 137
R2        | 84  | 96  | 75  | 107
R3        | 106 | 107 | 120 | 124
R4        | 46  | 78  | 35  | 65
R5        | 89  | 99  | 97  | 97
R6        | 71  | 88  | 53  | 59
R7        | 29  | 52  | 47  | 69
R8        | 82  | 71  | 41  | 89
The average elbow flexion at admission was 69.5 degrees, while at discharge it was 84.75 degrees, which means a 21.9% improvement. The average pronation-supination at admission was 75.1 degrees, while at discharge it was 93.4 degrees, which means a 24.3% improvement.
Table 25.6. FIM and self-care scores (self-care is a part of the total FIM score)

Patient # | Total FIM score: Admission | Total FIM score: Discharge | Self-care: Admission | Self-care: Discharge
R1        | 121 | 122 | 42 | 42
R2        | 115 | 122 | 36 | 42
R3        | 106 | 126 | 36 | 42
R4        | 115 | 115 | 36 | 36
R5        | 86  | 89  | 25 | 26
R6        | 98  | 106 | 32 | 35
R7        | 111 | 113 | 36 | 36
R8        | 103 | 115 | 29 | 36
The average FIM (Functional Independence Measure) score at admission was 106.875, while at discharge it was 113.5, which means a 6.2% improvement. The average self-care score at admission was 34, while at discharge it was 36.875, which means an 8.46% improvement.

Table 25.7. Barthel index

Patient # | Admission | Discharge
R1        | 100 | 100
R2        | 100 | 100
R3        | 100 | 100
R4        | 90  | 100
R5        | 70  | 80
R6        | 85  | 95
R7        | 90  | 95
R8        | 65  | 100
The average Barthel index at admission was 87.5, while at discharge it was 96.25, which means a 10% improvement.

25.5.3 Analysis of Assessment Results

The tables show that most of our patients were not seriously disabled; we intentionally selected non-serious cases to start the first clinical trial of the REHAROB Therapeutic System. Most of our patients had suffered their brain damage years before; nevertheless, robotic physiotherapy improved their state regarding both the level of impairment and disability. Proving the clinical and economical efficiency of the robotized physiotherapy will be the objective of a second controlled trial, which is planned for next year. The most important conclusions of the current clinical trial are as follows:

• The robotic physiotherapy system worked continuously, reliably, and safely; there were no delays due to technical or other problems.
• The patients were not afraid of the robots; they found the robotized therapy interesting and useful.
• The physiotherapists easily learnt how to work with the robots; the user interface proved to be really user-friendly.
• Based on the physiotherapists' experience, smaller improvements of the system are planned, such as improvements to the safety release mechanism, the armpit support, the headrest of the couch, and the patient enabling device.
25.6 Conclusions

We believe that medical robotics applications must benefit from the use of mass-produced and reliable industrial robots. The first prototype robotic physiotherapy system has proved that standard industrial robots are suitable for robotic physiotherapy. The REHAROB Therapeutic System has some unrivalled features among the passive physiotherapy machines and robots for spastic hemiparetic patients: REHAROB uses two synchronized robotic arms, and it exercises the spastic limb over the full ranges of the 5 shoulder and shoulder girdle motions as well as the 2 elbow motions. Limits of industrial robot controllers adversely affect the performance of the physiotherapy, but as the controllers evolve to meet industrial market needs these limitations will soon disappear.

All the patients included in the clinical trial have shown significant improvement in their impairment and disability indicators. Patients found the duration, the constancy, the power, and the complexity of the robotic exercises effective and calming compared with traditional manual passive physiotherapy. However, to study REHAROB's true cost/benefit ratio, a second, one-year-long controlled trial is planned, after minor system improvements, for 2004 and 2005 in the framework of the FIZIOROBOT project supported by the Ministry of Health, Social and Family Affairs, Hungary. The cost of the first prototype system is quite high, approx. € 250,000, in comparison with the average patient-day costs, that is € 500 in Europe and € 000 in the USA for large rehabilitation centers. Based on the outcome of the second controlled trial, the REHAROB system can be optimized and prepared for serial production and introduction to the market.

The REHAROB Therapeutic System opens a strong perspective of moving from taught-in passive repetitive exercising to biomechanical-knowledge-based automatic passive, and later purely active, upper and lower limb physiotherapy. In this respect the REHAROB Therapeutic System could cover all the physiotherapy needs of a spastic hemiparetic stroke patient. In the far future a customized physiotherapy and rehabilitation strategy for each patient could be developed and delivered automatically.
Acknowledgement

This research is sponsored under the 5th Framework Programme of the European Commission by the project IST-1999-13109. The authors thank, among the numerous contributors to system design, development, and testing, the physiotherapists Ms Zsuzsa Boros and Ms Györgyi Stefanik, as well as the engineers Mr Mihály Jurák and Mr László Kovács, for their support.
References

1. Feher M, Denes Z (1999) Neuro-rehabilitation in medical rehabilitation (in Hungarian). Medicina Publisher Co, Budapest
2. Taub E, Miller NE, Novak TA, Cook EW, Fleming WC, Nepomuceno, Connell JS, Crago JE (1993) Technique to improve chronic motor deficit after stroke. Arch Phys Med Rehabil 74: 347–354
3. Lum P, Reinkensmeyer D, Mahoney R, Rymer WZ, Burgar C (2002) Robotic devices for movement therapy after stroke: current status and challenges to clinical acceptance. Top Stroke Rehabil 8(4): 40–53
4. Burgar CG, Lum PS, Shor PC, Van der Loos HFM (2000) Development of robots for rehabilitation therapy: The Palo Alto VA/Stanford experience. J Rehabil Res Dev 37(6): 663–673
5. Krebs HI, Volpe BT, Ferraro M, Fasoli S, Palazzolo J, Rohrer B, Edelstein L, Hogan N (2002) Robot-aided neurorehabilitation: from evidence-based to science-based rehabilitation. Top Stroke Rehabil 8(4): 54–70
6. Amirabdollahian F, Loureiro R, Driessen B, Harwin W (2001) Error Correction Movement for Machine Assisted Stroke Rehabilitation. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS Press, Amsterdam, pp 109–116
7. Arz G, Toth A, Fazekas G, Bratanov D, Zlatov N (2003) Three-dimensional Anti-spastic Physiotherapy with the Industrial Robots of "REHAROB". In: Proc 8th Int Conf Rehabilitation Robotics. Korea Advanced Institute of Science and Technology, Daejeon, pp 215–218
8. http://reharob.manuf.bme.hu/publications/Deliverable7 (project report on physiotherapy analysis)
9. Toth A, Arz G, Varga Z, Varga P (2001) Conceptual Design of an Upper Limb Physiotherapy System with Industrial Robots. In: Mokhtari M (ed) Integration of Assistive Technology in the Information Age. IOS Press, Amsterdam, pp 109–116
10. Toth A, Arz G, Varga Z, Varga P, Papp J (2003) Layout Optimization of a Geometrically Complex Rehabilitation Robotic System through Virtual Physiotherapy. In: Proc 8th Int Conf Rehabilitation Robotics. Korea Advanced Institute of Science and Technology, Daejeon, pp 68–71
11. http://www.rolyan.com (manufacturer of rehabilitation equipment)
12. Johnson GR, Valeggi R, Parrini G (1998) MULOS – A new motorised upper limb orthotic system. In: Proc 9th World Cong, Int Soc Prost Orth, Amsterdam, pp 639–647
13. http://www.rimec.it (manufacturer of rehabilitation equipment)
14. Kovacs LL, Toth A, Stepan G, Arz G, Magyar G (2001) Industrial Robot in a Medical Application – Back to Walk-through Programming. In: Pham DT, Dimov SS, O'Hagan VO (eds) Advances in Manufacturing Technology – XV. Professional Engineering Publishing Limited, London, pp 479–484
15. Council Directive 93/42/EEC of 14 June 1993 concerning medical devices
16. Mladenov M, Bratanov D (2002) Simulation of Muscle Resistance Reactions During Upper Limb Rehabilitation Using Intelligent Physical Model. In: Proc 4th Int Conf Techn Aut, Thessalonica, Greece, pp 173–178
17. EN 540: 1993 Clinical investigation of medical devices for human subjects
18. Fazekas G, Feher M, Kocsis L, Stefanik G, Boros Z, Jurak M (2002) Application of kinematical parameters for the assessment and monitoring of central motoneuron impairments (in Hungarian). Clin Neurosci/Ideggy Szle 55(7-8): 268–272
26 STRING-MAN: A Novel Wire Robot for Gait Rehabilitation Dragoljub Surdilovic, Rolf Bernhardt, Tobias Schmidt, and Jinyu Zhang
Abstract This paper presents a novel robotic prototype for advanced gait rehabilitation. The system integrates sophisticated robotic technology with advanced control algorithms and opens up new possibilities within the field of rehabilitation for restoring posture balance and gait motoric functions. The paper provides an overview of the system's fundamental components, such as the mechanical structure, the patient-machine interface, the sensory systems, and the control algorithms.
26.1 Introduction

The development of robotic devices that can objectively examine, analyze, and replicate complex human musculoskeletal movements, as well as apply therapeutic manipulations, remains a challenging research goal. Ongoing research indicates that millions of people worldwide suffer from motoric disabilities caused by neurological injuries and/or joint diseases. The conventional methods and tools for rehabilitation are both time-consuming and labor-intensive. Recovery of the motoric functions is often long and insufficient, which negatively impacts the patient's independence and causes absence from work and school. Recently, it has been widely recognized [1–4] that applying robotic and mechatronic technologies to rehabilitation can significantly alleviate these problems. Initial clinical trials with prototype systems provide evidence that robot-aided training enhances recovery flexibility and efficiency, which significantly improves rehabilitation outcomes and reduces social and health costs. However, significant research efforts are still necessary for the development of reliable and accessible commercial robotic systems, as well as for solving problems which remain open: e.g. the patient interface; optimal tuning and adaptation of the control system to specific subjects; development of safety functions and subsystems in accordance with high standards; and development of new therapies. This paper presents the novel system for gait rehabilitation (STRING-MAN) recently developed at Fraunhofer IPK-Berlin, which opens up new possibilities for restoring posture balance and gait motoric functions.
26.2 Development Goals

STRING-MAN is a powerful robotic system for supporting gait rehabilitation and the restoration of motor functions by combining the advantages of partial body-weight bearing (PWB) with a number of robotic and humanoid control functions. A safe, reliable, and dynamically controlled weight suspension and posture control supports the patients in autonomously performing gait recovery training from the early rehabilitation stage onwards. The system is designed to support the gait restoration of the following patient groups (i.e. indications): neurological disorders (i.e. hemiplegia, paraplegia, cerebral palsy in children, traumatic brain injuries, etc.); orthopedic disorders (i.e. complicated fracture-dislocations with open fixations, simultaneous surgery at extremities, e.g. total knee replacement and tibial osteotomy, total hip replacement, etc.); and bedridden elderly patients who have multiple pathologies, such as cardiac or pulmonary disorders. STRING-MAN combines the following new medical and technical features:

• automatic, comfortable, efficient, and adjustable preparation of the patient for training, including reliable fastening (corsages), automatic suspension (from a wheel-chair), and calibration of the initial standing position (minimal medical staff exertion);
• planning, programming, and realization of repetitive functional gait training taking into account biomechanical patterns as well as specific patient disorders and disabilities (dislocations, spasticity, muscle strength, etc.);
• programmable and dynamically controlled weight bearing on the lower limbs, posture control, and gait balancing (by control of the zero-moment point (ZMP), a key control feature of humanoid robots);
• assessing and supporting the patient's own initiative, efforts, and will; quantitative measurement of the patient's motor functions (e.g. posture control, weight balancing, etc.) needed to assess and document rehabilitation outcomes and improve therapeutic approaches;
• storage of patient data to study the progress of rehabilitation;
• maximum subject comfort and safety during training;
• bringing the patient back to the initial position after the training;
• minimal maintenance and medical staff exertion.
26.3 Robotic Mechanisms Design

The main requirements for the gait rehabilitation robot concern weight bearing and balancing, as well as posture control. Key characteristics of the training include partial unloading of the limb along with assistance of the leg movements on a treadmill. During the first development phase, efforts were directed at developing a weight-bearing module. Several partial-weight-bearing (PWB) systems have recently been developed for support in gait therapy. However, the
advanced PWB systems usually realize only vertical displacements and controlled weight support, which is not sufficient for the control of natural posture and for gait training. Moreover, due to the risk of falling, the patient's own initiative is significantly hindered by these systems. Although robotic gait training prototypes (e.g. REHABOT [3]) may potentially provide better dexterity, the simple user interface not only reduces the transfer of forces to the patient but also decreases the patient's ability to move. In order to overcome the limitations of currently existing systems, the novel rehabilitation system was designed based on the string-puppet principle. As a result, the new STRING-MAN rehabilitation system consists of a wire robot (Fig. 26.1). The wires are connected via a user interface (e.g. harness, corsage) to the human trunk and pelvis, thereby closing the kinematic chains. This robotic structure optimally provides the required capabilities to control the posture in 6 DOFs, as well as to balance the weight on the legs according to different gait patterns and training programs. Moreover, by sensing the interaction forces, the system can quantify the patient's efforts and therefore control the interaction. For example, it can support the patient's own initiative by applying force or impedance control. An innovative and attractive feature of this system is its ability to adjust the interaction control from totally passive to completely active. During rehabilitation, the patient can be loosely harnessed with the minimum amount of wire tension required to monitor the patient's motion. By these means, the subject is able to realistically and safely test his or her balancing capabilities. The system recognizes the risk of the patient collapsing in sufficient time, whereupon it smoothly increases the tension and keeps the patient upright. Finally, the system is able to bring the patient into the initial pose for further trials. This allows not only for an examination of the entire body's ability to balance, but also for training of trunk stabilization with fixed legs. This unique feature, analogous to how children learn to balance, is expected to be quite promising for the improvement of rehabilitation.

The design of wire robots is quite complex and requires sophisticated kinematic and dynamic modeling tools. In order to support the STRING-MAN development, a human gait modeling toolbox, referred to as MATMAN, was developed. This toolbox integrates an arbitrarily configurable wire-robot system with human kinematic and dynamic models (Fig. 26.1). The human is modeled as a rigid-body system with 40 DOFs (7 per extremity, and 6 each for the pelvis and the trunk-head chain) with adjustable height and weight. All models are implemented in the MATLAB/SIMULINK environment in order to realize an easy interface with available control development and simulation tools. The model parameters and gait patterns are generated using available anthropometrical databases and measurements. In order to include the irregularities that are caused by deviations in the patient's gait, several perturbations of the normal gait are introduced. These models are intended mainly for the design and for the control synthesis. Deviations between idealized nominal models and real systems have to be compensated for by using sophisticated control algorithms.
Fig. 26.1. Wire robot development environment
The design task was established as a multi-objective parameter optimization problem. The main requirement in wire robots is to ensure the wire tension is independent of Cartesian loads (i.e. weight-bearing and gait dynamics). In order to realize this, the number of applied wires must be one higher than the number of controlled DOFs (i.e. at least 7 in the system considered here). The tension condition can be expressed as:
τ ( F ) ≥ τ min
(26.1)
where τ is the vector of wire forces, F is Cartesian force vector including Cartesian forces and moments components, while τ min denotes minimum tension. The required redundancy causes the relationship between wire and Cartesian forces to be not-unique. This relation is defined by the mapping
τ = (J_x^T)^+ F + λy        (26.2)

where J_x is the wire-robot Jacobian mapping Cartesian displacements to relative wire displacements. The vector λ spans the null space of the Jacobian,

λ ∈ ker(J_x^T),   y ∈ R^1,        (26.3)

and y is an arbitrary scalar parameter. In order to ensure positive tension, the
components of λ must have the same sign over the entire workspace. Furthermore, good conditioning of the Jacobian is required in order to minimize the interaction forces and increase the manipulability of the body. Finally, to distribute the tensions uniformly among the wires, the null-vector components should be of approximately equal magnitude.
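For illustration, a minimal sketch (in Python with NumPy; the chapter's own models are implemented in MATLAB/Simulink) of the force-distribution scheme of (26.2) and (26.3) might look as follows. The 7-wire, 6-DOF geometry, the rank assumptions, and the choice of y as the smallest scalar satisfying (26.1) are assumptions of this sketch, not details given in the text.

```python
import numpy as np

def wire_tensions(J_x, F, tau_min):
    """Distribute a Cartesian load F over the wires according to (26.2):
    tau = pinv(J_x^T) @ F + y * lam, with lam spanning ker(J_x^T).
    With one degree of actuation redundancy (e.g. 7 wires for 6 DOFs)
    the null space is one-dimensional, and y is chosen as the smallest
    scalar that keeps every wire tension above tau_min, cf. (26.1)."""
    JT = J_x.T                              # 6 x n_wires
    tau_p = np.linalg.pinv(JT) @ F          # minimum-norm particular solution
    _, _, Vh = np.linalg.svd(JT)
    lam = Vh[-1]                            # basis of the 1-D null space of J_x^T
    if np.all(lam < 0):
        lam = -lam                          # orient the null vector positively
    if np.any(lam <= 0):
        # A sign change of the null-vector components means that positive
        # tension cannot be guaranteed at this pose (see the text above).
        raise ValueError("null vector is not sign-definite at this pose")
    y = max(0.0, float(np.max((tau_min - tau_p) / lam)))
    return tau_p + y * lam
```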
Thus the design objective becomes

min { w_1 max(cond(J_x)) + w_2 min(null(J_x)) + w_3 [ max(null(J_x)) − min(null(J_x)) ] }        (26.4)
where w_i (i = 1, …, 3) are weighting coefficients. Various configurations of robot wires have been optimized and tested. A promising system includes 10 wires (Fig. 26.2) attached to the trunk (6 wires) and pelvis (4 wires). This system allows compensation for spine loads during weight bearing; however, its user interface becomes rather complex. Therefore, a more reliable STRING-MAN configuration with 7 wires attached to the trunk has been selected (Fig. 26.3). A total of 19 parameters, specifying the locations of the wire pulleys as well as the trunk attachment points, have to be optimized based on (26.4).
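A literal, illustrative reading of objective (26.4) as a scalar cost evaluated over Jacobians sampled across the working space is sketched below. The sampling strategy, the default weights, and the use of the absolute null-vector components are assumptions, and the Jacobians themselves would come from a (hypothetical) geometry model of the 19 design parameters.

```python
import numpy as np

def design_cost(jacobians, w1=1.0, w2=1.0, w3=1.0):
    """Evaluate the design objective (26.4) over a list of wire-robot
    Jacobians J_x, one per sampled pose of the working space."""
    conds, null_comps = [], []
    for J in jacobians:
        conds.append(np.linalg.cond(J))     # conditioning term
        _, _, Vh = np.linalg.svd(J.T)
        null_comps.append(np.abs(Vh[-1]))   # null-vector component magnitudes
    null_comps = np.concatenate(null_comps)
    return (w1 * max(conds)
            + w2 * null_comps.min()
            + w3 * (null_comps.max() - null_comps.min()))
```

A standard non-linear optimizer (e.g. scipy.optimize.minimize) could then search the 19 pulley and attachment-point parameters for the configuration minimizing this cost.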
Fig. 26.2. Optimal wire robot with trunk and pelvis attachments
The kinematic structure of a wire drive chain is presented in Fig. 26.4. A linear drive controls the wire length via a pulley. A force sensor (F) and a pulley pivot sensor (γ) are introduced to facilitate both the control of the interaction with the human and the computation of the wire end-point position. Position control (i.e. trunk balancing) requires relatively complex computations of both the direct and the inverse wire-robot kinematics. The wire end-point position is computed based on

L = s_max − s + h + R(π/2 + δ) + l        (26.5)

where L is the total wire length, s and s_max denote the actual and maximum linear-drive displacements respectively, R is the radius of the pulley, δ is the envelope angle, and h is a constant distance.
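As a simple illustration of (26.5), consider the following sketch. The symbol l is left undefined in the text; in the code below it is assumed to be the remaining straight wire segment to the end point.

```python
import math

def total_wire_length(s, s_max, h, R, delta, l):
    """Total wire length per (26.5): L = s_max - s + h + R*(pi/2 + delta) + l."""
    return s_max - s + h + R * (math.pi / 2.0 + delta) + l
```

The inverse relation used by the wire position control follows directly: s = s_max + h + R(π/2 + δ) + l − L.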
Fig. 26.3. String-Man configuration
Fig. 26.4. Wire drive structure
The Cartesian end-point positions of the wires are determined from the condition that the wires intersect at a common point attached to the trunk (Fig. 26.3). Measurement of the pulley inclination angles facilitates efficient on-line solution of the non-linear kinematic equations. Indeed, computing the direct kinematics of a wire robot whose wires are interconnected by a stiff common body does not require measurement of the inclination angles (γ). The STRING-MAN, however, is a unique robot in which the common body is the patient's trunk, which is intrinsically flexible. Moreover, a reliable interconnection with a human body must be elastic. Consequently, the wire-robot dimensions are variable and
change not only from patient to patient, but also during the training. Therefore, the actual dimensions of the common body (established by the wire end-points, see Fig. 26.5) are determined by means of the direct wire-kinematic model. The inverse kinematics is then computed, providing nominal linear wire-drive positions for a given position of the body's mid-point (Fig. 26.6).
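The direct-kinematics computation can be illustrated by the following least-squares sketch, which locates the common wire intersection point from the pulley exit points and the free wire lengths derived from (26.5). The Gauss-Newton iteration, the initial guess, and the fixed iteration count are simplifying assumptions; the actual system additionally exploits the measured pulley inclination angles.

```python
import numpy as np

def wire_common_point(exit_points, free_lengths, iters=20):
    """Find the point p where the wires intersect, given each wire's exit
    point (rows of exit_points, shape (n_wires, 3)) and its free length.
    Gauss-Newton iteration on the residuals ||p - a_i|| - l_i."""
    p = exit_points.mean(axis=0)                 # crude initial guess
    for _ in range(iters):
        diffs = p - exit_points                  # (n_wires, 3)
        dists = np.linalg.norm(diffs, axis=1)
        residuals = dists - free_lengths
        J = diffs / dists[:, None]               # Jacobian of the residuals
        step, *_ = np.linalg.lstsq(J, -residuals, rcond=None)
        p = p + step
    return p
```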
Fig. 26.5. Wire-robot development environment
Fig. 26.6. Instantaneous wires common body
26.4 Human/Robot Interface

The user interface is one of the key and most critical components in human-robot interaction systems. The STRING-MAN user interface consists of a pelvic harness, which is reinforced in order to reduce the displacements (“skin effects”) caused by the wire tensions and by the relatively high weight
bearing. The harness has been designed to improve patient comfort during the training (Fig. 26.7).
Fig. 26.7. Patient-interface
26.5 Sensory Systems

Powerful sensory systems are utilized in both medical robot prototypes in order to improve accuracy and reliability, as well as to support the control of interaction. The sensory system of the STRING-MAN involves a foot gait-phase detection sensor, reaction-force (foot force) sensing with zero-moment-point (ZMP) estimation, a knee goniometer, wire-force and linear-actuator position sensors, as well as pulley rotation sensors. The integrated wire sensors support efficient computation of the Cartesian positions of the wire intersection points, as well as of the Cartesian body forces, without measuring patient geometry and biomechanical parameters. The gait-phase detection system distinguishes four phases during walking (stance, heel-off, swing, and heel-strike) and consists of several force-sensitive resistors (Fig. 26.8). The sensors also detect abnormal gait events (e.g. foot-tip strike), which often occur in walking-impaired subjects, and provide valuable information for monitoring weight bearing in the stance phase. A specific problem is that the connection to the human cannot be made absolutely rigid. Since the elasticity of the harness interface perturbs the estimation of the real motion of the human body, an INS sensor (MotionPack-II) has been attached to the
patient's trunk. In order to compensate for drift and measurement noise, the fusion of the relatively low-bandwidth wire-position measurements with the high-dynamic INS measurements is realized using Kalman filtering.
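The chapter does not give the filter equations; a minimal per-axis sketch of such a fusion, with the INS acceleration driving the prediction and the (slower) wire-derived position correcting the drift, might look as follows. The noise levels q and r_wire are illustrative assumptions.

```python
import numpy as np

def fuse_step(x, P, a_ins, dt, q=1e-2, z_wire=None, r_wire=1e-4):
    """One Kalman-filter step for the state x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x = F @ x + B * a_ins                        # predict with the INS input
    P = F @ P @ F.T + Q
    if z_wire is not None:                       # wire position available
        H = np.array([[1.0, 0.0]])
        S = (H @ P @ H.T).item() + r_wire        # innovation covariance
        K = (P @ H.T) / S                        # Kalman gain, shape (2, 1)
        x = x + K[:, 0] * (z_wire - x[0])        # correct the position drift
        P = (np.eye(2) - K @ H) @ P
    return x, P
```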
Fig. 26.8. Gait phase detection system. a gait-phase sensors; b estimated ZMP trajectory under the right and left foot (x and y directions in mm)
26.6 Control Algorithms

The STRING-MAN control system is composed of a powerful set of algorithms. At the lower control layer, the kernel of the control system is a robust position
control for the axes of the wire robot. However, due to geometric uncertainties, the coupled robot system is never operated in pure position-control mode. Instead, the internal position control is only applied to implement robust position-based impedance and force control algorithms. These are essential for the coupled robot-human system, since they control the interaction between robot and human in spite of uncertainties and perturbations. The STRING-MAN control is based on the ZMP concept [5] and uses the measurements of the wire forces and foot reactions in order to estimate and control the ZMP. As demonstrated in [5], it is possible to control both the body reaction and the ZMP location by means of the wire forces. In order to cope with model inaccuracies and ZMP estimation errors, the STRING-MAN system implements a relatively complex control structure that includes several control loops (Fig. 26.9): reaction-force, gait-posture, internal wire-robot, and treadmill control. This control scheme is similar to recent humanoid control approaches; the principal difference is that the STRING-MAN control system uses external wire forces, instead of cooperative dynamic motion of the body segments, to stabilize posture and gait. The scheme includes a basic gait-pattern generator, which generates the desired ground-reaction magnitude and the nominal ZMP location on-line, based on the captured actual gait state, the required percentage of weight suspension, nominal posture data, and subject parameters. These values are compared with the measured (i.e. estimated) values (Fig. 26.9), and the control feedback is closed using Cartesian kinematic and dynamic models of the wire robot and the human gait, which provide the inputs for the internal wire and treadmill control loops (e.g. treadmill velocity, nominal wire/pulley positions, and wire forces).
Fig. 26.9. Global control scheme
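The text does not state the control law explicitly; the general idea of the position-based impedance scheme mentioned above can be sketched per axis as follows, with the force error shaped by an assumed target impedance (M, B, K) into a correction of the position reference. All gains and signals are illustrative.

```python
class AxisImpedance:
    """Per-axis position-based impedance filter: the robust internal wire
    position loop tracks x_nominal + e, where e obeys the target dynamics
    M*e'' + B*e' + K*e = f_measured - f_desired (Euler-discretized)."""

    def __init__(self, M, B, K, dt):
        self.M, self.B, self.K, self.dt = M, B, K, dt
        self.e = 0.0       # impedance displacement
        self.de = 0.0      # impedance velocity

    def reference(self, x_nominal, f_desired, f_measured):
        dde = ((f_measured - f_desired)
               - self.B * self.de - self.K * self.e) / self.M
        self.de += dde * self.dt
        self.e += self.de * self.dt
        return x_nominal + self.e   # command for the internal position loop
```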
In order to support the patient's own initiative during locomotion training, the wire-robot control includes compliance (i.e. impedance) interaction control
[6, 7]. Additionally, control of the interaction with a virtual environment, based on kinaesthetic feedback, has been developed and integrated; it supports stand-alone training and testing of the patient's balancing capabilities. The key element of this algorithm is a virtual envelope of the human trunk and pelvis with programmable 3D spring-damper characteristics (Fig. 26.10). The wire robot is controlled in a so-called free mode, with the minimum tension required to accurately measure the trunk position and orientation. The system detects contact with the virtual environment and computes the interaction forces based on a simplified penetration model. The virtual interaction forces are then rendered through the wire robot. The parameters of the virtual environment are selected so as to protect the patient from injury or collapse; the virtual environment thus keeps the patient's body safely in a desired posture. When the patient abuts the virtual surface, he or she can try to stabilize the posture, or will be supported by additional wire forces to return to the initial pose and test the balance again. The kinaesthetic balance can be controlled in 6D (the patient can slip down, rotate, and translate in all directions) as well as in the 3D rotational space only (only the trunk posture is tested, while the pelvis keeps its vertical position).
Fig. 26.10. Virtual-wall for safe testing of patient’s posture control capabilities
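A minimal sketch of such a penetration model, assuming (for illustration only) a spherical envelope around the nominal trunk position with spring-damper parameters k_wall and b_wall, might look as follows.

```python
import numpy as np

def envelope_force(p, v, center, radius, k_wall, b_wall):
    """Virtual interaction force to be rendered by the wires when the
    trunk position p (with velocity v) penetrates the envelope."""
    d = p - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return np.zeros(3)               # inside the envelope: free mode
    n = d / dist                         # outward normal at the contact
    depth = dist - radius                # penetration depth
    v_n = float(np.dot(v, n))            # outward penetration velocity
    # Spring-damper reaction pushing the trunk back inside the envelope
    return -(k_wall * depth + b_wall * max(v_n, 0.0)) * n
```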
High-level control algorithms provide an interface for the operator to manage the various operating modes, as well as essential information for diagnosis and therapy evaluation. The operation of the STRING-MAN relies mainly on the preparation of the patient and the programming of the balance and interaction control functions. The programming is based on the compliance-control concept that was developed, applied, and tested in a space-robot control system [8] and further developed for human interaction and haptic systems [9]. The first control system versions are realized using an advanced rapid-prototyping control system (dSPACE). However,
after completion of the laboratory tests, these systems will be replaced with standard control components. The first clinical trials are expected to provide significant feedback for improving and further developing the therapy algorithms, in order to obtain more reliable systems that are patient- and operator-friendly.
26.7 Conclusion

The sophisticated STRING-MAN system for gait training provides qualitatively new functions and practical performance for improving gait rehabilitation outcomes. The laboratory testing, tuning, and optimization of the new system are currently in their final stage, and the preparations for clinical testing are underway. The tests are planned at the Neurological Clinic Berlin and should provide significant feedback for improving the control algorithms and for realizing reliable, advanced products.
References

1. Reinkensmeyer D, Hogan N, Krebs HI, Lehman S, Lum P (2000) Rehabilitators, robots and guides: New tools for neurological rehabilitation. In: Winters J, Crago P (eds) Biomechanics and Neural Control of Posture and Movement. Springer-Verlag, Berlin, pp 516–533
2. Aisen ML, Krebs HI, Hogan N, McDowell F, Volpe B (1997) The effect of robot-assisted therapy and rehabilitative training on motor recovery following stroke. Arch. of Neurology 54: 443–446
3. Siddiqi NA, Ide T, Chen MY, Akamatsu N (1997) Computer-aided walking rehabilitation robot. Am. J. Phys. Med. Rehabilitation 73(3): 212–216
4. Hesse S, Schmidt H, Sorowka D, Surdilovic D, Bernhardt R (2002) Automated motor rehabilitation: A new trend? Proceedings EMBEC'02, Vienna, pp 1630–1631
5. Vukobratovic M, Borovac B, Šurdilovic D (2001) Zero-moment point: proper interpretation and new applications. Proceedings IEEE-RAS International Conference on Humanoid Robots, Nov. 22–24, Waseda Center, Tokyo
6. Surdilovic D, Bernhardt R (2000) Robust control of dynamic interaction between robot and human: Application in medical robotics. Proceedings of the German Robotics Conference "Robotik 2000" (VDI Berichte 1552), pp 429–435
7. Surdilovic D (1997) Contact transition stability in impedance control. Proceedings of the IEEE International Conference on Robotics and Automation, Albuquerque, New Mexico, pp 847–852
8. Surdilovic D, Vukobratovic M (2002) Control of robotic systems in contact tasks. In: Nwokah O, Hurmuzlu Y (eds) The Mechanical Systems Design Handbook: Modeling, Measurement and Control. Chapter 23, CRC Press, Boca Raton
9. Surdilovic D, Radojicic J (2003) Robust control of human-robot interaction: Application for motoric rehabilitation. Proceedings of the International Conference on Rehabilitation Robotics ICORR, Daejeon, pp 112–116
27 Great Expectations for Rehabilitation Mechatronics in the Coming Decade

H.F. Machiel Van der Loos, Richard Mahoney, and Chantal Ammi
Abstract This paper discusses future opportunities for applications of mechatronics in rehabilitation. Trends in demographics, computer science, and world-circling wireless communication portend an increasing reliance of the healthcare sector on therapy models that are consumer-driven and technology-dependent. The consumer will be at the information hub, especially in high-cost, long-duration healthcare interventions such as those commonly found in rehabilitation, since smaller financial investments will be expected to deliver more in terms of functional outcomes. People will use computer-controlled devices such as robots for assistive or exercise functions in individualized, socially embedded scenarios. Reliable sensors, adaptive motion control, and robust data-analysis software will change the landscape of rehabilitation practice. In the coming years, the field of robotics will be a key component in enhancing the quality of rehabilitative care and improving the communication between patient and clinical professional.
27.1 Introduction

Today, rehabilitation robotics is a niche research domain with little commercial headway in the larger economics of rehabilitation and assisted living. Two primary pathways are envisioned that will lead to the technology of rehabilitation robotics playing a relevant role in the lives of people with disabilities and in the care of people with short-term physical deficits.

1. Over the next ten years, the expectation that technology will meet the demands of rehabilitation and personal care will grow as demographic changes push the capabilities of the current healthcare marketplace to the limit [1]. There will be unprecedented market needs that can be met by rehabilitation robotics and related technologies. Massive private-sector investment will lead to devices that meet real needs.

2. Visionary developers will channel advances in the computer and consumer-product domains to provide leading-edge solutions and convenience products that enhance the care and independence of disabled and elderly people.
To move rehabilitation robotics to the next level, however, we need either enormous market forces that spur massive investment in development, or innovative thinkers who can create elegant solutions. Are there other paths? In the current healthcare marketplace, the market trends referred to above are only now beginning to gain wider notice. The opportunity for the research community is to be the visionary developers of pathway 2 above: to push the envelope and set the stage for what is possible when relentless market forces, fueled by the need to care for millions of people, spur private-sector investment in bringing those technologies to market.
27.2 Emerging Demographics and Healthcare Trends

The last decades have seen, at least in industrialized countries, a significant increase in the number of disabled people [2, 3]. This has occurred for a number of reasons, most notably:

• The advent of more effective medical treatments and the development of new resuscitation systems have permitted many disabled persons to live longer and with a higher quality of life.
• The increase in life expectancy in general has led to the emergence of a growing population of elderly people, and thus a correspondingly larger population of frail elderly and those with disabling conditions associated with aging, such as stroke.

These increasing segments of the population (for example, the percentage of persons over 65 years of age in the countries listed in Table 27.1 will almost double by the year 2030) have generated changes in the market and are implicated in today's growing medical expenses (see Table 27.1). The most significant societal question is, therefore: Who is going to finance the increasing expenses?

Table 27.1. Demographics of Aging and Disability

Country          # People with    % of Population     # of Elderly    % of Population
                 Disabilities     with Disabilities   People          that is Elderly
France            5,146,000        8.3                12,151,000      19.6
USA              52,591,000       20.0                35,000,000      12.4
Great Britain     4,453,000        7.3                12,200,000      29.5
Netherlands       1,432,000        9.5                 2,118,808      13.4
Spain             3,528,220        8.9                 6,936,000      17.6
Japan             5,136,000        4.3                44,982,000      35.7
Korea             3,195,000        7.1                16,300,000      36.0
Depending on the country, dependent and elderly persons may not have the same rights and financial advantages as younger, able-bodied people. For example:
− Some countries have developed public social systems that finance medical expenses (most of Europe) and even robotic technical assistive aids (e.g., the Netherlands). In these countries, therefore, regardless of professional or personal situation, everybody can benefit from the protection offered by the government.
− Some countries (e.g., the United States) rely mostly on private systems. Only the subscribers, in fact a small part of the demand, benefit from the financial coverage.
− Some countries (e.g., Japan) have a mix of both systems.

Faced with this demand, the promise of technical assistive aids such as personal and service robots is growing; they may in the future become a catalyst that fundamentally changes the market of dependence care and reverses the upwardly spiraling economic problem of payments. For example:

− robots can decrease the number of caregivers needed for dependent people with the highest levels of need,
− medical information networks decrease the costs of transportation and the number of medical devices needed to cover all persons,
− telecommunications and Internet-based networks open up new types of jobs and new models of education, and can therefore generate new vocational opportunities for users,
− smart houses permit people to stay, live, and work at home, simultaneously decreasing medical expenses by reducing reliance on assisted-living centers and nursing homes.

It is encouraging, in fact, that many of these products and services are already on the market in some form and only need some help to be extended to new models of use.

There is another question looming on the horizon, one that is not about who will pay for services: Who will provide the services? As the world's population continues to age, there is a shift in the ratio of older people to younger people. The outcome of this shift is that the pool of people available to care for the elderly and provide other services is diminishing. Money will not be able to solve the problem: there will not be enough people to do the work, no matter what they are paid [4]. There is huge potential for technology to fill this service gap [5]. Once the economic burden of largely non-technology-based care solutions becomes more onerous, financial support from private and public systems will become increasingly available to transform the potential demand into real solutions.
27.3 Emerging Technologies Relevant to Robotics

Base technologies, for example materials, computer-chip design processes, and wireless communication electronics, are often driven by application areas, though rarely by rehabilitation. In the coming decade, base technology development will
continue to make computers faster, cars and planes safer, and communication more facile. Here are some examples of base technologies that will spur rehabilitation robotics.

Nanotechnology and sensors: The robustness of a robot depends most essentially on its transducers, the sensors and actuators with which the robot interacts with both the user and the environment. Today's robots are designed with little if any redundancy in sensing, due to the cost not only of the transducers but also of the wiring, electronics, and software needed to make sense of all the incoming data, and the cost of carrying the weight of redundant motors and power supplies. The most effective means of solving this is miniaturization and modularization. The effect that CCD camera technology has had on optical sensing needs to be replicated in position and force sensing and, projecting out further, in areas such as taste and smell recognition as well.

Distributed design and control: Even with only a dozen sensors and half as many motors, a robot typically carries a huge weight of copper in its wires. An emerging trend is away from the discrete-component, star configuration of most robots, with one controller at the hub of all communication, and toward a distributed system of smart nodes exhibiting semi-autonomous behavior. Coupled with fast, robust, short- and long-range wireless transmission and power distribution, can we envision a robot of the future with essentially no wires anywhere?

Fabrics and flexible architectures: Although we think of robots most often as rigid, stiff links connected by motors, the more robots interact with humans, the more they will need to adapt physically and texturally to people. In fact, some of the next rehabilitation robots will be wearable, like a shirt. Imagine donning something like the LifeShirt™ with flat, unobtrusive motors in the elbows to help a person lift an object; imagine putting on robotic boots with actuators that prevent accidents for persons susceptible to falling, or helmets with trauma-reducing sensing and actuation capabilities. These are the rehabilitation robots of the future, even if they do not resemble today's mechanisms.

Assembly processes: Robots are costly devices not only because of their components or the precision of their assembly, but also because of how they are assembled. Many equally complex consumer electronics are 2- or 2.5-dimensional, with die-cast or extruded, punched metal bodies and components placed vertically into appropriate slots, holes, and cavities in well-defined ways. Fasteners and the device cover keep everything in place. Unless unavoidable, nothing is mounted out of plane; if necessary, a complex subsystem is developed as a module and then inserted vertically. A robot, however, most of the time requires a fully 3-dimensional assembly process. Take a look at an industrial robot: the fasteners are inserted in all directions, and often two hands are needed to assemble parts together. Full automation of final assembly is rare in the robotics industry, while commonplace in many other industries of a similar scale. Reducing robots to the level of consumer-grade mechatronics is a challenge for the future. It has started, but it has a long way to go.

Converging portable computing: The current mainstream marketplace for handheld, portable computers and electronics is extremely turbulent as the market searches for the correct configuration in light of continuing technological
developments. Items such as cellular phones, PDAs, notebook computers, palmtop computers, pagers, and beepers are continuously being reconfigured, so that the differences between them are becoming less discernible. In concert with these developments, the computer industry is investing tremendously in futuristic computing, including higher-speed Internet communication, faster chips, large-capacity high-speed storage media, wireless networks and applications, and flexible display technologies, to name only a few. All of these products are also leading to the adoption of industry standards that will create a foundation for even more developments.

The base technologies discussed above were chosen as examples only. Consider rapid prototyping, biomimetic design, heterogeneous formable materials, or self-repairing structures: many domains will impact the robot development of the future [6, 7].
27.4 Roadblocks and Enablers of Robotic Applications in Rehabilitation

It is easy to dream, and to realize that it is only a matter of investment before applications such as the ones listed above are accomplished. Consider the $150 million that Johnson & Johnson invested in the iBot™ dynamically balancing wheelchair, or the 15 years of development behind the Honda P3™ and Asimo™ autonomous walking robots. We need not only technology visionaries but also financial backers. With a combination of demographics pushing the economics and scientists and engineers pushing the technology [8], rehabilitation and assisted functioning may be the next frontier.

Along with the elder-shift in the demographics of the world's industrialized countries will also come a shift in the population's sense of who we are. The media play a crucial role in stirring the pot of public opinion and enticing us to use the television as a window onto other elements of our society as well as a mirror of ourselves. The entertainment industry, largely unbound from reality, technology, and logic, is free to dream even wilder dreams than philosophers or futurologists, but to sell its products it must still tie back to human emotion and value. Hence it captures us and at the same time lets us live in future technology virtually and vicariously. The media, however, rarely address the consumer path for those products: who developed them, how they are sold, how they are installed, what kind of technical support or field service is required. The media also do not look at the hard problems that insurance companies know are the big reasons why people can no longer live independently: dressing oneself, toileting, bathing and grooming, transferring, preparing meals and eating, mobility. Rehabilitation technology solutions so far cannot compare to the resources applied to the consumer electronics and computer markets. But what will the marketplace be like when disability is mainstream?
27.5 Mechatronic/Robotic Applications to Rehabilitation

In the world of rehabilitation robotics, there are many possible next paths of research and development. Based on the information discussed so far, it seems that the sky is the limit. In the short term, we can easily envision pet robots that soothe our minds as they monitor our pulse and relay alarms to a medical service, a smart home that adapts its lighting, heating, entertainment, and information offerings to our needs, and an assistive kitchen robot that not only cleans the counter, but puts the dishes in the dishwasher, turns it on, and then empties it when done. In the slightly longer term, consider robot systems that might include fully autonomous mobile servants, adaptive information technology to maintain our homes and monitor our health, limb prostheses and mobility systems controlled by neural prostheses, clothes that put themselves on, and injected nano-robots that repair cellular damage and perform location-specific medication release.

The question remains whether these technologies will be spurred on by the need to make rehabilitation affordable, or rather through the model that has worked so well in rehabilitation in the past: borrowing and adapting from mainstream consumer-product development. Some products, such as crutches, electric wheelchairs, or hip implants, have no mainstream uses and will continue to be highly competitive niches in rehabilitation. However, computer interface hardware and software, such as trackballs and voice-recognition algorithms, were not developed for persons with limited hand mobility, but they are significant enabling technologies nonetheless.
27.6 Conclusions

Another perspective is to observe the role that human caregivers play in the lives of the elderly and people with disabilities. Consider home care, nursing homes, inpatient and outpatient therapy, and assisted-living communities. Now consider the needs that emerge if the caregivers are removed from those scenarios. If there are not enough people available, then how will we assist the elderly and disabled in:

• Toileting, bathing, and basic grooming such as brushing teeth, combing hair, wiping the nose, etc.
• Dressing and undressing.
• Exercising.
• Preparing meals and eating.
• Taking medications and going to doctor's visits and therapy.
• Moving around the house, apartment complex, and community.
• Having a conversation and sharing ideas.
• Cleaning, doing laundry, shopping for food, and running errands.
• Pursuing hobbies, learning about world affairs, meeting other people, and attending entertainment and educational activities.
• Living a life that has value and dignity.
The great expectation for rehabilitation mechatronics is defined by this challenge. As researchers and developers, our responsibility is to bridge the gap between technology and this need. Over the last thirty years, rehabilitation robotic devices have tended to be developed as cousins of industrial robots, with further research looking at the next logical steps in controls, sensor integration, and design. This approach, however, when viewed against the vast array of future technologies described above, represents only a drop in the bucket of potential. To meet the future care needs of the world's population, we need visionary developers starting with a blank sheet of paper, willing to tackle the most difficult problems. Because, in the end, meeting the Great Expectations of Rehabilitation Mechatronics is nothing more than meeting our expectations of ourselves.
References

1. Siegel J (1996) Aging into the 21st Century. National Aging Information Center, Bethesda, MD, under contract number HHS-100-95-0017, Administration on Aging, U.S. Department of Health and Human Services (http://pr.aoa.dhhs.gov/aoa/stats/aging21/)
2. Institute for Health and Aging (1997) Chronic care in America: A 21st century challenge. University of California, San Francisco, for The Robert Wood Johnson Foundation, Princeton, New Jersey, p 20
3. Ammi C (2002) Les nouvelles technologies de la santé [New health care technologies]. Hermes Science, Paris, France
4. Mahoney RM (1997) Robotic products for rehabilitation: Status and strategy. Keynote Address, Proceedings of the 1997 International Conference on Rehabilitation Robotics, Bath, England, pp 12–17
5. Stanger C, Cawley M (1996) Demographics of rehabilitation robotics users. Technology & Disability 5(2): 125–138
6. Leifer LJ, Toye G, Van der Loos HFM (1996) Tele-service robot: Integrating the socio-technical framework of human service through the InterNet-World-Wide-Web. International Workshop on Biorobotics: Human-Robot Symbiosis, in Robotics and Autonomous Systems, Vol. 18, Elsevier Press, pp 117–126
7. Van der Loos HFM (2001) Immersive user environments in rehabilitation robotics and mechatronics. Artificial Life and Robotics 4(4): 176–181
8. Van der Loos HFM (1995) VA/Stanford rehabilitation robotics research and development program: Lessons learned in the application of robotics technology to the field of rehabilitation. IEEE Trans. Rehabilitation Engineering 3(1): 46–55
Subject Index
3D motion analyzer 396
active compliance control (ACC) 77
activities of daily living (ADL) 128, 188
aging society 313
arm
-, prosthetic 37
-, Utah/MIT artificial 37
artificial dummy limb 409
assistive technology 133
biomechatronic 233
body weight support (BWS) 333
caregivers 429
characteristic dimensions 149
clinical trial 405
color
- based image processing 105
-, image 146
command architecture 214
communication
- aids 39
- protocols 48
computer chip design 429
computer simulator ROSI 253
control
-, closed-loop 100
-, collaborative 166
critical kinetic energy 201
danger index 190
decision for buying a rehabilitation robot 13
demographic changes 427
device
-, haptic 399
-, medical 402
domotics 133
eating 155
elder-shift 431
electric motor 334
electrically assisted walker 313
exoskeleton 243
expectations 433
eye-mouse 81
feeding aid 31
force limitation 201
force-sensing device 313
gait rehabilitation system 334
gait-phase sensor 420
gravity-balancing 245
hand
-, dexterous 37
-, five fingered 37
-, gestures 149
-, prosthetic 233
haptic technologies 347
health monitoring system 70
heart rate 337
home network 66
hue 147
human-robot interaction 419
IGRIP® robot simulation 397
image-processing 161
impedance control 415
independence of disabled and elderly people 427
indication criteria 224
indirect force control 400
intelligent
- bed robot system 62
- Robotic Home 17
- Sweet Home 59
intention reading 68, 78
interface
-, EMG 81
-, head 82
-, human-friendly 47
-, Human Machine 35
-, operating 155
-, shoulder 82
International Conference on Rehabilitation Robotics (ICORR) 37
joystick 156
KARES 72
kinematic controller 336
laser range finder (LRF) 299
log-polar mapping (LPM) 79
manipulability 367
manipulator 28
-, Assistive Robotic (ARM) 221
-, robotic 335
-, wheelchair mounted service 221
map update 273
market forces 428
mechatronics 25
Medical Device Directive 402
medical treatments 428
miniaturization 430
MIT-MANUS 377
muscular dystrophy 155, 243
My Spoon 155
nanotechnology 430
obstacle avoidance 284
orthosis 243
personal
- care 427
- robotic assistance 129
physiotherapy methods 392
planning
-, path 188
-, task 97
pneumatic actuator 334
portable computing 430
powered orthosis 26
programming
-, demonstration-based 119
- languages 30
quality of life (QOL) 428
quantitative evaluation method 211
Real-Time Application Interface (RTAI) 303
reflex mechanism 206
rehabilitation
-, neuro 347
-, stroke 377
-, walking 315
REHAROB Therapeutic System 392
restriction of the velocity 199
risk 177
- assessment 178
-, tolerable 199
robot
- appliance 129
- assisted therapy 365
-, emotional interactive entertainment 9
-, industrial 392
-, intelligent rehabilitation 4
-, kitchen 432
-, MANUS 165, 221
-, mobile assistive 31
-, pet 432
-, rehabilitation 3, 95, 389, 414
-, wheelchair-based rehabilitation 72
-, wheelchair mounted 33
-, wire 413
-, wire driven 365
-, wrist 377
robot/human contacts 199
Robot Mediated Therapy 37
robotics
-, advanced 26
-, assistive 27, 47, 211
-, rehabilitation 25, 128, 427
robotized physiotherapy 408
safety 199
- control 188
- design 188
- standard 177, 402
sensors
-, exteroceptive 233
-, proprioceptive 233
smart
- homes 47
- houses 10, 429
soft remote control system (soft remocon) 67
Soft Robotic Arm 77
spastic hemiparesis 391
spinal cord injuries 155
stereovision 145
stroke 347, 377, 391
Task-oriented Design (TOD) 72
task-specific robot appliance 127
telematic 133
time
-, double limb support (DLS) 337
-, real 372
-, single limb support (SLS) 337
transferring system 65
ultrasonic sensors 336
underactuated mechanisms 234
unstructured environment 165
upper limb
- impairment 347
- motor rehabilitation 392
virtual
- environments 347
- exercising 397
visual servoing 78, 101, 167
-, image based 167
-, position based 167
vocational opportunities 429
weight relief 334
welfare robot system 144
wheelchair
-, automatically-guided 5, 253
-, iBOT 36
-, intelligent 64, 299
- navigation 258
-, powered 302
-, smart 35
wireless communication 429
workstation 28
zero-moment-point (ZMP) 414
Author Index
Abdulrazak, Bessam 47, 211
Amirabdollahian, Farshid 347
Ammi, Chantal 427
Arz, Gusztáv 391
Avtanski, Alexander 253
Bernhardt, Rolf 413
Bien, Z. Zenn 3, 57, 253
Bratanov, Daniel 391
Cappiello, G. 233
Carrozza, M.C. 127, 233
Celestino, James 377
Chung, Myung Jin 323
Dario, P. 127, 233
Di Lauro, G.A. 127
Driessen, B.J.F. 165
Egawa, Saku 313
Fazekas, Gábor 391
Feki, Mohamed Ali 47
Ferraro, Mark 377
Fukase, Azuma 143
Gallina, Paolo 365
Gräser, A. 95
Grandjean, Bernard 47, 211
Guglielmelli, E. 127
Harwin, William 347
Hillman, Michael 25
Hogan, Neville 377
Hong, Hyun Seok 323
Hoya, Ichiro 143
Ikuta, Koji 187
Ishii, Hideki 187
Ishii, Sumio 143
Ishii, Takeshi 313
Johnson, M.J. 127
Jung, Jik Han 299
Jung, Jin-Woo 57
Kawarazaki, Noriyuki 143
Kim, Byung Kook 299
Kim, Chong Hui 299
Kim, Dae-Jin 57
Koseki, Atsushi 313
Kouzmitcheva, O. 95
Krebs, Hermano Igo 377
Kwon, Han Jo 323
Laschi, C. 127
Lee, Choon-Young 333
Lee, Ju-Jang 333
Loureiro, Rui 347
Mahoney, Richard 427
Martens, C. 95
Mokhtari, Mounir 47, 211
Nishihara, Kazue 143
Nokata, Makoto 177, 187
Oh, Changmok 333
Pape, A. 95
Park, Kwang-Hyun 57
Peters, Geer 221
Rahman, Tariq 243
Roccella, S. 233
Römer, Gert Willem 221
Sample, Whitney 243
Schmidt, Tobias 413
Sebastiani, F. 233
Seliktar, Rahamim 243
Seo, Kap-Ho 333
She, H. 95
Soyama, Ryoji 155
Stefanov, Dimitar 3, 253
Stuyt, Harry 221
Surdilovic, Dragoljub 413
Takeuchi, Ikuo 313
Tejima, Noriyuki 177, 199
Tóth, András 391
Van der Loos, H.F. Machiel 427
Vecchi, F. 233
Versluis, A.H.G. 165
Volosyak, I. 95
Volpe, Bruce 377
Williams, Dustin 377
Woerden, J.A. van 165
Woerden, Koos van 221
Yoo, Dong Hyun 323
Yoshidome, Tadashi 143
Zecca, M. 233
Zhang, Jinyu 413
Zlatov, Nikolay 391
About the Editors
Z. Zenn Bien received his B.S. degree in electronics engineering from Seoul National University, Seoul, Korea, in 1969, and the M.S. and Ph.D. degrees in electrical engineering from the University of Iowa, Iowa City, Iowa, U.S.A., in 1972 and 1975, respectively. During the 1976-1977 academic year, he taught as an assistant professor at the Department of Electrical Engineering, University of Iowa. Dr. Bien then joined the Korea Advanced Institute of Science and Technology (KAIST) in the summer of 1977, and is now Professor of Control Engineering at the Department of Electrical Engineering and Computer Science, KAIST. He was a visiting faculty member at the University of Iowa during his 1981-1982 sabbatical year, a visiting researcher at the CASE Center of Syracuse University, New York, and a visiting professor at the Department of Control Engineering, Tokyo Institute of Technology, during the 1987-1988 academic year. Prof. Bien was the president of the Korea Fuzzy Logic and Intelligent Systems Society during 1990-1995, as well as the general chair of the IFSA World Congress 1993 and of FUZZ-IEEE'99. He is currently co-Editor-in-Chief of the International Journal of Fuzzy Systems (IJFS), Associate Editor of the IEEE Transactions on Fuzzy Systems, and a regional editor of the International Journal of Intelligent Automation and Soft Computing. He has served as Vice President of IFSA (the International Fuzzy Systems Association) since 1997 and is its President from July 1, 2003 until 2005. He was also the 2001 President of the Institute of Electronics Engineers of Korea, and in September 2003 he began his service as the first President of the Korea Robotics Society. At KAIST, Prof. Bien served as Dean of the College of Engineering for two and a half years, and has been the Director of the Human-friendly Welfare Robot System Engineering Research Center since 1999. His current research interests include intelligent and learning control methods, soft computing techniques with emphasis on fuzzy logic systems, service robotics and rehabilitation engineering systems, and large-scale industrial application systems. Prof. Bien has published more than 280 international journal and proceedings papers and 10 contributed papers for edited volumes, and has authored or coauthored 5 technical books.

Dimitar Stefanov received his M.Sc. degree in electrical engineering from the Technical University of Sofia in 1977 and a Ph.D. degree from the Institute
of Mechanics and Biomechanics at the Bulgarian Academy of Sciences, Sofia, Bulgaria, in 1982. His doctoral dissertation was devoted to end-point control of a robot for use by people with severe movement disabilities. As a result of this work, the first Bulgarian rehabilitation robot, HOPE, was developed; the robot was controlled by intentional movements of the head and eyelids. In 1982 Dr. Stefanov joined the Institute of Mechanics and Biomechanics as an Assistant Professor, and between 1982 and 1984 he was promoted to the highest grade of the Assistant Professor scale. In 1990 he achieved accreditation and State Registration as a "habilitated" professor in biomedical engineering and biomechanics, and in the same year he became an Associate Professor at the Institute of Mechanics and Biomechanics. In 1998 he was awarded a research grant of the Hyogo Prefecture, Japan, and from January 1998 to March 1999 he worked at the New Industrial Research Organization (NIRO) in Kobe, Japan, as a Visiting Senior Researcher for a period of 15 months. His work there was oriented toward research on advanced systems for movement assistance of elderly people. From September 2000 to June 2003, Dr. Stefanov was a Visiting Professor at the College of Engineering at KAIST, Daejeon, Korea, where his activities included teaching a course on "Special Topics in Robotics: Design and Control of Devices for Human-Movement Assistance" (during the Spring semester of 2001) and research in the rehabilitation robotics area at the Human-Friendly Welfare Robot System Research Center headed by Prof. Z. Zenn Bien. Since July 2003 he has been with the team of the Rehabilitation Engineering Unit at the Cardiff & Vale NHS Trust, UK. His current research interests include devices for movement independence of disabled and elderly people (rehabilitation robots and powered wheelchairs), human-machine interfaces, the biomechanics of human motion, electronic control, and sensors. Dr. Stefanov has published more than 60 international journal and proceedings papers and is an author or co-author of 15 inventions. For his research activities he received the National Medal for Inventions (1980) and the National Gold Prize for best invention, "Golden Integral" (1983). His name appears in the reference book "2000 Innovators. Who is Who in Bulgarian Innovation". Dr. Stefanov is a Member of the Technical Committee "Man-Machine Systems" of the International Federation for the Theory of Machines and Mechanisms (IFToMM), a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE) and its Engineering in Medicine and Biology Society (EMB Society), a Member of the Union of the Bulgarian Scientists, and one of the founders of the Bulgarian Society of Biomechanics. In March 2003, Dr. Stefanov achieved State registration with the Health Professions Council in the UK. He is currently an Associate Editor of the International Journal of Human-Friendly Welfare Robotic Systems. He also served as a co-chair of the IPC of the Eighth International Conference on Rehabilitation Robotics (ICORR 2003).