Flying Insects and Robots
Dario Floreano · Jean-Christophe Zufferey · Mandyam V. Srinivasan · Charlie Ellington
Editors
Editors

Prof. Dario Floreano, Director, Laboratory of Intelligent Systems, EPFL-STI-IMT-LIS, École Polytechnique Fédérale de Lausanne, ELE 138, 1015 Lausanne, Switzerland, [email protected]

Dr. Jean-Christophe Zufferey, Laboratory of Intelligent Systems, EPFL-STI-IMT-LIS, École Polytechnique Fédérale de Lausanne, ELE 115, 1015 Lausanne, Switzerland, [email protected]

Prof. Mandyam V. Srinivasan, Visual and Sensory Neuroscience, Queensland Brain Institute, University of Queensland, QBI Building (79), St. Lucia, QLD 4072, Australia, [email protected]

Prof. Charlie Ellington, Animal Flight Group, Dept. of Zoology, University of Cambridge, Downing Street, Cambridge CB2 3EJ, UK, [email protected]
ISBN 978-3-540-89392-9    e-ISBN 978-3-540-89393-6
DOI 10.1007/978-3-540-89393-6
Springer Heidelberg Dordrecht London New York

Library of Congress Control Number: 2009926857
ACM Computing Classification (1998): I.2.9, I.2.10, I.2.11

© Springer-Verlag Berlin Heidelberg 2009

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover illustration: Flying insect image reproduced with permission of Mondolithic Studios Inc.
Cover design: KünkelLopka, Heidelberg
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface
Flying insects represent a fascinating example of evolutionary design at the microscopic scale. Their diminutive size does not prevent them from perceiving the world, flying, walking, jumping, chasing, escaping, living in societies, and even finding their way home at the end of a long day. Their size and energy constraints demand extremely efficient and specialized solutions, which are often very different from those that we are accustomed to seeing in larger animals. For example, the visual system of flying insects, which features a compound eye comprising thousands of ommatidia – “little eyes” – represents a dramatic alternative to the design of our own eyes, which we share with all vertebrates and which has driven the design of today’s cameras. Do insect eyes differ from human eyes only superficially with respect to the optical and imaging characteristics, or do the nervous systems of their owners process the information that they receive in different ways? Several aspects of this question are explored in this book. The nervous system of flying insects not only coordinates the perception and motion of the animal at extremely high speed and in very dynamic conditions, but it actively monitors features in the surrounding environment, supports accurate landings on very tiny spots, handles recovery from high turbulence and collisions, directs the exploration of the environment in search of food, shelter, or partners, and even enables the animal to remember how to return to its nest. Flying insects move their wings by using the whole thorax to produce fast, resonating, respiration-like contractions that result in the movement of the wing appendages, whose morphology and constituent materials then modify the basic, passive flapping motion through the air.
As such, these creatures represent a fascinating source of inspiration for engineers aiming to create increasingly smaller and autonomous robots that can take to the air like a duck to water, and go where no machine has gone before. At the same time, robotic insects can serve as embodied models for testing scientific hypotheses that would be impossible to study in numerical simulations because of the difficulty in creating realistic visual environments, capturing the physics of fluid dynamics in very turbulent and low-speed regimes, reproducing the elastic properties of the active and passive materials that make up an insect body, and accurately modelling the perception-action loops that drive the behavior of the system. Despite much recent progress, both the functioning of flying insects and the design of micro flying robots are not yet fully understood, which makes this transdisciplinary area of research extremely fascinating and fertile with discoveries. This book brings together for the first time highly selected and carefully edited contributions from a community of biologists and engineers who share the same passion for
understanding the design principles of flying insects and robots. The book is the offspring of a stimulating meeting with the same title and organizers that was held in the summer of 2007 at Monte Verità in Switzerland. After the meeting, we decided to assemble a carefully edited volume that would serve both as a tutorial introduction to the field and as a reference for future research. In the months that followed, we solicited some of the participants, as well as additional authors whose research was complementary and would fit the book plan, to write chapters for a larger audience. The authors and the editors spent most of 2008 writing, revising, and cross-linking the chapters in order to produce a homogeneous, accessible, and yet up-to-date book.

Approximately half of the book is written from a biological perspective and the other half from an engineering perspective, but in all cases the authors have attempted to use plain terminology that is accessible to both sides and they have made several links and suggestions that cut across the traditional divide between biology and engineering.

The book starts with a description of today’s advanced methods used to study flying insects. After this, the reader is taken through a description of the perceptual, neuronal, and behavioral properties of flying insects and of their implications for the design of sensors and control strategies necessary to achieve autonomous navigation of miniature flying vehicles. Once this ground has been covered, the reader is gradually introduced to the principles of aerodynamics and control suitable for microsystems with flapping wings and to several examples of robots with fixed and flapping wings that are inspired by the principles of flying insects. We encourage readers to photocopy the figures of Chapter 15, cut out the drawings, and assemble a moving model of the thorax, which should provide an intuitive understanding of the typical workout routine of a flying insect.
Two chapters venture into the area of robots that live and transit between the ground and the air. Although these chapters are more speculative from a biological perspective, they highlight the fact that flying insects are also terrestrial animals and that robots capable of a transition between terrestrial locomotion and flight can have several advantages. Finally, the book closes with two engineering chapters: one dedicated to energy supply and the possible use of solar cells to power micro aerial vehicles, and another to the technology available today and in the near future for realizing autonomous, flying micro robots.

We sincerely hope that you will enjoy and learn from this book as much as we did throughout the entire creation and editing of this project. We would like to express our deep gratitude to the contributors of all the chapters, who enthusiastically presented their knowledge and achievements in such a small space and time, and who displayed amazing patience and dedication in revising their own material and that of their colleagues. Last, but not least, we would like to thank Ronan Nugent at Springer for welcoming this project, following it through all its production stages while accommodating many of our requests, and making sure that it is presented in a form that best fits its content both on the Internet and in the printed edition.

Lausanne, Brisbane, Cambridge
12 June 2009
Dario Floreano
Jean-Christophe Zufferey
Mandyam V. Srinivasan
Charlie Ellington
Contents
1 Experimental Approaches Toward a Functional Understanding of Insect Flight Control . . . . . 1
Steven N. Fry

2 From Visual Guidance in Flying Insects to Autonomous Aerial Vehicles . . . . . 15
Mandyam V. Srinivasan, Saul Thurrowgood, and Dean Soccol

3 Optic Flow Based Autopilots: Speed Control and Obstacle Avoidance . . . . . 29
Nicolas Franceschini, Franck Ruffier, and Julien Serres

4 Active Vision in Blowflies: Strategies and Mechanisms of Spatial Orientation . . . . . 51
Martin Egelhaaf, Roland Kern, Jens P. Lindemann, Elke Braun, and Bart Geurten

5 Wide-Field Integration Methods for Visuomotor Control . . . . . 63
J. Sean Humbert, Joseph K. Conroy, Craig W. Neely, and Geoffrey Barrows

6 Optic Flow to Steer and Avoid Collisions in 3D . . . . . 73
Jean-Christophe Zufferey, Antoine Beyeler, and Dario Floreano

7 Visual Homing in Insects and Robots . . . . . 87
Jochen Zeil, Norbert Boeddeker, and Wolfgang Stürzl

8 Motion Detection Chips for Robotic Platforms . . . . . 101
Rico Moeckel and Shih-Chii Liu

9 Insect-Inspired Odometry by Optic Flow Recorded with Optical Mouse Chips . . . . . 115
Hansjürgen Dahmen, Alain Millers, and Hanspeter A. Mallot

10 Microoptical Artificial Compound Eyes . . . . . 127
Andreas Brückner, Jacques Duparré, Frank Wippermann, Peter Dannberg, and Andreas Bräuer

11 Flexible Wings and Fluid–Structure Interactions for Micro-Air Vehicles . . . . . 143
W. Shyy, Y. Lian, S.K. Chimakurthi, J. Tang, C.E.S. Cesnik, B. Stanford, and P.G. Ifju

12 Flow Control Using Flapping Wings for an Efficient Low-Speed Micro-Air Vehicle . . . . . 159
Kevin D. Jones and Max F. Platzer

13 A Passively Stable Hovering Flapping Micro-Air Vehicle . . . . . 171
Floris van Breugel, Zhi Ern Teoh, and Hod Lipson

14 The Scalable Design of Flapping Micro-Air Vehicles Inspired by Insect Flight . . . . . 185
David Lentink, Stefan R. Jongerius, and Nancy L. Bradshaw

15 Springy Shells, Pliant Plates and Minimal Motors: Abstracting the Insect Thorax to Drive a Micro-Air Vehicle . . . . . 207
Robin J. Wootton

16 Challenges for 100 Milligram Flapping Flight . . . . . 219
Ronald S. Fearing and Robert J. Wood

17 The Limits of Turning Control in Flying Insects . . . . . 231
Fritz-Olaf Lehmann

18 A Miniature Vehicle with Extended Aerial and Terrestrial Mobility . . . . . 247
Richard J. Bachmann, Ravi Vaidyanathan, Frank J. Boria, James Pluta, Josh Kiihne, Brian K. Taylor, Robert H. Bledsoe, Peter G. Ifju, and Roger D. Quinn

19 Towards a Self-Deploying and Gliding Robot . . . . . 271
Mirko Kovač, Jean-Christophe Zufferey, and Dario Floreano

20 Solar-Powered Micro-air Vehicles and Challenges in Downscaling . . . . . 285
André Noth and Roland Siegwart

21 Technology and Fabrication of Ultralight Micro-Aerial Vehicles . . . . . 299
Adam Klaptocz and Jean-Daniel Nicoud
Contributors
Richard J. Bachmann, BioRobots, LLC, Cleveland, USA
Geoffrey Barrows, Centeye, Washington, DC, USA, [email protected]
Antoine Beyeler, Laboratory of Intelligent Systems, EPFL, Lausanne, Switzerland
Robert H. Bledsoe, United States Marine Corps and the Naval Postgraduate School, USA
Norbert Boeddeker, Department of Neurobiology & Center of Excellence “Cognitive Interaction Technology”, Bielefeld University, Bielefeld, Germany, [email protected]
Frank J. Boria, Department of Mechanical Engineering, University of Florida, USA
Nancy L. Bradshaw, Faculty of Aerospace Engineering, Delft University of Technology, 2629 HS Delft, The Netherlands; Experimental Zoology Group, Wageningen University, 6709 PG Wageningen, The Netherlands
Andreas Bräuer, Fraunhofer Institute for Applied Optics and Precision Engineering, Albert-Einstein-Str. 7, D-07745 Jena, Germany
Elke Braun, Department of Neurobiology & Center of Excellence “Cognitive Interaction Technology”, Bielefeld University, D-33501 Bielefeld, Germany
Floris van Breugel, Cornell Computational Synthesis Lab, Cornell University, Ithaca, New York, USA, [email protected]
Andreas Brückner, Fraunhofer Institute for Applied Optics and Precision Engineering, Albert-Einstein-Str. 7, D-07745 Jena, Germany, [email protected]
C.E.S. Cesnik, Department of Aerospace Engineering, University of Michigan, Ann Arbor, Michigan, USA, [email protected]
S.K. Chimakurthi, Department of Aerospace Engineering, University of Michigan, Ann Arbor, Michigan, USA, [email protected]
Joseph K. Conroy, Autonomous Vehicle Laboratory, University of Maryland, College Park, MD, USA, [email protected]
Hansjürgen Dahmen, Cognitive Neurosciences, University of Tübingen, Germany, [email protected]
Peter Dannberg, Fraunhofer Institute for Applied Optics and Precision Engineering, Albert-Einstein-Str. 7, D-07745 Jena, Germany
Jacques Duparré, Fraunhofer Institute for Applied Optics and Precision Engineering, Albert-Einstein-Str. 7, D-07745 Jena, Germany
Martin Egelhaaf, Department of Neurobiology & Center of Excellence “Cognitive Interaction Technology”, Bielefeld University, D-33501 Bielefeld, Germany, [email protected]
Ronald S. Fearing, Biomimetic Millisystems Lab, University of California, Berkeley, CA, USA, [email protected]
Dario Floreano, Laboratory of Intelligent Systems, EPFL, Lausanne, Switzerland, [email protected]
Nicolas Franceschini, Biorobotics Lab, Institute of Movement Science, CNRS & University of the Mediterranean, Marseille, France, [email protected]
Steven N. Fry, Institute of Neuroinformatics (INI) & Institute of Robotics and Intelligent Systems (IRIS), Swiss Federal Institute of Technology Zürich, Switzerland
Bart Geurten, Department of Neurobiology & Center of Excellence “Cognitive Interaction Technology”, Bielefeld University, D-33501 Bielefeld, Germany
J. Sean Humbert, Autonomous Vehicle Laboratory, University of Maryland, College Park, MD, USA, [email protected]
Peter G. Ifju, Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL, USA, [email protected]
Kevin D. Jones, Naval Postgraduate School, Monterey, CA, USA, [email protected]
Stefan R. Jongerius, Faculty of Aerospace Engineering, Delft University of Technology, 2629 HS Delft, The Netherlands
Roland Kern, Department of Neurobiology & Center of Excellence “Cognitive Interaction Technology”, Bielefeld University, D-33501 Bielefeld, Germany
Josh Kiihne, United States Marine Corps and the Naval Postgraduate School, USA
Adam Klaptocz, Laboratory of Intelligent Systems, EPFL, Lausanne, Switzerland, [email protected]
Mirko Kovač, Laboratory of Intelligent Systems, EPFL, Lausanne, Switzerland, [email protected]
Fritz-Olaf Lehmann, Institute of Neurobiology, University of Ulm, Albert-Einstein-Allee 11, 89081 Ulm, Germany, [email protected]
David Lentink, Experimental Zoology Group, Wageningen University, 6709 PG Wageningen, The Netherlands; Faculty of Aerospace Engineering, Delft University of Technology, 2629 HS Delft, The Netherlands, [email protected]
Y. Lian, Department of Aerospace Engineering, University of Michigan, Ann Arbor, Michigan, USA, [email protected]
Jens P. Lindemann, Department of Neurobiology & Center of Excellence “Cognitive Interaction Technology”, Bielefeld University, D-33501 Bielefeld, Germany
Hod Lipson, Cornell Computational Synthesis Lab, Cornell University, Ithaca, New York, USA, [email protected]
Shih-Chii Liu, Institute of Neuroinformatics, University of Zürich and ETH Zürich, Zürich, Switzerland, [email protected]
Hanspeter A. Mallot, Cognitive Neurosciences, University of Tübingen, Germany, [email protected]
Alain Millers, Cognitive Neurosciences, University of Tübingen, Germany, [email protected]
Rico Moeckel, Institute of Neuroinformatics, University of Zürich and ETH Zürich, Zürich, Switzerland, [email protected]
Craig W. Neely, Centeye, Washington, DC, USA, [email protected]
Jean-Daniel Nicoud, Didel SA, Belmont, Switzerland, [email protected]
André Noth, Autonomous Systems Lab, ETHZ, Zürich, Switzerland, [email protected]
Max F. Platzer, Naval Postgraduate School, Monterey, CA, USA, [email protected]
James Pluta, United States Marine Corps and the Naval Postgraduate School, USA
Roger D. Quinn, Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, USA, [email protected]
Franck Ruffier, Biorobotics Lab, Institute of Movement Science, CNRS & University of the Mediterranean, Marseille, France, [email protected]
Julien Serres, Biorobotics Lab, Institute of Movement Science, CNRS & University of the Mediterranean, Marseille, France, [email protected]
W. Shyy, Department of Aerospace Engineering, University of Michigan, Ann Arbor, Michigan, USA, [email protected]
Roland Siegwart, Autonomous Systems Lab, ETHZ, Zürich, Switzerland, [email protected]
Dean Soccol, ARC Centre of Excellence in Vision Science, Queensland Brain Institute, University of Queensland, St. Lucia, QLD 4072, Australia, [email protected]
Mandyam V. Srinivasan, ARC Centre of Excellence in Vision Science, Queensland Brain Institute, University of Queensland, St. Lucia, QLD 4072, Australia, [email protected]
B. Stanford, Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL, USA, [email protected]
Wolfgang Stürzl, Department of Neurobiology & Center of Excellence “Cognitive Interaction Technology”, Bielefeld University, Bielefeld, Germany, [email protected]
J. Tang, Department of Aerospace Engineering, University of Michigan, Ann Arbor, Michigan, USA, [email protected]
Brian K. Taylor, Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, USA
Zhi Ern Teoh, Cornell Computational Synthesis Lab, Cornell University, Ithaca, New York, USA, [email protected]
Saul Thurrowgood, ARC Centre of Excellence in Vision Science, Queensland Brain Institute, University of Queensland, St. Lucia, QLD 4072, Australia, [email protected]
Ravi Vaidyanathan, Department of Mechanical Engineering, University of Bristol, Bristol, UK; Department of Systems Engineering, Naval Postgraduate School, CA, USA, [email protected]
Frank Wippermann, Fraunhofer Institute for Applied Optics and Precision Engineering, Albert-Einstein-Str. 7, D-07745 Jena, Germany
Robert J. Wood, Harvard Microrobotics Laboratory, Harvard University, Cambridge, MA, USA
Robin Wootton, School of Biological Sciences, Exeter University, Exeter EX4 4PS, UK, [email protected]
Jochen Zeil, ARC Centre of Excellence in Vision Science and Centre for Visual Sciences, Research School of Biological Sciences, The Australian National University, Biology Place, Canberra, ACT 2601, Australia, [email protected]
Jean-Christophe Zufferey, Laboratory of Intelligent Systems, EPFL, Lausanne, Switzerland, [email protected]
Chapter 1
Experimental Approaches Toward a Functional Understanding of Insect Flight Control

Steven N. Fry
Abstract This chapter describes experimental approaches exploring free-flight control in insects at various levels, in view of the biomimetic design principles they may offer for MAVs. Low-level flight control is addressed with recent studies of the aerodynamics of free-flight control in the fruit fly. The ability to measure instantaneous kinematics and aerodynamic forces in free-flying insects provides a basis for the design of flapping airfoil MAVs. Intermediate-level flight control is addressed by presenting a behavioral system identification approach. In this work, the motion processing and speed control pathways of the fruit fly were reverse engineered based on transient visual flight speed responses, providing a quantitative control model suited for biomimetic implementation. Finally, high-level flight control is addressed with the analysis of landmark-based goal navigation, for which bees combine and adapt basic visuomotor reflexes in a context-dependent way. Adaptive control strategies are also likely suited for MAVs that need to perform in complex and unpredictable environments. The integrative analysis of flight control mechanisms in free-flying insects promises to move beyond isolated emulations of biological subsystems toward a generalized and rigorous approach.
S.N. Fry, Institute of Neuroinformatics, University of Zurich and ETH Zurich; Institute of Robotics and Intelligent Systems, ETH Zurich, e-mail: [email protected]
1.1 Introduction

Flying insects achieve efficient and robust flight control despite size constraints and hence limited neural resources [6, 12]. This is achieved from closely integrated and often highly specialized sensorimotor control pathways [19], making insects an ideal model system for the identification of biological flight control mechanisms, which can serve as design principles for future autonomous micro-air vehicles (MAVs) [18, 16, 38, 62].

While the implementation of biomimetic design principles in MAVs and other technical devices is inherently appealing, such an approach has its pitfalls that can easily lead to misconceptions [54]. A first problem relates to the immense complexity of biological systems, in particular flight control mechanisms. The multimodal sensorimotor pathways represent a high-dimensional control system, whose function and underlying physiology are understood only partially. A second problem relates to the often substantially different spatial and temporal scales of insects and MAVs. A meaningful transfer of a control mechanism identified in a small insect to its typically much larger robotic counterpart is non-trivial and requires detailed knowledge of the system dynamics. For example, it is not obvious how to control a robot based on motion processing principles derived from insects [41, 62], which perform maneuvers much faster and based on completely different locomotion principles than their robotic counterparts.

This chapter presents recent experimental approaches aimed at a functional understanding of insect flight control mechanisms. To this end, flight control in insects is addressed at various levels, from the biomechanics of flapping flight to flight control strategies and high-level navigational control. The experimental approaches share in common a detailed analysis of the time-continuous processes underlying the control of free flight under highly controlled and yet meaningful experimental conditions.
1.1.1 Chapter Overview

• Low level: Biomechanics. A formidable challenge for the design of MAVs is to generate sufficient aerodynamic forces to remain aloft, while controlling these forces to stabilize flight and perform maneuvers. The sensorimotor system of insects has evolved under size constraints that may be quite similar to those of MAVs. Consequently, the biological solutions enabling flight in these small animals may provide useful design principles for the implementation of MAVs. The first example in this chapter addresses low-level flight control with a detailed description of free-flight biomechanics in the fruit fly Drosophila [29, 30]. 3D high-speed videography and dynamical force scaling were combined to resolve the movements of the wings and the resulting aerodynamic forces. The time-resolved analysis reveals aerodynamic and control requirements of insect flight, which are likewise essential to MAV design [60, 17, 16, 58]; also see Chaps. 11–16.

• Intermediate level: Visuomotor reflexes. To navigate autonomously in a cluttered environment, MAVs need to sense objects and produce appropriate responses, such as to avoid an impending collision. Insects meet this challenge with reflexive responses to the optic flow [33], i.e., the perceived relative motion of the environment during flight. The so-called optomotor reflexes mediate various visual flight responses, including attitude control, collision avoidance, landing, as well as control of heading, flight speed, and altitude. Optomotor reflexes provide a powerful model system to explore visual processing and flight control principles, reviewed in [12, 21, 46, 50]; also see Chaps. 2, 4, and 17. The second example in this chapter describes a rigorous system analysis of the fruit fly’s visual flight speed response using TrackFly, a wind tunnel equipped with virtual reality technology [27]. The identification of the control dynamics in the form of a controller provides a powerful strategy to transfer biological control principles into the robotic context, including MAVs [17, 16]; also see Chap. 3.

• High level: Landmark navigation. Autonomous MAVs should ultimately be able to flexibly solve meaningful tasks, such as navigate through cluttered, unpredictable, and potentially dangerous environments, and safely return to their base. Here, too, insects can serve as a model system, as some species show the impressive ability to acquire the knowledge of specific locations in their environment (e.g., nest, food site), which they repeatedly visit over the course of many days [56]; also see Chaps. 2 and 7. The third example in this chapter describes an experimental approach aimed to explain landmark-based goal navigation in honey bees from a detailed analysis of individual maneuvers. Goal navigation is explained with basic sensorimotor control mechanisms that are combined and modified through the learning experience. The ability to achieve robust, adaptive, and flexible flight control as an emergent property of basic sensorimotor control principles offers yet more interesting options for the design of autonomous MAVs with limited built-in control circuits.
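The controller actually identified for the fruit fly is developed later in this chapter and in [27]; purely as an assumed illustration of the behavioral system identification idea, the sketch below fits a first-order lag to a hypothetical flight-speed step response by grid search (all names and numbers are invented for the example):

```python
import numpy as np

def lag_step(t, gain, tau):
    """Step response of a first-order lag, a common model class in
    behavioral system identification."""
    return gain * (1.0 - np.exp(-t / tau))

def fit_tau(t, y, candidates):
    """Grid search for the time constant minimizing squared error;
    the gain is fixed to the observed steady-state value."""
    gain = y[-1]
    errors = [np.sum((y - lag_step(t, gain, tau)) ** 2) for tau in candidates]
    return candidates[int(np.argmin(errors))]

# Hypothetical data: a fly settling to 0.5 m/s with a 0.25 s time constant
t = np.linspace(0.0, 2.0, 400)
y = lag_step(t, 0.5, 0.25)
tau_hat = fit_tau(t, y, np.arange(0.05, 1.0, 0.05))
```

A real identification would use measured transient responses and a richer model set (delays, higher-order dynamics); the grid search merely illustrates how a compact controller description can be recovered from behavioral data.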
1.2 Low-Level Flight Control – Biomechanics of Free Flight

A detailed knowledge of flight biomechanics provides the foundation for our understanding of biological flight control strategies – or their implementation in MAVs. Flight control is ideally studied in free flight, in which the natural flight behavior of an insect can be measured under realistic sensory and dynamic flight conditions. To understand how a flapping insect stabilizes its flight and performs maneuvers, the underlying mechanisms must be studied at the level of single wing strokes – not an easy task, considering the tiny forces and short timescales involved. The example described in this section shows how the application of 3D high-speed videography and dynamic force scaling using a robotic fly wing allowed such a detailed analysis of free-flight biomechanics to be performed in the fruit fly Drosophila, a powerful model system for the design of flapping airfoil MAVs [18, 17, 16, 58].
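The dynamic force scaling mentioned above rests on matching the Reynolds number of the insect wing, so that a large, slow robotic wing in a viscous fluid reproduces the fly's fluid dynamics. The sketch below uses representative order-of-magnitude values assumed for illustration (wing length, stroke amplitude and frequency, fluid viscosities); they are not the actual experimental parameters:

```python
import math

def reynolds(freq_hz, amplitude_rad, wing_length_m, chord_m, nu_m2s):
    """Reynolds number based on mean wingtip speed (2 * amplitude *
    frequency * wing length) and mean chord."""
    u_tip = 2.0 * amplitude_rad * freq_hz * wing_length_m
    return u_tip * chord_m / nu_m2s

# Fruit fly in air (representative values): ~2.5 mm wing, ~200 Hz,
# ~160 deg stroke amplitude, air kinematic viscosity ~1.5e-5 m^2/s
re_fly = reynolds(200.0, math.radians(160), 2.5e-3, 0.8e-3, 1.5e-5)

# Robot wing in mineral oil: solve Re = (2*amp*f*L)*c/nu for the flapping
# frequency that reproduces the fly's Reynolds number at the larger scale
L_rob, c_rob, nu_oil = 0.25, 0.08, 1.15e-4
f_rob = re_fly * nu_oil / (2.0 * math.radians(160) * L_rob * c_rob)

def backscale(force_robot, rho_f, u_f, s_f, rho_r, u_r, s_r):
    """With Re and geometry matched, force coefficients are equal, so fly
    forces follow from robot forces via the dynamic-pressure-times-area ratio."""
    return force_robot * (rho_f * u_f**2 * s_f) / (rho_r * u_r**2 * s_r)
```

With these assumed numbers the fly operates at a Reynolds number on the order of 100, and the matched robot wing flaps at a small fraction of a hertz, which is what makes stroke-resolved force measurement practical.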
1.2.1 Research Background

The aerodynamic basis of insect flight has remained enigmatic due to the complexities related to the intrinsically unsteady nature of flapping flight [57]. A solid theoretical basis for quantitative analyses of insect flight aerodynamics was provided by Ellington’s influential theoretical work based on time-averaged models [22]. At the experimental level, dynamically scaled robotic wings provided the technological breakthrough allowing aerodynamic effects to be explored empirically at the timescale of a single wing stroke [44].

1.2.2 Experiments

To perform a time-resolved biomechanical analysis of free flight, the wing and body movements of fruit flies were recorded using 3D high-speed videography (Fig. 1.1A). For this, hungry flies were released into a small flight chamber (side length 30 cm). Attracted by a black cylindrical cup filled with vinegar, the flies approached the center of the chamber, where they often hovered before landing on the cup or instead performed a fast turning maneuver (saccade) in order to avoid colliding with it. These flight sequences were filmed using three orthogonally aligned high-speed cameras, whose lines of sight intersected in the middle of the flight chamber. Next, the wing and body kinematics were extracted using a custom-programmed graphical user interface (Fig. 1.1B). The wing kinematics were then played through a dynamically scaled robotic wing (Robofly, Fig. 1.1C) to measure the aerodynamic wing forces (arrows in Fig. 1.1B) throughout the filmed flight sequence. Combining the kinematic and force data allowed a direct calculation of instantaneous aerodynamic forces, torques, and power (see below).

1.2.2.1 Hovering Flight
Fig. 1.1 Measurement of kinematics and forces. (A) Setup: Flies were filmed with three orthogonally aligned high-speed (5000 fps) cameras (HS). Arrays of near-infrared light emitting diodes (LEDs) were used for back-lighting. Flies were attracted to a small cylindrical cup (C), in front of which they were filmed within the small overlapping field of view of the cameras (shown as a wire-frame cube). (B) Kinematic extraction: Wing and body kinematics were measured using a graphical user interface, which allowed a user to match the wing silhouettes and the positions of the head and abdomen to obtain their 3D positions. Arrows show the aerodynamic forces measured using Robofly. (C) “Robofly”: Plexiglas wings (25 cm in length) were flapped in mineral oil at the appropriate frequency to match the Reynolds number of the fly’s flapping wings in air (and hence the fluid dynamics). The up-scaled fluid dynamic forces were measured with strain gauges on the wings and the aerodynamic forces acting on the fly’s wings (shown in B) were back-calculated. Figure modified from [30]

Hovering flight offers itself for an analysis of the aerodynamic requirements of flapping flight without the complications resulting from body motion. The analysis of such a hovering sequence, consisting of six consecutive wing strokes, is shown in Fig. 1.2. The precisely controlled wing movements are characterized by a high angle of attack and maximal force
Fig. 1.2 Hovering flight. (A) Wing kinematics and flight forces. Data from six consecutive wing beats are shown. For a definition of the stroke angles refer to [30]. (B) Wing motion and forces. Successive wing positions during a hovering stroke cycle are shown with matchstick symbols (dots show the leading edge). Instantaneous aerodynamic forces are shown with arrows. Axes indicate horizontal (±90°) and vertical (±10°) stroke positions. The arrow shown with the fly shows the aerodynamic force averaged over the stroke cycle. (C) Quasi-steady analysis of instantaneous aerodynamic force. The measured force is shown together with the force predicted by a quasi-steady model [20]. (D) Instantaneous pitch torque. Pitch torque (black line ±S.D.) oscillates around a mean of zero during hovering. (E) Instantaneous specific flight power (in W/kg muscle mass). Traces show total mechanical power (±S.D.), which is composed of the aerodynamic and inertial power required for wing motion. Figure modified from [30]
production during the middle of the downstroke and the early upstroke (Fig. 1.2A, B). As expected for hovering, the drag of the downstroke and that of the upstroke cancel out, while the mean lift offsets the fly's weight. The aerodynamic forces generated by the wing movements are largely explained by a quasi-steady model that takes into account translational and rotational effects of the wing motion (Fig. 1.2C). The main discrepancy between the modeled and measured forces is a phase delay, which is likely due to unsteady effects (e.g., wing–wake interactions) not considered here. A further requirement of stable hovering flight is a precise balance of the aerodynamic torques over the course of a wing stroke. To maintain a constant body pitch, for example, the substantial torque peaks
generated throughout the stroke cycle must cancel each other out precisely, as shown in Fig. 1.2D. Finally, the instantaneous power was calculated directly from the scalar product of wing velocity and the forces acting on the wings (Fig. 1.2E). The power associated with aerodynamic force production peaks around the middle of each half-stroke, when aerodynamic forces and wing velocity are maximal. Conversely, the power required to overcome wing inertia reverses its sign during each half-stroke due to the deceleration of the wings toward the end of each half-stroke. The total mechanical power, the sum of these two components, is positive for most of the wing stroke. Power is negative when the wings decelerate while producing little aerodynamic force, which occurs briefly toward
Experimental Approaches to Insect Flight Control
the end of each half-stroke. During this phase, the mechanical power could be stored elastically and partially retrieved during the subsequent half-stroke to reduce the total power requirements. The potential reduction of flight power in fruit flies, however, is quite limited (on the order of 10%).
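The power bookkeeping described above can be sketched numerically: aerodynamic power as work done against drag, inertial power as I * omega * alpha, and total power as their sum, which dips below zero only briefly near each stroke reversal. All constants below are assumed, illustrative values, not the chapter's measurements; the aerodynamic force is modeled as a simple drag term, so aerodynamic power scales with the cube of wing speed.

```python
import math

I_WING = 4.0e-15   # wing moment of inertia (kg m^2), assumed
AERO_K = 8.3e-15   # lumped drag coefficient (aero power = AERO_K*|omega|^3), assumed
FREQ   = 200.0     # wingbeat frequency (Hz), assumed
PHI    = math.radians(70)   # stroke angle amplitude (rad), assumed

def powers(t):
    """Aerodynamic, inertial and total mechanical power for a sinusoidal stroke."""
    w = 2.0 * math.pi * FREQ
    omega = PHI * w * math.cos(w * t)        # angular velocity of the wing
    alpha = -PHI * w ** 2 * math.sin(w * t)  # angular acceleration
    p_aero = AERO_K * abs(omega) ** 3        # always >= 0 (work against drag)
    p_inert = I_WING * omega * alpha         # < 0 while the wing decelerates
    return p_aero, p_inert, p_aero + p_inert

# Fraction of the stroke cycle with negative total power, i.e. the candidate
# phase for elastic energy storage discussed above:
ts = [i / (FREQ * 100.0) for i in range(100)]
neg = sum(1 for t in ts if powers(t)[2] < 0.0)
print(f"negative total power during ~{neg}% of the cycle")
```

The sign reversal of the inertial term at mid-half-stroke, and the brief negative-total-power window near each reversal, fall out of the kinematics alone.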
1.2.2.2 Maneuvering
This approach was taken further to explore how flies modify their wing kinematics during flight maneuvers. Flight sequences containing saccadic turning maneuvers were filmed and the aerodynamic wing forces were again measured using Robofly. Figure 1.3A shows wing tip trajectories during such maneuvers, labeled according to the yaw torque they produced. At the onset of a turn, the outside wing tilts backward and its amplitude increases (light gray tip trajectories). Conversely, the inside wing tilts forward and its amplitude decreases (dark gray trajectories). The resulting difference in yaw torque generated by the two wings is sufficient to accelerate the fly to over 1000◦/s within about five wing strokes [29]. The changes in stroke plane angle and stroke amplitude over the course of a saccade are shown in Fig. 1.3B. At maximal acceleration near the onset of the saccade, the difference in stroke amplitude between the outside and inside wing is only around 5◦, while the stroke plane angle differs by a mere 2◦. Even during extreme flight maneuvers, therefore, the changes in wing kinematics are quite small.

Fig. 1.3 Maneuvering. (A) Changes in wing kinematics associated with yaw torque production during saccades. Wing tip trajectories were measured during free-flight turning maneuvers (saccades). Wing tip trajectories associated with high and low aerodynamic yaw torques are shown as light and dark gray lines, respectively. (B) Bilateral changes in wing kinematics during a turning maneuver. At the onset of a turn, the outside wing increases the stroke amplitude and stroke plane angle (backward tilt) relative to the inside wing. Figure modified from [29]

1.2.3 Conclusions

The physical constraints may be similar for MAVs and flies operating at similar size scales, and the flight control mechanisms evolved in insects can therefore provide valuable design principles for MAVs. The application of high-speed videography and aerodynamic force measurements using dynamically scaled robots provides detailed insights into the requirements of insect flight control that can help identify important design constraints for MAVs. This analysis reveals critical aspects of flight control in Drosophila that also need to be considered in MAV design. Precise and fast wing actuation appears most critical for flight control. As shown by the example of pitch torque, the instantaneous torques produced by the wings vary considerably and must be precisely balanced over the course of a stroke cycle. As shown by the analysis of yaw torque during turning maneuvers, even subtle changes in wing motion are sufficient to induce fast turns within a few wing beats. Precise and fast sensorimotor control loops are obviously required for flight control using similar morphologies. The experiments also indicate less critical features of flapping flight control, at least at the size scale of the fruit fly. Wing stiffness and surface structure, for example, may be relatively unimportant under certain conditions. The simple Plexiglas wing used in Robofly was sufficient to reproduce the required aerodynamic forces without the need to mimic the quite complicated structure of the fly's wing. The feasibility of a stiff, light-weight wing structure for flapping lift production was recently demonstrated in [59]. As a further simplification, quasi-steady mechanisms dominate aerodynamic force production in flapping wings, at least in the fruit fly. Unsteady effects, such as wing–wing and wing–wake interactions, play a minor role, such that simple analytical tools can be applied, at least to a first approximation. Finally, elastic storage plays a comparatively small role given the small mass of the wings and therefore does not present a significant design constraint. In conclusion, the measurement of instantaneous wing positions in free flight, together with the aerodynamic forces measured with the robotic wing, is sufficient to robustly quantify several relevant aspects of flight biomechanics. The example of the fruit fly, itself a powerful model for MAV design [61, 60], reveals a suitable strategy to get flapping MAVs off the ground. The next substantial challenge is to actively stabilize flight, for which exceedingly fast and precise wing control is required. The impressive advances in flight biomechanics provide a solid foundation for biomimetic MAV design that takes into account the requirement for flight control.
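The saccade numbers quoted in this section allow a rough plausibility check of the torques involved. The moment of inertia and wingbeat frequency below are assumed order-of-magnitude values, not data from [29]; the point is only that a very small yaw torque, sustained over a few wing strokes, suffices for the quoted turning rates.

```python
import math

WINGBEAT_HZ = 200.0    # assumed wingbeat frequency of Drosophila
I_YAW = 5.0e-13        # assumed body yaw moment of inertia (kg m^2)

omega_final = math.radians(1000.0)   # ~1000 deg/s, as quoted in the text
t_accel = 5.0 / WINGBEAT_HZ          # about five wing strokes (s)
alpha = omega_final / t_accel        # mean angular acceleration (rad/s^2)
torque = I_YAW * alpha               # mean yaw torque (N m)

print(f"mean yaw torque ~ {torque:.1e} N m over {t_accel * 1000:.0f} ms")
```

A torque in this sub-nanonewton-meter range results from the few-degree kinematic asymmetries described above, which illustrates why wing actuation precision is the binding constraint.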
1.3 Intermediate-Level Flight Control – Visuomotor Reflexes

An intermediate level of flight control involves reflexive responses to sensory input. On the one hand, such reflexes can mediate corrective maneuvers that recover from disturbances and unstable flight conditions, thereby increasing dynamic stability. On the other hand, they can mediate flight maneuvers that respond suitably to an unpredictable environment. For example, an MAV equipped with motion sensors can sense an object appearing in front and respond with an avoidance maneuver to prevent a collision. Equipped with the appropriate sensors and flight control strategies, autonomous MAVs can navigate more safely and efficiently within cluttered and unpredictable environments (Chaps. 3 and 8). The extremely efficient and robust visuomotor reflexes of insects can provide design principles for biomimetic flight control strategies in autonomous MAVs. The second part of this chapter describes behavioral experiments aimed at a rigorous system
identification of visuomotor control pathways in the fruit fly. The characterization of biological control pathways in the form of a control model allows more direct and meaningful transfer of biological flight control principles into a robotic context, including MAVs [17, 16].
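As a toy illustration of such a reflex, the fragment below implements a hypothetical collision-avoidance rule of the kind sketched above for an MAV: compare left and right image motion and trigger a saccade-like open-loop turn away from the side with the stronger flow. The threshold and turn rate are invented for illustration and do not come from the chapter.

```python
def avoidance_command(flow_left, flow_right, threshold=1.0, turn_rate=300.0):
    """Yaw-rate command in deg/s (positive = turn right): zero while image
    motion is weak and balanced, otherwise a saccade away from the side
    with the stronger (i.e., nearer) flow field."""
    if max(flow_left, flow_right) < threshold:
        return 0.0
    return -turn_rate if flow_right > flow_left else turn_rate

assert avoidance_command(0.2, 0.3) == 0.0      # open space: fly straight
assert avoidance_command(0.4, 2.0) == -300.0   # obstacle on the right: turn left
assert avoidance_command(2.0, 0.4) == 300.0    # obstacle on the left: turn right
```

The appeal of such a reflex for a small MAV is that it needs only two coarse motion signals and no explicit distance estimation.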
1.3.1 Research Background

Pioneering experiments explored the transfer properties of optomotor turning reflexes in insects. This was achieved with a simple preparation, in which insects were stimulated using a rotating drum and their intended turning responses measured using elegant techniques [36, 23, 34]. The response tuning of optomotor turning reflexes provides the foundation for a cohesive theory of optic flow processing in insects to this day, reviewed in [6, 4, 37, 5]; see also Chaps. 4 and 5. While tethering provides a simple method to deliver stimuli without influence of the behavioral reactions (referred to as open loop in the biological literature), the results of such experiments are difficult to interpret functionally [49]. Tethering disrupts various reafferent feedback circuits, which leads to significant behavioral artifacts [30] and prevents the analysis of flight control under realistic dynamical conditions. Visual reflexes have also been extensively explored in free flight, reviewed in [12, 6, 46], in which case the visual input is coupled to the insect's flight behavior (natural closed-loop condition) [45, 49]. This coupling hinders a rigorous system identification because the stimuli are no longer under complete experimental control. Nevertheless, data obtained in this way can provide valuable insight into flight control mechanisms [40, 11, 3], Chap. 2. A simpler behavioral analysis becomes possible from measuring free-flight behavior under steady-state conditions [15, 47, 2], but cannot address questions relating to flight control dynamics.
1.3.2 Experiments

A functional understanding of biological flight control principles that can be meaningfully transferred into MAVs requires careful consideration of the
multimodal reafferent pathways. Below I describe a recent experimental approach aimed at a system identification of visuomotor pathways that can serve as design principles for MAV control.
1.3.2.1 System Analysis of Visual Flight Speed Control Using Virtual Reality Display Technology

To perform a system identification of the fruit fly's motion-dependent flight speed control pathways, a wind tunnel was equipped with virtual reality display technology (TrackFly [27], Fig. 1.4). An automated procedure was implemented to induce flies to fly to the center of the wind tunnel and then stimulate them with horizontally moving sine gratings of defined temporal frequency (TF), spatial frequency (SF), and contrast. To hold the linear image velocity (defined as TF/SF) constant in the fly's eyes, the grating speed was adjusted continuously to compensate for the displacement of the fly (one-parameter open-loop paradigm). The automated high-throughput system allowed a large data set of visual responses to be measured for a broad range of temporal and spatial frequencies. The results show that fruit flies respond to the linear velocity (TF/SF) of the patterns, which serves as a control signal for flight speed [27]. The tuning properties of visual flight speed responses differ from those of optomotor turning responses, which instead show a response maximum at a particular temporal frequency (TF) of the displayed patterns [34]. Next, system identification procedures were applied to obtain a controller that was able to reproduce the transient open-loop response properties, i.e., the transient changes in flight speed after onset of the optic flow stimulus. The controller was then used to predict the speed responses under more realistic visual closed-loop conditions, and the results were confirmed with data obtained from flies tested in closed loop. A detailed quantitative account of the procedures and data is published elsewhere ([27, 28], Rohrseitz and Fry, in prep.).

Fig. 1.4 TrackFly. A wind tunnel was equipped with a virtual reality display system (only the working section of the wind tunnel is shown). Visual stimuli were presented to free-flying flies in open loop, i.e., the pattern offset was adjusted to the fly's position along the wind tunnel in real time. M: Mirror; LP: Light path; Cam: Video camera. For details see [27]
1.3.3 Conclusions

The reflexive flight control pathways of insects can provide powerful control architectures for biomimetic MAVs. For a meaningful interpretation of the biological measurements, however, the behavioral context and relevance of multimodal feedback must be carefully considered. Free-flight experiments are ideal to explore flight control under realistic flight conditions, but the difficulty of delivering arbitrary stimuli in a controlled manner is a hindrance for detailed behavioral system identification. The described approach works around this limitation by allowing a particular parameter (here: pattern speed) to be presented in open loop, without disrupting the remaining stimuli. From the measured transient responses, linear pattern velocity was identified as the relevant control parameter for visual flight speed control. Based on this high-level understanding, the underlying visual computations and neural structures can be further explored. Next, the transient responses were used to reverse engineer the control scheme underlying flight speed control (Fig. 1.5). The measurements performed in open and closed loop are quantitatively explained by a proportional control law, which is simpler still than a PID controller recently suggested for insects [24], Chap. 3. A rigorous system identification approach in biology provides a functional understanding of the
[Fig. 1.5 block diagram: wind tunnel pattern (TF, SF) → pattern slip speed → vision (speed signal) → flight speed controller → flight speed command, saturated by a locomotor limit → flight speed; a switch selects open- or closed-loop operation]
Fig. 1.5 Control model. The fly’s flight speed responses are quantitatively explained by a simple control model. The visual system computes pattern velocity as input to a controller of flight
speed, which is constrained by measured locomotor limits. The simulated flight speed responses in open and closed loop (note switch symbol) were verified experimentally using TrackFly
underlying neuromotor pathways and characterizes their dynamics in a concrete control model. The control strategy can then be meaningfully transferred into MAVs even if the underlying neuromotor mechanisms remain only partially known.
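The control scheme of Fig. 1.5 can be sketched as a short simulation: pattern slip measured by vision, a proportional law generating the flight speed command, and a locomotor saturation plus first-order lag. The gain, limit, and time constant below are invented values; for a pure proportional loop the steady-state speed is K_P * v_pattern / (1 + K_P), i.e., the slip is reduced but not nulled.

```python
K_P = 4.0     # proportional gain (assumed)
V_MAX = 1.0   # locomotor speed limit, m/s (assumed)
TAU = 0.2     # locomotor time constant, s (assumed)
DT = 0.01     # integration step, s

def simulate(pattern_speed, t_end=3.0):
    """Closed loop: the fly's own motion feeds back into the slip it sees."""
    v = 0.0
    for _ in range(int(t_end / DT)):
        slip = pattern_speed - v                        # visual input
        command = max(-V_MAX, min(V_MAX, K_P * slip))   # P-law + locomotor limit
        v += (command - v) * DT / TAU                   # first-order lag
    return v

# Steady state for a 0.5 m/s pattern: K_P/(1 + K_P) * 0.5 = 0.4 m/s
v_ss = simulate(0.5)
print(f"steady-state flight speed ~ {v_ss:.2f} m/s")
```

This mirrors the text's point that a proportional law, simpler than a PID controller, can already reproduce both open- and closed-loop speed responses.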
1.4 High-Level Flight Control – Landmark-Guided Goal Navigation

High-level flight control strategies are ultimately required to enable MAVs to perform meaningful tasks, for which sensory input must be processed in a context-dependent way. For example, an MAV could rely on the visual objects encountered along its flight path during the outbound trip to return to its home base, requiring context-dependent processing of the visual input. Some insect species show the amazing ability to return to quite distant places that they previously visited. Honey bees, for example, learn the location of a rewarding food source, which they repeatedly visit to collect food for the hive (also see Chaps. 2 and 7). The third example of this chapter describes experiments exploring the basic control principles underlying such complex, context-dependent behaviors. Landmark-based goal navigation is explained by visuomotor control mechanisms that are modified through learning experience. Complex flight behaviors result as an emergent property of basic flight control strategies and their interactions with the environment. Similar control strategies could allow MAVs with limited resources to likewise perform well in complex, real-world applications.

1.4.1 Research Background
Research on the mechanisms by which flying insects use landmarks to return to a learned place was pioneered by Tinbergen's (1932) [52] classic neuroethological studies in the digger wasp. His approach, followed by many later researchers (e.g., for flying honey bees [1, 7]; review [55]), was to induce search flights in experimentally modified visual surroundings and to infer from the search location the insect's internal representation of the visual environment. A similar approach in honey bees performed half a century later led to the influential snapshot model [8], which explained goal-directed flight control as the result of a comparison between the current retinal image and a template image formerly stored at the goal location. The ways in which insects represent locations as visual memories and use these to return to a learned place are being studied experimentally in increasing detail; see recent reviews in [13, 10]. Not least due to its algorithmic formulation, the snapshot model and related models found widespread appeal in the robotics community and were further explored in numerical [43] and robotic [39, 42] implementations, reviewed in [25, 53].
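Because the snapshot model is algorithmic, its core loop fits in a few lines. The sketch below is a drastically simplified, hypothetical variant: the "snapshot" stores only the bearings of two landmarks seen from the goal, and the agent greedily steps so as to reduce the bearing mismatch. Cartwright and Collett's model additionally uses apparent landmark sizes and tangential/radial correction vectors; none of the numbers below come from the literature.

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

LANDMARKS = [(-1.0, 1.0), (1.0, 1.0)]     # two landmarks, arbitrary layout

def bearings(pos):
    return [math.atan2(ly - pos[1], lx - pos[0]) for lx, ly in LANDMARKS]

def mismatch(pos, snapshot):
    """Squared bearing difference between the current view and the snapshot."""
    return sum(wrap(b - s) ** 2 for b, s in zip(bearings(pos), snapshot))

goal = (0.0, 0.0)
snapshot = bearings(goal)                 # "snapshot" taken at the goal

pos = start = (0.6, -0.4)                 # agent released away from the goal
for _ in range(80):
    # greedy homing: stay put or take a small step in one of 8 directions,
    # whichever lowers the mismatch most
    candidates = [pos] + [
        (pos[0] + 0.05 * math.cos(a), pos[1] + 0.05 * math.sin(a))
        for a in (k * math.pi / 4.0 for k in range(8))
    ]
    pos = min(candidates, key=lambda p: mismatch(p, snapshot))

dist = math.hypot(pos[0] - goal[0], pos[1] - goal[1])
print(f"final distance to goal: {dist:.2f}")
```

The agent needs no map and no position estimate; the stored view alone pulls it back toward the goal, which is what made the model attractive for robotic homing.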
1.4.2 Experiments

The experiments giving rise to the snapshot model were first replicated and extended to explore landmark-based goal navigation in more detail [31]. The results were suggestive of alternative visuomotor control strategies, which were subsequently explored with detailed analyses of individual approach flights using more advanced video tracking techniques [26].
Fig. 1.6 Learning experiments. (A) Successive approach flights of a single bee. The experiments were performed in a cylindrical tent (Ø 2.4 m). The bee entered on the left and flew to an inconspicuous feeder (location indicated with a stippled cross-hair) 0.5 m in front of a black paper square attached to the back wall (shown as a black bar on the right). (i) Initial training. The bee was trained by displacing a temporary feeder (location shown
with arrows) closer toward the final feeder position on consecutive foraging trips. Lines show the flight trajectories toward the temporary and final feeder positions. (ii) Flights 1–20. (iii) Flights 51–70. (iv) Flights 101–120. (B) Duration of successive flights. With increasing experience, flight duration decreased to about 2 s after about 50 flights. Figure modified from [32]
Detailed measurements of approach flights of bees to a goal location were made, taking care not to disrupt their natural behavior. To explore the relevance of learning, every single approach flight of a single bee was measured (Fig. 1.6). By moving a temporary feeder stepwise through a uniform flight tent, the bee was trained to fly toward a permanent feeder located in front of a single landmark (Fig. 1.6 Ai). The approach flights of this inexperienced bee were slow and quite convoluted. The first 20 flights of the same bee with the permanent feeder were faster, but still revealed turns and loops reminiscent of search flights (Fig. 1.6 Aii). The bee’s approaches became progressively faster and smoother as it gained experience during successive foraging trips (Fig. 1.6 Aiii). After about 100 flights, the bee approached the feeder with straight and fast trajectories. Duration of successive flights decreased from about 5–10 s to 2–3 s (Fig. 1.6B). Next, individual bees were trained using landmark settings that differed in position, number, and color of the landmarks. First, a bee was trained with a single black cylinder (•) located just to the right of the feeder (Fig. 1.7A, left; feeder location is marked with crosshairs). The bee approached the cylinder and performed occasional left turns, roughly aimed at the feeder position. During these approaches, the bee held the cylinder roughly in the frontal-right visual field (Fig. 1.7A, right). A different bee trained with the cylinder further to the right side still held the cylinder in the
right visual field and performed more convoluted flight paths toward the goal (Fig. 1.7B). In the identical situation, a second bee approached the feeder with a different approach pattern, but like the first bee held the cylinder in the right visual field (Fig. 1.7C). This simple rule was even used by bees trained with two differently colored cylinders (marked L and R in Fig. 1.7D). The bees simply relied on one of the cylinders (in this case the right cylinder) for the initial approach, again holding it in the right visual field. The detailed structure of an approach flight, together with the measured body axis direction, is shown in Fig. 1.7E. These and other experiments provide a coherent view of the visuomotor strategies employed by bees to locate a goal in different environmental situations. Bees with little experience with a landmark setting (Fig. 1.6 Ai, Aii) or in the absence of a suitable (i.e., near-frontal) landmark (Fig. 1.7C) perform search-like flights. If a landmark is suitably located behind the goal (Fig. 1.6), an experienced bee will simply head toward it and find its goal. To do so, the bee needs to fixate the landmark in a frontal position, as symbolized by the curved arrows in Fig. 1.8A. If the bee tends to keep the landmark in the right visual field (Fig. 1.8B), it will tend to make turns to the left, as required for the final goal approach (Fig. 1.7). Finally, the bee can rely on one of several landmarks during its goal approach, if it associates it with the appropriate retinal position and/or turning direction (Fig. 1.8C). These results are
Fig. 1.7 Approach flights and landmark azimuth during approach flights. (A) Left: 40 successive approach flights of an individual bee with a cylinder (•) positioned at an angular distance of 15◦ from the feeder (+). Right: Distribution of landmark positions in the bee's visual field. (B) Cylinder placed 40◦ to the right of the feeder. (C) As in (B), with data from a different bee. (D) Approach flights of two individual bees in the presence of two cylinders of different colors. The bees headed toward the right (R) cylinder. (E) Typical example of a bee's approach flight. The bee's position (dots) and body axis direction (lines) were measured at 50 Hz using a pan-tilt tracking system [26]. Data are subsampled for clarity. Figure modified from [32]

Fig. 1.8 Visuomotor guidance model. (A) Frontal landmark. Bees fixate the landmark frontally. (B) Lateral landmark. Bees perform biased turns to the left, keeping the cylinder in the right visual field. (C) Two landmarks. Bees use one landmark during the approach. Lines show hypothetical flight paths. Curved arrows symbolize a learned visuomotor association. For details see [32]
consistent with various experiments performed in both flying and walking insects, e.g., [9, 35].

1.4.3 Conclusions

Landmark-based goal navigation in honey bees is explained as an emergent property of basic visuomotor reflexes, which are modified by a continuous learning process. Relatively unstructured, search-like flights are observed in bees with limited experience with the landmark setting. A suitable landmark is used to direct the flight toward the goal, while successful flight motor patterns are reinforced by operant learning to increase the efficiency and reliability of the approach flights. Control strategies based on a flexible and adaptable employment of basic control loops may also be suited for MAVs with limited storage and processing capacity to enable successful landmark navigation. A possible scenario could consist of exploring unfamiliar terrain and subsequently patrolling along suitable routes. The flexible use of comparatively basic visuomotor control strategies is more likely to meet the high requirements for fast and robust flight control in MAVs than a single complex and hard-wired algorithm. Flexible adaptation to varying environmental conditions and increasing experience provides a powerful strategy for flight control in complex and unpredictable environments.
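The look-and-turn strategy summarized in Fig. 1.8 reduces to holding a learned landmark at a learned retinal azimuth. The proportional steering law below is a hypothetical sketch of this rule for an MAV; the gain and the linear form are assumptions for illustration, not measured bee behavior.

```python
def steering(landmark_azimuth_deg, learned_azimuth_deg=0.0, gain=2.0):
    """Turn rate in deg/s (positive = turn right) that rotates the landmark
    toward its learned retinal position (0 deg = frontal fixation)."""
    return gain * (landmark_azimuth_deg - learned_azimuth_deg)

# Frontal fixation: landmark 10 deg to the right -> turn right to center it
assert steering(10.0) == 20.0
# A learned lateral position 40 deg to the right: a frontal landmark drives
# a left turn, yielding the left-biased approaches of Fig. 1.8B
assert steering(0.0, learned_azimuth_deg=40.0) == -80.0
```

Chaining a small set of such learned (landmark, azimuth) associations is enough to reproduce the approach patterns described above without any map-like representation.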
1.5 Closing Words

Insects perform complex flight control tasks despite their small size and presumably limited neural capacity. The fact that insects nevertheless excel in their flight performance is explained by a close integration of specialized sensorimotor pathways. While it is intriguing to take inspiration from flying insects for the design of autonomous MAVs, the high complexity of an insect's multimodal flight control system renders this task non-trivial and prone to misconceptions [54, 14, 50]. It may therefore be hardly fruitful – and indeed counter-productive – to take superficial inspiration from biology and implement the purported principles in robots without due care. Instead, biologists and engineers should take advantage of the fact that insects can achieve superior flight control with possibly quite basic, but highly integrated control principles. To this end, detailed biological studies are required that address flight control mechanisms at various levels, including biomechanics, neural processing, sensorimotor integration, and high-level behavioral strategies. The experimental approaches described in this chapter show that advanced concepts and technologies can help provide the functional understanding of biological flight control principles required for meaningful biomimetic implementations in MAVs. Not only can engineers profit from rigorous biological research of flight control, but the concepts and tools applied in engineering [17, 16] can likewise be meaningfully applied to explore biological control principles in a rigorous and quantitative way [51, 48, 49, 27]. The control principles thus identified in insects can then be transferred more easily into a
robotic environment with appropriate consideration of the behavioral context and scaling issues. The presented research examples motivate closer interactions between biological research of flight control mechanisms and the engineering design of MAVs. Such interactions promise significant benefits to both fields, in that biologists can aim at more rigorous quantitative analyses of flight control and engineers can aim at more meaningful biomimetic implementations. Indeed, this aim will likely have been reached when the common fascination with flight control becomes the defining element of a coherent, interdisciplinary research effort.

Acknowledgments I wish to thank the reviewers for useful comments, Chauncey Grätzel for advice on the writing, and Jan Bartussek, Vasco Medici and Nicola Rohrseitz for useful comments and discussions. The work described in this chapter was funded by the following institutions: Human Frontiers Science Program (HFSP), Swiss Federal Institute of Technology (ETH) Zurich, Swiss National Science Foundation (SNSF), University of Zurich and Volkswagen Foundation.
References

1. Anderson, A.: A model for landmark learning in the honeybee. Journal of Comparative Physiology A 114, 335–355 (1977)
2. Baird, E., Srinivasan, M.V., Zhang, S., Cowling, A.: Visual control of flight speed in honeybees. Journal of Experimental Biology 208(20), 3895–3905 (2005)
3. Boeddeker, N., Kern, R., Egelhaaf, M.: Chasing a dummy target: Smooth pursuit and velocity control in male blowflies. Proceedings of the Royal Society of London Series B Biological Sciences 270(1513), 393–399 (2003)
4. Borst, A., Egelhaaf, M.: Principles of visual motion detection. Trends in Neurosciences 12(8), 297–306 (1989)
5. Borst, A., Haag, J.: Neural networks in the cockpit of the fly. Journal of Comparative Physiology A 188(6), 419–437 (2002)
6. Buchner, E.: Behavioral analysis of spatial vision in insects. In: M.A. Ali (ed.) Photoreception and Vision in Invertebrates, pp. 561–621. Plenum Press, New York (1984)
7. Cartwright, B.A., Collett, T.S.: How honey bees use landmarks to guide their return to a food source. Nature 295, 560–564 (1982)
8. Cartwright, B.A., Collett, T.S.: Landmark learning in bees: Experiments and models. Journal of Comparative Physiology A 151, 521–543 (1983)
9. Chittka, L., Kunze, J., Shipman, C., Buchmann, S.L.: The significance of landmarks for path integration in homing honeybee foragers. Naturwissenschaften 82, 341–343 (1995)
10. Collett, T.S., Graham, P., Harris, R.A., Hempel de Ibarra, N.: Navigational memories in ants and bees: Memory retrieval when selecting and following routes. Advances in the Study of Behavior 36, 123–172 (2006)
11. Collett, T.S., Land, M.F.: Visual control of flight behaviour in the hoverfly Syritta pipiens L. Journal of Comparative Physiology 99, 1–66 (1975)
12. Collett, T.S., Nalbach, H.O., Wagner, H.: Visual stabilization in arthropods. Reviews of Oculomotor Research 5, 239–263 (1993)
13. Collett, T.S., Zeil, J.: Places and landmarks: An arthropod perspective. In: S. Healy (ed.) Spatial Representation in Animals, pp. 18–53. Oxford University Press, Oxford, New York (1998)
14. Datteri, E., Tamburrini, G.: Biorobotic experiments for the discovery of biological mechanisms. Philosophy of Science 74(3), 409–430 (2007)
15. David, C.T.: Compensation for height in the control of groundspeed by Drosophila in a new, 'barber's pole' wind tunnel. Journal of Comparative Physiology A 147, 485–493 (1982)
16. Deng, X.Y., Schenato, L., Sastry, S.S.: Flapping flight for biomimetic robotic insects: Part II – Flight control design. IEEE Transactions on Robotics 22(4), 789–803 (2006)
17. Deng, X.Y., Schenato, L., Wu, W.C., Sastry, S.S.: Flapping flight for biomimetic robotic insects: Part I – System modeling. IEEE Transactions on Robotics 22(4), 776–788 (2006)
18. Dickinson, M.H.: Bionics: Biological insight into mechanical design. Proceedings of the National Academy of Sciences of the United States of America 96(25), 14208–14209 (1999)
19. Dickinson, M.H.: Insect flight. Current Biology 16(9), R309–R314 (2006)
20. Dickson, W.B., Dickinson, M.H.: The effect of advance ratio on the aerodynamics of revolving wings. Journal of Experimental Biology 207(24), 4269–4281 (2004)
21. Egelhaaf, M., Kern, R.: Vision in flying insects. Current Opinion in Neurobiology 12(6), 699–706 (2002)
22. Ellington, C.P.: The aerodynamics of hovering insect flight. Philosophical Transactions of the Royal Society B: Biological Sciences 305, 1–181 (1984)
23. Fermi, G., Reichardt, W.: Optomotorische Reaktionen der Fliege Musca domestica. Kybernetik 2, 15–28 (1963)
24. Franceschini, N., Ruffier, F., Serres, J.: A bio-inspired flying robot sheds light on insect piloting abilities. Current Biology 17(4), 329–335 (2007)
25. Franz, M.O., Schölkopf, B., Mallot, H.A., Bülthoff, H.H.: Where did I take that snapshot? Scene-based homing by image matching. Biological Cybernetics 79, 191–202 (1998)
26. Fry, S.N., Bichsel, M., Müller, P., Robert, D.: Tracking of flying insects using pan-tilt cameras. Journal of Neuroscience Methods 101(1), 59–67 (2000)
27. Fry, S.N., Rohrseitz, N., Straw, A.D., Dickinson, M.H.: TrackFly: Virtual reality for a behavioral system analysis in free-flying fruit flies. Journal of Neuroscience Methods 171(1), 110–117 (2008)
28. Fry, S.N., Rohrseitz, N., Straw, A.D., Dickinson, M.H.: Visual control of flight speed in Drosophila melanogaster. Journal of Experimental Biology 212, 1120–1130 (2009)
29. Fry, S.N., Sayaman, R., Dickinson, M.H.: The aerodynamics of free-flight maneuvers in Drosophila. Science 300(5618), 495–498 (2003)
30. Fry, S.N., Sayaman, R., Dickinson, M.H.: The aerodynamics of hovering flight in Drosophila. Journal of Experimental Biology 208(12), 2303–2318 (2005)
31. Fry, S.N., Wehner, R.: Honey bees store landmarks in an egocentric frame of reference. Journal of Comparative Physiology A 187(12), 1009–1016 (2002)
32. Fry, S.N., Wehner, R.: Look and turn: Landmark-based goal navigation in honey bees. Journal of Experimental Biology 208(20), 3945–3955 (2005)
33. Gibson, J.J.: The visual perception of objective motion and subjective movement. 1954. Psychological Review 101(2), 318–323 (1994)
34. Götz, K.G.: Optomotorische Untersuchung des visuellen Systems einiger Augenmutanten der Fruchtfliege Drosophila. Kybernetik 2, 77–92 (1964)
35. Graham, P., Fauria, K., Collett, T.S.: The influence of beacon-aiming on the routes of wood ants. Journal of Experimental Biology 206(3), 535–541 (2003)
36. Hassenstein, B., Reichardt, W.: Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus. Zeitschrift für Naturforschung 11b, 513–524 (1956)
37. Hausen, K.: Decoding of retinal image flow in insects. Reviews of Oculomotor Research 5, 203–235 (1993)
38. Jeong, K.H., Kim, J., Lee, L.P.: Biologically inspired artificial compound eyes. Science 312(5773), 557–561 (2006)
39. Lambrinos, D., Möller, R., Labhart, T., Pfeifer, R., Wehner, R.: A mobile robot employing insect strategies for navigation. Robotics and Autonomous Systems 30, 39–64 (2000)
40. Land, M.F., Collett, T.S.: Chasing behaviour of houseflies (Fannia canicularis). Journal of Comparative Physiology A 89, 331–357 (1974)
41. Liu, S.C., Usseglio-Viretta, A.: Fly-like visuomotor responses on a robot using aVLSI motion chips. Biological Cybernetics 85(6), 449–457 (2001)
42. Möller, R.: Insect visual homing strategies in a robot with analog processing. Biological Cybernetics 83, 231–243 (2000)
43. Möller, R.: Do insects use templates or parameters for landmark navigation? Journal of Theoretical Biology 210, 33–45 (2001)
44. Sane, S.P., Dickinson, M.H.: The aerodynamic effects of wing rotation and a revised quasi-steady model of flapping flight. Journal of Experimental Biology 205(8), 1087–1096 (2002)
45. Schuster, S., Strauss, R., Götz, K.G.: Virtual-reality techniques resolve the visual cues used by fruit flies to evaluate object distances. Current Biology 12(18), 1591–1594 (2002)
46. Srinivasan, M.V., Zhang, S.: Visual motor computations in insects. Annual Review of Neuroscience 27, 679–696 (2004)
47. Srinivasan, M.V., Zhang, S., Lehrer, M., Collett, T.S.: Honeybee navigation en route to the goal: Visual flight control and odometry. Journal of Experimental Biology 199(1), 237–244 (1996)
48. Tanaka, K., Kawachi, K.: Response characteristics of visual altitude control system in Bombus terrestris. Journal of Experimental Biology 209(22), 4533–4545 (2006)
49. Taylor, G.K., Bacic, M., Bomphrey, R.J., Carruthers, A.C., Gillies, J., Walker, S.M., Thomas, A.L.R.: New experimental approaches to the biology of flight control systems. Journal of Experimental Biology 211(2), 258–266 (2008)
50. Taylor, G.K., Krapp, H.G.: Sensory systems and flight stability: What do insects measure and why? Advances in Insect Physiology 34, 231–316 (2008)
51. Taylor, G.K., Żbikowski, R.: Nonlinear time-periodic models of the longitudinal flight dynamics of desert locusts Schistocerca gregaria. Journal of the Royal Society Interface 2(3), 197–221 (2005)
52. Tinbergen, N.: Über die Orientierung des Bienenwolfs (Philanthus triangulum Fabr.). Zeitschrift für vergleichende Physiologie 16, 305–334 (1932)
53. Vardy, A., Möller, R.: Biologically plausible visual homing methods based on optical flow techniques. Connection Science 17(1), 47–89 (2005)
54. Webb, B.: Validating biorobotic models. Journal of Neural Engineering 3(3), R25–R35 (2006)
55. Wehner, R.: Spatial vision in arthropods. Handbook of Sensory Physiology, vol. VII/6C, pp. 287–616. Springer, Berlin, Heidelberg, New York, Tokyo (1981)
56. Wehner, R.: Arthropods. In: F. Papi (ed.) Animal Homing, pp. 45–144. Chapman & Hall (1992)
13 57. Weis-Fogh, T.: Quick estimates of flight fitness in hovering animals, including novel mechanisms for lift production. Journal of Experimental Biology 59, 169–230 (1973) 58. Wood, R.J.: Design, fabrication, and analysis of a 3DOF, 3 cm flapping-wing MAV. Intelligent Robots and Systems, 2007. IROS 2007, pp. 1576–1581 (2007) 59. Wood, R.J.: The first takeoff of a biologically inspired atscale robotic insect. Robotics, IEEE Transactions on 24(2), 341–347 (2008) 60. Wu, W., Shenato, L., Wood, R.J., Fearing, R.S.: Biomimetic sensor suite for flight control of a micromechanical flying insect: Design and experimental results. Proceedings of the 2003 IEEE International Conference on Robotics and Automation (ICRA 2003), vol. 1, pp. 1146–1151. IEEE Press, Piscataway, NJ (2003) 61. Wu, W.C., Wood, R.J., Fearing, R.S.: Halteres for the micromechanical flying insect. Proceedings of the IEEE International Conference on Robotics and Automation, ICRA 2002 1, 60–65 (2002) 62. Zufferey, J.C., Floreano, D.: Fly-inspired visual steering of an ultralight indoor aircraft. IEEE Transactions on Robotics 22(1), 137–146 (2006)
Chapter 2
From Visual Guidance in Flying Insects to Autonomous Aerial Vehicles Mandyam V. Srinivasan, Saul Thurrowgood, and Dean Soccol
Abstract Investigation of the principles of visually guided flight in insects is offering novel, computationally elegant solutions to challenges in machine vision and robot navigation. Insects perform remarkably well at seeing and perceiving the world and navigating effectively in it, despite possessing a brain that weighs less than a milligram and carries fewer than 0.01% as many neurons as ours does. Although most insects lack stereovision, they use a number of ingenious strategies for perceiving their world in three dimensions and navigating successfully in it. Over the past 20 years, research in our laboratory and elsewhere has revealed that flying insects rely primarily on cues derived from image motion ("optic flow") to distinguish objects from backgrounds, negotiate narrow gaps, regulate flight speed, compensate for headwinds and crosswinds, estimate distance flown and orchestrate smooth landings. Here we summarize some of these findings and describe a vision system currently being designed to facilitate automated terrain following and landing.
2.1 Introduction

Insect eyes differ from vertebrate or human eyes in a number of ways. Unlike vertebrates, insects have immobile eyes with fixed-focus optics. Therefore, they
M.V. Srinivasan () Queensland Brain Institute and ARC Centre of Excellence in Vision Science, University of Queensland, St. Lucia, QLD 4072, Australia e-mail:
[email protected]
cannot infer the distances to objects or surfaces from the extent to which the directions of gaze must converge to view the object, or by monitoring the refractive power that is required to bring the image of the object into focus on the retina. Furthermore, compared with human eyes, the eyes of insects are positioned much closer together and possess inferior spatial acuity [16]. Therefore, the precision with which insects could estimate range through binocular stereopsis would be much poorer and restricted to relatively small distances, even if they possessed the requisite neural apparatus [15]. Not surprisingly, then, insects have evolved alternative strategies for dealing with the problems of visually guided flight. Many of these strategies rely on using image motion, generated by the insect’s own motion, to infer the distances to obstacles and to control various manoeuvres (Franceschini, Chap. 3 of this volume) [10, 16, 21, 22]. This pattern of image motion, known as “optic flow”, is used in many ways to guide flight. For example, distances to objects are gauged in terms of the apparent speeds of motion of the objects’ images, rather than by using complex stereo mechanisms [12, 17, 26]. Objects are distinguished from backgrounds by sensing the apparent relative motion at the boundary [18]. Narrow gaps are negotiated safely by balancing the apparent speeds of the images in the two eyes [11, 19]. The speed of flight is regulated by holding constant the average image velocity as seen by both eyes [1, 5, 27]. This ensures that flight speed is automatically lowered in cluttered environments and that thrust is appropriately adjusted to compensate for headwinds and tail winds [2, 27]. Visual cues are also used to compensate for crosswinds. Bees landing on a horizontal surface hold constant the image velocity of the surface
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_2, © Springer-Verlag Berlin Heidelberg 2009
as they approach it, thus automatically ensuring that flight speed is close to zero at touchdown [24]. Foraging bees gauge distance flown by integrating optic flow: they possess a visually driven “odometer” that is robust to variations in wind, body weight, energy expenditure and the properties of the visual environment [4, 6, 7, 23]. In this chapter we concentrate on two aspects of visually guided navigation in flying insects, which are heavily reliant on optic flow. One relates to landing on a horizontal surface. The other concerns a related behaviour, terrain following, which involves maintaining a constant height above the ground and following the local fluctuations of terrain altitude.
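The gap-negotiation strategy mentioned above (balancing the apparent image speeds in the two eyes) can be sketched as a simple control loop. The corridor geometry, gain and time step below are illustrative assumptions, not measured bee parameters:

```python
# Sketch of the centring strategy: steer away from the side whose wall
# image moves faster, so that the two image speeds balance on the midline.

def lateral_image_speeds(y, width, v):
    """Angular speeds (rad/s) of the left and right wall images for an
    agent flying at speed v, offset y from the midline towards the left."""
    return v / (width / 2 - y), v / (width / 2 + y)

def centring_step(y, width, v, gain=0.5, dt=0.05):
    """One control step: drift away from the faster-moving wall image."""
    w_left, w_right = lateral_image_speeds(y, width, v)
    return y - gain * (w_left - w_right) * dt

# Starting off-centre, the agent converges to the midline (y = 0).
y = 0.5
for _ in range(2000):
    y = centring_step(y, width=2.0, v=1.0)
```

Because the closer wall always appears to move faster, the control law needs no explicit distance measurement, in keeping with the strategies described in this chapter.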
2.2 Landing on a Horizontal Surface

We begin by summarizing the results of experiments that were conducted a few years ago in our laboratory to elucidate the visual mechanisms that guide landing in honeybees. How does a bee land on a horizontal surface? When an insect makes a grazing landing on a flat surface, the dominant pattern of image motion generated by the surface is a translatory flow in the front-to-back direction. What are the processes by which such landings are orchestrated? Srinivasan et al. [24] investigated this question by video-filming trajectories, in three dimensions, of bees landing on a flat, horizontal surface. Two examples of landing trajectories, reconstructed from the data, are shown in Fig. 2.1a,b. A number of such landings were analysed to examine the variation of the instantaneous height above the surface (h), instantaneous horizontal (forward) flight speed (Vf), instantaneous descent speed (Vd) and descent angle (α). These variables are illustrated in Fig. 2.1c. Analysis of the landing trajectories revealed that the descent angles were indeed quite shallow. The average value measured in 26 trajectories was ca. 28° [24]. Figure 2.2a,b shows the variation of flight speed with height above the surface, analysed for two landing trajectories. These data reveal one of the most striking and consistent observations of this study: horizontal speed is roughly proportional to height, as indicated by the linear regression on the data. When a bee flies at a horizontal speed of Vf (cm/s) at a height of h (cm), the angular velocity ω of the image of the surface directly
beneath the eye is given by

    ω = Vf / h  (rad/s)    (2.1)
From this relationship it is clear that, if the bee’s horizontal flight speed is proportional to her height above the surface (as shown by the data), then the angular velocity of the image of the surface, as seen by the eye, must be constant as the bee approaches it. This angular velocity is given by the slope of the regression line. The angular velocity of the image varies from one trajectory to another, but is maintained at an approximately constant value in any given landing. An analysis of 26 landing trajectories revealed a mean image angular velocity of ca. 500◦ /s [24]. These results reveal two important characteristics. First, bees landing on a horizontal surface tend to approach the surface at a relatively shallow descent angle. Second, landing bees tend to hold the angular velocity of the image of the ground constant as they approach it. What is the significance of holding the angular velocity of the image of the ground constant during landing? One important consequence is that the horizontal speed of flight is then automatically reduced as the height decreases. In fact, by holding the image velocity constant, the horizontal speed is regulated to be proportional to the height above the ground, so that when the bee finally touches down (at zero height), her horizontal speed is zero, thus ensuring a smooth landing. The attractive feature of this simple strategy is that it does not require explicit measurement or knowledge of the speed of flight or the height above the ground. Thus, stereoscopic methods of measuring the distance of the surface (which many insects probably do not possess) are not required. What is required, however, is that the insect be constantly in motion, because the image motion resulting from the insect’s own motion is crucial in controlling the landing. The above strategy ensures that the bee’s horizontal speed is zero at touchdown, but does not regulate the descent speed. How does the descent speed vary during the landing process? 
Plots of descent speed versus height reveal a linear relationship between these two variables, as well. Two examples are shown in Fig. 2.2c,d. This finding implies that landing bees (i) adjust their forward (i.e. flight) speed to hold the
Fig. 2.1 (a, b) Three-dimensional reconstruction of two typical landing trajectories, from video films. Vertical lines depict the height above surface. (c) Illustration of some of the variables analysed to investigate the control of landing. h (cm): height
above surface; Vf (cm/s): horizontal (forward) flight speed; Vd (cm/s): vertical (descent) speed; α (deg or rad): descent angle [α = tan⁻¹(Vd/Vf)]. Adapted from [24]
image velocity of the ground constant and (ii) couple the descent speed to the forward speed, so that the descent speed decreases with the forward speed and also becomes zero at touchdown. These results reveal what appears to be a surprisingly simple and effective strategy for making grazing landings on flat surfaces. A safe, smooth landing is ensured by following two simple rules: (a) adjusting the speed of forward flight to hold constant the angular velocity of the image of the surface as seen by the eye and (b) making the speed of descent proportional to the forward speed, i.e. flying at a constant descent angle. This produces landing trajectories in which the forward speed and the descent speed decrease progressively as the surface is approached, both approaching zero at touchdown. What are the advantages, if any, of using this landing strategy? We can think of three attractive features. First, the strategy is very simple because it does not
require explicit knowledge of instantaneous height or flight speed. Second, forward and descent speeds are adjusted in such a way as to hold the image velocity constant. This is advantageous because the image velocity can then be maintained at a level at which the visual system is most sensitive to deviations from the “set” velocity, thereby ensuring that the control of flight is as precise as possible. An alternative strategy, for instance, might be to approach the surface at constant flight speed, decelerating only towards the end. Such a constant speed approach, however, would cause the image velocity to increase rapidly as the surface is approached and to reach levels at which the image velocity measurements may no longer be precise enough for adequate flight control. This situation would be avoided by the bee’s landing strategy, which holds the image velocity constant. Third, an interesting by-product of the bee’s landing strategy is that the projected time to touchdown is constant throughout the
Fig. 2.2 (a, b) Variation of horizontal flight speed (Vf ) with height (h) above the surface for two different landing trajectories. (c, d) Variation of descent speed (Vd ) with height (h) above the surface for two different landing trajectories. The
straight lines are linear regressions through the data, as represented by the equations; r denotes the regression coefficient. Adapted from [24]
landing process (details in [24]). In other words, if, at any time during the landing process, the bees were to stop decelerating and continue downward at constant velocity, the time to contact the ground would be the same, regardless of where this occurs in the landing trajectory. From the landing trajectories, one calculates a projected time to touchdown of about 0.22 s. Thus, it appears that landing bees allow themselves a “safety margin” of a fifth of a second to prepare for touchdown if they were to abandon the prescribed landing strategy at any point, for whatever reason, and proceed towards the ground in the same direction without further deceleration.
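The two landing rules can be captured in a minimal simulation. The set-point values (500°/s image angular velocity, 28° descent angle) follow the measurements quoted above; the time step and initial height are arbitrary illustrative choices:

```python
# Minimal simulation of the two landing rules reported in [24]:
# (a) hold the image angular velocity omega = Vf/h constant,
# (b) keep descent speed proportional to forward speed (constant descent angle).

import math

OMEGA = math.radians(500.0)   # set-point image angular velocity, rad/s
ALPHA = math.radians(28.0)    # descent angle
DT = 1e-3                     # integration time step, s

h = 50.0   # initial height, cm
while h > 0.1:
    vf = OMEGA * h             # rule (a): Vf proportional to height
    vd = vf * math.tan(ALPHA)  # rule (b): constant descent angle
    h -= vd * DT

# Both speeds shrink towards zero with h, and the projected time to
# touchdown h/vd = 1/(omega * tan(alpha)) is constant throughout.
ttc = 1.0 / (OMEGA * math.tan(ALPHA))
```

With these set-points the projected time to touchdown evaluates to about 0.22 s, matching the safety margin quoted in the text.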
2.3 Terrain Following

A simple modification of the landing strategy described above can be used, in principle, to regulate the height above the ground during cruising flight. Here the aim is to maintain a constant height above the ground and to fly parallel to the surface by following any fluctuations of height (terrain following). The strategy would now be not to fly towards a target on the ground, but instead towards a distant target or the horizon. This will ensure a level flight attitude. Flight speed is held constant by maintaining a constant forward thrust (which should be effective, at least in still air). The height above the ground is regulated by adjusting the altitude so that the magnitude of the optic flow generated by the ground is constant. If the forward flight speed is Vf and the desired height above the ground is h, the magnitude of the optic flow that is to be maintained is given by Eq. (2.1) above as simply the ratio of Vf to h. If the measured optic flow is greater than this target value, it signifies that the altitude is lower than the desired value and a control command is then generated to increase altitude until the target magnitude of optic flow is attained. If the optic flow is lower than the target value, the opposite control command is issued, to again restore the optic flow to its desired level. This strategy has been tested successfully in simulations, in tethered robots in the laboratory and in some
freely flying model aircraft (Franceschini, Chap. 3 of this volume; Zufferey et al., Chap. 6 of this volume) [3, 13, 14, 25, 28].
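A minimal sketch of this altitude-regulation rule, assuming a fixed forward speed and a simple proportional correction (the gain and time step are our own illustrative choices, not values from the cited implementations):

```python
# Sketch of the optic-flow terrain-following rule: thrust (hence Vf) is
# fixed, and altitude is servoed so the ground flow Vf/h stays at the
# set-point.

def altitude_step(h, vf, target_flow, gain=50.0, dt=0.01):
    """One control step: climb if the flow is too fast (ground too close),
    descend if it is too slow."""
    flow = vf / h
    return h + gain * (flow - target_flow) * dt

vf = 100.0            # cm/s, held constant by fixed thrust
target = vf / 100.0   # flow set-point for a desired height of 100 cm
h = 60.0              # start too low: flow is too fast, so the rule climbs
for _ in range(5000):
    h = altitude_step(h, vf, target)
```

Because the correction is driven only by the measured flow, the same loop tracks terrain undulations without any explicit height sensor, as the text describes.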
2.4 Practical Problems with the Measurement of Optic Flow

When flying at low altitudes or during landing, the image of the ground can move very rapidly, making it difficult to obtain accurate estimates of optic flow. Here we describe a specially shaped mirror surface that, first, scales down the speed of image motion as seen by the camera and, second, removes the perspective distortion (and therefore the distortion in image velocity) that a camera experiences when viewing a horizontal plane that stretches out to infinity in front of the aircraft. Ideally, the moving image that is captured by the camera through the mirror should exhibit a constant, low velocity everywhere, thus simplifying the optic flow measurements and increasing their accuracy (see Fig. 2.3).
Fig. 2.3 Illustration of a system for visually guided terrain following and landing. The optical system is shown on an enlarged scale relative to the aircraft in order to illustrate its configuration
2.5 A Mirror-Based Vision System for Terrain Following and Landing

We seek a mirror profile that will map equal distances along the ground in the flight direction to equal displacements in the image plane of the camera. The geometry of the camera–mirror configuration is shown in Fig. 2.4. A′ is the camera image of a point A on the ground. We want A′ (the image of A) to move at a constant velocity in the image plane of the camera, independent of the position of A in the ground plane. That is, we require dη/dt = L, where η is the distance of the point A′ from the centre of the camera's image plane, L is the desired constant velocity and f is the focal length of the camera.

Fig. 2.4 Geometry of camera/mirror configuration for the design of mirror profile. Modified from [20]

The derivation of a mirror profile that meets these objectives is given in [20] and we shall not repeat it here. Instead, we use an example profile to illustrate the performance of the mirror. Figure 2.5 shows one example of a profile of the reflective surface and includes the computed ray paths. In this example the camera faces forward, in the direction of flight. The nodal point of the camera is at (0, 0). The image plane of the camera is to the left of this point and is not included in the figure, but its reflection about the nodal point (an equivalent representation) is depicted by the vertical line to the right of the nodal point. Parameters used in this design are as follows: V = 1000.0 cm/s; h = 100.0 cm; r0 = 10.0 cm; f = 3.5 cm; L = 2.0 cm/s, where V is the speed of the aircraft, h is the height above the ground, L is the image velocity, f is the focal length of the camera and r0 is the distance from the nodal point of the camera to the tip of the reflective surface. If the camera were looking directly downwards at the ground, the image velocity of the ground would have been 35 cm/s; with the mirror, the image velocity is reduced to 2.0 cm/s. Thus, the mirror scales down the image velocity by a factor of 17.5. The curvature of the mirror is highest in the region that images the ground directly beneath the aircraft, because this is the region of the ground that moves at the highest angular velocity with respect to the camera,
Fig. 2.5 Example of computed mirror profile. Adapted from [20]
and which therefore requires the greatest reduction of motion. We see from Fig. 2.5 that equally spaced points on the ground along the line of flight map to equally spaced points in the camera image. This confirms the correct operation of the surface. Figure 2.6a illustrates the imaging properties of the mirror, positioned with its axis parallel to and above a plane carrying a checkerboard pattern. Note that the mirror has removed the perspective distortion (foreshortening) of the image of the plane. The scale of the mapping depends upon the radial direction. Compression is lowest in the vertical radial direction and highest in the horizontal radial direction. Figure 2.6b shows a digitally remapped view of the image in Fig. 2.6a, in which the polar co-ordinates of each pixel in Fig. 2.6a are plotted as Cartesian coordinates. Here the vertical axis represents radial distance from the centre of the image of Fig. 2.6a and the horizontal axis represents the angle of rotation about
Fig. 2.6 (a) Illustration of imaging properties of mirror and (b) remapped version of image in Fig. 2.6a. Adapted from [20]
the optic axis. The circle in Fig. 2.6a maps to the horizontal line in Fig. 2.6b. Regions below the line represent areas in front of the aircraft and regions above it represent areas behind. Thus, the mirror endows the camera with a large field of view that covers regions in front of, below and behind the aircraft. A consequence of the geometrical mapping produced by the mirror is that, for straight and level flight parallel to the ground plane, the optic flow vectors will have constant magnitude along each radius but will be largest along the vertical radius and smallest (zero) along the horizontal radius. This is illustrated in Fig. 2.7, which shows the optic flow vectors that are generated by a simulated level flight over an ocean. Figure 2.7a shows the flow field in the raw image and Fig. 2.7b the flow field in the remapped image. In the remapped image, all flow vectors are oriented vertically. The magnitudes of the vectors are constant within each column and
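The design figures quoted for the example mirror can be checked with a line of arithmetic; the relation image speed = f·V/h for a straight-down (nadir) view follows directly from Eq. (2.1):

```python
# Numerical check of the example mirror's design figures: without the
# mirror, a nadir-viewing camera sees ground image speed f*V/h; the
# mirror is designed to hold it at L everywhere instead.

V = 1000.0  # aircraft speed, cm/s
h = 100.0   # height above ground, cm
f = 3.5     # focal length, cm
L = 2.0     # design image velocity with the mirror, cm/s

v_nadir = f * V / h   # image speed for a straight-down view, cm/s
scale = v_nadir / L   # reduction factor provided by the mirror
```

This reproduces the 35 cm/s nadir image speed and the factor-of-17.5 reduction stated in the text.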
Fig. 2.7 Optic flow field generated by simulated level flight over an ocean. (a) The flow in the raw image and (b) the flow in the remapped image. Adapted from [15]
decrease progressively with increasingly lateral directions of view. These vectors provide information on the height above the ground, the topography of the ground and the ranges to objects in the field of view (which is quite large). If the system undergoes pure translation along its optic axis, as in Fig. 2.5, the magnitude of the optic flow vector at each point on the unwarped image will be inversely proportional to the distance of the viewed point from the optic axis of the system. Thus, the flow vectors will provide information on the profile of the terrain in a cylindrical co-ordinate system relative to the aircraft. The mapping that is provided by the mirror should be particularly useful for aircraft guidance. If a cylinder of “clear space” is desired for obstaclefree flight along a given trajectory, the maximum permissible flow magnitude is determined by the speed of the aircraft and the radius R of this cylinder (see Fig. 2.8). This simplifies the problem of determining in advance whether an intended flight trajectory through the environment will be collision-free and of making any necessary adjustments to the trajectory to ensure safe flight. The system will provide information on the height above the ground, as well as the distance of any potential obstacles, as measured from the optical axis.
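The clear-space test suggested above can be sketched as follows, assuming (as a simplification of this section's geometry) that the translational flow from a point at perpendicular distance d from the flight axis is k·V/d for some calibration constant k:

```python
# Sketch of the collision-free cylinder test: a cylinder of radius R
# around the flight axis is clear if no measured flow magnitude exceeds
# the threshold k*V/R. The constant k is an assumed calibration factor.

def collision_free(flow_magnitudes, v, radius, k=1.0):
    """True if every flow magnitude stays below the cylinder threshold."""
    threshold = k * v / radius
    return all(m <= threshold for m in flow_magnitudes)

# Hypothetical points at 2 m, 3 m and 5 m from the flight axis, v = 10 m/s:
flows = [10.0 / 2, 10.0 / 3, 10.0 / 5]
```

A narrow clearance cylinder (e.g. R = 1 m) passes this set of flows, while a wider requirement (R = 3 m) is violated by the nearest point, illustrating how a single threshold on flow magnitude screens an intended trajectory.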
2.6 Height Estimation and Obstacle Detection During Complex Motions

If the aircraft undergoes rotation as well as translation, the optic flow vectors that are induced by the rotational components of aircraft motion will contaminate the range measurement. They must be subtracted from the total optic flow field, to obtain a residual flow field that represents the optic flow that is created only by the translational component of aircraft motion. The rotations of the aircraft can be estimated through gyroscopic signals. Since each component of rotation (yaw, roll, pitch) produces a known, characteristic pattern of optic flow which depends on the magnitude of the rotation but is independent of the ranges of objects or surfaces in the environment, the optic flow vector fields that are generated by the rotations can be predicted computationally as a sum of the characteristic optic flow fields for yaw, pitch and roll, each weighted by the magnitude of the measured rotation about the corresponding axis. This composite rotational flow field must then be subtracted from the total flow field to obtain a residual flow field that represents the flow due to just the translational component of motion. The
Fig. 2.8 Illustration of collision-free cylinder mapping achieved by the system. Modified from [15]
distances to objects and surfaces from the optical axis can then be computed from this residual field.
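The de-rotation computation described in this section amounts to a weighted template subtraction. A sketch, assuming flow fields stored as (H, W, 2) arrays of flow vectors (the shapes and synthetic data are purely illustrative):

```python
# Sketch of de-rotation: scale the 1-degree rotation templates by the
# gyro-measured rotations, sum them, and subtract from the total flow
# to leave the translation-only residual.

import numpy as np

def derotate(total_flow, rot_deg, t_yaw, t_pitch, t_roll):
    """All flow arguments are (H, W, 2) arrays of flow vectors;
    rot_deg = (yaw, pitch, roll) in degrees. Returns the residual
    translational flow field."""
    yaw, pitch, roll = rot_deg
    return total_flow - (yaw * t_yaw + pitch * t_pitch + roll * t_roll)

# Synthetic check: compose a flow field from known rotational and
# translational parts, then recover the translational part exactly.
rng = np.random.default_rng(0)
t_yaw, t_pitch, t_roll = (rng.normal(size=(4, 6, 2)) for _ in range(3))
translation = rng.normal(size=(4, 6, 2))
total = translation + 1.5 * t_yaw - 0.4 * t_pitch + 0.9 * t_roll
residual = derotate(total, (1.5, -0.4, 0.9), t_yaw, t_pitch, t_roll)
```

The linear superposition assumed here is valid only for small rotations, which is exactly the regime verified in the gantry experiments described below.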
2.7 Hardware Realization and System Tests

The mirror profile shown in Fig. 2.5 was machined in aluminium on a numerically controlled lathe. It was mounted on a bracket, which also carried an analogue video CCD camera (320 × 240 pixels) with its optical axis aligned with the axis of the mirror. The nodal point of the camera's lens was positioned 10 cm from the tip of the mirror, as per the design specifications (see above). In order to extract the range of objects during complex motions, it is necessary to first determine the flow signatures, or "templates", that characterise the patterns of optic flow during pure yaw, pure roll and pure pitch. This was done by using a robotic gantry to move the vision system in a richly textured visual environment. The environment consisted of a rectangular arena 3.05 m long, 2.2 m wide and 1.13 m tall (Fig. 2.9a). The walls and floor of the arena were lined with a texture composed of black circles of five different diameters (150 mm, 105 mm, 90 mm, 75 mm and 65 mm) on a white background. The rich visual texture permitted dense and accurate measurements of the optic flow in the lower hemisphere of the visual field. A raw image of the arena, as acquired by the system, is shown in Fig. 2.10a. An unwarped version of this image is shown in Fig. 2.10b. The gantry was used to position the optical axis of the system at a height of 650 mm above the floor. Optic
Fig. 2.9 (a) View of vision system, carried by a robotic gantry in a visually textured arena and (b) plan view of a curved trajectory used to test the system. Adapted from [15]
flow templates for yaw, pitch and roll were obtained for the remapped images by using the gantry to rotate the vision system by small, known angles (ranging from 0.25° to 2.5°, in steps of 0.25°) about each of the three axes, in turn. Measurements were repeated with the vision system positioned at several different locations in the arena and the results were pooled and normalized to obtain reliable and dense estimates of the optic flow templates for a 1° rotation. (In theory, the rotational optic flow templates should be independent of the position or attitude of the vision system within the arena.) The optic flow was computed using a correlation algorithm [8]. The resulting rotational templates for yaw, roll and pitch are shown in Fig. 2.11, for rotations of 1° in each case. For small angular rotations, the magnitude of the flow vector at any given point in the visual field should increase approximately linearly with the magnitude of the rotation, and the direction of the flow vector should remain approximately constant. This has been verified in [15]. Thus, we can legitimately scale the optic flow templates for the measured rotations in yaw, pitch and roll, in order to subtract out the contributions of each of these rotational components in a flow field that is generated by complex motion. The next step was to test performance when the system executed compound motions that combined translation and rotation. The aim was to investigate whether the system could determine the range to objects in the environment (and specifically, the height above the ground) while executing complex motions. This investigation was performed by using the gantry to move the system along a curved trajectory, as shown in Fig. 2.9b.
Fig. 2.10 Raw (a) and unwarped (b) images of the arena as viewed by the system. Adapted from [15]
Fig. 2.11 Measured rotational templates for yaw, roll and pitch
The height of the system was held constant at 650 mm above the floor throughout the trajectory. The trajectory consisted of a sequence of stepwise motions in the horizontal plane. The optical axis of the system was always aligned along the instantaneous direction of translation, i.e. it was parallel to the local tangent to the trajectory. Each step, in general, consisted of an elementary translation (of 50 mm) along the optical axis of the vision system, followed by an elementary yaw rotation of known, but variable, magnitude (ranging from +2.9° to –1.5°). A visual frame was acquired from the camera at the end of each elementary step (translation or rotation). Figure 2.12a shows the optic flow generated by a compound step, consisting of an elementary translation followed by an elementary rotation (yaw). This is the
flow that would be generated between two successive frames if the system had moved along a smooth curve. Figure 2.12b shows the flow that would have been induced by the rotational component of motion that occurred during this compound step. This flow pattern was computed by weighting the template for yaw rotation by the (known) magnitude and polarity of the yaw that occurred during the compound step. (During flight, this yaw information would be obtained from a rate gyro.) Figure 2.13a shows the optic flow that is obtained when the rotational component of the flow (from Fig. 2.12b) is subtracted from the total flow that is generated by the compound step (Fig. 2.12a). This residual flow should represent the flow that is induced solely by the translational component of the system’s motion. It
Fig. 2.12 (a) Flow measured after a compound step consisting of translation followed by yaw and (b) flow induced by the rotational (yaw) component, calculated from the template for yaw. Adapted from [15]
Fig. 2.13 (a) Residual flow obtained by subtracting the yaw-induced flow (Fig. 2.12b) from the measured flow (Fig. 2.12a). (b) Flow induced by the translatory component of motion in the compound step. Adapted from [15]
is evident that all the vectors in the residual flow field are parallel and vertical, as would be expected during pure translation. Furthermore, this pattern of optic flow is in excellent agreement with the pattern of flow that is actually generated by pure translation at this par-
Fig. 2.14 Results of extending the de-rotation procedure to roll as well as yaw. The figure shows one frame of a motion sequence in which the vision system executed a trajectory involving translation, yaw and roll. The upper left panel shows the raw image. The upper right panel shows one frame of the unwarped image and the instantaneous raw optic flow. The lower right panel shows the result of de-rotation, by using the roll and yaw templates to subtract the flow components induced by the measured roll and yaw
ticular location in the arena. This latter flow pattern, shown in Fig. 2.13b, is the flow measured between the two frames that bracketed the translatory segment of the compound step. A comparison of Fig. 2.13a,b reveals that these flow patterns are virtually identical.
Flying Insects and Autonomous Aerial Vehicles
This result, which was obtained consistently for all the compound steps in the trajectory, demonstrates that the pattern of optic flow that is generated during a small but complex motion can be successfully “de-rotated” to extract the optic flow that is created purely by the translatory component of motion that occurred during the period. The results of extending this de-rotation procedure to rotations about two axes – yaw and roll – are summarized in Fig. 2.14. Here again, after subtracting the optic flow components induced by yaw and roll, the residual optic flow is purely translatory, thus demonstrating the validity of the de-rotation procedure.
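The de-rotation step described here reduces to a weighted template subtraction. The following is a minimal numpy sketch (not the authors' implementation; the array shapes and names are illustrative), assuming a calibrated unit-yaw flow template and a gyro-measured yaw rate:

```python
import numpy as np

def derotate(total_flow, yaw_template, yaw_rate):
    """Subtract the rotation-induced component from a measured flow field.
    total_flow   : (N, 2) measured flow vectors (pixels/frame)
    yaw_template : (N, 2) calibrated flow pattern for a unit yaw
    yaw_rate     : yaw measured over the inter-frame interval (rate gyro)
    Returns the residual, translation-only flow."""
    return total_flow - yaw_rate * yaw_template

# Synthetic check: a purely vertical translational field plus a known yaw.
template = np.tile([1.0, 0.0], (4, 1))       # unit-yaw flow (horizontal vectors)
translation = np.tile([0.0, 2.0], (4, 1))    # translational flow (vertical vectors)
measured = translation + 0.5 * template      # compound step with yaw_rate = 0.5
residual = derotate(measured, template, 0.5) # recovers the translational field
```

Extending to roll (and pitch) only adds further weighted templates to the subtraction.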
2.8 Extracting Information on Range and Topography
The final step is to examine whether the residual component of the optic flow (the translation-induced component) can be used to obtain accurate information on the range and profile of the terrain over which flight occurs. With reference to Figs. 2.7b and 2.13, for horizontal flight over level ground, the magnitudes of the translation-induced optic flow vectors should be a maximum in the central column (corresponding to the ground directly beneath the aircraft) and should fall off as a cosine function of the lateral angle of view (in the columns to the left and right; [20]). This should be true for any row of vectors.

This prediction is tested in Fig. 2.15a, which shows the variation of translatory flow magnitude with lateral angle for flow vectors in any given row, in data sets such as that shown in Fig. 2.13a. The results show the magnitude profiles for the vectors in the second row from the bottom, computed for each step of the trajectory. The second row represents a view that is oriented at approximately 90° to the axial direction (a lateral view). A least-squares analysis reveals that each of the profiles approximates a cosine function quite well. The thick red curve shows the mean of the cosine functions fitted to each of the profiles obtained along the trajectory. If the speed of the aircraft is known, the amplitude of this curve provides information on the height above the ground (the amplitude is inversely proportional to the height). The mean amplitude of the curves is
Fig. 2.15 (a) Variation of flow magnitude with lateral viewing angle in the residual flow field at each of the steps along the trajectory of Fig. 2.9b. Adapted from [15]. (b) Magnitudes of the de-rotated optic flow computed along the entire trajectory of a motion path involving translation and roll
9.076 pixels and the standard deviation is 0.062 pixels, indicating that the estimate of flight height is consistent and reliable throughout the trajectory of Fig. 2.9b. Figure 2.15b shows the magnitudes of the de-rotated optic flow computed along the entire trajectory of a complex motion path involving translation and roll. It
is evident that, in each frame, the profile of the optic flow magnitude closely approximates a cosine function. The peak of the function wanders to the left or the right from frame to frame because the vision system is undergoing roll as well as translation, and the optic flow data are plotted in relation to the co-ordinates of the vision system.
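The cosine relationship described above can be exploited directly: fit A·cos(θ) to one row of de-rotated flow magnitudes and, given the speed, recover the height from the amplitude. A minimal sketch under the idealized model ω(θ) = (V/h)·cos θ, with flow expressed in rad/s (the pixel scale factor of the real system is omitted, and all names are hypothetical):

```python
import numpy as np

def height_from_flow(angles_deg, flow_mag, speed):
    """One-parameter least-squares fit of A*cos(theta) to a row of
    translational flow magnitudes; with the groundspeed V known, the
    fitted amplitude A = V/h yields the height h = V/A."""
    c = np.cos(np.radians(angles_deg))
    A = np.dot(c, flow_mag) / np.dot(c, c)   # closed-form least-squares amplitude
    return speed / A

# Synthetic check: flying at V = 2 m/s at h = 1 m gives a 2 rad/s amplitude
theta = np.linspace(-50.0, 50.0, 11)
flow = (2.0 / 1.0) * np.cos(np.radians(theta))
h = height_from_flow(theta, flow, speed=2.0)   # ~ 1.0 m
```

With noisy flow, the same closed-form fit averages the measurement error across the row, which is consistent with the small standard deviation reported in the text.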
2.9 Preliminary Flight Tests

Figure 2.16 shows a view of a prototype of the vision system mounted on the underside of a model aircraft. Figure 2.17 shows views of the raw camera image
(left) and the unwarped image (right) with the computed optic flow vectors. The horizon (in the raw image as well as the unwarped image) distorts in predictable ways depending upon the roll and pitch attitudes of the aircraft. The horizon profile can thus be used to estimate the roll and pitch of the aircraft, when flying at high altitudes over reasonably flat terrain. Figure 2.18a shows one frame of the unwarped camera image during a test flight, with the computed raw optic flow field. Figure 2.18b shows the same frame, with optic flow computed after de-rotation in yaw, pitch and roll, using information from the aircraft’s gyroscopes. The optic flow vectors in the de-rotated field are all very close to vertical, indicating that the de-rotation procedure is successful and accurate.
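The chapter states only that the horizon profile can be used to estimate attitude; one simple stand-in, assuming a straight, unobstructed horizon in the unwarped image, is to fit a line through detected horizon points and read roll off its slope (illustrative, not the authors' method):

```python
import numpy as np

def roll_from_horizon(xs, ys):
    """Fit a line y = a*x + b through horizon points detected in the
    unwarped image and take the roll angle from its slope. A crude,
    illustrative stand-in for the horizon-profile attitude estimate
    mentioned in the text."""
    a, _b = np.polyfit(xs, ys, 1)
    return np.degrees(np.arctan(a))

# Synthetic horizon tilted by atan(0.1), about 5.7 degrees
xs = np.arange(0.0, 100.0, 10.0)
ys = 0.1 * xs + 40.0
roll = roll_from_horizon(xs, ys)
```

Pitch would similarly follow from the vertical offset of the horizon line, given the camera geometry.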
2.10 Conclusions and Discussion
Fig. 2.16 View of vision system mounted on the underside of a model aircraft
This study has described the design of a vision sensor, based partly on principles of insect vision and optic flow analysis, for the measurement and control of flight height and for obstacle avoidance. A video camera is used in conjunction with a specially shaped reflective surface to simplify the computation of optic flow and extend the range of aircraft speeds over which accurate data can be obtained. The imaging system also provides a useful geometrical remapping of the environment, which facilitates obstacle avoidance and
Fig. 2.17 Test flight. Views of the raw camera image (left) and the unwarped image (right) with the computed optic flow vectors
Fig. 2.18 Test flight. (a) One frame of unwarped camera image, with computed raw optic flow field. (b) Same frame, with optic flow computed after de-rotation in yaw, pitch and roll
computation of three-dimensional terrain maps. By using calibrated optic flow templates for yaw, roll and pitch, accurate range information can be obtained even when the aircraft executes complex motions. In principle, the vision system described here can be used in the platforms described by Franceschini (Chap. 3 of this volume) and Zufferey et al. (Chap. 6 of this volume) to facilitate vision guidance during landing or when flying at high speeds close to obstacles or close to the ground. Future work with this system will involve open-loop and closed-loop tests on flying vehicles, to examine the utility of this approach for autonomous guidance.

Acknowledgements This work was supported partly by the US Army Research Office MURI ARMY-W911NF041076, Technical Monitor Dr Tom Doligalski, US ONR Award N00014-041-0334, an ARC Centre of Excellence Grant CE0561903 and a Queensland Smart State Premier's Fellowship to MVS.
References
1. Baird, E., Srinivasan, M.V., Zhang, S.W., Cowling, A.: Visual control of flight speed in honeybees. The Journal of Experimental Biology 208, 3895–3905 (2005)
2. Barron, A., Srinivasan, M.V.: Visual regulation of ground speed and headwind compensation in freely flying honey bees (Apis mellifera L.). Journal of Experimental Biology 209, 978–984 (2006)
3. Barrows, G.L., Chahl, J.S., Srinivasan, M.V.: Biologically inspired visual sensing and flight control. The Aeronautical Journal 107(1069), 159–168 (2003)
4. Dacke, M., Srinivasan, M.V.: Honeybee navigation: distance estimation in the third dimension. Journal of Experimental Biology 210, 845–853 (2007)
5. David, C.T.: Compensation for height in the control of groundspeed by Drosophila in a new, "Barber's Pole" wind tunnel. Journal of Comparative Physiology 147, 485–493 (1982)
6. Esch, H., Burns, J.E.: Honeybees use optic flow to measure the distance of a food source. Naturwissenschaften 82, 38–40 (1995)
7. Esch, H., Zhang, S.W., Srinivasan, M.V., Tautz, J.: Honeybee dances communicate distances measured by optic flow. Nature 411, 581–583 (2001)
8. Fua, P.: A parallel stereo algorithm that produces dense depth maps and preserves image features. Machine Vision and Applications 6(1), 35–49 (1993)
9. Horridge, G.A.: Insects which turn and look. Endeavour N.S. 1, 7–17 (1977)
10. Horridge, G.A.: The evolution of visual processing and the construction of seeing systems. Proceedings of the Royal Society of London, Series B 230, 279–292 (1987)
11. Kirchner, W.H., Srinivasan, M.V.: Freely flying honeybees use image motion to estimate object distance. Naturwissenschaften 76, 281–282 (1989)
12. Lehrer, M., Srinivasan, M.V., Zhang, S.W., Horridge, G.A.: Motion cues provide the bee's visual world with a third dimension. Nature 332, 356–357 (1988)
13. Neumann, T.R., Bülthoff, H.: Insect inspired visual control of translatory flight. In: Kelemen, J., Sosik, P. (eds.) Proceedings of ECAL 2001, pp. 627–636. Springer, Berlin (2001)
14. Ruffier, F., Franceschini, N.: Optic flow regulation: the key to aircraft automatic guidance. Robotics and Autonomous Systems 50, 177–194 (2005)
15. Soccol, D., Thurrowgood, S., Srinivasan, M.V.: A vision system for optic-flow-based guidance of UAVs. In: Proceedings of the Ninth Australasian Conference on Robotics and Automation, Brisbane, 10–12 December (2007)
16. Srinivasan, M.V.: How insects infer range from visual motion. In: Miles, F.A., Wallman, J. (eds.) Visual Motion and its Role in the Stabilization of Gaze, pp. 139–156. Elsevier, Amsterdam (1993)
17. Srinivasan, M.V., Lehrer, M., Zhang, S.W., Horridge, G.A.: How honeybees measure their distance from objects of unknown size. Journal of Comparative Physiology A 165, 605–613 (1989)
18. Srinivasan, M.V., Lehrer, M., Horridge, G.A.: Visual figure-ground discrimination in the honeybee: the role of motion parallax at boundaries. Proceedings of the Royal Society of London, Series B 238, 331–350 (1990)
19. Srinivasan, M.V., Lehrer, M., Kirchner, W., Zhang, S.W.: Range perception through apparent image speed in freely-flying honeybees. Visual Neuroscience 6, 519–535 (1991)
20. Srinivasan, M.V., Thurrowgood, S., Soccol, D.: An optical system for guidance of terrain following in UAVs. In: Proceedings of the IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '06), Sydney, pp. 51–56 (2006)
21. Srinivasan, M.V., Zhang, S.W.: Visual control of honeybee flight. In: Lehrer, M. (ed.) Orientation and Communication in Arthropods, pp. 67–93. Birkhäuser Verlag, Basel (1997)
22. Srinivasan, M.V., Zhang, S.W.: Visual navigation in flying insects. In: Lappe, M. (ed.) International Review of Neurobiology, Vol. 44, Neuronal Processing of Optic Flow, pp. 67–92. Academic Press, San Diego (2000)
23. Srinivasan, M.V., Zhang, S.W., Altwein, M., Tautz, J.: Honeybee navigation: nature and calibration of the 'odometer'. Science 287, 851–853 (2000)
24. Srinivasan, M.V., Zhang, S.W., Chahl, J.S., Barth, E., Venkatesh, S.: How honeybees make grazing landings on flat surfaces. Biological Cybernetics 83, 171–183 (2000)
25. Srinivasan, M.V., Zhang, S.W., Chahl, J.S., Stange, G., Garratt, M.: An overview of insect inspired guidance for application in ground and airborne platforms. Proceedings of the Institution of Mechanical Engineers, Part G, Journal of Aerospace Engineering 218, 375–388 (2004)
26. Srinivasan, M.V., Zhang, S.W., Chandrashekara, K.: Evidence for two distinct movement-detecting mechanisms in insect vision. Naturwissenschaften 80, 38–41 (1993)
27. Srinivasan, M.V., Zhang, S.W., Lehrer, M., Collett, T.S.: Honeybee navigation en route to the goal: visual flight control and odometry. The Journal of Experimental Biology 199, 237–244 (1996)
28. Zufferey, J.C., Floreano, D.: Fly-inspired visual steering of an ultralight indoor aircraft. IEEE Transactions on Robotics 22, 137–146 (2006)
Chapter 3
Optic Flow Based Autopilots: Speed Control and Obstacle Avoidance Nicolas Franceschini, Franck Ruffier and Julien Serres
Abstract The explicit control schemes presented here explain how insects may navigate on the sole basis of optic flow (OF) cues without requiring any distance or speed measurements: how they take off and land, follow the terrain, avoid the lateral walls in a corridor, and control their forward speed automatically. The optic flow regulator, a feedback system controlling either the lift, the forward thrust, or the lateral thrust, is described. Three OF regulators account for various insect flight patterns observed over the ground and over still water, under calm and windy conditions, and in straight and tapered corridors. These control schemes were simulated experimentally and/or implemented onboard two types of aerial robots, a micro-helicopter (MH) and a hovercraft (HO), which behaved much like insects when placed in similar environments. These robots were equipped with optoelectronic OF sensors inspired by our electrophysiological findings on houseflies’ motion-sensitive visual neurons. The simple, parsimonious control schemes described here require no conventional avionic devices such as rangefinders, groundspeed sensors, or GPS receivers. They are consistent with the neural repertory of flying insects and meet the low avionic payload requirements of autonomous micro-aerial and space vehicles.
N. Franceschini, Biorobotics Lab, Institute of Movement Science, CNRS & Univ. of the Mediterranean, Marseille, France. e-mail: [email protected]
3.1 Introduction

When an insect is flying forward, an image of the ground texture and any lateral obstacles scrolls backward across the ommatidia of its compound eye. This flowing image set up by the animal's own forward motion is called the optic flow (OF). Recent studies have shown that freely flying insects use the OF to avoid collisions [94, 11, 88], follow a corridor [53, 83, 3, 4, 80], cruise, and land [93, 84, 78, 86]. The OF can be described as a vector field where each vector gives the direction and magnitude of the angular velocity at which any point in the environment moves relative to the eye [54]. Several authors have attempted to model the control systems at work in insects during free flight, focusing on specific aspects such as speed control [15, 84], distance or speed servoing [55, 17, 20], course control, and saccadic flight behavior (Egelhaaf et al., Chap. 4 of this volume) [39, 66]. Freely flying flies navigate by making pure translations alternating with fast, saccade-like turns [39, 94, 11, 92]. This idiosyncratic flight behavior was interpreted [94, 60, 11] as an active means of reducing the image flow to its translational component (which depends on the distances to objects [54]). A biorobotic project was launched in the mid-1980s to investigate how a fly could possibly navigate and avoid collisions based on OF cues. The prototype Robot-Fly ("robot mouche") that we developed [60, 26] was a 50-cm high, fully autonomous wheeled robot carrying a compound eye driving 114 OF sensors with an omnidirectional azimuthal field of view (FOV). This reactive robot sensed the translational OF while moving straight ahead until detecting an obstacle, which triggered a quick eye and body turn of a suitable amplitude (during which time vision was inhibited). Since the Robot-Fly traveled at a constant speed (50 cm/s), the OF measured during any translation easily gave the object range, and the robot was thus able to dodge and slalom to its target lamp through a random array of posts [26]. Despite the success of this early neuromimetic robot, flying insects would obviously have to use OF cues differently, since they are not in mechanical contact with the ground and cannot therefore easily estimate their groundspeed. How might flies, or micro-aerial vehicles (MAVs), cope with the severe disturbances (obstacles, wind, etc.) they encounter? Here we summarize our attempts to model the visuomotor control systems that provide flying insects with a means of close-range autonomous piloting. First, we focus on ground avoidance in the vertical (longitudinal) plane (Sect. 3.3). We then discuss the ability to avoid corridor walls (Sect. 3.4), which has been closely analyzed in honeybees. Independent vertical and horizontal flight control systems (Egelhaaf et al., Chap. 4 of this volume) (see also [59]) were suggested by the performance of flies, which control their movements along the three orthogonal axes independently [94], while keeping their head transiently fixed in space [92, 96] via compensatory head roll and pitch mechanisms [40, 41]. Experimental simulations were performed and our control schemes were tested on two fly-by-sight aerial robots: a micro-helicopter (MH) (Fig. 3.5a) and a miniature hovercraft (HO) (Fig. 3.8a). These aerial robots use neuromorphic OF sensors [7, 25, 8] inspired by the elementary motion detectors (EMDs) previously studied at our laboratory in houseflies [67, 22, 27]. These sensors are briefly described in Sect. 3.2.

D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_3, © Springer-Verlag Berlin Heidelberg 2009
3.2 From the Fly EMDs to Electronic Optic Flow Sensors

Conventional cameras produce images at a given frame rate. Each image is scanned line by line at a high frequency. Many authors working in the field of computer vision have presented algorithms for analyzing the OF field based on scanned camera images. Although an
OF algorithm has been implemented onboard a highly miniaturized, slow but fully autonomous indoor MAV (Zufferey et al., Chap. 6 of this volume), none of the OF algorithms to be found in the insect brain actually start with a retinal image scanned like a television image. Insects analyze the OF locally, pixel by pixel, via a neural circuit called an "elementary motion detector" (EMD). Further down the neural pathways, well-known collator neurons called "lobula plate tangential cells" (LPTCs) integrate the outputs of large numbers of EMDs and analyze the OF field generated by the animal's locomotion. Some of them transmit electrical signals via the neck to thoracic interneurons directly or indirectly responsible for driving the wing, leg, or head muscles. Other LPTCs send relevant signals to the contralateral eye (see [87, 37, 9, 90]). To determine the functional principles underlying an EMD, the responses of an LPTC neuron (H1, Fig. 3.1b) were recorded while two neighboring photoreceptors in a single ommatidium were being stimulated using a high-precision instrument (a hybrid between a microscope and a telescope: Fig. 3.1d), whose main objective lens was a single ocular facet (diameter ≅ 25 μm, focal length ≅ 50 μm) (Fig. 3.1a). By illuminating the two photoreceptors successively, a motion occurring in the visual field of the selected ommatidium was "simulated". The H1 neuron responded with a vigorous spike discharge to this "apparent motion", provided the motion was mimicked in the preferred direction (compare top and bottom traces in Fig. 3.1c) [67]. By applying various sequences of light steps and/or pulses to selected receptor pairs, an EMD block diagram was obtained and the dynamics and nonlinearity of each block were characterized [22, 27, 23]. In the mid-1980s, we designed a neuromorphic OF sensor inspired by the results of these electrophysiological studies [7, 25].
By definition, the OF is an angular speed ω = Δϕ/Δt, where Δt is the time taken by a contrasting feature to travel between the visual axes of two adjacent photodiodes separated by an angle Δϕ (Fig. 3.2a). Our OF sensor's scheme is not a "correlator" [38, 65] but rather a "feature-matching scheme" [91], where a given feature (here, a passing edge) is extracted and tracked in time. Each photodiode signal is first band-pass filtered
Fig. 3.1 (a)–(c) Experimental scheme used to analyze the principles underlying elementary motion detectors (EMDs) in flies, using single neuron recording and single photoreceptor stimulation. (d) The triple-beam incident light “microscope-telescope” used to deliver a sequence of 1 μm light spots to two neigh-
boring photoreceptors (the fly’s head is indicated by the arrow). The microelectrode (c) recorded the electrical response (nerve impulses) of the motion-sensitive neuron H1 to this “apparent motion” (from [27])
(Fig. 3.2b), mimicking the analog signals emitted by the large monopolar neurons present in the fly lamina [97]. The next processing step consists in performing hysteresis thresholding and generating a unit pulse. In the EMD version built in 1989 for the Robot-Fly (Fig. 3.2d), the unit pulse from one channel sampled a long-lived decaying exponential function generated by the other channel, via a simple nonlinear circuit called a minimum detector (Fig. 3.2b), giving a monotonically increasing output VEMD with the angular velocity ω = Δϕ/Δt (Fig. 3.2b) [8]. The thresholding makes the EMD respond whatever the texture and contrast encountered, contrary to what occurs with “correlator” EMDs [65, 19] (see also Egelhaaf et al., Chap. 4 of this volume). A very similar EMD principle was developed, independently, a decade later by Koch’s group at CALTECH and termed the “facilitate and sample” velocity sensor [47]. These authors patented an aVLSI chip based on this principle [77], another variant of which
was recently presented (Moeckel & Liu, Chap. 8 of this volume). Our OF sensor actually comprises two parallel EMDs, each of which responds to either positive or negative contrast transitions, as in the fly EMD (cf. Figs. 15 and 16 in [27]). The circuit responds equally efficiently to natural scenes [61]. Our current OF sensors are still based on our original “travel time” principle [7, 25, 8] but for the sake of miniaturization, the signals are processed using a mixed analog + digital approach [75] and the time Δt is converted into ω via a lookup table (Fig. 3.2c). Although they are much larger than any aVLSI (or fly’s) EMDs, our current OF sensors (Fig. 3.2e,f) are small and light enough to be mounted on MAVs. Several OF sensors of this type can also be integrated into a miniature FPGA [1, 2]. A different kind of OF sensor was recently designed and mounted on a model aircraft [6, 35]. Optical mouse sensors have also been used as OF sensors (Dahmen et al., Chap. 9 of this volume) [36].
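The "travel time" principle (ω = Δϕ/Δt) can be sketched as follows. This toy version only detects a single threshold crossing per channel; the real sensor described above adds band-pass filtering, hysteresis thresholding, separate ON/OFF contrast channels, and a lookup table:

```python
import numpy as np

def travel_time_omega(sig_a, sig_b, dt, delta_phi, thresh=0.5):
    """Toy travel-time OF estimate: find when each photodiode signal
    first crosses the threshold and convert the delay into an angular
    speed omega = delta_phi / delta_t."""
    t_a = np.argmax(np.asarray(sig_a) > thresh) * dt
    t_b = np.argmax(np.asarray(sig_b) > thresh) * dt
    return delta_phi / (t_b - t_a)

# An edge passes photodiode A at sample 50 and photodiode B at sample 70;
# with dt = 1 ms and delta_phi = 4 deg, omega = 4 deg / 20 ms ~ 200 deg/s.
a = np.zeros(200); a[50:] = 1.0
b = np.zeros(200); b[70:] = 1.0
omega = travel_time_omega(a, b, dt=0.001, delta_phi=4.0)
```

Because the estimate depends only on threshold-crossing times, not on the amplitude of the signals, it responds whatever the texture and contrast, in line with the thresholding argument made in the text.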
Fig. 3.2 Principle (a, b) of the elementary motion detector (EMD) inspired by our electrophysiological findings (cf. Fig. 3.1) (after [7, 25, 8]) and (d) completely analog version (mass 5 g) made in 1989 with small mounted device (SMD) technology
(from [8]). (c, e) Hybrid version (mass 0.8 g) based on a microcontroller (from [75]). (f) Similar hybrid version (size 7 mm × 7 mm, mass 0.2 g) built using low-temperature co-fired ceramics technology (LTCC) (from [64])
3.3 An Explicit Control Scheme for Ground Avoidance

3.3.1 Avoiding the Ground by Sensing the Ventral Optic Flow

The ventral OF perceived in the vertical plane by airborne creatures (including aircraft pilots) is the angular velocity ω generated by a point on the underlying flight track [33, 95]. Based on the definition of the angular velocity (Fig. 3.3A), the ventral OF ω is the ratio between groundspeed Vx and groundheight h:

ω = Vx/h [rad·s⁻¹]   (3.1)

Fig. 3.3 (A) The ventral OF is an angular speed ω [rad/s] corresponding to the groundspeed-to-groundheight ratio. (B) The OF sensor comprises a microlens and two photoreceptors separated by a small angle Δϕ (see Fig. 3.2a) driving an EMD. The latter delivers a signal ωmeas ≅ Δϕ/Δt = Vx/h, which serves as a feedback signal in the OF regulator (Fig. 3.4A). The one-dimensional random texture is a magnified sample of that shown in Figs. 3.5b and 3.6A (from [28])

Flies and bees are able to measure the translational OF, ω, irrespective of the spatial texture and contrast encountered [15, 83, 3], and some of their visual neurons respond monotonically to ω [45]. Neurons facing downward can therefore act as ventral OF sensors and directly assess the groundspeed-to-groundheight ratio Vx/h (Fig. 3.3). Before Gibson introduced the OF concept [32], Kennedy established that an insect sees and reacts to the OF presented to its ventral viewfield [49] (see also [13]). This flowing visual contact with the ground is now known to be essential for insects to be able to orient upwind and migrate toward an attractive source of odor [49, 48] or pheromones [51]. Based on field experiments on locusts, Kennedy developed the "optomotor theory" of flight, according to which locusts have a "preferred retinal velocity" with respect to the ground below [49, 50]. In response to wind, for example, they adjust their groundspeed (or groundheight) to restore the velocity of the ground feature images. Kennedy's theory has been repeatedly confirmed during the last 30 years. Flies and bees seem to maintain a constant OF with respect to the ground while cruising or landing [13, 62, 84, 86, 4].

The problem is how they achieve this remarkable feat, since maintaining a given OF is a kind of chicken-and-egg problem, as illustrated by Eq. (3.1): an insect may hold its perceived OF, ω, constant by controlling Vx (if it knows h) or by controlling h (if it knows Vx). Moreover, it could maintain an OF of, say, 1 rad/s (i.e., 57°/s) by flying at a speed of 1 m/s at a height of 1 m or by flying at a speed of 2 m/s at a height of 2 m: there exists an infinitely large number of possible combinations of groundspeeds and groundheights generating the same "preferred OF". Kennedy's "theory" therefore lacked an explicit control scheme elucidating:

1. the flight variables really involved
2. the sensors really required
3. the dynamics of the various system components
4. the causal and dynamic links between the sensor(s) and the variable(s) to be controlled
5. the points of application and the effects of the various disturbances that insects may experience
6. the variables insects control to compensate for these disturbances

Our first attempt to develop a control scheme [55] was not very successful, as we were cornered by the chicken-and-egg OF problem mentioned above and by the assumption prevailing in those days that insect navigation involves estimating distance [53, 60, 83, 26, 82]. In the experimental simulations described in 1994, for example [55], we assumed that flying insects (and robots) are able to measure their groundspeed Vx (by whatever means), so that by measuring ω they would then be able to assess the distance h from the ground (Eq. (3.1)) and react accordingly to avoid it. Although this stratagem – which is in line with the Robot-Fly's working principles (Sect. 3.1) – may be acceptable for aerial vehicles able to gauge their own groundspeed [5, 31], it does not tell us how insects function. In 1999, we established (via experimental simulations) how a seeing helicopter (or an insect) might manage to follow a terrain and land on the sole basis of OF cues without measuring its groundspeed or groundheight (see Figs. 4 and 5 in [57]). The landing maneuvers were performed under the permanent feedback control of an OF-sensing eye, and the driving force responsible for the loss of altitude was the decrease in the horizontal flight speed which occurred when the rotorcraft (or the insect) was about to land, either voluntarily or because of an unfavorable headwind. The landing trajectory obtained in these simulations [57] resembled the final approach of bees landing on a flat surface [84]. The 840-g rotorcraft we constructed was able to jump over 1-m-high obstacles (see Fig. 8 in [58]).
3.3.2 The "Optic Flow Regulator"

More recently we developed a genuine "OF-based autopilot" called OCTAVE (which stands for optical altitude control system for aerial vehicles) that enables a micro-helicopter to perform challenging tasks such as takeoff, terrain following, reacting suitably to wind, and landing [70–74]. The idea was to integrate an OF sensor into a feedback loop driving the robot's lift so as to compensate for any deviations of the OF sensor's output from a given set point. This is what we call the OF regulator for ground avoidance. The term "regulator" is used here as in control theory, to denote a feedback control system striving to maintain a variable constantly equal to a given set point. The variable regulated is often a temperature, a speed, or a distance, but here it is the variable ω [rad/s] corresponding to the Vx:h ratio, which can be sensed directly by an OF sensor. The OF sensor produces a signal ωmeas (Fig. 3.3B) that is compared with an OF set point ωset (Fig. 3.4A). The error signal ε = ωmeas − ωset drives a controller adjusting the lift L, and hence the groundheight h, so as to minimize ε. All the operator does is set the pitch angle, and hence the forward thrust and airspeed (see Fig. 3.4A): the OF regulator does the rest, keeping the OF, i.e., the Vx:h ratio, constant. In the steady state (i.e., at t = ∞), ωmeas ≅ ωset and the groundheight h becomes proportional to the groundspeed Vx:

h = K · Vx (with K = 1/ωset = constant)   (3.2)

Fig. 3.4 (A) The OCTAVE autopilot consists of a feedback control system, called the optic flow regulator (bottom part), that controls the vertical lift, and hence the groundheight, so as to maintain the ventral OF, ω, constant and equal to the set point ωset whatever the groundspeed. (B) Like flies [13] and bees [21], our micro-helicopter (MH) gains speed by pitching its mean flight force vector F forward at an angle Θ with respect to the vertical. Controlling F (via the rotor rpm) amounts to mainly controlling L because Θ remains small (Θmax < 10° for Vx max = 3 m/s) (from [28])

3.3.3 Micro-Helicopter (MH) with a Downward-Looking Optic Flow Sensing Eye

To test the robustness of the OF regulator, we implemented it on a micro-helicopter (MH) equipped with a ventral OF sensor [70]. The robot (Fig. 3.5a) is tethered to the arm of a flight mill, which is driven in elevation and azimuth by the MH's lift and forward thrust, respectively (Fig. 3.5b). Any increase in the rotor speed causes the MH to lift and rise, and the slightest (operator-mediated) forward tilting induces the MH to gain speed. The flight mill is equipped with ground-truth azimuthal and elevation sensors with which the position and speed of the MH can be measured with great accuracy in real time. The MH is equipped with a minimalistic two-pixel ventral eye driving a single EMD (Fig. 3.2e). The latter senses the OF produced by the underlying arena, which is covered by a pattern that is richly textured and randomly distributed in terms of both spatial frequency and contrast m (0.04 < m < 0.3). Whenever the operator commands the robot to pitch forward or backward, the eye is actively counter-rotated by a micro-servo that keeps the gaze oriented vertically downward. This process mimics the pitch stabilization mechanism at work in the eyes of flying flies, ensuring suitable alignment of the eyes in space [41]. It eliminates the adverse effects of the rotational OF and the resulting need for a "derotation" procedure (Srinivasan et al., Chap. 2 of this volume; Zufferey et al., Chap. 6 of this volume).
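The regulator loop can be caricatured in a few lines of simulation. All gains, dynamics, and the speed ramp below are hypothetical, chosen only to show the loop settling at the steady state of Eq. (3.2); the real MH acts on lift through the rotor rpm, with its own dynamics:

```python
# Toy discrete-time sketch of the OF regulator loop (Fig. 3.4A).
omega_set = 3.0                  # OF set point [rad/s] (~1 m height at 3 m/s)
h, dt, gain = 0.3, 0.01, 0.5     # initial height [m], time step [s], loop gain
for step in range(5000):
    Vx = min(3.0, 0.6 * step * dt)        # operator-driven speed ramp to 3 m/s
    omega_meas = Vx / h                    # ventral OF sensor: omega = Vx / h
    err = omega_meas - omega_set           # error driving the lift controller
    h = max(0.05, h + gain * err * dt)     # climb if OF too high, sink if too low
# h settles near Vx / omega_set = 1 m, as in the 3 m/s level flight of Fig. 3.6
```

Note that the loop never needs Vx or h individually, only their ratio, which is the central point of the OF regulator scheme.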
Fig. 3.5 (a) 100-g micro-helicopter (MH) equipped with a ventral OF sensor (Fig. 3.2e) and an OF regulator (Fig. 3.4A). The 5-g rotor is driven by a DC motor via a reducer. (b) The MH, which is mounted on the light pantographic arm of a flight mill, can be remotely pitched forward at an angle Θ while keeping its roll attitude. The MH circles counterclockwise over a large arena (outside diameter: 4.5 m). In the experiments presented here (Fig. 3.6), the arena was flat, without any rising slopes (from [71])
3.3.4 Insects' Versus the Seeing Helicopter's Behavioral Patterns

OCTAVE's OF regulator scheme (Fig. 3.4A) results in the behavioral patterns shown in Fig. 3.6, which give the MH flight variables monitored during a 70-m flight over a flat terrain [28]. In Fig. 3.6A (left), the operator simply pitched the MH forward rampwise by an angle Θ = +10° (between arrowheads 1 and 2). The ensuing increase in groundspeed Vx (see B) automatically made the MH take off, since the feedback loop consistently increased h proportionally to Vx to comply with Eq. (3.2). Once it had reached 3 m/s, the MH flew level at a groundheight h of approximately 1 m – the value imposed by the OF set point ωset (Fig. 3.6C). After covering 42 m, the MH was pitched backward rampwise by an opposite angle Θ = –10° (between arrowheads 3 and 4 in Fig. 3.6A), and the ensuing deceleration (see 3.6B) automatically triggered a gradual descent until landing occurred. As the landing gear kept the robot's eye 0.3 m above ground (dotted horizontal line), touchdown occurred shortly before the groundspeed Vx had reached zero, and the MH ended its journey with a short ground run. The MH flight pattern shows that an airborne vehicle can take off, navigate, and even land on flat terrain without having to measure the groundheight or groundspeed, provided it is equipped with an OF sensor facing the ground and an OF regulator. The OF regulator concept and the robot's performances – which are extremely reproducible [73] – were found to account for a series of puzzling, seemingly unconnected flying abilities observed by many authors during the last 70 years in various species (fruitflies, honeybees, moths, mosquitoes, dung beetles, migrating locusts, butterflies, and birds), as discussed in [28]. The most striking parallels with insect behavior focus on the following:
• Automatic terrain following: A gradual increase in relief constitutes a “disturbance” that impinges on the system at a particular point (see Fig. 3.4A). The closed feedback loop overcomes this perturbation by increasing the flight altitude, resulting in a constant groundheight over the rising terrain [70]. This may account for the terrain and canopy following abilities of migrating insects, as described in [28]. • Suitable reactions to headwind: Windspeed is a disturbance that impinges on the system at a different point (see Fig. 3.4A). The feedback loop overcomes this perturbation by forcing the robot to descend in a headwind (and even to land smoothly in a strong headwind; see Fig. 13 in [73]). A similar reaction was observed in locusts, honeybees, and dung beetles, as reported in [28]. • Flight over a no-contrast zone: Here the OF sensor fails to respond, which irremediably causes the robot to crash, just as honeybees crash into mirror-smooth water, as observed in 1963 [42]. • Landing on a flat surface: During the final approach, which starts when the MH has regained its completely upright position (arrowhead 4 in Fig. 3.6A), the OF regulator forces the MH to
N. Franceschini et al.
Fig. 3.6 Actual flight path (A) and flight parameters monitored during a 70-m flight (consisting of about six laps over the test arena: Fig. 3.5b) performed by the micro-helicopter (MH) equipped with the OF regulator (Fig. 3.4A). The complete flight path (A) over the randomly textured pattern includes takeoff, level flight, and landing. (B) Groundspeed Vx. (C) Output ωmeas of the OF sensor. (D) Actual OF ω (calculated from Vx/h) generated by the OF regulator (from [28])
land smoothly at a constant descent angle, α [rad] (Fig. 3.6A), which depends on only two parameters, Eq. (3.3):

α = −arctan(1/(ωSet · τ))    (3.3)

where ωSet [rad s–1] is the OF set point and τ [s] is the surge time constant of the MH (τMH = 2.15 s) [28]. Honeybees also land with a constant slope on flat surfaces [86]. The way bees may achieve this feat has been explained quite differently, however. Landing bees would follow two rules: “(1) adjusting the speed of forward flight to hold constant the angular velocity of the image of the surface as seen by the eye and (2) making the speed of descent proportional to the forward speed” [86]. In contrast, our OF regulator automatically generates smooth landing with a constant slope by acting neither upon the forward speed nor upon the descent speed (details in [28]). In short, the MH's outstanding visuomotor performance and its close resemblance to insects' behavioral patterns show how insects and MAVs may take off, follow terrain, and even land smoothly without having to measure the groundheight, groundspeed, or descent speed, provided they are equipped with OF sensors facing the ground and an OF regulator that servoes the OF to a reference value. This model differs markedly from another one where the OF controls the groundspeed Vx rather than the groundheight h [63, 84, 86, 3]. It can be seen in
3
Optic Flow Based Autopilots: Speed Control and Obstacle Avoidance
Fig. 3.4A that if the ventral OF controlled Vx instead of h, this would generate strikingly different flight patterns, as follows: (i) Instead of following a slanting terrain, as migrating butterflies and our MH do, insects would gradually decelerate until touching the rising ground at a negligible speed, thus inopportunely interrupting their journey. (ii) Instead of descending in a headwind and rising in a tailwind, as honeybees [68], locusts [50], and our MH do, insects would compensate for an unfavorable headwind by increasing their airspeed without changing their groundheight. These two models can be reconciled, however, if we add the hypothesis that another OF regulator based on OF sensors oriented toward the lateral parts of the eyes could be used to control the groundspeed. In the next section, we give the example of flight in a corridor to show how the lateral OF might control the insect’s behavior by determining both the groundspeed and the clearance from the walls.
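Before moving on, the ventral OF regulator discussed in this section can be illustrated with a minimal discrete-time sketch. The gain k_h, the trapezoidal speed profile, and the first-order height dynamics below are hypothetical tuning choices, not OCTAVE's actual parameters; only the servoing of ω = Vx/h to a set point, the 0.3-m eye clearance on the ground, the 2.15-s surge time constant, and the landing-slope formula of Eq. (3.3) come from the text (the set point ωSet ≈ 3 rad/s is inferred from the level-flight figures of Fig. 3.6: 3 m/s at about 1 m).

```python
import math

def simulate_octave(omega_set=3.0, k_h=0.3, dt=0.01):
    """Ventral OF regulator sketch: the loop adjusts groundheight h so that
    the ventral OF (omega = Vx / h) tracks omega_set. Groundspeed Vx is
    imposed by pitch: ramp up, cruise at 3 m/s, ramp back down to zero."""
    h = 0.3                                    # eye height on the ground [m]
    log = []
    for step in range(1800):                   # 18 s of simulated time
        t = step * dt
        if t < 3.0:
            vx = t                             # pitch forward: accelerate
        elif t < 13.0:
            vx = 3.0                           # cruise
        else:
            vx = max(0.0, 3.0 - (t - 13.0))    # pitch backward: decelerate
        omega = vx / h                         # ventral OF actually seen
        h += k_h * (omega - omega_set) * dt    # climb if OF exceeds set point
        h = max(h, 0.3)                        # landing-gear ground limit
        log.append((t, vx, h))
    return log

log = simulate_octave()
# The sketch lifts off only once Vx/0.3 exceeds omega_set, cruises near
# h = Vx / omega_set = 1 m, and settles back on the 0.3-m ground limit.

# Eq. (3.3): constant final descent slope, with tau = 2.15 s (surge time
# constant) and the assumed omega_set of 3 rad/s.
alpha = -math.atan(1.0 / (3.0 * 2.15))
```

The qualitative phases (ground run until sufficient speed, level cruise at about 1 m, descent to touchdown shortly before Vx reaches zero) mirror the flight pattern of Fig. 3.6.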
3.4 An Explicit Control Scheme for Speed Control and Lateral Obstacle Avoidance

Based on behavioral experiments on honeybees conducted at Srinivasan's laboratory and at our own laboratory, we designed the LORA III autopilot (LORA III stands for Lateral Optic flow Regulator Autopilot, Mark III), which is able to control both the forward speed Vx of an aerial vehicle and its lateral distances DR and DL from two corridor walls [79].
3.4.1 Effects of Lateral OF on Wall Clearance and Forward Speed

Honeybees flying through a narrow corridor tend to fly along the midline [53, 83]. To explain this “centering behavior”, the authors hypothesized that bees might balance the speeds of the retinal images (i.e., the lateral OFs) of the two walls. This was confirmed by experiments in which one wall was set in motion: bees flying
in the same direction as the moving wall tended to fly closer to it, whereas bees flying in the opposite direction tended to fly farther away from it. These experiments showed compellingly that the visuomotor control mechanism at work in flying bees depends on the laterally perceived translational OF [83].

We recently reported that honeybees trained to fly along a corridor toward a nectar source did not systematically center on the corridor midline, but could hug one wall (Fig. 3.7b,c) [80]. Bees kept on hugging one wall even when part of the opposite wall was missing (Fig. 3.7d). Interestingly, the forward speed Vx and the distances from the walls, DR and DL, were on average such that the speed-to-distance ratio (i.e., the lateral OF, ω) was maintained virtually constant (at about 230°/s in our 95-cm-wide corridor) [80]. These novel findings prompted us to examine how bees may adopt either “centering behavior” (Fig. 3.7a) or “wall-following” behavior (Fig. 3.7b–d) on the basis of OF sensing, a feat that seems to require jointly controlling the forward speed and the clearance from the walls.

Experiments on tethered [34, 10] and free-flying [14–16] flies, on bees [84, 3], and on tethered locusts [81] have long shown that motion detected in the lateral part of these insects' compound eyes affects the forward thrust, and hence the forward speed. When flying through a tapered corridor lined with regular black and white vertical stripes, honeybees slowed down as they approached the narrowest section and speeded up when the corridor widened beyond this point [84]. Bees therefore tended to adjust their speed proportionally to the local corridor width. The authors concluded that “the speed of the flight is controlled by regulating the image velocity” [84]. The honeybee's “centering behavior” was successfully mimicked by numerous terrestrial robots [12, 18, 76, 85, 44, 43].
The speed of one of the early robots was also controlled on the basis of the sum of the lateral OFs perceived on both sides [76].
3.4.2 New Robotic Demonstrator Based on a Hovercraft (HO)

With a view to explaining honeybees' behavior in a corridor, a miniature hovercraft (HO) was used [79]. With an aerial vehicle of this kind endowed with
Fig. 3.7 Honeybees' centering and wall-following behavior. Bees were trained to enter an apparatus where sugar solution was provided at the end of a wide (width 0.95 m), 3-m-long corridor formed by two 0.25-m-high walls. A digital camera placed above the insect netting covering the corridor filmed the trajectory of individual flying bees over the central part of the corridor. The walls were lined with regularly spaced vertical white-and-gray stripes (period 0.1 m; contrast m = 0.27). The bee's entrance E and the feeder F were placed either on the corridor midline (a: EC and FC) or on one side (b: EL and FL; c and d: ER and FR). In (d), part of the left wall was removed during the trials. The mean ordinate of the trajectories was distributed as shown above (n = number of trajectories recorded under each of the experimental conditions) (from [80])
natural roll and pitch stabilization abilities, planar flight control systems can be developed conveniently. Like flies [94, 92] and sandwasps [96], our modified HO can produce forward and sideward slips independently, because the two lateral thrusters LT1 and LT2
that we have added (Fig. 3.8) make it fully actuated. It travels at a constant altitude (~2 mm) and senses the environment with two laterally oriented eyes, which measure the right and left OFs. Each eye contains only two photoreceptors, each driving an EMD of the
Fig. 3.8 Fully actuated hovercraft (HO) developed to test the LORA III autopilot (Fig. 3.9). (a) The HO is equipped with two elementary eyes looking at an angle of +90°/–90° to the side. Each of them comprises only 2 pixels driving a single OF sensor (cf. Fig. 3.2c,e). (b) The miniature HO (36 × 21 × 14 cm) moves along a corridor, the walls of which have a random texture. The vehicle's heading is maintained along the X-axis via a heading lock system compensating for any yaw disturbances by activating the two rear thrusters differentially. The two lateral eyes are therefore oriented perpendicular to the corridor axis. The OF experienced by each eye is proportional to the groundspeed Vx and inversely proportional to the distance (DR or DL) from the wall the eye is looking at (Eqs. (3.4) and (3.5)) (from [79])
type shown in Fig. 3.2e. The HO's heading is maintained along the X-axis of the corridor (Fig. 3.8b) by a heading lock system (based on a micro-magnetic compass enhanced by a rate gyro), which compensates for any yaw disturbances by controlling the two rear thrusters differentially. This system mimics the honeybee's heading lock system, which is based on a polarized light compass [30, 69] and gives the insect an impressively straight course even in the presence of wind [68]. The 820-g HO travels at a groundspeed vector V over a flat surface along a corridor, the walls of which are wallpapered with vertical stripes of random spatial frequency and contrast, mimicking a richly textured visual environment (Fig. 3.8b). Since any yaw rotations are compensated for, each eye experiences a purely translational OF, ωR or ωL, which can be simply defined as follows:

ωR = Vx / DR    (3.4)
ωL = Vx / DL    (3.5)
where Vx is the hovercraft’s groundspeed; DR and DL are the distances from the right and the left walls, respectively.
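Equations (3.4) and (3.5) are straightforward to evaluate. As a quick consistency check (using the steady-state figures reported for the hovercraft in Sect. 3.4.5.1: 1 m/s at a 0.25-m wall clearance), the lateral OF works out to about 229°/s, close to the 230°/s sideways set point and to the roughly constant ratio measured in wall-following bees:

```python
import math

def lateral_of_deg(vx, d):
    """Translational OF seen by a sideways-looking eye (Eqs. 3.4/3.5):
    omega = Vx / D [rad/s], converted here to degrees per second."""
    return math.degrees(vx / d)

# Hovercraft steady state: Vx = 1 m/s at D = 0.25 m from the wall followed
omega = lateral_of_deg(1.0, 0.25)   # about 229 deg/s
```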
3.4.3 The LORA III Autopilot: A Dual OF Regulator

The hovercraft is controlled by the bio-inspired autopilot called LORA III (Fig. 3.9) [79]. This control scheme has been called the dual OF regulator, because it consists of two strongly interdependent feedback loops, the side control loop and the forward control loop, each of which uses the data collected by the OF sensors looking at both sides. Each loop controls the thrust along its own translational degree of freedom (i.e., in the surge or sway direction) and has its own OF set point. The tight coupling existing between the two loops is illustrated, for instance, by the fact that the feedforward gain in the side control loop is proportional to the forward speed Vx, which is determined in turn by the forward control loop (see Fig. 3.9). The hovercraft reacts to any changes in the lateral OFs by selectively adjusting the two orthogonal components Vx and Vy of its groundspeed vector V according to the principles described in the following two subsections:
3.4.3.1 Side Control System

The principle underlying the side control system (Fig. 3.9, bottom loop) is based on findings made on the honeybee's centering and wall-following behaviors described in Sect. 3.4.1. The side control system is a lateral OF regulator akin to the ventral OF regulator for ground avoidance (OCTAVE: see Sect. 3.3). It takes the OF generated by the two corridor walls into account and strives to keep the larger of the two OFs measured (left or right) constantly equal to a sideways OF set point ωSetSide. The hovercraft then reacts to any OF deviations from ωSetSide by controlling the lateral thrust, and hence the side slip speed Vy, and hence the distance from the left or right wall. A maximum criterion selects the larger of the two OFs measured, ωRmeas or ωLmeas, and the sign of their difference determines which wall will be followed. The larger OF is compared with the sideways OF set point ωSetSide. The error signal εSide transmitted to the side controller (Fig. 3.9) is therefore calculated as follows:

εSide = sign(ωLmeas − ωRmeas) × (ωSetSide − max(ωLmeas, ωRmeas))    (3.6)
In the steady state, the OF selected will become equal to ωSetSide due to the natural integrator present in this loop. The side control loop is nonlinear since the OF is an inverse function of the controlled variable DL or DR. The transfer function of the “side dynamics” (Fig. 3.9), relating the hovercraft's side speed Vy to the control signal (uLT1 − uLT2) delivered by the side controller, approximates a first-order low-pass filter with an identified time constant of 0.5 s [79]. A proportional-derivative (PD) controller was introduced into the side control system to increase the damping, thus improving the stability and enhancing the response dynamics. Details of the tuning procedures are given in [79].

3.4.3.2 Forward Control System

The forward control system (Fig. 3.9, upper loop) is also based on the OF perceived on both sides, as reported in Sect. 3.4.1 for a robot [76] and for the honeybee [84]. This control system strives to hold the sum of the left and right OFs measured (ωLmeas + ωRmeas) constant and equal to a forward OF set point ωSetFwd by controlling the forward thrust, and hence
Fig. 3.9 The LORA III autopilot is a dual OF regulator consisting of two interdependent visual feedback loops: the forward control loop and the side control loop, each of which controls one degree of freedom (x or y). The forward controller adjusts the forward thrust, and hence the forward speed Vx, so as to minimize the error signal εFwd. The side controller adjusts the lateral thrust, and hence the side speed, and hence the ordinate y, so as to minimize the error signal εSide. The sign of the difference between the left and right OFs measured determines which wall will be followed. The robot's initial ordinate y0 and the right and left wall ordinates (yR, yL) are regarded by LORA III as disturbances impinging on the system at particular points (black arrows) (from [79])
the forward speed Vx. This control scheme automatically generates a “safe” forward speed, i.e., one commensurate with the local corridor width. More specifically, the feedback signal corresponding to the sum of the two OFs measured is compared with the forward OF set point ωSetFwd (Fig. 3.9). In the steady state, the sum of the OFs becomes equal to ωSetFwd. The transfer function of the “forward dynamics”, relating the hovercraft's forward speed Vx to the forward control signal uRT1 + uRT2 (Fig. 3.9), is also given by a first-order low-pass filter with a time constant of 0.5 s [79]. A proportional-integral (PI) controller was introduced here into the feedback loop to improve the closed-loop dynamics and give a zero steady-state error. Further details of the tuning procedures used are given in [79].
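The interplay of the two loops can be sketched with a deliberately simplified kinematic simulation: the OF measurements are taken as exact (no EMD model), and the PD/PI controllers are replaced by elementary proportional and integral updates with hypothetical gains and sign conventions, so only the dual-regulator logic itself (Eqs. (3.4)–(3.6)) is taken from the text.

```python
import math

def simulate_lora(omega_fwd_deg=300.0, omega_side_deg=230.0, d=1.0,
                  y0=0.1, k_fwd=0.5, k_side=0.05, dt=0.01, t_end=30.0):
    """Kinematic sketch of the dual OF regulator logic.
    y is the distance to the right wall (so d - y is the left clearance).
    Forward loop: integral action drives (omega_R + omega_L) to omega_fwd.
    Side loop: drives the larger lateral OF to omega_side (Eq. 3.6)."""
    w_fwd = math.radians(omega_fwd_deg)
    w_side = math.radians(omega_side_deg)
    y, vx = y0, 0.2                       # start slow, near the right wall
    for _ in range(int(t_end / dt)):
        w_r = vx / y                      # Eq. (3.4)
        w_l = vx / (d - y)                # Eq. (3.5)
        # Forward control: integral update on the forward speed
        vx += k_fwd * (w_fwd - (w_r + w_l)) * dt
        # Side control: Eq. (3.6); under this (assumed) sign convention a
        # positive error moves the robot away from the right wall
        eps_side = math.copysign(1.0, w_l - w_r) * (w_side - max(w_l, w_r))
        y += k_side * eps_side * dt
    return vx, y

vx_inf, y_inf = simulate_lora()
```

With ωSetFwd = 300°/s and ωSetSide = 230°/s in a 1-m corridor, the sketch settles near 0.94 m/s with a 0.23-m clearance from the wall followed; starting from the other side of the midline it follows the opposite wall at the mirror-image ordinate.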
3.4.4 “Operating Point” of the Dual OF Regulator

The scheme in Fig. 3.9 requires neither the speed nor the distance from the walls to be measured. The question therefore arises as to whether the system will converge and give a single steady-state “operating point” in terms of a specific forward speed and a specific distance from the walls. In the following discussion, the LORA III autopilot is assumed to have reached the steady state (t = ∞), that is, (i) the sum of the two lateral OFs measured (ωRmeas + ωLmeas) has reached the forward OF set point ωSetFwd and (ii) the larger of the two lateral OFs measured (here it is taken to be the right one, ωRmeas) has reached the sideways OF set point ωSetSide (see above). We can therefore write as follows:

ωSetFwd = ωR∞ + ωL∞ = Vx∞/DR∞ + Vx∞/DL∞    (3.7)
ωSetSide = ωR∞ = Vx∞/DR∞    (3.8)

The forward speed Vx∞ and the ordinate y∞ = DR∞ that the hovercraft will reach in the steady state can be said to define an “operating point” (Vx∞, y∞) that can be calculated straightforwardly from Eqs. (3.7) and (3.8) as follows:

Vx∞ = (ωSetSide · (ωSetFwd − ωSetSide) / ωSetFwd) · D    (3.9)
y∞ = DR∞ = ((ωSetFwd − ωSetSide) / ωSetFwd) · D,  y∞ ∈ (0, D/2]    (3.10)
D = DR∞ + DL∞    (3.11)
where D is the corridor width. Equations (3.9) and (3.10) have five important consequences: • Two parameters alone, ωSetFwd and ωSetSide, suffice to define the speed and lateral position (Vx∞, y∞) that the hovercraft (or the bee) will adopt in a corridor of a given width D: the flying agent needs to measure neither its speed nor its clearance from the walls nor the corridor width. • Both the speed Vx∞ and the ordinate y∞ adopted in the steady state can be seen to be proportional to the corridor width D (Eqs. (3.9) and (3.10)). If the corridor is twice as wide, the agent will automatically fly twice as fast. This mechanism therefore accounts for the view that “the bee controls its flight speed by monitoring and regulating the angular velocity of the environment's image as represented on the eye, that is, if the tunnel width is doubled, the bee flies twice as fast” [84]. In cases where the agent follows one wall (as in Figs. 3.7b–d), doubling the corridor width will not only double the forward speed but also the clearance from the wall followed. • If the sideways OF set point is larger than half the value of the forward OF set point (i.e., ωSetSide > ωSetFwd/2), the agent will reach a final ordinate y∞ (Eq. (3.10)) that is smaller than half the value of the corridor width. This means that the flying agent will hug one wall, thus generating wall-following behavior. • At all sideways OF set point values that are smaller than half the forward OF set point (i.e., ωSetSide < ωSetFwd/2), the agent cannot reach the final ordinate y∞ predicted by Eq. (3.10) without causing a change in sign in the error signal εSide (see Eq. (3.6)). The agent will therefore be forced to maintain the OF value at ωSetFwd/2, i.e., it will consistently navigate along the midline of the corridor, thus showing centering behavior. Oscillations in the trajectory are likely to occur, however, because of the repeatedly changing sign of the error signal εSide (Eq. (3.6)).
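The consequences listed above can be condensed into a few lines of code. The function below evaluates Eqs. (3.9) and (3.10); clamping ωSetSide at ωSetFwd/2 to cover the centering regime is our convenience, not part of the original scheme:

```python
import math

def operating_point(omega_fwd_deg, omega_side_deg, d):
    """Steady-state forward speed Vx [m/s] and ordinate y [m] from
    Eqs. (3.9) and (3.10). When omega_side < omega_fwd / 2 the side loop
    saturates at omega_fwd / 2, which yields centering behavior."""
    w_fwd = math.radians(omega_fwd_deg)
    w_side = max(math.radians(omega_side_deg), w_fwd / 2.0)
    vx = w_side * (w_fwd - w_side) / w_fwd * d     # Eq. (3.9)
    y = (w_fwd - w_side) / w_fwd * d               # Eq. (3.10)
    return vx, y

# Wall-following regime: 300/230 deg/s in a 1-m corridor
vx_wf, y_wf = operating_point(300, 230, 1.0)   # about 0.94 m/s, 0.23 m
# Centering regime: any sideways set point below 150 deg/s
vx_c, y_c = operating_point(300, 90, 1.0)      # about 1.31 m/s, 0.50 m
```

These two operating points match the steady-state values reported for the simulated hovercraft in Sects. 3.4.5.1 and 3.4.5.2.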
3.4.5 Simulation Results: Flight Paths Along Straight or Tapered Corridors

The implementation of the complete LORA III visuomotor autopilot on the robotic hovercraft is underway, and only computer-simulated experiments will be presented here [79]. The computer simulations include the dynamic hovercraft model on both the surge and sway axes, the controllers with their PD or PI compensation, actuator saturations, the full optical transfer function of the lens/photoreceptor system of each eye, detailed interactions between each photoreceptor and the random patterns lining the two walls, and complete processing within each OF sensor, according to the EMD principle described in Fig. 3.2c. These simulations were carried out on a standard PC equipped with the Matlab™/Simulink toolbox at a sampling frequency of 1 kHz. This sampling frequency is the same as the sampling rate of the microcontroller installed onboard the LORA robot (Fig. 3.8a).
3.4.5.1 Wall-Following Behavior Along a Straight Corridor

Figures 3.10, 3.11, and 3.12 show some examples of the computer-simulated behavior of the hovercraft equipped with the LORA III autopilot (Fig. 3.9) [79]. The first visual environment simulated was a straight corridor (3 m long, 1 m wide). The right and left walls were lined with a random pattern of various gray vertical stripes covering a 1-decade contrast range (from 4% to 38%) and a 1.1-decade spatial frequency range (from 0.068 c/° to 0.87 c/° reading from the corridor midline). From the three simulated trajectories shown in Fig. 3.10a, the hovercraft can be seen to navigate without ever colliding with the walls. It reached the same steady speed regardless of its initial ordinate y0 at the entrance to the corridor (Fig. 3.10b). All three trajectories correspond to the case where the sideways OF set point was larger than half the forward OF set point (ωSetSide = 230°/s > 150°/s = ωSetFwd/2). Under these conditions, the HO adopted wall-following behavior, as predicted in Sect. 3.4.4, and followed the right or left wall, depending on the initial ordinate y0. The HO consistently generated a steady-state clearance of 0.25 m from either wall (left wall: squares and crosses; right wall: full dots) and a “safe” forward speed of Vx∞ = 1 m/s. Steady-state clearance and speed define a similar “operating point” to that calculated from Eqs. (3.9) and (3.10): taking ωSetFwd = 300°/s, ωSetSide = 230°/s, and a corridor width D = 1 m gives an operating point Vx∞ = 0.94 m/s and y∞ = 0.23 m.
Fig. 3.10 (a) Wall-following behavior of the hovercraft (HO) (time marks on the flight paths are at 0.3-s intervals). The HO moves to the right at a forward OF set point ωSetFwd = 300°/s (3.28 V) and a sideways OF set point ωSetSide = 230°/s (2.21 V), starting at various ordinates y0 (squares: y0 = 0.90 m, crosses: y0 = 0.50 m, full dots: y0 = 0.10 m). (b) Irrespective of its initial ordinate, the HO ends up following one wall with a clearance of 0.25 m, at a forward speed Vx∞ = 1 m/s. (c, d) The sum of the lateral OFs measured and the sum of the actual OFs both eventually equal the forward OF set point ωSetFwd = 300°/s. (e, f) The larger value of the OFs measured and that of the actual OFs both eventually equal the sideways OF set point ωSetSide = 230°/s (from [79])
Whether the hovercraft followed the right or the left wall depended on the sign of the error signal εSide (Eq. (3.6)). The hovercraft's initial ordinate y0 was treated like a disturbance (see Fig. 3.9), which was rejected by the dual OF regulator. It can be seen from Fig. 3.10d,f that both the sum and the larger value of the lateral OFs eventually reach the OF set points of 300°/s and 230°/s, respectively.
3.4.5.2 “Centering Behavior”: A Particular Case of “Wall-Following Behavior”

Figure 3.11 illustrates the opposite case, where the OFs generated on either side (150°/s: Fig. 3.11d) never reached the sideways OF set point, ωSetSide (which was set at 90°/s, 110°/s, or 130°/s). This is because these required values of ωSetSide were all smaller than half the value of the forward OF set point (i.e., ωSetSide < ωSetFwd/2 = 150°/s). These low values of ωSetSide relative to ωSetFwd forced the hovercraft to center between the two walls, as predicted at the end of Sect. 3.4.4. In addition, the robot's ordinate can be seen to oscillate about the midline, due to the ever-changing sign of the error signal εSide (Eq. (3.6)) – as also predicted at the end of Sect. 3.4.4. The LORA III dual OF regulator may therefore even account for the striking oscillatory flight pattern observed in honeybees when they fly centered along a corridor (see Fig. 2a in [53]).
Fig. 3.11 “Centering behavior” as a particular case of “wall-following behavior”. (a, b) Simulated trajectories of the hovercraft (HO) moving to the right along a straight corridor at a forward OF set point ωSetFwd = 300°/s, with various sideways OF set points (crosses: ωSetSide = 130°/s, open dots: ωSetSide = 110°/s, full dots: ωSetSide = 90°/s). Initial condition y0 = 0.25 m; time marks as in Fig. 3.10. The HO can be seen to consistently end up centering between the two walls at a forward speed of 1.3 m/s. (c, d) The larger of the two OFs measured and the larger actual OF both eventually equal 150°/s (= ωSetFwd/2). In attempting to reach any of the three sideways OF set points, the LORA III autopilot triggers changes in the sign of the error signal εSide (Eq. (3.6)), causing oscillations about the midline (see (a)) (from [79])
The error signal ε Side is consistently minimum along the midline, but the OF cannot become smaller than 150◦ /s (i.e., ωSetFwd /2). The hovercraft can be seen to have reached the steady-state forward speed of Vx∞ = 1.3 m/s (Fig. 3.11b) in all three cases because all three values of ωSetSide were below 150◦ /s. In all three cases, the steady-state operating point of the hovercraft was similar to that predicted in Sect. 3.4.4: at ωSetFwd = 300◦ /s and ωSetSide < 150◦ /s, a 1-m-wide corridor gives Vx∞ =1.31 m/s and y∞ = DR∞ = DL∞ = 0.5 m (Eqs. (3.9) and (3.10)). The lateral OFs measured on either side reached 150◦ /s (= ωSetFwd /2) in all three cases (Fig. 3.11c), and their sum was therefore equal to 300◦ /s (=ωSetFwd ). Similar centering behavior occurred at all values of ωSetSide such that ωSetSide ≤ ωSetFwd /2. Centering behavior can therefore be said to be a particular case of wall-following behavior.
3.4.5.3 Flight Pattern Along a Tapered Corridor

In the last set of computer-simulated experiments presented here, the environment was a 6-m-long tapered corridor with a 1.24-m-wide entrance and a 0.5-m-wide constriction located midway (Fig. 3.12). The right and left walls were lined with a random pattern of gray vertical stripes covering a 1-decade contrast range (from 4% to 38%) and a 1.5-decade spatial frequency range (from 0.034 c/° to 1.08 c/° reading from the corridor midline). As shown in Fig. 3.12a and b, whatever its initial ordinate y0, the HO automatically slowed down on approaching the narrowest section of the corridor and accelerated when the corridor widened beyond this point. In Fig. 3.12a, the HO adopted wall-following behavior because ωSetSide > ωSetFwd/2 =
Fig. 3.12 Automatic navigation along a tapered corridor, requiring no data on the corridor width or the tapering angle α (marks on trajectories as in Fig. 3.10). Again, the hovercraft's behavior is entirely determined by its two OF set points: ωSetFwd = 300°/s and ωSetSide = 230°/s. (a) Simulated trajectories of the HO moving to the right along the corridor (tapering angle α = 7°) with three initial ordinates y0 (open dots: y0 = 0.90 m, crosses: y0 = 0.60 m, full dots: y0 = 0.30 m). (b) The forward speed decreases and increases linearly with the local corridor width and the distance x traveled. (c, d) The sum of the two lateral OFs measured (and that of the actual OFs computed with Eqs. (3.4) and (3.5)) is maintained constant and equal to 300°/s (= ωSetFwd). (e, f) The side control system effectively keeps whichever lateral OF is larger at a constant value of approximately 230°/s (= ωSetSide) (from [79])
150◦ /s (see Eq. (3.10)). It can be seen from Fig. 3.12c that the forward control system succeeded in keeping the sum of the two lateral OFs measured constant and equal to the forward OF set point ωSetFwd = 300◦ /s. Likewise, the side control system succeeded in keeping the larger of the two lateral OFs measured virtually constant and equal to the sideways OF set point ωSetSide = 230◦ /s (Fig. 3.12e). Not only the initial ordinate y0 but also the (gradually changing) ordinates yR and yL are regarded by the LORA III autopilot as output perturbations, which are rejected by the dual OF regulator (see the points of application of these disturbances in Fig. 3.9).
The ensuing forward speed profile along the tapered corridor is particularly instructive (Fig. 3.12b): the HO's forward speed Vx tends at all times to be proportional to the distance traveled x, as observed with the flight path of bees flying freely along a tapered corridor [84]. This plot of Vx = dx/dt versus x actually defines a phase plane, in which the linear change in speed observed with the distance traveled means that the speed Vx(t) is bound to vary as an exponential function of time [79]:

Vx(t) = Vx(t0) · e^(sign(α)·(t−t0)/τ(α))    (3.12)
where the time constant τ(α) is a monotonic function of the tapering angle α, as follows:

τ(α) = ωSetFwd / (2 · tan|α| · ωSetSide · (ωSetFwd − ωSetSide))    (3.13)

Thus, without having any knowledge of the corridor width D or the tapering angle α, the HO (or the bee) is therefore bound to slow down exponentially with a time constant τ(α) when entering the narrowing section (α < 0) of the corridor and to speed up exponentially with the same time constant τ(α) after leaving the constriction (α > 0), in accord with the honeybee's observed flight path in a tapered tunnel [84]. The behavioral effects of two other types of disturbances affecting the LORA III autopilot were also studied [79]. With a large gap in a wall (as in the bee experiments shown in Fig. 3.7d), the HO was not flummoxed and kept on following the remaining wall [79]. When a wall pattern was moved at a constant speed VP in the direction of travel (as in the original bee experiments [53]), the robot moved closer to the moving wall, and vice versa [79]. The reason is that the robot's relative speed with respect to the moving wall became Vx − VP instead of Vx in Eq. (3.8), causing a predictable shift in the robot's operating point (Vx∞, y∞) as computed from Eqs. (3.9) and (3.10).
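Equation (3.13) can be checked numerically: integrating the phase-plane relation Vx = k·D(x), with k = ωSetSide(ωSetFwd − ωSetSide)/ωSetFwd (the slope factor of Eq. (3.9)), through the narrowing half of the corridor reproduces the exponential decay of Eq. (3.12) with the predicted time constant. The integration step and duration below are arbitrary choices:

```python
import math

W_FWD = math.radians(300.0)    # forward OF set point [rad/s]
W_SIDE = math.radians(230.0)   # sideways OF set point [rad/s]
K = W_SIDE * (W_FWD - W_SIDE) / W_FWD   # Vx = K * D(x), from Eq. (3.9)

def tau(alpha_deg):
    """Time constant of Eq. (3.13) for tapering angle alpha."""
    return W_FWD / (2.0 * math.tan(math.radians(abs(alpha_deg)))
                    * W_SIDE * (W_FWD - W_SIDE))

def integrate_speed(alpha_deg=7.0, d0=1.24, t_end=2.0, dt=1e-4):
    """Euler-integrate dx/dt = K * D(x) in the narrowing section, where
    D(x) = d0 - 2*tan|alpha|*x (the Fig. 3.12 geometry: 1.24-m entrance,
    0.5-m constriction 3 m in, both walls tapering at 7 degrees)."""
    slope = 2.0 * math.tan(math.radians(alpha_deg))
    x = 0.0
    for _ in range(int(t_end / dt)):
        x += K * (d0 - slope * x) * dt
    return K * (d0 - slope * x)          # forward speed at t_end

v_end = integrate_speed()
v_pred = K * 1.24 * math.exp(-2.0 / tau(7.0))   # closed form, Eq. (3.12)
```

With α = 7° this gives τ ≈ 4.3 s, and the integrated speed after 2 s agrees with the closed-form exponential to better than 0.1%.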
3.5 Conclusion

3.5.1 Is There a Pilot Onboard an Insect?

In this chapter, we have recounted our attempts to specify the types of operations that insects may perform to guide their flight on the sole basis of optic flow (OF) cues. The OCTAVE principle differs markedly from another current OF-based navigation strategy, in which OF sensing needs to be complemented by groundspeed sensing (based on a GPS, for example) to estimate the groundheight (see Eq. (3.1)) so as to follow terrain and land safely [5, 31]. The OCTAVE and LORA III autopilots harness the power of the translational OF more parsimoniously because they do not need to measure or estimate any distances or groundspeeds and therefore do not require any sensors other than OF sensors. The purpose of these autopilots is not to regulate any distances or groundspeeds. The only variable they need to regulate (i.e., maintain constant) is the OF – a variable that represents a speed-to-distance ratio and that can be directly measured by a dedicated sensor called an OF sensor.

OCTAVE and LORA III autopilots include three interdependent OF regulators in all, which control the lift, lateral thrust, and forward thrust, on which the groundheight, lateral positioning, and groundspeed, respectively, depend. The block diagrams (Figs. 3.4 and 3.9) show which variables need to be measured, which ones are controlled, and which ones are regulated, as well as the points of application of the various disturbances. They also give the causal and dynamic relationships between these variables. These three feedback control loops may enable an agent having no mechanical contact with the ground to automatically attain a given groundspeed, a given groundheight, and a given clearance from the walls in a simple environment such as a corridor, without any need for speed or range sensors giving explicit speed or distance data. In a tapered corridor, for example, the hovercraft (HO) automatically tunes both its clearance from the walls and its groundspeed to the local corridor width (Fig. 3.12), although it is completely “unaware” of the exact corridor width, the forward speed, and the clearance from the walls. The behavior depends wholly on three constants: the three OF set points ωSetVentr, ωSetSide, and ωSetFwd. Experimental simulations and physical demonstrators showed that difficult operations such as automatic takeoff, ground avoidance, terrain following, centering, wall-following, suitable reaction to headwind, groundspeed control, and landing can be successfully performed on the basis of these three OF regulators.
These simple control schemes actually account for many surprising findings published during the last 70 years on insects' visually guided performances (details in [28]), including honeybees' habit of landing at a constant slope [86] and their flight pattern along a tapered corridor [84]. Our novel finding that bees do not center systematically in a corridor but tend to follow a wall, even when the opposite wall has been removed (Fig. 3.7b–d), cannot be accounted for by the optic flow balance hypothesis [53, 83] but is convincingly accounted for by the LORA III model, in which "centering behavior" [53, 83] turns out to be a particular case of "wall-following behavior" (Sect. 3.4.5.2). LORA III (Fig. 3.9) would give the bee a safe speed and a safe clearance from the walls, whereas OCTAVE
(Fig. 3.4) would give the bee a safe clearance from the ground – a clearance commensurate with its forward speed, whatever that speed. These explicit control schemes can therefore be viewed as working hypotheses. In insects, the three OF set points may depend on innate, internal, or external parameters. Recent findings have shown that the forward speed of bees flying along a corridor depends not only on the lateral OF but also partly on the ventral OF [4], which suggests that the ventral and lateral OFs should perhaps not be handled as separately as they are in OCTAVE and LORA III. Recent experimental simulations have shown how an agent might fly through a tunnel by relying on the OF generated by all four surfaces: the lateral walls, the ground, and the roof [61]. Indoor experiments on the autonomously flying microplane MC2 showed that when the plane makes a banked turn to avoid a wall, the ventral OF may originate partly from the lateral wall and the lateral OF partly from the ground (Zufferey et al., Chap. 6 of this volume). This particularity may not concern insects, however, since they compensate for the banking and pitching of their thorax [40, 41] by actively stabilizing their "visual platforms" [92, 96]. A high-speed, one-axis oculomotor compensatory mechanism of this kind was recently implemented onboard a 100-g aerial robot [52]. The electronic implementation of an OF regulator is not very demanding (nor is its neural implementation), since it requires only a few linear operations (such as adding, subtracting, and applying various filters) and nonlinear operations (such as minimum and maximum detection). OF sensors are the crux of OF regulators. Our neuromorphic OF sensors deliver an output that grows monotonically with the OF ω, regardless of the spatial frequency and contrast encountered (see Sect. 3.2), much like honeybees' velocity-tuned (VT) neurons [45].
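The following sketch illustrates why a "time-of-travel" measurement between two neighbouring photoreceptors can yield an output that grows with the angular speed ω but is independent of contrast. The signal shapes, the threshold, and the 4° inter-receptor angle are assumptions for illustration only, not the circuit described in Sect. 3.2:

```python
# Illustrative time-of-travel OF estimate: a moving contrast edge excites
# two receptors in succession; omega = delta_phi / transit_time, whatever
# the edge's amplitude (contrast). Threshold and geometry are assumptions.

def transit_time(sig_a, sig_b, dt, threshold=0.5):
    """Delay (s) between threshold crossings of two photoreceptor signals."""
    t_a = next(i for i, s in enumerate(sig_a) if s > threshold) * dt
    t_b = next(i for i, s in enumerate(sig_b) if s > threshold) * dt
    return t_b - t_a

def of_estimate(sig_a, sig_b, dt, delta_phi_deg=4.0):
    """OF in deg/s from the inter-receptor angle and the measured delay."""
    return delta_phi_deg / transit_time(sig_a, sig_b, dt)

# An edge sweeping at 100 deg/s crosses receptors 4 degrees apart 40 ms
# apart; a low-contrast (amp 1.0) and a high-contrast (amp 3.0) edge
# yield the same estimate.
dt = 0.001
edge = lambda i0, amp: [amp if i >= i0 else 0.0 for i in range(200)]
for amp in (1.0, 3.0):
    print(round(of_estimate(edge(10, amp), edge(50, amp), dt)))  # 100
```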
Most importantly, the OF regulator scheme greatly reduces the dynamic range constraints imposed on OF sensors, since it continuously tunes the animal's behavior so that the OF deviates little from the OF set point [28]. In Fig. 3.6C, for example, the MH holds ωmeas virtually constant throughout its journey, despite the large groundspeed variations that occur during takeoff and landing, for instance (see Fig. 3.6B). This and other examples (see Figs. 3.10c,e, 3.11c, and 3.12c,e) show that the 1-decade dynamic range (from 40°/s to 400°/s [75]) of our OF sensors is up to the task.
N. Franceschini et al.
3.5.2 Potential Aeronautics and Aerospace Applications

The control schemes presented here rely on the OF to carry out reputedly difficult tasks such as taking off, terrain following, landing, avoiding lateral walls, and controlling groundspeed. These simple control schemes are so far restricted to cases where the OF is sensed either ventrally or laterally, perpendicular to the heading direction. The field of view (FOV) of the eyes and the provocatively small number of pixels (2 pixels per eye) and EMDs (one EMD per eye) obviously need to be increased when dealing with navigation in more sparsely textured environments. Increasing the number of motion sensors was found to improve the goal-directed navigation performance of an airship in noisy environments [46], and we recently described how an additional forward-looking EMD might enable our MH to climb steeper rises by providing the OCTAVE autopilot with an anticipatory feedforward signal [74]. It will also be necessary to enlarge the FOV and to control the heading direction (about the yaw axis) to enable the HO to successfully negotiate more challenging corridors, including L-junctions and T-junctions. The more frontally oriented visual modules required for this purpose could be based on measuring the OF divergence [56–58], a procedure that flies seem to use when they land and trigger body saccades [78, 89, 93]. In the field of aeronautics, these systems could serve to improve navigation aids and automatic maneuvers. Steady measurement of the ventral OF could prevent deadly crashes by warning pilots that the current altitude is "too low for the current groundspeed" – without requiring any altitude or speed measurements [29]. An OCTAVE OF regulator implemented onboard an aircraft would enable it to gradually take off "under automatic visual control", to veto any attempt to descend to a groundheight not commensurate with the current groundspeed, and to land safely [29, 72].
These systems could also potentially be harnessed to guiding MAVs indoors or through complex terrains such as mountains and urban canyons. Since these control systems are parsimonious and do not rely on GPS or on bulky and power-hungry emissive sensors such as FLIRs, RADARs, or LADARs, they meet the strict constraints imposed at the bird and insect scale in terms of size, mass, and power consumption.
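The OF-divergence cue mentioned above can be made concrete with a small geometric sketch, under the simplifying assumptions of a flat frontal surface and a purely translational approach. A feature at eccentricity θ then flows at (v/d)·sin θ·cos θ, so the divergence of the frontal flow field is v/d = 1/τ: one OF reading at a known eccentricity yields the time to contact without any estimate of distance or speed.

```python
# Geometric sketch of OF divergence as a time-to-contact cue
# (assumptions: flat frontal wall, pure translation at speed v from distance d).

import math

def flow_rate(theta, v, d):
    """OF (rad/s) of a frontal-wall feature at eccentricity theta (rad)."""
    return (v / d) * math.sin(theta) * math.cos(theta)

def time_to_contact(theta, omega):
    """Recover tau = d/v from a single OF measurement at eccentricity theta."""
    return math.sin(theta) * math.cos(theta) / omega

# Two very different approaches (1 m/s from 2 m; 5 m/s from 10 m) share
# tau = 2 s, and the same tau is recovered from the OF reading alone.
for v, d in [(1.0, 2.0), (5.0, 10.0)]:
    omega = flow_rate(math.radians(10), v, d)
    print(round(time_to_contact(math.radians(10), omega), 2))  # 2.0
```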
3
Optic Flow Based Autopilots: Speed Control and Obstacle Avoidance
For the same reasons, these autopilots could potentially be adapted to micro-space vehicles (MSVs) performing rendezvous and docking missions in space or exploration missions on other celestial bodies. A Martian lander equipped with a more elaborate OCTAVE autopilot could perform smooth automatic landings (see Sect. 3.3 and [29, 72]). A flying reconnaissance rover equipped with more elaborate OCTAVE and LORA III autopilots could take off autonomously and explore an area, skimming the ground and hugging the walls of a canyon, adapting its groundspeed and clearance from the walls automatically to the width of the canyon. The orbiter (or base station) would simply have to send the rover a set of three low-bandwidth signals: the values of the three OF set points [24].

Acknowledgments We are grateful to S. Viollet, F. Aubépart, L. Kerhuel, and G. Portelli for their fruitful comments and suggestions during this research. G. Masson participated in the experiments on bees and D. Dray in the experimental simulations on LORA III. We are also thankful to Marc Boyron (electronics engineer), Yannick Luparini, and Fabien Paganucci (mechanical engineers) for their expert technical assistance, and to J. Blanc for revising the English manuscript. Serge Dini (beekeeper) gave plenty of useful advice during the behavioral experiments. This research was supported by CNRS (Life Science; Information and Engineering Science and Technology), an EU contract (IST/FET – 1999-29043), and a DGA contract (2005 – 0451037).
References

1. Aubépart, F., Franceschini, N.: Bio-inspired optic flow sensors based on FPGA: application to micro-air vehicles. Journal of Microprocessors and Microsystems 31, 408–419 (2007)
2. Aubépart, F., El Farji, M., Franceschini, N.: FPGA implementation of elementary motion detectors for the visual guidance of micro-air vehicles. Proceedings of the IEEE International Symposium on Industrial Electronics (ISIE'2004), pp. 71–76. Ajaccio, France (2004)
3. Baird, E., Srinivasan, M.V., Zhang, S.W., Cowling, A.: Visual control of flight speed in honeybees. Journal of Experimental Biology 208, 3895–3905 (2005)
4. Baird, E., Srinivasan, M.V., Zhang, S.W., Lamont, R., Cowling, A.: Visual control of flight speed and height in the honeybee. In: S. Nolfi et al. (eds.) From Animals to Animats 9 (SAB 2006), LNAI 4095, pp. 40–51 (2006)
5. Barber, D.B., Griffiths, S.R., McLain, T.W., Beard, R.W.: Autonomous landing of miniature aerial vehicles. Proceedings of the American Institute of Aeronautics and Astronautics Conference (2005)
6. Barrows, G.L., Neely, C., Miller, K.T.: Optic flow sensors for MAV navigation. In: Fixed and Flapping Wing Aerodynamics for Micro-Air Vehicle Applications. Progress in Astronautics and Aeronautics 195, 557–574 (2001)
7. Blanès, C.: Appareil visuel élémentaire pour la navigation à vue d'un robot mobile autonome. M.S. thesis in Neuroscience, Université d'Aix-Marseille II, Marseille (1986)
8. Blanès, C.: Guidage visuel d'un robot mobile autonome d'inspiration bionique. PhD thesis, Polytechnical Institute, Grenoble; thesis work at the Neurocybernetics Lab, CNRS, Marseille, France (1991)
9. Borst, A., Haag, J.: Neural networks in the cockpit of the fly. Journal of Comparative Physiology A 188, 419–437 (2002)
10. Buchner, E., Götz, K.G.: Evidence for one-way movement detection in the visual system of Drosophila. Biological Cybernetics 31, 243–248 (1978)
11. Collett, T., Nalbach, H.O., Wagner, H.: Visual stabilization in arthropods. In: F.A. Miles, J. Wallman (eds.) Visual Motion and its Role in the Stabilization of Gaze, pp. 239–263. Elsevier (1993)
12. Coombs, D., Roberts, K.: Centering behavior using peripheral vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New York, USA (1993)
13. David, C.: The relationship between body angle and flight speed in free-flying Drosophila. Physiological Entomology 3, 191–195 (1978)
14. David, C.: Height control by free-flying Drosophila. Physiological Entomology 4, 209–216 (1979)
15. David, C.: Compensation for height in the control of groundspeed by Drosophila in a new, 'barber's pole' wind tunnel. Journal of Comparative Physiology 147, 1432–1351 (1982)
16. David, C.: Visual control of the partition of flight force between lift and thrust in free-flying Drosophila. Nature 313, 48–50 (1985)
17. Dickson, W., Straw, A., Poelma, C., Dickinson, M.: An integrative model of insect flight control. Proceedings of the 44th AIAA Aerospace Sciences Meeting, Reno, USA (2006)
18. Duchon, A.P., Warren, W.H.: Robot navigation from a Gibsonian viewpoint. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Antonio, USA, pp. 2272–2277 (1994)
19. Egelhaaf, M., Borst, A.: Movement detection in arthropods. In: F.A. Miles, J. Wallman (eds.) Visual Motion and its Role in the Stabilization of Gaze, pp. 53–77. Amsterdam, Elsevier (1993)
20. Epstein, M., Waydo, S., Fuller, S.B., Dickson, W., Straw, A., Dickinson, M.H., Murray, R.M.: Biologically inspired feedback design for Drosophila flight. Proceedings of the American Control Conference (ACC), New York, USA, pp. 3395–3401 (2007)
21. Esch, H., Nachtigall, W., Kogge, S.N.: Correlations between aerodynamic output, electrical activity in the indirect flight muscles and flight positions of bees flying in a servomechanically controlled flight tunnel. Journal of Comparative Physiology A 100, 147–159 (1975)
22. Franceschini, N.: Early processing of colour and motion in a mosaic visual system. Neuroscience Research 2, 517–549 (1985)
23. Franceschini, N.: Sequence-discriminating neural network in the eye of the fly. In: F.H.K. Eeckman (ed.) Analysis and Modeling of Neural Systems, pp. 142–150. Norwell, USA, Kluwer Academic Publishers (1992)
24. Franceschini, N.: Towards automatic visual guidance of aerospace vehicles: from insects to robots. Proceedings of the ACT Workshop on Innovative Concepts "A Bridge to Space", ESA-ESTEC, Noordwijk, The Netherlands; Acta Futura 4, 8–23 (2008)
25. Franceschini, N., Blanès, C., Oufar, L.: Appareil de mesure, passif et sans contact, de la vitesse d'un objet quelconque. Technical Report ANVAR/DVAR No. 51549, Paris, France (1986)
26. Franceschini, N., Pichon, J.M., Blanès, C.: From insect vision to robot vision. Philosophical Transactions of the Royal Society of London, Series B 337, 283–294 (1992)
27. Franceschini, N., Riehle, A., Le Nestour, A.: Directionally selective motion detection by insect neurons. In: D.G. Stavenga, R.C. Hardie (eds.) Facets of Vision, Chap. 17, pp. 360–390. Berlin, Springer (1989)
28. Franceschini, N., Ruffier, F., Serres, J.: A bio-inspired flying robot sheds light on insect piloting abilities. Current Biology 17, 329–335 (2007)
29. Franceschini, N., Ruffier, F., Viollet, S., Boyron, M.: Steering aid system for altitude and horizontal speed, perpendicular to the vertical, of an aircraft and aircraft equipped therewith. International Patent PCT FR2003/002611 (2003)
30. von Frisch, K.: Tanzsprache und Orientierung der Bienen. Berlin, Springer (1965)
31. Garratt, M.A., Chahl, J.S.: Vision-based terrain following for an unmanned rotorcraft. Journal of Field Robotics 25, 284–301 (2008)
32. Gibson, J.J.: The Perception of the Visual World. Boston, Houghton Mifflin (1950)
33. Gibson, J.J., Olum, P., Rosenblatt, F.: Parallax and perspective during aircraft landings. The American Journal of Psychology 68, 372–395 (1955)
34. Götz, K.G.: Flight control in Drosophila by visual perception of motion. Kybernetik 4, 199–208 (1968)
35. Green, W.F., Oh, Y., Barrows, G.: Flying insect inspired vision for autonomous axial robot maneuvers in near earth environments. Proceedings of the IEEE International Conference on Robotics and Automation, vol. 3, pp. 2347–2352 (2004)
36. Griffiths, S., Saunders, J., Curtis, A., Barber, B., McLain, T., Beard, R.: Maximizing miniature aerial vehicles – obstacle and terrain avoidance for MAVs. IEEE Robotics and Automation Magazine 13, 34–43 (2006)
37. Hausen, K., Egelhaaf, M.: Neural mechanisms of visual course control in insects. In: D.G. Stavenga, R.C. Hardie (eds.) Facets of Vision, Chap. 18, pp. 391–424. Berlin, Springer (1989)
38. Hassenstein, B., Reichardt, W.: Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus. Zeitschrift für Naturforschung 11b, 513–524 (1956)
39. Heisenberg, M., Wolf, R.: Vision in Drosophila: Genetics of Microbehavior. Berlin, Springer (1984)
40. Hengstenberg, R.: Mechanosensory control of compensatory head roll during flight in the blowfly Calliphora erythrocephala Meig. Journal of Comparative Physiology A 163, 151–165 (1988)
41. Hengstenberg, R.: Control of head pitch in Drosophila during rest and flight. Göttingen Neurobiology Meeting, 305 (1992)
42. Heran, H., Lindauer, M.: Windkompensation und Seitenwindkorrektur der Bienen beim Flug über Wasser. Zeitschrift für Vergleichende Physiologie 47, 39–55 (1963)
43. Humbert, J.S., Hyslop, H., Chinn, M.: Experimental validation of wide-field integration methods for autonomous navigation. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Diego, USA, pp. 2144–2149 (2007)
44. Humbert, J.S., Murray, R.M., Dickinson, M.H.: Sensorimotor convergence in visual navigation and flight control systems. In: Proceedings of the 16th IFAC World Congress, Prague, Czech Republic (2005)
45. Ibbotson, M.R.: Evidence for velocity-tuned motion-sensitive descending neurons in the honeybee. Proceedings of the Royal Society of London, Series B 268, 2195–2201 (2001)
46. Iida, F.: Goal-directed navigation of an autonomous flying robot using biologically inspired cheap vision. Proceedings of the 32nd International Symposium on Robotics (2001)
47. Kramer, J., Sarpeshkar, R., Koch, C.: Pulse-based analog VLSI velocity sensors. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 44, 86–101 (1997)
48. Kellog, F.E., Fritzel, D.E., Wright, R.H.: The olfactory guidance of flying insects IV. Drosophila. Canadian Entomologist, 884–888 (1962)
49. Kennedy, J.S.: Visual responses of flying mosquitoes. Proceedings of the Zoological Society of London 109, 221–242 (1939)
50. Kennedy, J.S.: The migration of the desert locust (Schistocerca gregaria Forsk.) I. The behaviour of swarms. Philosophical Transactions of the Royal Society of London, Series B 235, 163–290 (1951)
51. Kennedy, J.S., Marsh, D.: Pheromone-regulated anemotaxis in flying moths. Science 184, 999–1001 (1974)
52. Kerhuel, L., Viollet, S., Franceschini, N.: A sighted aerial robot with fast gaze and heading stabilization. Proceedings of the IEEE International Conference on Intelligent Robots and Systems, San Diego, pp. 2634–2641 (2007)
53. Kirchner, W.H., Srinivasan, M.V.: Freely moving honeybees use image motion to estimate distance. Naturwissenschaften 76, 281–282 (1989)
54. Koenderink, J.J.: Optic flow. Vision Research 26, 161–179 (1986)
55. Mura, F., Franceschini, N.: Visual control of altitude and speed in a flying agent. In: D. Cliff et al. (eds.) From Animals to Animats III, pp. 91–99. Cambridge, MIT Press (1994)
56. Nelson, R.C., Aloimonos, Y.: Obstacle avoidance using flow field divergence. IEEE Transactions on Pattern Analysis and Machine Intelligence 11, 1102–1106 (1989)
57. Netter, T., Franceschini, N.: Neuromorphic optical flow sensing for Nap-of-the-Earth flight. In: D.W. Gage, H.M. Choset (eds.) Mobile Robots XIV, SPIE Vol. 3838, pp. 208–216. Bellingham, USA (1999)
58. Netter, T., Franceschini, N.: A robotic aircraft that follows terrain using a neuromorphic eye. Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), Lausanne, Switzerland, pp. 129–134 (2002)
59. Neumann, T.R., Bülthoff, H.: Insect inspired visual control of translatory flight. Proceedings of ECAL 2001, pp. 627–636. Berlin, Springer (2001)
60. Pichon, J.-M., Blanès, C., Franceschini, N.: Visual guidance of a mobile robot equipped with a network of self-motion sensors. In: W.J. Wolfe, W.H. Chun (eds.) Mobile Robots IV, SPIE Vol. 1195, pp. 44–53. Bellingham, USA (1989)
61. Portelli, G., Serres, J., Ruffier, F., Franceschini, N.: An insect-inspired visual autopilot for corridor-following. Proceedings of the IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob 2008), Scottsdale, USA, pp. 19–26 (2008)
62. Preiss, R.: Set-point of retinal velocity of ground images in the control of swarming flight of desert locusts. Journal of Comparative Physiology A 171, 251–256 (1992)
63. Preiss, R., Kramer, E.: Control of flight speed by minimization of the apparent ground pattern movement. In: D. Varju, H. Schnitzler (eds.) Localization and Orientation in Biology and Engineering, pp. 140–142. Berlin, Springer (1984)
64. Pudas, M., Viollet, S., Ruffier, F., Kruusing, A., Amic, S., Leppävuori, S., Franceschini, N.: A miniature bio-inspired optic flow sensor based on low temperature co-fired ceramics (LTCC) technology. Sensors and Actuators A 133, 88–95 (2007)
65. Reichardt, W.: Movement perception in insects. In: W. Reichardt (ed.) Processing of Optical Data by Organisms and by Machines. New York, Academic Press (1969)
66. Reiser, M., Humbert, S., Dunlop, M., Vecchio, D., Murray, M., Dickinson, M.: Vision as a compensatory mechanism for disturbance rejection in upwind flight. Proceedings of the American Control Conference 2004, IEEE Computer Society, pp. 311–316 (2004)
67. Riehle, A., Franceschini, N.: Motion detection in flies: parametric control over ON-OFF pathways. Experimental Brain Research 54, 390–394 (1984)
68. Riley, J.R., Osborne, J.L.: Flight trajectories of foraging insects: observations using harmonic radar. In: D.R. Reynolds, C.D. Thomas (eds.) Insect Movement: Mechanisms and Consequences, Chap. 7, pp. 129–157. CAB International (2001)
69. Rossel, S., Wehner, R.: How bees analyze the polarization pattern in the sky: experiments and model. Journal of Comparative Physiology A 154, 607–615 (1984)
70. Ruffier, F., Franceschini, N.: OCTAVE, a bioinspired visuo-motor control system for the guidance of micro-air vehicles. In: A. Rodriguez-Vazquez, D. Abbott, R. Carmona (eds.) Bioengineered and Bioinspired Systems, SPIE Vol. 5119, pp. 1–12. Bellingham, USA (2003)
71. Ruffier, F., Franceschini, N.: Visually guided micro-aerial vehicle: automatic take-off, terrain following, landing and wind reaction. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), New Orleans, USA, pp. 2339–2346 (2004)
72. Ruffier, F., Franceschini, N.: Automatic landing and takeoff at constant slope without terrestrial aids. Proceedings of the 31st European Rotorcraft Forum, AAF/CEAS, Florence, pp. 92.1–92.8 (2005)
73. Ruffier, F., Franceschini, N.: Optic flow regulation: the key to aircraft automatic guidance. Robotics and Autonomous Systems 50, 177–194 (2005)
74. Ruffier, F., Franceschini, N.: Aerial robot piloted in steep relief by optic flow sensors. Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), Nice, pp. 1266–1273 (2008)
75. Ruffier, F., Viollet, S., Amic, S., Franceschini, N.: Bio-inspired optical flow circuits for the visual guidance of micro-air vehicles. Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), III, pp. 846–849. Bangkok, Thailand (2003)
76. Santos-Victor, J., Sandini, G., Curotto, F., Garibaldi, S.: Divergent stereo in autonomous navigation: from bees to robots. International Journal of Computer Vision 14, 159–177 (1995)
77. Sarpeshkar, R., Kramer, J., Koch, C.: Pulse domain neuromorphic circuit for computing motion. US Patent No. 5,781,648 (1998)
78. Schilstra, C., van Hateren, J.H.: Blowfly flight and optic flow. I. Thorax kinematics and flight dynamics. Journal of Experimental Biology 202, 1481–1490 (1999)
79. Serres, J., Dray, D., Ruffier, F., Franceschini, N.: A vision-based autopilot for a miniature air vehicle: joint speed control and lateral obstacle avoidance. Autonomous Robots 25, 103–122 (2008)
80. Serres, J., Masson, G., Ruffier, F., Franceschini, N.: A bee in the corridor: centering and wall-following. Naturwissenschaften 95, 1181–1187 (2008)
81. Spork, P., Preiss, R.: Control of flight by means of lateral visual stimuli in gregarious desert locusts, Schistocerca gregaria. Physiological Entomology 18, 195–203 (1993)
82. Srinivasan, M.V.: How insects infer range from visual motion. In: F.A. Miles, J. Wallman (eds.) Visual Motion and its Role in the Stabilization of Gaze. Amsterdam, Elsevier (1993)
83. Srinivasan, M.V., Lehrer, M., Kirchner, W.H., Zhang, S.W.: Range perception through apparent image speed in freely flying honeybees. Visual Neuroscience 6, 519–535 (1991)
84. Srinivasan, M.V., Zhang, S.W., Lehrer, M., Collett, T.: Honeybee navigation en route to the goal: visual flight control and odometry. Journal of Experimental Biology 199, 237–244 (1996)
85. Srinivasan, M.V., Chahl, J.S., Weber, K., Venkatesh, S., Nagle, M.G., Zhang, S.W.: Robot navigation inspired by principles of insect vision. Robotics and Autonomous Systems 26, 203–216 (1999)
86. Srinivasan, M.V., Zhang, S.W., Chahl, J.S., Barth, E., Venkatesh, S.: How honeybees make grazing landings on flat surfaces. Biological Cybernetics 83, 171–183 (2000)
87. Strausfeld, N.: Beneath the compound eye: neuroanatomical analysis and physiological correlates in the study of insect vision. In: D.G. Stavenga, R.C. Hardie (eds.) Facets of Vision, Chap. 16, pp. 316–359. Berlin, Springer (1989)
88. Tammero, L.F., Dickinson, M.H.: Collision-avoidance and landing responses are mediated by separate pathways in the fruit fly, Drosophila melanogaster. Journal of Experimental Biology 205, 2785–2798 (2002)
89. Tammero, L.F., Dickinson, M.H.: The influence of visual landscape on the free flight behavior of the fruit fly Drosophila melanogaster. Journal of Experimental Biology 205, 327–343 (2002)
90. Taylor, G.K., Krapp, H.G.: Sensory systems and flight stability: what do insects measure and why? In: J. Casas, S.J. Simpson (eds.) Advances in Insect Physiology 34: Insect Mechanics and Control, pp. 231–316. Amsterdam, Elsevier (2008)
91. Ullman, S.: Analysis of visual motion by biological and computer systems. Computer 14, 57–69 (1981)
92. van Hateren, J.H., Schilstra, C.: Blowfly flight and optic flow. II. Head movements during flight. Journal of Experimental Biology 202, 1491–1500 (1999)
93. Wagner, H.: Flow-field variables trigger landing in flies. Nature 297, 147–148 (1982)
94. Wagner, H.: Flight performance and visual control of flight of the free-flying housefly Musca domestica. I. Organisation of the flight motor. Philosophical Transactions of the Royal Society of London, Series B 312, 527–551 (1986)
95. Whiteside, T.C., Samuel, G.D.: Blur zone. Nature 225, 94–95 (1970)
96. Zeil, J., Boeddeker, N., Hemmi, J.M.: Vision and the organization of behavior. Current Biology 18, 320–323 (2008)
97. Zettler, F., Weiler, R.: Neuronal processing in the first optic neuropile of the compound eye of the fly. In: F. Zettler, R. Weiler (eds.) Neural Principles in Vision, pp. 226–237. Berlin, Springer (1974)
Chapter 4
Active Vision in Blowflies: Strategies and Mechanisms of Spatial Orientation

Martin Egelhaaf, Roland Kern, Jens P. Lindemann, Elke Braun, and Bart Geurten
Abstract With their miniature brains, blowflies are able to control highly aerobatic flight manoeuvres and, in this regard, outperform any man-made autonomous flying system. To accomplish this extraordinary performance, flies actively shape the dynamics of the image sequences on their eyes ('optic flow') by a specific succession of characteristic movements: they shift their gaze only from time to time by saccadic turns of body and head and keep it fixed between these saccades. Utilising the intervals of stable vision between saccades, an ensemble of motion-sensitive visual interneurons extracts from the optic flow information about different aspects of the self-motion of the animal and the spatial layout of the environment. This is possible in a computationally parsimonious way because the retinal image flow evoked by translational self-motion contains information about the spatial layout of the environment. Detection of environmental objects is even facilitated by adaptation mechanisms in the visual motion pathway. The consistency of our experimentally established hypotheses is tested by modelling the blowfly motion vision system and using this model to control the locomotion of a 'CyberFly' moving in virtual environments. This CyberFly is currently being integrated into a robotic platform steering in three dimensions with dynamics similar to those of blowflies.
M. Egelhaaf () Department of Neurobiology & Center of Excellence “Cognitive Interaction Technology”, Bielefeld University, D-33501 Bielefeld, Germany e-mail:
[email protected]
4.1 Virtuosic Flight Behaviour: Approaches to Unravel the Underlying Mechanisms

Anyone who observes a blowfly landing on the rim of a cup or two flies chasing each other will be fascinated by the breathtaking aerobatics these tiny animals can produce (Fig. 4.1). While the human eye is hardly capable of even following their flight paths, the pursuing fly is quite capable of catching its speeding target. During their virtuosic flight manoeuvres blowflies can make up to 10 sudden, so-called saccadic turns per second, during which they reach angular velocities of up to 4000°/s [58, 70]. During their flight manoeuvres blowflies rely to a great extent on information from the displacements of the retinal images across the eyes ('optic flow'). This visual motion information is then transformed in a series of processing steps into motor control signals that are used to steer the flight course. The analysis of neural computations underlying behavioural control often rests on the implicit assumption that sensory systems passively pick up information about their surroundings and process this information to control the appropriate behaviour. This concept, though useful from an analytical point of view, misses one important feature of embodied and situated behaviour: normal behaviour operates under closed-loop conditions, and all movements of the animal may shape the sensory input to a large extent. Although in this chapter we will mainly concentrate on the sensory side of the action–perception cycle and, in particular, on the processing of visual motion information, our approach is distinguished by envisioning the blowfly as a dynamic system embedded in continuous interactions with its environment.
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_4, © Springer-Verlag Berlin Heidelberg 2009
Fig. 4.1 Five snapshots from a high-speed film sequence of a blowfly landing on the rim of a cup, taken at 0.0 s, 0.026 s, 0.036 s, 0.086 s, and 0.14 s. The fly turns by almost 180° within approximately 140 ms.
Since it is currently not possible to probe the neural circuits processing visual motion information during flight behaviour, behavioural and neuronal analyses are done separately. Both types of analyses are interlinked by using, as visual stimuli in experiments at the neural level, reconstructions of the image sequences that free-flying blowflies have previously seen during their virtuosic flight manoeuvres, as well as targeted manipulations of such sequences. For methodological reasons, neural analysis can unravel only subsystems at a time – in our case a population of motion-sensitive nerve cells in the blowfly visual system. The functional role of the analysed subsystems for the performance of the entire system, i.e. in our case for visually guided behaviour, can therefore hardly be assessed appropriately by experimental analysis alone. The reasons are the non-linearity of most computational mechanisms, the recurrent organisation of many neuronal subsystems and the resulting complex dynamics of neuronal activity, as well as the closed-loop nature of behaviour. These constraints can only be overcome by closely linking and complementing experiments with computational approaches: experimentally established hypotheses are modelled in our CyberFly project, both in software and hardware, and put into the context of the entire system interacting with its environment.
The following aspects of the visually guided behaviour of blowflies and the underlying neural computations will be addressed: (1) flight activity of blowflies will be scrutinised and segregated into sequences of individual prototypical components, and the corresponding behaviourally generated visual input, a consequence of the closed action–perception cycle, will then be reconstructed; (2) we will pinpoint what information about self-motion and the outside world is provided by populations of output neurons of the visual motion pathway; and (3) the experimentally established hypotheses on the mechanisms of motion computation, on the coding properties of this population of cells, and on how this neural population activity is used to control behaviour are challenged with our CyberFly model under open-loop and closed-loop conditions.
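Aspect (1), the segmentation of flight activity into prototypical components, can be illustrated by a simple threshold rule on a yaw-rate trace. This is a sketch only: the 500 °/s threshold and the synthetic 1-kHz trace below are assumptions for illustration, not the parameters used for the actual blowfly data.

```python
# Illustrative segmentation of a yaw-rate trace into saccades
# (high-velocity runs) and intersaccadic intervals (the rest).

def segment_saccades(yaw_rates, dt, threshold=500.0):
    """Return (onset time, duration) in seconds for each run of |yaw rate| > threshold (deg/s)."""
    events, start = [], None
    for i, w in enumerate(yaw_rates + [0.0]):       # sentinel closes a trailing run
        if abs(w) > threshold and start is None:
            start = i
        elif abs(w) <= threshold and start is not None:
            events.append((round(start * dt, 3), round((i - start) * dt, 3)))
            start = None
    return events

# Slow drift interrupted by two 60-ms high-velocity turns of opposite sign:
dt = 0.001
trace = ([20.0] * 200 + [3000.0] * 60 + [-15.0] * 300
         + [-2800.0] * 60 + [10.0] * 200)
print(segment_saccades(trace, dt))  # [(0.2, 0.06), (0.56, 0.06)]
```

The recovered durations (~60 ms) fall within the 50–100 ms saccade duration reported in the chapter.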
4.2 Active Vision: The Sensory and Motor Side of the Closed Action–Perception Cycle

Optic flow induced on the eyes during locomotion not only provides information about the self-motion of the animal but is also a potent visual cue for spatial
information. Optic flow is the most relevant source of spatial information in flying animals, which probably do not have other distance cues at their disposal. Some insects use relative motion very efficiently to detect objects and to infer information about their height [10, 34, 45, 61]. When the animal passes or approaches a nearby object, the object appears to move faster than its background. Motion can thus provide the perceived world with a third dimension. Locusts and mantids, for instance, are known to judge the distance of prey objects based on motion parallax actively generated by peering movements [37]. Moreover, honeybees assess the travelled distance or the distance to the walls of a flight tunnel on the basis of visual movement cues [1, 14, 15, 27, 60, 62, 63, 66]. Hummingbird hawkmoths hovering in front of flowers use motion cues to control their distance to them [16, 33, 53]. Several insect species, such as wasps and honeybees, perform characteristic flight sequences in the vicinity of their nest or of a food source. It has been concluded that the optic flow on the eyes is shaped actively by these characteristic flight manoeuvres to provide spatial information [44, 72–74] (see also Chaps. 1, 7 and 17). Blowflies employ a characteristic flight and gaze strategy with strong consequences for the optic flow patterns generated on the eyes: Although blowflies are able to perform continuous turns while chasing targets [2, 3], they do not show smooth turning behaviour during cruising flight or in obstacle avoidance tasks. Instead, they keep their gaze almost straight for short flight segments and then execute sharp, fast turns, commonly referred to as saccades (Fig. 4.2A,B). These saccades last only about 50–100 ms. During saccades blowflies may reach rotational velocities of up to 4000°/s and change their body orientation by up to 90° [58]. The head is actively moved so that gaze shifts are even shorter and more precise than those of the body.
Active head movements considerably improve stabilisation of the gaze direction between saccades [70]. This behaviour can be interpreted as an active vision strategy stabilising the gaze rotationally as much as possible [31, 57]. The temporal pattern of saccades and intersaccadic intervals as well as the amplitude of saccades may vary systematically depending on the behavioural context. For instance, blowflies, even when flying in a straight tunnel, do not fly in a straight line, but perform a sequence of alternating saccades.
The frequency and amplitude of these saccades vary with the width of the flight tunnel (Kern et al. in prep.). As a consequence of the saccadic strategy of flight and gaze control, the rotational and translational components of the optic flow are largely segregated at the behavioural level. In blowflies, this strategy is not obviously reflected as sharp bends in the path of the body's centre of mass: changes in orientation do not immediately result in changes in flight direction, because of sideward drift after a body saccade, presumably caused by the inertia of the blowfly [58]. In this regard blowflies appear to differ from the much smaller fruitflies ([65]; Chap. 17). So far, we have mainly decomposed behaviour into just two prototypical components, saccades and straight flight segments during the intersaccadic intervals. This segmentation was based largely on the horizontal components of movement. Does the segmentation of behaviour into distinct prototypical movements generalise if we consider all six degrees of freedom of locomotion? To answer this question we decomposed behavioural free-flight sequences into their constituent prototypical movements by clustering algorithms. Clustering approaches have been applied successfully to classify basic skills and behaviours in observational data from humans, and in computer science and robotics to control the movements of artificial agents [56, 67, 68]. In contrast to classical behavioural analysis, clustering methods (such as k-means; [28]) allow us to analyse vast amounts of data in a relatively short time. The large database enables us to assess the relative frequency and the characteristic sequences of prototypical movements occurring in different behavioural contexts. We applied k-means clustering to the six-dimensional set of translational and rotational velocities obtained from cruising flight sequences in an indoor flight arena.
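As an illustration of this clustering step, the sketch below applies a plain k-means (not the authors' actual analysis pipeline; the data, the deterministic initialisation and all parameter values are invented for the example) to toy six-dimensional velocity vectors and recovers one prototype per movement regime:

```python
import numpy as np

def kmeans(data, k, n_iter=100):
    """Plain k-means with deterministic farthest-point initialisation."""
    centroids = [data[0]]
    for _ in range(k - 1):
        # next seed: the sample farthest from all seeds chosen so far
        d = np.min([np.linalg.norm(data - c, axis=1) for c in centroids], axis=0)
        centroids.append(data[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(n_iter):
        # assign every sample to its nearest centroid ...
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ... and move each centroid to the mean of its members
        new = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Toy stand-in for the flight data: each row is one time step of
# [yaw, pitch, roll, forward, sideward, lift] velocities.
rng = np.random.default_rng(1)
intersaccadic = rng.normal([0, 0, 0, 0.8, 0, 0], 0.05, size=(200, 6))
saccade_left = rng.normal([1500, 0, 0, 0.2, 0, 0], 50, size=(50, 6))
saccade_right = rng.normal([-1500, 0, 0, 0.2, 0, 0], 50, size=(50, 6))
data = np.vstack([intersaccadic, saccade_left, saccade_right])

prototypes, labels = kmeans(data, k=3)
# Sorted yaw components of the prototypes recover the three regimes
# (about -1500, 0 and +1500 deg/s).
print(np.sort(prototypes[:, 0]))
```

The real analysis operates on measured head velocities and a larger number of clusters, but the principle is the same: each cluster centroid is one prototypical movement.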
We could identify nine stable prototypical movements as behavioural building blocks during cruising behaviour of blowflies. Even when all six degrees of freedom of self-movement are taken into account (Fig. 4.2B,C), the prototypical movements fall into two distinct main classes: rotational movements on the one hand and translational movements on the other (Fig. 4.2D). Within the translation class of prototypes, for instance, prototypes reflecting almost pure forward translation and
Fig. 4.2 Flight sequence of a free-flying blowfly and its decomposition into prototypical movements. (A) Section of the flight sequence. The position of the head and its orientation as seen from above are indicated every 10 ms by a dot and a line, respectively (behavioural data: courtesy J.H. van Hateren, University of Groningen, NL). (B) Rotational velocities (yaw, pitch, roll) of the head for the flight sequence shown in (A). Note the saccadic structure of the rotational movements. (C) Translational velocities (forward, sideward, lift) for the same flight sequence. The translational velocities change predominantly on a slower timescale than the rotational velocities and do not show an obvious saccadic structure. (D) Prototypical movements into which the flight sequence shown in (A) can be decomposed. Four saccadic prototypes (two rightward and two leftward) and five translational prototypes were identified. Each prototype is characterised by a distinct combination of rotational and translational movements.
M. Egelhaaf et al.
prototypes characterised by a strong sideward component can be distinguished. Organising behaviour as a sequence of prototypical movements leads to a tremendous complexity reduction which is likely to be favourable for both motor control and sensory information processing. The prototypical movements are thought to be selected from a limited pool of possible movement prototypes according to a strategy that depends on the respective behavioural context. This strategy may simplify motor control considerably. Because of the closed-loop nature of behaviour, the different types of prototypical movements go along with retinal image displacements that are characterised by distinct spatiotemporal features. We were able to show that the sensory
input is shaped by the very nature of the prototypical movements in a way that greatly facilitates the neuronal analysis of complex sensory information. The corresponding optic flow is either mainly rotational (i.e. during saccades) or mainly translational (i.e. during the intersaccadic intervals). The behavioural segregation of rotational and translational self-movements enables flies to gather spatial information about the three-dimensional layout of their environment during intersaccadic flight sections by relatively simple computational means. The optic flow component resulting from translational self-motion depends on the distance of environmental objects from the observer, whereas the rotational optic flow component is independent of distance [35]. Thus,
only the translational optic flow component contains spatial information. It should be noted that, although translational optic flow contains information about the three-dimensional layout of the environment, it does not directly provide metric distances. Rather, spatial information derived from optic flow is only relative, because it depends on (i) the velocity of the observer, (ii) the observer's distance to objects in the surroundings and (iii) the location of the objects in the visual field relative to the direction of translation. Although it is mathematically possible to decompose optic flow fields into their rotational and translational components [8, 50, 54], blowflies and other insects appear to avoid this heavy computational effort by a smart behavioural strategy that largely keeps rotational and translational flow components apart from the outset. It is not yet clear how this behavioural strategy is accomplished. In particular, the underlying head–body coordination is demanding, because it leads to almost perfect gaze stabilisation between saccades within only a few milliseconds, while the body still shows residual slow rotational movements [70]. We can only surmise that feed-forward control and/or mechanosensory information play a major role, whereas visual feedback might be too slow on the relevant short timescale. In conclusion, the active flight and gaze strategy of blowflies, as well as that of many other insect species, may have been shaped during evolution by the requirements of image motion processing. This behavioural strategy helps to reduce the complexity of the sensory input by structuring the movements in an adaptive way. Active vision strategies may thus facilitate the extraction of spatial cues by smart, i.e. relatively simple, mechanisms. Such mechanisms may also be relevant for engineering lightweight autonomous air vehicles.
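The geometric reason can be stated in one line: for an observer translating at speed v, a point at distance d and at angle θ from the direction of translation moves across the retina at ω = (v/d)·sin θ, so flow alone yields only the "relative nearness" v/d. A minimal numerical sketch (the function names are ours, not from the chapter):

```python
import math

def translational_flow(v, d, theta):
    """Angular image velocity (rad/s) of a point at distance d (m) and at
    angle theta (rad) from the direction of translation, for speed v (m/s)."""
    return (v / d) * math.sin(theta)

# A wall point 0.5 m away, 90 deg to the side, at 1 m/s flight speed:
omega = translational_flow(v=1.0, d=0.5, theta=math.pi / 2)
# From flow alone only v/d is recoverable; absolute distance needs v:
v_over_d = omega / math.sin(math.pi / 2)
d = 1.0 / v_over_d        # 0.5 m, but only because v = 1 m/s is known
# Rotational flow, by contrast, is the same whatever d is, which is why
# only the translational component carries spatial information.
print(omega, d)
```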
4.3 Extracting Spatial Information from Actively Generated Optic Flow

The blowfly visual motion pathway is well adapted to making use of the structured visual input resulting from the saccadic flight and gaze strategy when extracting spatial information from optic flow. As the blowfly visual system is optimised for reliable performance in
virtuosic flight behaviour and is amenable to a broad spectrum of neuronal and behavioural methods, it has proved to be a good model system for tracing the computations that process the image motion arriving from the eyes [7, 10, 13, 20]. Retinal image displacements are not perceived directly by the eye. Rather, the photoreceptors in the retina register just a spatial array of brightness values that change continuously in time. From these signals, the nervous system has to extract information about image motion in a series of processing steps. In the blowfly visual system motion is initially processed in the first and second visual areas of the brain by successive layers of retinotopically arranged columnar neurons. One major function of the first visual area is to remove spatial and temporal redundancies from the incoming retinal signals and to maximise the transfer of information about the time-dependent retinal images by adaptive neural filtering [29, 43, 69]. There is evidence that direction selectivity is first computed by retinotopically arranged local movement detectors in the most proximal layers of the second visual area (reviews in [9, 64]). The performance of such local movement detection circuits can be accounted for by a computational model, the correlation-type motion detector, often referred to as the elementary motion detector (EMD) ([6, 11, 12, 36, 64]; for hardware implementations of local movement detectors, see Chap. 8). This model explains neuronal responses to a wide range of motion stimuli, including those experienced during highly aerobatic flight manoeuvres [48]. EMDs correlate the appropriately filtered brightness signals of adjacent light-sensitive cells viewing neighbouring points in visual space and subtract the outputs of two such correlation units with opposite preferred directions. Movement is signalled when the input elements report the same brightness value in immediate succession.
During this process, each motion detector reacts with a large excitatory signal to movement in a given direction and with a negative, i.e. inhibitory, signal to motion in the opposite direction. The responses of EMDs depend not only on image velocity but also on the contrast, the spatial frequency content and orientation of the pattern elements [5, 12]. As a result of these coding properties the representations of local motion information in biological systems, such as the blowfly, are likely to differ considerably from the veridical retinal velocities forming the optic flow (review in [10, 12]).
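A discrete-time sketch of such a correlation-type detector (our own toy implementation of the scheme described above, not the calibrated model used later in this chapter): each half-detector multiplies the delayed signal of one photoreceptor with the undelayed signal of its neighbour, and the two mirror-symmetric half-detectors are subtracted.

```python
import numpy as np

def emd_response(signal_a, signal_b, delay):
    """Correlation-type elementary motion detector (EMD) for two
    neighbouring photoreceptor signals; `delay` is in samples.
    Positive output = motion in the A-to-B (preferred) direction."""
    a_del = np.roll(signal_a, delay)
    b_del = np.roll(signal_b, delay)
    a_del[:delay] = 0.0     # discard wrapped-around samples
    b_del[:delay] = 0.0
    # subtract the two mirror-symmetric half-detectors
    return a_del * signal_b - b_del * signal_a

# A sinusoidal brightness pattern that reaches receptor B `lag` samples
# after receptor A (motion in the preferred direction), and the reverse.
t = np.arange(200)
lag = 5
a = np.sin(2 * np.pi * t / 40.0)
b = np.roll(a, lag)                      # B sees A's signal `lag` later
resp_pref = emd_response(a, b, delay=lag)
resp_null = emd_response(b, a, delay=lag)
# Mean response is positive for the preferred and negative for the null
# direction; its size also depends on contrast and spatial frequency.
print(resp_pref.mean() > 0, resp_null.mean() < 0)
```

The dependence on pattern properties mentioned in the text follows directly from this multiplicative structure: doubling the contrast roughly quadruples the product terms, and the response peaks at a pattern-dependent temporal frequency.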
Since behaviourally relevant information is contained in the global features of optic flow rather than in the local velocities, EMD signals from large areas of the visual field need to be combined. Accordingly, in a variety of animal groups ranging from insects to primates, neurons sensitive to optic flow were found to have large receptive fields (reviews in [5, 42]; see also
Chap. 5). In blowflies, spatial pooling is accomplished by an ensemble of individually identifiable motion-sensitive neurons, the so-called tangential cells (TCs) [7, 10, 13, 24, 38]. TCs are thought to spatially pool on their large dendrites the output signals of many local motion detectors received via excitatory and inhibitory synapses. The local motion detectors are activated
Fig. 4.3 Significance of the three-dimensional layout of the environment for the responses of HS cells during intersaccadic intervals, shown by manipulating the distance of the fly to the walls of the flight arena. This was accomplished by increasing the size of the virtual arena while keeping the flight trajectory the same as in the original arena (see sketches of top views of the arenas above the data plot). HS cells were stimulated with the image sequences as they would have been seen by free-flying blowflies in the original flight arena and in different virtual flight arenas. The data are based on six different trajectories (one of them shown from different perspectives in the inset). In addition, the translational movements were completely omitted and the fly only rotated with its natural rotation velocity in the centre of the arena (right data point). The mean response amplitudes between saccades decrease substantially when the distance of the animal to the arena wall gets larger. For a minimal distance of approx. 1 m, virtually the same intersaccadic responses are obtained as without any translational movement at all. Data are averages based on 4 HSN cells and a total of 132 stimulus repetitions.
either by downward, upward, front-to-back or back-to-front motion. However, the actual preferred directions change somewhat across the visual field according to the geometrical lattice axes of the compound eye [41, 52]. Two classes of TCs, the 3 HS cells [22, 23, 40] and the 10 VS cells [25, 26, 39], have been analysed in particular detail with respect to encoding the optic flow generated by self-motion of the fly. Whereas the dominant preferred direction of HS cells is ipsilateral front-to-back motion in different overlapping areas of the visual field [23, 40], the dominant preferred direction of VS cells is downward motion in overlapping, neighbouring, vertically oriented stripes of the visual field [17, 39]. HS and VS cells are output elements of the visual motion pathway and are thought to be involved in controlling visually guided orientation behaviour. We were recently able to show that the populations of HS and VS cells make efficient use of the saccadic flight and gaze strategy of blowflies to represent spatial information. The head trajectories of flying blowflies, and thus (because their eyes are immobile in the head capsule) their gaze directions, could be determined in a laboratory setting with the help of a magnetic coil system [58, 70]. Using knowledge of the three-dimensional layout and wall patterns of the flight arena, the retinal image sequences were calculated at high temporal resolution [47]. Under outdoor conditions, the flight paths and body orientations of free-flying blowflies were recorded with a pair of high-speed cameras. The retinal image sequences were then reconstructed by moving a panoramic camera along the same path with a robotic gantry [4]. We used these behaviourally generated retinal image sequences as visual stimuli in electrophysiological experiments on HS and VS cells.
On the basis of such experiments, the functional properties of these cells were interpreted in a different conceptual framework than in previous analyses: rather than being viewed primarily as sensors for determining rotational self-motion from the retinal optic flow patterns, they were concluded also to provide spatial information. Since blowflies keep their gaze virtually constant between saccades, leading to prominent translational optic flow whenever environmental objects are sufficiently close to the eyes, the HS and VS cells can extract information about the spatial layout of the environment [4, 30–32]. For instance, the intersaccadic depolarisation level depends on the distance of the
blowfly to environmental structures (Fig. 4.3). As we could show in recent experiments, the sensitivity of HS cells to retinal velocity increments caused by nearby objects is even enhanced by motion adaptation, i.e. after the fly has been exposed to repeated optic flow patterns for some time [46].
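Why the intersaccadic response level can act as a nearness signal is easy to see with a toy wide-field unit (our simplification, not the HS-cell model itself): summing translational flow magnitudes for a fly translating parallel to a flat wall gives a pooled response that scales with the inverse of the wall distance.

```python
import math

def pooled_response(v, wall_distance, n=90):
    """Toy wide-field pooling: sum translational flow magnitudes over
    viewing directions theta for a fly moving at speed v parallel to a
    flat wall at perpendicular distance wall_distance. Along direction
    theta the wall is d = wall_distance / sin(theta) away, so the local
    flow is (v / d) * sin(theta) = v * sin(theta)**2 / wall_distance."""
    total = 0.0
    for i in range(1, n):
        theta = math.pi * i / n
        d = wall_distance / math.sin(theta)
        total += (v / d) * math.sin(theta)
    return total / n

near = pooled_response(v=1.0, wall_distance=0.2)
far = pooled_response(v=1.0, wall_distance=1.0)
# The pooled response scales exactly with 1/distance in this geometry,
# mirroring the distance dependence of intersaccadic HS-cell responses.
print(near / far)
```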
4.4 A CyberFly: Performance of Experimentally Established Mechanisms Under Closed-Loop Conditions

Given the ability of flies to perform extraordinary acrobatic flight manoeuvres, it is not surprising that there have been various attempts to implement fly-inspired optic flow processing in simulation models and on robotic platforms (for reviews see [19, 51, 71, 75]; see also Chaps. 5 and 6). Although these approaches usually employed simplified model versions of the visual motion pathway, in all these studies the sensorimotor loop was closed. Most of them used optic flow information to stabilise the agent's path of locomotion against disturbances or to avoid collisions with walls. In contrast, only a few attempts in robotics make use of the fly's saccadic strategy of locomotion and of the implicit distance information present in translatory optic flow between saccades to implement obstacle avoidance [18, 55, 59, 75]. Moreover, most of these agents move with much lower dynamics than blowflies. In a recent study we implemented a saccadic controller that receives its sensory input from a model of the blowfly's visual motion pathway and takes the specific dynamic features of blowfly behaviour into account. The model of the sensory system providing the input to the controller was calibrated on the basis of experimentally determined responses of a major motion-sensitive output neuron of the blowfly's visual system, one of the HS cells, to naturalistic optic flow, i.e. the visual input of flies in free-flight situations [48]. Apart from an array of retinotopically organised spatiotemporal filters which mimic the overall signal processing in the peripheral visual system, the core elements of the sensory model are elementary motion detectors of the correlation type (EMDs; see Sect. 4.3). These are spatially pooled by two elements – one in
either hemisphere of the simulated brain – corresponding to the equatorial HS cell (HSE). To simulate the behaviour of a blowfly, the output signals of the sensory model are transformed into motor signals that generate behavioural responses. The properties of the motor controller determine how these motor signals are transformed into movements of the animal. By simulating the system in a closed control loop, hypotheses about the functional significance of the responses of sensory neurons and about different types of sensorimotor interfaces can be tested. So far, the main task of this CyberFly has been to avoid collisions with obstacles, one of the most fundamental tasks of any autonomous agent [49]. Coupling the difference of the signals of the two sensory neurons proportionally to the generated yaw velocity does not lead to sufficient obstacle avoidance behaviour. A more plausible sensorimotor interface is based on a saccadic controller modelled after the flight behaviour of blowflies. Here, the responses of simulated HSE cells in both halves of the visual system are processed to generate the timing and amplitude of saccadic turns. Timing and direction are determined by applying a threshold operation to the neuronal signals. The amplitudes of saccades are computed from the relative difference of the two signals. The threshold used to initiate a saccade is very high just after the end of a preceding saccade, to prevent the CyberFly from generating
saccades at unrealistically high frequencies. From this start value the threshold continually decreases, increasing the readiness for generating a new saccade as the intersaccadic interval extends. Even with the sideward drift after saccadic turns that is characteristic of real blowflies [58], the CyberFly is able to successfully avoid collisions with obstacles. The implicit distance information resulting from translatory movements between saccades is provided by the responses of model HS cells in the two halves of the visual system and appears to be crucial for steering the CyberFly safely through its environment (Fig. 4.4). A limitation of this simple mechanism is its strong dependence on the textural properties of the environment. A strong pattern dependence is also suggested by behavioural experiments on the flight behaviour of Drosophila ([21]; see also Chap. 17). Currently, we are analysing the reasons for this sensitivity to the textural properties of the environment, both in combined electrophysiological and behavioural experiments on blowflies and in model simulations. Moreover, the current CyberFly, which so far operates only in the horizontal plane, is being extended to a fully three-dimensional model and implemented on a robotic gantry platform. These elaborations will be based on the recent knowledge of prototypical movements comprising all three degrees of freedom of rotation and translation, as well as by
Fig. 4.4 Structure and performance of the CyberFly: (a) Responses of simulated motion-sensitive neurons (HSE) are processed to generate the timing and amplitude of saccadic turns. Timing and direction are determined by a threshold operation; amplitudes are computed from the contrast of the neuronal signals, |L−R|/(|L|+|R|). A pattern generator replays yaw velocity templates computed from behavioural data. (b) Example of trajectories generated by the CyberFly (grey marker: start). The big markers indicate position (spheres) and viewing directions. The texture of the cylindrical simulation environment is shown with reduced contrast to enhance perceptibility.
taking larger populations of motion-sensitive tangential cells (HS cells and VS cells) into account. One aim of these modelling studies is to assess which aspects of the saccadic flight and gaze strategy of blowflies and of the underlying neural mechanisms will turn out to be particularly relevant and advantageous for flight performance and, thus, might prove suitable for implementation in micro-air vehicles.
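The saccadic controller described in this section can be sketched as follows. All parameter values, names and the exponential form of the threshold decay are illustrative assumptions; only the structure follows the text: thresholded HSE responses trigger saccades directed away from the more strongly responding side, the amplitude is derived from the relative difference |L−R|/(|L|+|R|), and the threshold is raised after each saccade and then decays.

```python
import math

def relative_difference(left, right):
    """Saccade-amplitude cue: the contrast |L - R| / (|L| + |R|)."""
    denom = abs(left) + abs(right)
    return abs(left - right) / denom if denom > 0.0 else 0.0

def saccadic_controller(hse_left, hse_right, dt=0.01, theta_min=0.5,
                        theta_max=2.0, tau=0.1, max_amplitude=90.0):
    """Returns a list of (time, direction, amplitude) saccade commands.
    A saccade fires when either HSE response exceeds the threshold; the
    turn is directed away from the more strongly responding (nearer)
    side. After each saccade the threshold jumps to theta_max and then
    decays back towards theta_min, suppressing unrealistically rapid
    saccade sequences."""
    saccades = []
    threshold = theta_min
    for step, (left, right) in enumerate(zip(hse_left, hse_right)):
        if max(left, right) > threshold:
            direction = "left" if right > left else "right"
            amplitude = max_amplitude * relative_difference(left, right)
            saccades.append((step * dt, direction, amplitude))
            threshold = theta_max
        else:
            threshold = theta_min + (threshold - theta_min) * math.exp(-dt / tau)
    return saccades

# An obstacle on the right drives the right HSE response up; the
# controller answers with a train of leftward saccades separated by
# refractory intervals set by the decaying threshold.
sacc = saccadic_controller([0.1] * 40, [1.0] * 40)
print(len(sacc), sacc[0])
```

In the actual CyberFly the saccade itself is executed by a pattern generator replaying yaw velocity templates measured from real blowflies, rather than by an instantaneous orientation change as in this sketch.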
4.5 Conclusions

Although blowflies and many other insects are equipped with only a tiny brain, they operate with ease in complex and ever-changing environments. In this regard, they outperform any technical system. It is becoming increasingly clear that visually guided orientation behaviour of blowflies is only possible because the animal actively reduces the complexity of its visual input and because the mechanisms underlying visual information processing make efficient use of this complexity reduction. By segregating the rotational from the translational optic flow generated during normal cruising flight, the processing of information about the spatial layout of the environment is greatly facilitated. By adapting the neural networks of motion computation to the specific spatiotemporal properties of the actively shaped optic flow patterns, evolution has tuned the blowfly nervous system to solve apparently complex computational tasks efficiently and parsimoniously.

Biological agents such as blowflies derive at least part of their power as adaptive autonomous systems from efficient mechanisms that acquire their strength through active interactions with the environment, not from simply manipulating passively gained information about the world according to a predominantly predefined sequential processing scheme. These agent–environment interactions lead to adaptive behaviour in environments of a wide range of complexity. By cunningly exploiting the consequences of a closed action–perception loop, even tiny brains are often capable of performing extraordinarily well in specific behavioural contexts. Model simulations and robotic implementations reveal that these smart biological mechanisms of motion computation and flight control might be helpful when designing micro-air vehicles that can carry only an on-board processor of relatively small size and weight.
References

1. Baird, E., Srinivasan, M.V., Zhang, S., Cowling, A.: Visual control of flight speed in honeybees. The Journal of Experimental Biology 208, 3895–3905 (2005)
2. Boeddeker, N., Egelhaaf, M.: Steering a virtual blowfly: Simulation of visual pursuit. Proceedings of the Royal Society of London B 270, 1971–1978 (2003)
3. Boeddeker, N., Kern, R., Egelhaaf, M.: Chasing a dummy target: Smooth pursuit and velocity control in male blowflies. Proceedings of the Royal Society of London B 270, 393–399 (2003)
4. Boeddeker, N., Lindemann, J.P., Egelhaaf, M., Zeil, J.: Responses of blowfly motion-sensitive neurons to reconstructed optic flow along outdoor flight paths. Journal of Comparative Physiology A 25, 1143–1155 (2005)
5. Borst, A., Egelhaaf, M.: Principles of visual motion detection. Trends in Neurosciences 12, 297–306 (1989)
6. Borst, A., Egelhaaf, M.: Detecting visual motion: Theory and models. In: F.A. Miles, J. Wallman (eds.) Visual motion and its role in the stabilization of gaze. Amsterdam, Elsevier (1993)
7. Borst, A., Haag, J.: Neural networks in the cockpit of the fly. Journal of Comparative Physiology A 188, 419–437 (2002)
8. Dahmen, H.J., Franz, M.O., Krapp, H.G.: Extracting ego-motion from optic flow: Limits of accuracy and neuronal filters. In: J.M. Zanker, J. Zeil (eds.) Computational, neural and ecological constraints of visual motion processing. Berlin, Heidelberg, New York, Springer (2000)
9. Douglass, J.K., Strausfeld, N.J.: Pathways in dipteran insects for early visual motion processing. In: J.M. Zanker, J. Zeil (eds.) Motion vision: Computational, neural, and ecological constraints, pp. 67–81. Berlin, Heidelberg, New York, Springer (2001)
10. Egelhaaf, M.: The neural computation of visual motion. In: E. Warrant, D.-E. Nilsson (eds.) Invertebrate vision. Cambridge, UK, Cambridge University Press (2006)
11. Egelhaaf, M., Borst, A.: Motion computation and visual orientation in flies. Comparative Biochemistry and Physiology 104A, 659–673 (1993)
12. Egelhaaf, M., Borst, A.: Movement detection in arthropods. In: J. Wallman, F.A. Miles (eds.) Visual motion and its role in the stabilization of gaze, pp. 53–77. Amsterdam, London, New York, Elsevier (1993)
13. Egelhaaf, M., Kern, R., Kurtz, R., Krapp, H.G., Kretzberg, J., Warzecha, A.-K.: Neural encoding of behaviourally relevant motion information in the fly. Trends in Neurosciences 25, 96–102 (2002)
14. Esch, H.E., Burns, J.M.: Distance estimation by foraging honeybees. The Journal of Experimental Biology 199, 155–162 (1996)
15. Esch, H.E., Zhang, S., Srinivasan, M.V., Tautz, J.: Honeybee dances communicate distances measured by optic flow. Nature 411, 581–583 (2001)
16. Farina, W.M., Varjú, D., Zhou, Y.: The regulation of distance to dummy flowers during hovering flight in the hawk moth Macroglossum stellatarum. Journal of Comparative Physiology 174, 239–247 (1994)
17. Farrow, K., Borst, A., Haag, J.: Sharing receptive fields with your neighbors: Tuning the vertical system cells to wide field motion. Journal of Neuroscience 25, 3985–3993 (2005)
18. Franceschini, N., Pichon, J.M., Blanes, C.: From insect vision to robot vision. Philosophical Transactions of the Royal Society of London. Series B 337, 283–294 (1992)
19. Franz, M.O., Mallot, H.A.: Biomimetic robot navigation. Robotics and Autonomous Systems 30, 133–153 (2000)
20. Frye, M.A., Dickinson, M.H.: Closing the loop between neurobiology and flight behavior in Drosophila. Current Opinion in Neurobiology 14, 729–736 (2004)
21. Frye, M.A., Dickinson, M.H.: Visual edge orientation shapes free-flight behavior in Drosophila. Fly 1, 153–154 (2007)
22. Hausen, K.: Motion sensitive interneurons in the optomotor system of the fly. I. The horizontal cells: Structure and signals. Biological Cybernetics 45, 143–156 (1982)
23. Hausen, K.: Motion sensitive interneurons in the optomotor system of the fly. II. The horizontal cells: Receptive field organization and response characteristics. Biological Cybernetics 46, 67–79 (1982)
24. Hausen, K., Egelhaaf, M.: Neural mechanisms of visual course control in insects. In: D. Stavenga, R.C. Hardie (eds.) Facets of vision, pp. 391–424. Berlin, Heidelberg, New York, Springer (1989)
25. Hengstenberg, R.: Common visual response properties of giant vertical cells in the lobula plate of the blowfly Calliphora. Journal of Comparative Physiology 149, 179–193 (1982)
26. Hengstenberg, R., Hausen, K., Hengstenberg, B.: The number and structure of giant vertical cells (VS) in the lobula plate of the blowfly, Calliphora erythrocephala. Journal of Comparative Physiology 149, 163–177 (1982)
27. Hrncir, M., Jarau, S., Zucchi, R., Barth, F.G.: A stingless bee (Melipona seminigra) uses optic flow to estimate flight distances. Journal of Comparative Physiology Series A 189, 761–768 (2003)
28. Jain, A.K., Dubes, R.C.: Algorithms for clustering data. Prentice-Hall (1988)
29. Juusola, M., French, A.S., Uusitalo, R.O., Weckström, M.: Information processing by graded-potential transmission through tonically active synapses. Trends in Neurosciences 19, 292–297 (1996)
30. Karmeier, K., van Hateren, J.H., Kern, R., Egelhaaf, M.: Encoding of naturalistic optic flow by a population of blowfly motion-sensitive neurons. Journal of Neurophysiology 96, 1602–1614 (2006)
31. Kern, R., van Hateren, J.H., Egelhaaf, M.: Representation of behaviourally relevant information by blowfly motion-sensitive visual interneurons requires precise compensatory head movements. Journal of Experimental Biology 209, 1251–1260 (2006)
32. Kern, R., van Hateren, J.H., Michaelis, C., Lindemann, J.P., Egelhaaf, M.: Function of a fly motion-sensitive neuron matches eye movements during free flight. PLOS Biology 3, 1130–1138 (2005)
33. Kern, R., Varjú, D.: Visual position stabilization in the hummingbird hawk moth, Macroglossum stellatarum L.: I. Behavioural analysis. Journal of Comparative Physiology Series A 182, 225–237 (1998)
34. Kimmerle, B., Srinivasan, M.V., Egelhaaf, M.: Object detection by relative motion in freely flying flies. Naturwissenschaften 83, 380–381 (1996)
35. Koenderink, J.J.: Optic flow. Vision Research 26, 161–180 (1986)
36. Köhler, T., Röchter, F., Lindemann, J.P., Möller, R.: Bio-inspired motion detection in an FPGA-based smart camera module. Bioinspiration & Biomimetics 4, 015008 (2009). doi: 10.1088/1748-3182/4/1/015008
37. Kral, K., Poteser, M.: Motion parallax as a source of distance information in locusts and mantids. Journal of Insect Behavior 10, 145–163 (1997)
38. Krapp, H.G.: Neuronal matched filters for optic flow processing in flying insects. In: M. Lappe (ed.) Neuronal processing of optic flow, pp. 93–120. San Diego, San Francisco, New York, Academic Press (2000)
39. Krapp, H.G., Hengstenberg, B., Hengstenberg, R.: Dendritic structure and receptive-field organization of optic flow processing interneurons in the fly. Journal of Neurophysiology 79, 1902–1917 (1998)
40. Krapp, H.G., Hengstenberg, R., Egelhaaf, M.: Binocular contribution to optic flow processing in the fly visual system. Journal of Neurophysiology 85, 724–734 (2001)
41. Land, M.F., Eckert, H.: Maps of the acute zones of fly eyes. Journal of Comparative Physiology Series A 156, 525–538 (1985)
42. Lappe, M. (ed.): Neuronal processing of optic flow. San Diego, San Francisco, New York, Academic Press (2000)
43. Laughlin, S.B.: Matched filtering by a photoreceptor membrane. Vision Research 36, 1529–1541 (1996)
44. Lehrer, M.: Small-scale navigation in the honeybee: Active acquisition of visual information about the goal. The Journal of Experimental Biology 199, 253–261 (1996)
45. Lehrer, M., Srinivasan, M.V., Zhang, S.W., Horridge, G.A.: Motion cues provide the bee's visual world with a third dimension. Nature 332, 356–357 (1988)
46. Liang, P., Kern, R., Egelhaaf, M.: Motion adaptation facilitates object detection in three-dimensional environment. Journal of Neuroscience 29, 11328–11332 (2008)
47. Lindemann, J.P., Kern, R., Michaelis, C., Meyer, P., van Hateren, J.H., Egelhaaf, M.: FliMax, a novel stimulus device for panoramic and high-speed presentation of behaviourally generated optic flow. Vision Research 43, 779–791 (2003)
48. Lindemann, J.P., Kern, R., van Hateren, J.H., Ritter, H., Egelhaaf, M.: On the computations analysing natural optic flow: Quantitative model analysis of the blowfly motion vision pathway. Journal of Neuroscience 25, 6435–6448 (2005)
49. Lindemann, J.P., Weiss, H., Möller, R., Egelhaaf, M.: Saccadic flight strategy facilitates collision avoidance: Closed-loop performance of a CyberFly. Biological Cybernetics 98, 213–227 (2007)
50. Longuet-Higgins, H.C., Prazdny, K.: The interpretation of a moving retinal image. Proceedings of the Royal Society of London. Series B 208, 385–397 (1980)
51. Neumann, T.R.: Biomimetic spherical vision. Universität Tübingen (2004)
52. Petrowitz, R., Dahmen, H.J., Egelhaaf, M., Krapp, H.G.: Arrangement of optical axes and the spatial resolution in the compound eye of the female blowfly Calliphora.
4
53.
54. 55.
56.
57. 58.
59. 60.
61.
62.
63.
64.
Active Vision in Blowflies Journal of Comparative Physiology Series A 186, 737–746 (2000) Pfaff, M., Varjú, D.: Mechanisms of visual distance perception in the hawk moth Macroglossum stellatarum. Zoology Jb Physiology 95, 315–321 (1991) Prazdny, K.: Ego-Motion and Relative Depth Map from Optical-Flow. Biological Cybernetics 36, 87–102 (1980) Reiser, M.B., Dickinson, M.H.: A test bed for insectinspired robotic control. Philosophical Transactions of the Royal Society of London. Series A 361, 2267–2285 (2003) Schack, T.: The cognitive architecture of movement. International Journal of Sport & Exercise Psychology 2, 403– 438 (2004) Schilstra, C., van Hateren, J.H.: Stabilizing gaze in flying blowflies. Nature 395, 654 (1998) Schilstra, C., van Hateren, J.H.: Blowfly flight and optic flow. I. Thorax kinematics and flight dynamics. The Journal of Experimental Biology 202, 1481–1490 (1999) Sobey, P.J.: Active navigation with a monocular robot. Biological Cybernetics 71, 433–440 (1994) Srinivasan, M.V., Lehrer, M., Kirchner, W.H., Zhang, S.W.: Range perception through apparent image speed in freely flying honeybees. Visual Neuroscience 6, 519–535 (1991) Srinivasan, M.V., Lehrer, M., Zhang, S.W., Horridge, G.A.: How honeybees measure their distance from objects of unknown size. Journal of Comparative Physiology Series A 165, 605–613 (1989) Srinivasan, M.V., Zhang, S., Altwein, M., Tautz, J.: Honeybee navigation: Nature and calibration of the “odometer”. Science 287, 851–853 (2000) Srinivasan, M.V., Zhang, S.W., Lehrer, M., Collett, T.S.: Honeybee navigation en route to the goal: Visual flight control and odometry. The Journal of Experimental Biology 199, 237–244 (1996) Strausfeld, N.J., Douglass, J.K, Campbell, H.R., Higgins, C.M.: Parallel processing in the optic lobes of flies and the occurrence of motion computing circuits. In: E. Warrant,
61
65.
66.
67.
68.
69.
70.
71.
72.
73.
74.
75.
D.-E. Nilsson (eds.) Invertebrate vision, pp. 349–398 Cambridge, Cambridge University Press (2006) Tammero, L. F., Dickinson, M.H.: The influence of visual landscape on the free flight behavior of the fruit fly Drosophila melanogaster. The Journal of Experimental Biology 205, 327–343 (2002) Tautz, J., Zhang, S., Spaethe, J., Brockmann, A., Si, A., Srinivasan, M.: Honeybee odometry: performance in varying natural terrain. PLOS Biology 2, 915–923 (2004) Thoroughman, K.A., Shadmehr, R.: Learning of action through adaptive combination of motor primitives. Nature 407, 742–747 (2000) Thurau, C., Bauckhage, C., Sagerer, G.: Synthesizing movements for computer game characters, pp. 179– 1863175 Heidelberg, Springer (2004) van Hateren, J.H.: Processing of natural time series of intensities by the visual system of the blowfly. Vision Research 37, 3407–3416 (1997) van Hateren, J.H., Schilstra, C.: Blowfly flight and optic flow. II. Head movements during flight. The Journal of Experimental Biology 202, 1491–1500 (1999) Webb, B., Harrison, R.R., Willis, M.A.: Sensorimotor control of navigation in arthropod artifical systems. Arthropod Structure & Development 33, 301–329 (2004) Zeil, J.: Orientation flights of solitary wasps (Cerceris, Sphecidae, Hymenoptera). I. Description of flights. Journal of Comparative Physiology Series A 172, 189–205 (1993) Zeil, J.: Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera). II. Similarities between orientation and return flights and the use of motion parallax. Journal of Comparative Physiology Series A 172, 207–222 (1993) Zeil, J., Kelber, A., Voss, R.: Structure and function of learning flights in bees and wasps. The Journal of Experimental Biology 199, 245–252 (1997) Zufferey, J.-C., Floreano, D,: Fly-Inspired Visual Steering of an Ultralight Indoor Aircraft. IEEE Transactions on Robotics 22, 137–146 (2006)
Chapter 5
Wide-Field Integration Methods for Visuomotor Control J. Sean Humbert, Joseph K. Conroy, Craig W. Neely, and Geoffrey Barrows
Abstract In this chapter wide-field integration (WFI) methods, inspired by the spatial decompositions of wide-field patterns of optic flow in the insect visuomotor system, are reviewed as an efficient means to extract visual cues for guidance and navigation. A control-theoretic framework is described that is used to quantitatively link weighting functions to behaviorally relevant interpretations such as relative orientation, position, and speed in a corridor environment. The methodology is demonstrated on a micro-helicopter using analog VLSI sensors in a bent corridor.
5.1 Introduction

In recent years robotics research has seen a trend toward miniaturization, supported by breakthroughs in microfabrication, actuation, and locomotion [24], as described in Chap. 16. Engineers are on the verge of being able to design and manufacture a variety of microsystems; however, the challenge is to endow these creations with a sense of autonomy that will enable them to successfully interact with their environments. Scaling down traditional paradigms will not be sufficient due to the stringent size, weight, and power requirements of these vehicles (Chap. 21). Novel sensors and sensory processing architectures will need to be developed if these efforts are to be ultimately successful.
J.S. Humbert () Autonomous Vehicle Laboratory, University of Maryland, College Park, MD USA e-mail:
[email protected]
Recently there has been considerable interest [1] (see Chaps. 2, 3, and 6) in utilizing optic flow for navigation as an alternative to the more traditional methods of computer vision [12] and machine vision [4]. The general approach has been to extract qualitative visual cues from optic flow and use these directly in a feedback loop. One example is the detection of expansion, which can be used as an indication of an approaching obstacle. In [22] it was shown that simple models of integrated expansion on the left versus the right eyes of fruit flies accounted well for the saccadic behavior of freely flying animals in an arena. This technique was successfully implemented by Zufferey et al. (Chap. 6), where reflexive obstacle avoidance was demonstrated on a lightweight, propeller-driven airplane. Navigation methods based on optic flow are mostly inspired by the insightful work of Srinivasan et al. (Chap. 2), who postulated a well-known heuristic, the centering response, observed in honeybees as they traversed a corridor. This heuristic states that in order to negotiate a narrow gap, an insect must balance the image velocity on the left and right retinas, respectively. Local navigation utilizing this centering technique has been described in recent approaches [19, 15], as well as by Franceschini et al. (see Chap. 3). An excellent review and summary of earlier work is given in [8]. In this chapter the method of wide-field integration is reviewed, an analogue to tangential cell processing inspired by the spatial decompositions of optic flow in the insect visuomotor system. The concept is based on extracting information for navigation by spatially decomposing wide-field patterns of optic flow with sets of weighting functions. Outputs are interpreted as encoding information about relative speed and proximity with respect to obstacles in the surrounding
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_5, © Springer-Verlag Berlin Heidelberg 2009
environment, which are used directly for feedback. The methodology described herein is demonstrated on a ground vehicle and a micro-helicopter navigating corridor environments of varying spatial structure. In both examples, weighted sums of the instantaneous optic flow field about the yaw axis are used to extract relative heading and lateral position (and, in addition, relative speed for the ground vehicle). The resulting closed-loop responses replicate the navigational heuristics described by Srinivasan in Chap. 2.
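As an aside, the centering heuristic mentioned above admits a very small sketch. The following is illustrative only — the function name, gain, and sign convention (positive command meaning "turn right") are assumptions made here, not taken from the chapter:

```python
# Minimal sketch of the centering heuristic: steer away from the side
# whose retina sees the larger average image speed. The sign convention
# (positive command = turn toward the right wall) is assumed here.
def centering_command(left_flow, right_flow, gain=1.0):
    mean = lambda xs: sum(abs(x) for x in xs) / len(xs)
    return gain * (mean(left_flow) - mean(right_flow))

# Flying nearer the left wall inflates left-eye image speed, so the
# command is positive and the vehicle steers back toward the centerline.
print(centering_command([2.0, 2.2, 1.9], [1.0, 1.1, 0.9]))
```

In a balanced corridor the two averages cancel and the command is zero, which is exactly the equilibrium of the honeybee experiments.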
5.2 The Insect Visuomotor System

The insect retina can be thought of as a map of the patterns of luminance of the environment. As an insect moves, these patterns become time dependent and are a function of the insect's relative motion and proximity to objects through motion parallax. The rate and direction of these local image shifts, taken over the entire visual space, form what is known as the optic flow field (see Chap. 6, Sect. 6.3). Local estimates of optic flow are thought to be computed by correlation-type motion detectors in the early portion of the visual pathway. Subsequently, these outputs are pooled by approximately 60 tangential neurons in the third visual neuropile of each hemisphere (see Chap. 4, Sect. 4.3), a region referred to as the lobula plate
(Fig. 5.1A). Extraction of wide-field visual information occurs at the level of these tangential cells, which spatially decompose complicated motion patterns over large swaths of the visual field into a set of feedback signals used for stabilization and navigation [5, 2]. Descending cells, which receive dendritic input from tangential cells, connect to the neurons in the flight motor to execute changes in wing kinematics. Tangential cells respond selectively to stimuli by either graded shifts in membrane potential or changes in spiking frequency, depending on class [9]. Shifts are depolarizing (or spiking frequency increases) if the motion pattern is in the preferred direction and hyperpolarizing (or spiking frequency decreases) in the null direction [10, 11]. In addition to direction selectivity, tangential cells also exhibit spatial selectivity within their receptive fields [10, 11, 18, 17], as shown in Fig. 5.1B. Due to their motion sensitivity patterns, which are similar to the equivalent projected velocity fields for certain cases of rotary self-motion, it has been postulated that tangential cells function as direct estimators of rotational velocity [17]. However, recent work has shown that translational motion cues, which are the source of proximity information, are also present in the outputs of cells that were previously thought to be used only for compensation of rotary motion (Chap. 4, Sect. 4.2). This suggests that cell patterns might be structured to extract a combination of relative
[Fig. 5.1 appears here; panel A shows the motion-detection pathway (photoreceptors, lamina, medulla, lobula plate, tangential cells, descending cells, to flight motor); panel B shows the VS1 cell sensitivity pattern over azimuth and elevation (deg).]
Fig. 5.1 (A) Visuomotor system structure, (B) VS1 tangential cell and associated wide-field sensitivity pattern. The data were extracted and replotted from [17]
speed and proximity cues, rather than direct estimates of the velocity state. Hence, significant progress has been made in understanding structure, arrangement, and synaptic connectivity [5]; however, the exact functional role that each of these neurons holds in the flight control and navigation system of the fly remains a challenging and open question.
5.3 Wide-Field Integration of Optic Flow

The goal of this research is to investigate how the spatial decompositions of optic flow by tangential cells, described here as wide-field integration, might be used in closed-loop feedback to explain the visual-based behaviors exhibited by insects. The approach is based on a novel premise: tangential cells are not used to directly estimate self-motion quantities of a flying organism as in traditional implementations [7, 6]. Rather, it is assumed their purpose is to detect departures from desired patterns of optic flow, generating signals that encode information with respect to the surrounding environment. The resulting set of signals can be used in a feedback loop to maintain a safe distance from obstacles in the immediate flight path. The intuition behind this approach is shown in Fig. 5.2. Forward motion of a vehicle constrained to move in 3 DOF (forward and lateral translation along
with yaw rotation) in the horizontal plane generates an optic flow pattern with a focus of expansion in the front field of view, a focus of contraction in the rear, with the largest motion on the sides. If plotted as a function of the angle γ around the retina, this is approximately a sine wave (Fig. 5.2A). Perturbations from this equilibrium state of a constant forward velocity u0 along the centerline of the tunnel introduce either an asymmetry in this signal for lateral displacements δy (Fig. 5.2B) or a phase shift for rotary displacements δψ (Fig. 5.2C). If the forward speed is increased by δu, the amplitude of this signal increases (Fig. 5.2D), and if the vehicle is rotating at angular velocity ψ̇ about the yaw axis a DC shift in the signal of equal magnitude occurs. Therefore, the amplitude, phase, and asymmetry of the pattern of optic flow around the yaw axis encode important information that could be used for navigation and speed regulation. The structure of the visuomotor pathway of insects (Fig. 5.1A) gives us a hint as to how extracting this type of information from patterns of optic flow might be achieved simply and quickly. Tangential cells, which parse the complicated patterns of optic flow generated during locomotion, exhibit either a shift in membrane potential or an increase in spike frequency when they are presented with a wide-field pattern they are tuned to. Essentially, each cell makes a comparison between its preferred pattern sensitivity and the pattern of the stimulus. Mathematically, this process can be represented as an inner product ⟨u, v⟩, analogous to the dot
[Fig. 5.2 appears here; panels: (A) Centered Motion, (B) Lateral Displacement, (C) Orientation Shift, (D) Forward Speed Increase, each showing the azimuthal optic flow pattern.]
Fig. 5.2 Perturbations of azimuthal optic flow and their correlations to the relative state of the vehicle. Amplitude, phase, and asymmetry of the azimuthal optic flow pattern encode relative proximity and speed with respect to obstacles in the environment
product between vectors, which tells us how similar two objects u and v are. In the case of Fig. 5.2, the patterns are assumed to reside in L2[0, 2π], the space of 2π-periodic and square-integrable functions, where the inner product is given by

zi(x) = ⟨Q̇, Fi⟩ = ∫_0^2π Q̇(γ, x) · Fi(γ) dγ.   (5.1)

Here Q̇(γ, x) is the measured optic flow about the yaw axis, Fi(γ) is any square-integrable weighting function such that (5.1) exists, and zi(x) is the resulting output, which is a function of the relative state x (proximity and velocity) of the insect with respect to the environment. This expression, which could represent either a shift in membrane potential or a change in spiking frequency, gives a number that is maximum if the patterns line up, negative if the pattern has the same structure but is in the opposite direction, and zero if the patterns are completely independent (orthogonal) of one another. The resulting set of outputs represents a decomposition of the motion field into simpler pieces that encode perturbations from the desired pattern. Navigation behavior can be achieved by implementing a feedback loop which attempts to maintain a sine wave pattern of optic flow (Fig. 5.2A) on the circular imaging surface. Departures from the desired relative state create spatial perturbations that can be extracted with the above tangential cell analogue (5.1). For example, an estimate of the phase shift is given by integrating Q̇ against an F(γ) = cos γ weighting function, relative speed results from integrating Q̇ against an F(γ) = sin γ weighting function, and asymmetry can be extracted with an F(γ) = cos 2γ weighting function. These correspond to the first cosine a1, first sine b1, and second cosine a2 Fourier harmonics of the optic flow signal. These low-order spatial harmonics yield the orientation, speed, and lateral position relative to the corridor and can be used as feedback commands to maneuver the vehicle accordingly to maintain the desired sine wave pattern of optic flow. Rotational motion ψ̇ about the yaw axis also introduces an asymmetry between the left and right values of the optic flow signal; however, since the result is a pure DC shift, the F(γ) = cos 2γ weighting extracts only the portion of the asymmetry due to a lateral displacement δy.
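The harmonic extraction just described can be checked numerically. In the sketch below, the grid resolution and the 1/π normalization (which makes each output equal the usual Fourier coefficient) are choices made here, not taken from the chapter:

```python
import numpy as np

# Discrete analogue of the inner product (5.1); the 1/pi factor makes the
# output equal the usual Fourier coefficient of the flow signal.
N = 360
gamma = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dg = 2.0 * np.pi / N

def wfi_output(qdot, weighting):
    return np.sum(qdot * weighting) * dg / np.pi

# Synthetic flow about the yaw axis, as in Fig. 5.2: a sine wave of
# amplitude u0, phase-shifted by an orientation offset, plus a cos(2g)
# asymmetry from a lateral offset and a DC term from yaw rotation.
u0, dpsi, asym, yaw_rate = 1.0, 0.1, 0.2, 0.05
qdot = u0 * np.sin(gamma + dpsi) + asym * np.cos(2 * gamma) - yaw_rate

b1 = wfi_output(qdot, np.sin(gamma))      # first sine: forward speed
a1 = wfi_output(qdot, np.cos(gamma))      # first cosine: phase shift
a2 = wfi_output(qdot, np.cos(2 * gamma))  # second cosine: asymmetry
print(b1, a1, a2)                         # ~ u0*cos(dpsi), u0*sin(dpsi), asym
```

Each weighting function picks out exactly one perturbation: the DC term from yaw rotation is orthogonal to all three and leaves the outputs untouched, mirroring the argument in the text.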
To extract the rotational motion ψ̇ directly, a function which is a combination of a DC shift and a cos 2γ weighting would be required, as a lateral displacement δy also introduces a change in the DC value of the signal. Something similar occurs when attempting to extract the relative heading. A pure orientation offset δψ creates a phase shift in all the harmonics of the optic flow signal; however, a lateral velocity δv will introduce a shift in only the low-frequency harmonics. This is not an issue for platforms which have sideslip constraints such as a ground vehicle [14]. For other platforms such as hovercraft or helicopters, a coupling between the orientation ψ and lateral position y degrees of freedom is introduced which has a detrimental effect on the achievable closed-loop bandwidth in the lateral degree of freedom [15]. The extension of the proposed wide-field integration methodology to 3D environments and 6-DOF vehicles has shown that this issue can be resolved by utilizing optic flow about either the pitch or roll axis, which can be used to unambiguously decouple the two distinct motions [16]. The primary advantage of this framework is that it can be used to quantitatively link weighting functions to interpretations such as relative orientation, position, and speed, as shown above. For standard obstacles (one wall, two walls, cylinders, etc.) one can express Q̇ in closed form, analytically compute (5.1), and subsequently linearize about the state corresponding to the desired equilibrium pattern of optic flow. This results in an output equation y = Cx, which is a linear function of the relative state. Analysis tools from control theory, such as output LQR [21], can then be applied to derive gains for desired stability and performance. Results from an experimental implementation on a wheeled robot [13] are shown in Fig. 5.3.
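The "linearize and read off y = Cx" step can be illustrated numerically. In the sketch below, the corridor dimensions, forward speed, state ordering, and the use of central finite differences are all choices made here for illustration, not the authors' procedure:

```python
import numpy as np

# Numerical linearization of WFI outputs z_i(x) = <Qdot, F_i> about the
# centered, aligned equilibrium, yielding the output matrix C in y = C x.
gamma = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
dg = 2.0 * np.pi / 720
a, u0 = 1.0, 0.6  # corridor half-width and forward speed (illustrative)

def outputs(x):
    y, psi, v = x  # lateral offset, heading offset, sideslip velocity
    s = np.sin(gamma + psi)
    mu = np.where((gamma + psi) % (2 * np.pi) < np.pi, s / (a - y), -s / (a + y))
    qdot = mu * (u0 * np.sin(gamma) - v * np.cos(gamma))  # flow model, r = 0
    F = np.stack([np.cos(2 * gamma), np.cos(gamma)])      # two weighting functions
    return F @ qdot * dg

x0, eps = np.zeros(3), 1e-6
C = np.stack([(outputs(x0 + eps * np.eye(3)[i]) - outputs(x0 - eps * np.eye(3)[i]))
              / (2 * eps) for i in range(3)], axis=1)
print(C.shape)  # one row per output, one column per state
```

The first row of C confirms that the cos 2γ output responds to the lateral offset y with a nonzero (restoring) coefficient, which is exactly the entry a feedback gain would multiply.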
Optic flow is computed using an OpenCV implementation of the Lucas–Kanade algorithm around a 360◦ ring from imagery generated by a vertically mounted camera which points upward at a parabolic mirror. Spatial harmonics (integral weightings) of optic flow, encoding perturbations from the equilibrium pattern, are used as control inputs to maneuver the vehicle. Regulation of the sine wave pattern on the mirror is achieved by using these signals as feedback to force or torque accordingly. Therefore, this simple visuomotor control architecture (wide-field integration) gives rise to the centering and clutter navigational heuristics observed in insect behavior [20].
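How per-pixel image flow becomes the one-dimensional signal Q̇(γ) can be made concrete. The sketch below is pure NumPy and stands in for whatever the Lucas–Kanade front end produces; the unit-radius ring is an assumption made here:

```python
import numpy as np

# Project 2D image-flow vectors, sampled on a circular ring at viewing
# angles gamma, onto the local tangent direction (-sin g, cos g) to obtain
# the azimuthal signal Qdot(gamma) consumed by the WFI inner products.
def tangential_flow(gamma, flow_xy):
    tangent = np.stack([-np.sin(gamma), np.cos(gamma)], axis=1)
    return np.sum(flow_xy * tangent, axis=1)

gamma = np.linspace(0.0, 2.0 * np.pi, 48, endpoint=False)  # 48 samples, as in Sect. 5.4

# A pure camera rotation at rate w moves every ring pixel tangentially,
# so Qdot(gamma) should come out as a constant (DC) shift equal to w.
w = 0.3
flow = w * np.stack([-np.sin(gamma), np.cos(gamma)], axis=1)
qdot = tangential_flow(gamma, flow)
print(np.allclose(qdot, w))
```

This reproduces the observation from Sect. 5.3 that yaw rotation appears as a pure DC shift of the azimuthal flow pattern.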
[Fig. 5.3 appears here; panels plot y (m) vs. x (m) trajectories with start/finish marks, the Fourier coefficients a1, b1, a2 (rad/s), and the forward speed u (m/s) vs. distance along the tunnel (m).]
Fig. 5.3 (A) Autonomous corridor navigation of a ground vehicle using wide-field integration techniques. (B) Navigation of a 90° bend. The mean values of the position in the corridor along with all 20 individual trials are plotted for a combined lateral and rotational offset, along with the corresponding Fourier harmonics a1 and a2 of the optic flow signal. (C) Converging–diverging corridor navigation. The first sine harmonic b1 of the optic flow is held constant over 20 runs by modulating the forward speed, resulting in a speed decrease as the vehicle navigates the narrow gap
5.4 Application to a Micro-helicopter

The preceding experimental implementation of optic flow-based navigation on the ground vehicle has been extended to the case of an electric micro-helicopter (Fig. 5.4A). The goal in this demonstration is to provide autonomous navigation of a bent corridor environment using estimates of relative heading and lateral position derived from weighted sums of the instantaneous optic flow field about the yaw axis. Pitch, roll, and forward speed control of the helicopter is accomplished with an external marker-based visual tracking system while the heading and lateral control loops are closed using optic flow-derived estimates. No altitude control is required as the helicopter is sufficiently stable for a constant thrust in ground effect. Optic flow is measured in a tangential ring arranged around the azimuth of the helicopter (Fig. 5.4B). Due to a lack of available payload for the camera and parabolic mirror configuration, VLSI sensors are used for optic flow estimation. The increased noise inherent in the VLSI implementation of the optic flow algorithms (see Chap. 8) along with the increased vibrations present on the helicopter reduces the quality of the optic flow estimates. It is demonstrated, however, that the wide-field integration methodology is sufficient to effect navigation capabilities.
Fig. 5.4 (A) E-Sky Honeybee micro-helicopter, and (B) optic flow sensor ring with six ARZ-Lite sensors from Centeye, Inc.
Fig. 5.5 Closed-loop block diagram for WFI-based navigation. Inner loop pitch and roll control are accomplished using Vicon measurements while optic flow-derived outputs are used to close the outer navigation loop
For a helicopter, the high-order dynamics and a large degree of coupling require additional consideration, thus making implementation more complex than that of the ground vehicle. Additionally, the motion is unconstrained such that sideslip, forward speed, and rotation can all be varied independently. A state-space model of the helicopter dynamics, linearized about the hover condition, has been identified in prior work [3]. This identified model confirms that the lateral and longitudinal dynamics are highly coupled and of high order. Additionally, the heading degree of freedom is highly sensitive to the yaw input and disturbances due to the low rotational damping and inertia. This degree of freedom is immediately modified via onboard integrated yaw rate feedback to add additional damping. While the helicopter is clearly not a planar constrained vehicle in general, given a fixed altitude and non-aggressive maneuvers, a planar analysis and implementation can be assumed. A Vicon™ visual tracking system provides an offboard feedback capability that reduces the effective dynamics of the vehicle. Direct measurements of the vehicle state x = (x, y, ψ, u, v, r) are available at 350 Hz at 10 ms latency and provide ground truth data for the experiment. Attitude feedback for the roll and pitch degrees of freedom greatly reduces the cross-coupling effects inherent in the dynamics, thus allowing the lateral and longitudinal degrees of freedom to be considered separately. The resulting set of reduced-order dynamics is given by

v̇ = Yv v + g φr
u̇ = Xu u − g θr,   (5.2)
where Yv and Xu are aerodynamic damping derivatives [23], g is gravity, and φr and θr are the commanded orientation values for roll and pitch, respectively. For low frequencies these can be considered to be the same as the actual roll and pitch values, φ and θ. The corresponding transfer function form, given only for the lateral degree of freedom, is

v/φr = g/(s + Yv).   (5.3)
The effective lateral and longitudinal transfer functions can be further modified via a simple proportional velocity tracking feedback using a Vicon estimate for the lateral velocity v (Fig. 5.5). Given a control gain Kv and a commanded velocity vr, the resulting transfer function for the lateral degree of freedom is

v/vr = Kv g/(s + (Yv + Kv g)).   (5.4)
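The closed-loop pole and DC gain in (5.4) can be sanity-checked by integrating the loop forward in time. All numerical values below are assumed for illustration, and Yv is treated as a positive damping magnitude so that the open loop reads v̇ = −Yv v + g φr:

```python
# Euler integration of the velocity-tracking loop behind (5.4):
# phi_r = Kv*(v_r - v)  ->  v' = -(Yv + Kv*g)*v + Kv*g*v_r,
# giving a pole at s = -(Yv + Kv*g) and DC gain Kv*g/(Yv + Kv*g).
Yv, g, Kv, vr = 0.5, 9.81, 0.2, 1.0   # illustrative values only
dt, v = 1e-4, 0.0
for _ in range(200_000):              # 20 s of simulated time
    v += dt * (-(Yv + Kv * g) * v + Kv * g * vr)

dc_gain = Kv * g / (Yv + Kv * g)
print(v, dc_gain)  # the step response settles at the DC gain
```

The proportional loop trades steady-state tracking accuracy for added damping: the DC gain is below unity, but the pole moves from −Yv to −(Yv + Kv g).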
A transfer function of identical form is valid for the longitudinal degree of freedom. The velocity tracking loop further reduces the effect of the cross-coupling in the vehicle dynamics and adds effective damping. In the current experiment (Fig. 5.6) the forward commanded velocity ur is fixed to a constant value u0 = 0.6 m/s, whereas the desired lateral velocity reference vr can be used as a control input to laterally maneuver the helicopter. Heading control is also augmented using the visual tracking system to supply rate tracking capability. The open-loop heading dynamics are given as
ṙ = Nr r + Nμy μy,   (5.5)

where r is the yaw rate, Nr is the yaw damping, Nμy is the control sensitivity, and μy is the tail motor input. The term Nr is naturally aerodynamic in origin; however, we include in this term the effects of the onboard yaw rate damping as well. Simple proportional feedback is again used to provide yaw rate tracking capability (Fig. 5.5), resulting in the following transfer function from the desired reference rate rr to the actual rate r:

r/rr = Kr Nμy/(s + (Nr + Kr Nμy)).   (5.6)

[Fig. 5.6 appears here; panels show the test corridor, the optic flow Q̇ (rad/s) vs. azimuth (rad) at t = 2.7 s, the trajectory (meters), and the controlled inputs vs. time (s).]
Fig. 5.6 (A) Bent corridor test environment, (B) azimuthal optic flow pattern Q̇ measured at time t = 2.7 s, (C) helicopter trajectory through the corridor along with the relative heading given by the angle of the L-shaped markers with respect to the vertical, and (D) time history of the controlled inputs uy and uψ
The desired yaw velocity reference rr can be used as a control input to modulate the heading of the helicopter. We express the tangential component of the optic flow on a circular-shaped sensor that is constrained to 3-DOF motion in the horizontal plane for a general configuration of obstacles as follows:

Q̇(γ, x) = −r + μ(γ, x)(u sin γ − v cos γ).   (5.7)
This expression is a 2π-periodic function of the viewing angle γ and the state of the vehicle x = (x, y, ψ, u, v, r). The function μ(γ, x) = 1/d(γ, x) is defined as the nearness, where d(γ, x) is a continuous representation of the distance to the nearest point in the visual field from the current pose of the vehicle within the environment. For the case of a straight corridor, the nearness function μ(γ, x) is independent of the axial position and can be expressed in closed form as a function of the lateral position y, the relative body frame orientation ψ, and the tunnel half-width a:

μ(γ, x) = sin(γ + ψ)/(a − y) for 0 ≤ γ + ψ < π, and μ(γ, x) = −sin(γ + ψ)/(a + y) for π ≤ γ + ψ < 2π.   (5.8)

The above lateral (5.4) and yaw (5.6) dynamics are provided in the body frame of reference. To facilitate the selection of the spatial weighting functions, we express these equations, along with (5.7), in terms of the pose of the vehicle, where x and y denote the longitudinal and lateral corridor positions, respectively, and the heading orientation is denoted as ψ. The equivalent reduced-order helicopter dynamics for the lateral and yaw degrees of freedom in the inertial frame of reference are

ÿ = −(Yv + Kv g)ẏ + Kv g vr
ψ̈ = −(Nr + Kr Nμy)ψ̇ + Kr Nμy rr.   (5.9)
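The effect of the nearness function (5.8) on the WFI outputs can be checked numerically. In the sketch below, the grid size, corridor half-width, and forward speed are illustrative assumptions; it evaluates (5.8) through the flow model (5.7) with v = r = 0 and applies the cos 2γ weighting from (5.1):

```python
import numpy as np

# Nearness (5.8) for a straight corridor of half-width a, fed through the
# flow model (5.7) (with v = r = 0) and the cos(2g) weighting from (5.1).
gamma = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
dg = 2.0 * np.pi / 3600
a, u0 = 1.0, 0.6  # corridor half-width (m) and forward speed (m/s), assumed

def lateral_output(y, psi=0.0):
    s = np.sin(gamma + psi)
    mu = np.where((gamma + psi) % (2 * np.pi) < np.pi, s / (a - y), -s / (a + y))
    qdot = mu * u0 * np.sin(gamma)
    return np.sum(qdot * np.cos(2 * gamma)) * dg

print(lateral_output(0.0), lateral_output(0.2))  # zero when centered
```

A centered vehicle yields a symmetric flow pattern and a zero output; a lateral offset breaks the symmetry and produces a signed signal suitable for restoring feedback, which is precisely how uy is used below.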
Outputs suitable for feedback are computed using (5.1), with weighting functions that extract the relative lateral position and orientation in the corridor. The estimate for the relative lateral position in the corridor uy = ⟨Q̇, Fy⟩ is obtained using the weighting function Fy(γ) = cos 2γ. The orientation estimate uψ = ⟨Q̇, Fψ⟩ can be extracted using the weighting function Fψ(γ) = sin 2γ for 0 ≤ γ < π and Fψ(γ) = −sin 2γ for π ≤ γ < 2π. The choice of Fψ(γ), while providing the same linearized signal content as cos γ, maximally weights the optic flow field in regions where it has the best signal-to-noise ratio, hence is more robust for experimental implementation. The controlled inputs (Fig. 5.5) are then given by

vr = Ky ⟨Q̇, Fy⟩
rr = Kψ ⟨Q̇, Fψ⟩.   (5.10)
The analytical result of these spatial inner products is nonlinear; however, each inner product can be linearized about the desired optic flow pattern, corresponding to motion at a fixed speed ur = u0 along the centerline of the corridor, to determine the linear output equation. When combined with (5.9), the resulting closed-loop linearized dynamics are

d/dt [y, ẏ, ψ, ψ̇]ᵀ =
[ 0                   1                0                  0              ]
[ −Ky u0 Kv g/(2a²)   −(Yv + Kv g)     0                  0              ]
[ 0                   0                0                  1              ]
[ 0                   Kψ Kr Nμy/(2a)   −Kψ u0 Kr Nμy/a    −(Nr + Kr Nμy) ]
[y, ẏ, ψ, ψ̇]ᵀ.   (5.11)

The feedback gains Ky and Kψ, which determine the amount of lateral and rotational stiffness added to the dynamics, can be adjusted to position closed-loop eigenvalues for desired stability and performance. A test utilizing a bent corridor (Fig. 5.6A) was conducted to demonstrate the wide-field integration-based feedback on a micro-helicopter. Six equally spaced VLSI hard-coded sensors from Centeye were used for optic flow estimation (Fig. 5.4B). Eight points were taken from the 45° field of view of each sensor, resulting in a set of 48 optic flow measurements. A representative trajectory and optic flow pattern are shown in Fig. 5.6B. The relative heading of the vehicle at various points on the trajectory is given by the angle of the L-shaped markers with respect to the vertical (Fig. 5.6C). This test demonstrates the potential for using the wide-field integration architecture to fuse relatively noisy optic flow estimates (measured on a vibrating platform) to provide feedback sufficient for navigation.

5.5 Summary

In this chapter we have reviewed wide-field integration (WFI) methods for visuomotor control, which are based on the spatial decompositions of wide-field patterns of optic flow in the insect visuomotor system. By decomposing patterns of optic flow with weighting functions, one can extract signals that encode relative proximity and speed with respect to obstacles in the environment, which are used directly for visual navigation feedback. The methods discussed have been grounded in a theoretical framework and can be analyzed using traditional control system design tools. In addition, these methods have the advantages of computation speed and simplicity, hence are consistent with the stringent size, weight, and power requirements of MAVs.
Acknowledgments The support for this research was provided in part by the Army Research Office under grants DAAD1903-D-0004 and Army-W911NF0410176, and the Air Force Research Laboratory under contract FA8651-07-C-0099. The authors would also like to thank Andrew M. Hyslop and Evan R. Ulrich for contributions to the work presented.
References

1. Barrows, G., Chahl, J., Srinivasan, M.: Biologically inspired visual sensing and flight control. The Aeronautical Journal 107, 159–168 (2003)
2. Borst, A., Haag, J.: Neural networks in the cockpit of the fly. Journal of Comparative Physiology A 188, 419–437 (2002)
3. Conroy, J., Pines, D.: System identification of a miniature electric helicopter using MEMS inertial, optic flow, and sonar sensing. Proceedings of the American Helicopter Society. Virginia Beach, VA (2007)
4. Davies, E.R.: Machine Vision: Theory, Algorithms, Practicalities. Morgan Kaufmann, San Francisco, CA (2005)
5. Egelhaaf, M., Kern, R., Krapp, H., Kretzberg, J., Kurtz, R., Warzecha, A.: Neural encoding of behaviourally relevant visual-motion information in the fly. Trends in Neurosciences 25, 96–102 (2002)
6. Franz, M., Chahl, J., Krapp, H.: Insect-inspired estimation of egomotion. Neural Computation 16, 2245–2260 (2004)
7. Franz, M., Krapp, H.: Wide-field, motion-sensitive neurons and matched filters for optic flow fields. Biological Cybernetics 83, 185–197 (2000)
8. Franz, M., Mallot, H.: Biomimetic robot navigation. Robotics and Autonomous Systems 30, 133–153 (2000)
9. Hausen, K.: Motion sensitive interneurons in the optomotor system of the fly, Part I. The horizontal cells: structure and signals. Biological Cybernetics 45, 143–156 (1982)
10. Hausen, K.: Motion sensitive interneurons in the optomotor system of the fly, Part II. The horizontal cells: receptive field organization and response characteristics. Biological Cybernetics 46, 67–79 (1982)
11. Hengstenberg, R., Hausen, K., Hengstenberg, B.: The number and structure of giant vertical cells (VS) in the lobula plate of the blowfly Calliphora erythrocephala. Journal of Comparative Physiology 149, 163–177 (1982)
12. Horn, B.K.: Robot Vision. MIT Press and McGraw-Hill, Cambridge, MA (1986)
13. Humbert, J.S., Hyslop, A.M., Chinn, M.W.: Experimental validation of wide-field integration methods for autonomous navigation. Proceedings of the IEEE Conference on Intelligent Robots and Systems (IROS). San Diego, CA (2007)
14. Humbert, J.S., Murray, R.M., Dickinson, M.H.: A control-oriented analysis of bio-inspired visuomotor convergence (submitted). Proceedings of the 44th IEEE Conference on Decision and Control. Seville, Spain (2005)
15. Humbert, J.S., Murray, R.M., Dickinson, M.H.: Sensorimotor convergence in visual navigation and flight control systems. Proceedings of the 16th IFAC World Congress. Praha, Czech Republic (2005)
16. Hyslop, A., Humbert, J.S.: Wide-field integration methods for autonomous navigation of 3D environments. Proceedings of the AIAA Conference on Guidance, Navigation, and Control. Honolulu, HI (2008)
17. Krapp, H., Hengstenberg, B., Hengstenberg, R.: Dendritic structure and receptive-field organization of optic flow processing interneurons in the fly. Journal of Neurophysiology 79, 1902–1917 (1998)
18. Krapp, H., Hengstenberg, R.: Estimation of self-motion by optic flow processing in single visual interneurons. Letters to Nature 384, 463–466 (1996)
19. Santos-Victor, J., Sandini, G.: Embedded visual behaviors for navigation. Robotics and Autonomous Systems 19, 299–313 (1997)
20. Srinivasan, M., Zhang, S., Lehrer, M., Collett, T.: Honeybee navigation en route to the goal: visual flight control and odometry. The Journal of Experimental Biology 199, 237–244 (1996)
21. Stevens, B., Lewis, F.: Aircraft Control and Simulation. John Wiley & Sons, Inc., Hoboken, NJ (2003)
22. Tammero, L.F., Dickinson, M.H.: The influence of visual landscape on the free flight behavior of the fruit fly Drosophila melanogaster. The Journal of Experimental Biology 205, 327–343 (2002)
23. Tischler, M., Remple, R.K.: Aircraft and Rotorcraft System Identification: Engineering Methods and Flight Test Examples. American Institute of Aeronautics and Astronautics, Inc., Reston, VA (2006)
24. Wood, R., Avadhanula, S., Sahai, R., Steltz, E., Fearing, R.: First takeoff of a biologically inspired at-scale robotic insect. IEEE Transactions on Robotics 24(2), 341–347 (2008)
Chapter 6
Optic Flow to Steer and Avoid Collisions in 3D Jean-Christophe Zufferey, Antoine Beyeler, and Dario Floreano
Abstract Optic flow is believed to be the main source of information allowing insects to control their flight. Some researchers have tried to apply this paradigm to small unmanned aerial vehicles (UAVs). So far, none of them has been able to demonstrate a fully autonomous flight of a free-flying system without relying on other cues such as GPS and/or some sort of orientation sensors (IMU, horizon detector, etc.). Getting back to the reactive approach suggested by Gibson (direct perception) and Braitenberg (direct connection from sensors to actuators), this chapter discusses how a few optic flow signals can be directly mapped into control commands for steering an aircraft in cluttered environments. The implementation of the proposed control strategy on a 10-g airplane flying autonomously in an office-sized room demonstrates how the proposed approach can result in ultra-light autopilots.
6.1 Introduction

Current unmanned aerial vehicles (UAVs) tend to fly far away from any obstacles, such as the ground, trees or buildings. This is mainly because aerial platforms have such tremendous constraints in terms of manoeuvrability and weight that enabling them to actively avoid collisions in cluttered or confined environments is highly challenging. Very often, researchers and developers use GPS as the main source of information to achieve what is commonly called “waypoint navigation”. By carefully choosing the waypoints in advance, it is easy to make sure that the resulting path will be free of obstacles. The aerial robotics community has therefore, in effect, avoided tackling the collision avoidance problem since GPS has provided an easy way around it. However, the problem of 3D collision avoidance definitely deserves some attention in order to produce UAVs capable of flying at lower altitudes or even within buildings. Such capabilities would enable various applications such as search and rescue operations, low-altitude imagery for surveillance or mapping, environmental monitoring and wireless communication relays. The classical approach to collision avoidance consists of relying on active distance sensors such as sonars, lasers or radars to construct and maintain a 3D map of the surroundings in real time. However, this results in very heavy and therefore potentially unsafe systems. The only known system that has been able to achieve near-obstacle flight with such a sensor is a 100-kg helicopter equipped with a 3-kg scanning laser range finder [45]. Since the classical approach tends to be too heavy and power consuming for small flying platforms, we looked at biological systems for some hints about how to solve this problem using far less weight and energy. Flying insects are capable of dynamic navigation in cluttered environments while keeping energy consumption and weight at an incredibly low level. In general, biological systems far outperform today’s robots at tasks involving real-time perception and action, in particular if we take energy efficiency and size into account. It turns out that flying insects (Fig. 6.1) rely mainly on low-resolution monocular vision [34], gyroscopic [40] and airflow sensors [12] to control their flight.

J.-C. Zufferey () Laboratory of Intelligent Systems, EPFL, Lausanne, Switzerland. e-mail: [email protected]
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_6, © Springer-Verlag Berlin Heidelberg 2009
Fig. 6.1 The most important perceptive organs related to flight control in flies: the large compound eyes (and the ocelli), the halteres and the antennae and hairs

This is interesting because the artificial counterparts of these sensors are now available in small, lightweight and power-efficient packages. Therefore, rather than opting for bulky active range finders weighing several kilograms, dynamic flight in the vicinity of obstacles can be achieved with far lower weight by using passive sensors such as vision, MEMS rate gyros and miniature anemometers. Approximately two-thirds of the neurons in the insect brain are dedicated to visual information processing [56, 58]. Biologists have unravelled a significant part of their functioning. In summary, image motion, also called optic flow, plays a significant role in flight control by providing information on self-motion (see Chap. 9 or [33, 31]) and distances to surrounding objects (see Chap. 4 or [54, 58, 16]). In robots as in insects, optic flow can be estimated with very few pixels, allowing for the use of low-resolution vision sensors. The challenge is rather to have a large field of view (FOV) to cover divergent viewing directions and grab images as fast as possible to obtain good approximations of optic flow. Optic flow is the perceived visual motion of surrounding objects projected onto the retina of an observer. The fact that visual perception of changes represents a rich source of information about self-motion and distances from surrounding objects is widely recognised [20]. If we assume a mobile observer moving in an otherwise stationary environment, the motion field describing the projection of the object velocities onto its retina depends on its self-motion (translation and rotation), the distance to the surrounding objects and the considered viewing directions [30]. In flying systems, it is usually feasible to estimate self-motion using a set of rate gyroscopes and an anemometer. Optic flow can thus be used to estimate the distance from surrounding objects.

6.2 Review of Optic Flow-Based Flying Robots
Several teams considered using optic flow to estimate or control the altitude or, more precisely, the height above ground of various unmanned flying systems. Most of them were inspired by an experiment with honeybees describing how these insects execute grazing landings on horizontal surfaces [52, 55]. The early attempts were carried out either in simulation [38] or with tethered helicopters (see Chap. 3 or [43]). The first experiment of altitude control with a free-flying airplane was performed outdoors by Barrows and colleagues in 2002 with a 1-m model airplane equipped with a custom-built optic flow detector [3]. A simple (on/off) altitude control law managed to maintain the aircraft airborne for 15 min, during which three failures occurred in which the human pilot had to rescue the aircraft because it dropped too close to the ground. Further experiments were carried out on larger platforms for controlling descent rate [11] or altitude above a flat and homogeneous surface [60], or simply to help with estimating altitude above ground [1]. All these experiments were focused on controlling altitude while the lateral steering was remotely controlled by a human pilot. The attempts at automating free-flying UAVs using bio-inspired vision are quite limited. In 2001, Barrows and colleagues described preliminary experiments on lateral obstacle avoidance in a gymnasium with a model glider carrying a single, laterally oriented, optic flow sensor [4]. An accompanying video shows the glider steering away from a wall when tossed towards it at a shallow angle. More recently, a team in Washington carried out a second experiment on lateral obstacle avoidance with an indoor aircraft equipped with a single lateral optic flow sensor [22]. A video shows the aircraft avoiding a basketball net in a sports hall. Again, only one sensor was used so that the aircraft could sense and avoid obstacles
Fig. 6.2 The F2 indoor airplane. The on-board electronics consist of a 6-mm geared motor with a balsa-wood propeller, two miniature servos controlling the rudder and the elevator, a microcontroller board with a Bluetooth module and a rate gyro, two horizontal 1D cameras located on the leading edge of the wing and a 310 mAh lithium-polymer battery
only on one side. In 2006, we used a 30-g indoor airplane to demonstrate continuous and symmetrical collision avoidance in a relatively small indoor arena of 16 by 16 m [67]. The robot (Fig. 6.2) was equipped with two miniature, custom-made, optic flow detectors looking at 45◦ off the forward direction (Fig. 6.3). A model of the landing response in flies [7] was used in this airplane to trigger saccadic turn-aways whenever a wall was too close, either in front or on either side of the airplane. While altitude and airspeed were manually controlled, the airplane was able to fly collision-free for more than 4 min without any lateral steering intervention from the safety pilot.1 The plane was engaged in turning actions only 20% of the time, which means that it was able to fly in straight trajectories most of the time, except when very close to a wall. During a 4-min trial, it would typically generate 50 saccades and cover approximately 300 m in straight motion. On larger outdoor platforms, Griffiths and colleagues have used optic flow mouse sensors as complementary distance sensors [23]. The UAV was fully equipped with an inertial measurement unit (IMU) and GPS. It computed the optimal 3D path based on an a priori map of the environment stored in its memory. In order to be able to react to unforeseen obstacles on the computed nominal path, it used a frontal laser range finder and two lateral optical mouse sensors. This robot
1 Video clips showing the behaviour of the plane can be downloaded from http://lis.epfl.ch/microflyers.
demonstrated low-altitude flight in a natural canyon while the mouse sensors provided a tendency to move towards the centre of the canyon when the nominal path was deliberately biased towards one side or the other. Although no data showing the accuracy of optic flow-based distance measurements were provided, this experiment demonstrated that optical mouse sensors could be used to estimate rather large distances in outdoor environments. In 2005, Hrabar and colleagues [28] also employed lateral optic flow to enable a large helicopter to centre among obstacles outdoors, while stereo vision was utilised to avoid frontal obstacles. This work was quite similar to a study carried out in simulation by Muratet and colleagues the same year [39], which was focused on optic flow-based navigation in urban canyons. However, in these latter projects the vision sensors were by no means used as primary sensors for navigation and the control system still
Fig. 6.3 The arrangement of the two optic flow detectors on the F2 airplane
relied mainly on a classical autopilot using an IMU and, in some cases, GPS. Although all these early attempts at mimicking insects and using optic flow to achieve flight control and collision avoidance are remarkable, none of them has reached the holy grail of completely automating a free-flying UAV without relying on additional information such as GPS, attitude estimates (IMU) or 3D maps. The first project which got close to achieving this goal was carried out in simulation by Neumann and Bülthoff [42]. A flying agent with relatively simple helicopter-like dynamics could stabilise its course, control its altitude and avoid obstacles like trees using fly-inspired optic flow and matched filters [32, 63]. For this control strategy to work, the agent was required to be flying level at all times. To ensure this, the attitude of the agent was constantly regulated by a mechanism relying on the ambient light intensity gradient (there is more light when looking at the sky than at the ground). This mechanism was loosely inspired by the dorsal light response found in insects [47]. However, for this attitude control mechanism to work, the environment needs to have a well-defined light intensity gradient. This is easy to ensure in simulated worlds, but not always so in real-world conditions because the sky can be occluded by buildings or trees and the sun will almost never be directly overhead. Only very recently, we were able to eliminate the need for explicitly estimating or regulating attitude and to rely exclusively on optic flow to achieve fully autonomous flight with a simulated airplane in a confined environment [5]. In this experiment, optic flow was computed in three specific directions (left, right and down) at 45◦ off the longitudinal axis of the aircraft. Although it utilised a more realistic dynamics model than most previous simulation work, this research still lacked testing in reality.
In this chapter, we describe what represents – to the best of our knowledge – the first example of a fully autonomous physical airplane achieving optic flowbased navigation. This endeavour was mainly driven by an attempt at finding the minimal set of sensors and control strategy allowing for fully autonomous flight. Relying on the minimal number of pixels and optic flow detectors (OFD) was taken as a challenge, which may become less formidable as the vision sensors become lighter and faster, or when using larger platforms.
6.3 Optic Flow

As an observer moves through its environment, the pattern of light reflected on its retina changes continuously, creating optic flow [20], formally defined as the apparent motion of the image intensities or brightness patterns. This optic flow is very useful for navigation because it contains information regarding both the self-motion of the observer (Chap. 9) and the distances to surrounding objects. To study how optic flow depends on self-motion and distances, people usually rely on the notion of motion field (sometimes also called velocity field), which consists of the 2D projection onto the observer’s retina of the relative 3D motion of scene points. In contrast to the actual optic flow field, the motion field is a purely geometrical concept, which is independent of light intensities. Ideally, the optic flow corresponds to the motion field, but this may not always be the case [26]. The main reasons for discrepancies between optic flow and motion field are the possible absence of detectable contrasts on the surrounding objects, or the so-called aperture problem.2 In practice, only the optic flow can be measured. However, the behaviour of optic flow is best understood by means of the geometrical motion field, and discrepancies between the two are often treated as noise. Therefore, in the rest of this chapter, we will not make this distinction anymore, but we shall keep in mind that when taken as motion field estimates, optic flow measurements may be quite noisy because of the aforementioned problems.

2 If the motion of an oriented element is detected by a unit that has a small FOV compared to the size of the moving element, the only information that can be extracted is the component of the motion perpendicular to the local orientation of the element [37, 25, 36]. Interestingly, there is good evidence that flies do not solve the aperture problem [8].

Let us consider a spherical vision system (Fig. 6.4) moving through a stationary environment. The image is formed by spherical projection of the environment onto this sphere. Apart from resembling a fly’s eye, the use of a spherical projection makes all points in the image geometrically equivalent, thus simplifying the mathematical analysis.3 The photoreceptors of the vision sensor are thus assumed to be arranged on this unit sphere, each photoreceptor defining a viewing direction indicated by a unit vector d(Ψ,Θ), which is a function of both the azimuth Ψ and the elevation Θ in a spherical coordinate system. When this vision system undergoes a 3D movement described by its translation and rotation vectors, T and R, the motion field p(Ψ,Θ) on the surface of the sphere is given by [30]

p(Ψ,Θ) = − [T − (T · d(Ψ,Θ)) d(Ψ,Θ)] / D(Ψ,Θ) + [−R × d(Ψ,Θ)],   (6.1)

where D(Ψ,Θ) is the distance between the sensor and the object seen in a particular direction d(Ψ,Θ). This equation is fundamental as it tells us all the basic properties of optic flow, which is a linear combination of a translational and a rotational component due, respectively, to the motion along T and around R. The translational component of optic flow, hereafter denoted TransOF, is inversely proportional to distances from the surrounding objects D(Ψ,Θ), whereas the rotational component, RotOF, displays no dependency on distance. The translational optic flow field in fact depends on (i) the velocity of the observer, (ii) its distance to objects in the surroundings and (iii) the angles between the objects and the direction of the translation.

Fig. 6.4 The spherical model of a visual sensor. A viewing direction is indicated by the unit vector d, which is a function of azimuth Ψ and elevation Θ (spherical coordinates). The distance to an object in the direction d(Ψ,Θ) is denoted as D(Ψ,Θ). The optic flow vectors p(Ψ,Θ) are always tangential to the sphere surface. The vectors T and R represent the translation and rotation, respectively, of the visual sensor with respect to its environment. As will be seen in the next section, the angle α between the direction of translation and a specific viewing direction is sometimes called eccentricity

3 Ordinary cameras do not usually use a spherical projection model. However, if the field of view is not too wide, the spherical approximation is reasonably close [41]. A direct model for planar retinas can be found in [17].

A particular case of the general equation (6.1) is often used in biology [64, 27, 49] and in robotics [19, 50, 62, 35] to explain depth perception from optic flow. The so-called motion parallax refers to a planar situation where only pure translational motion is considered (Fig. 6.5). In this case, it becomes straightforward4 to express the optic flow amplitude p (also referred to as apparent angular velocity) elicited by an object at distance D, seen at an angle α with respect to the motion direction T:

p(α) = (||T|| / D(α)) sin α,  where p = ||p||.   (6.2)

4 To derive the motion parallax equation (6.2) from the general optic flow equation (6.1), the rotational component must first be cancelled since no rotation occurs. Then, the translation vector T can be expressed in the orthogonal basis formed by d (the viewing direction) and p/||p|| (the normalised optic flow vector).

Note that the angle α is also called eccentricity, in particular if T is aligned with the reference axis of the vision system. From this equation, it becomes evident that if the translational velocity and the optic flow amplitude can be measured, the distance from the object can easily be retrieved as follows:

D(α) = (||T|| / p(α)) sin α.   (6.3)

Fig. 6.5 The motion parallax. The circle represents the retina of a moving observer. The symbols are the same as defined in Fig. 6.4

Therefore, if the translation (direction and amplitude) is known, the distance to the surrounding objects
can be estimated by this means. If the agent is rotating, however, derotation of optic flow is required as a preliminary step to depth perception, otherwise the RotOF component may completely distort the distance estimates. Interestingly, most flying insects display mechanisms that tend to minimise the rotational content of optic flow. For instance, flies move in straight segments interspersed with rapid turning actions called saccades [13, 61]. In addition to moving straight most of the time, larger flies actively stabilise their gaze by orienting their head with respect to their body in order to compensate for undesired rotations, a phenomenon referred to as gaze stabilisation (see Chap. 4 or [46, 29]). Similar mechanisms that keep rotational optic flow to a minimum have been found in many other species as well [65]. However, when thinking in terms of flying robots, saccadic flight and gaze stabilisation may not always be the simplest and most effective way of derotating the optic flow. An alternative method, which can be implemented in software, consists of compensating for the spurious rotational optic flow signals by subtracting the theoretically derived rotational motion field according to Eq. (6.1). This can be achieved by means of rate gyros, which make it easy to reconstruct the rotation vector. This alternative solution can be seen as a substitute for gaze stabilisation. Indeed, performing derotation in software is probably more efficient with artificial systems than implementing mechanical gaze stabilisation, which would require complex mechanisms to pan, tilt and roll the entire vision system. The remaining question is: how can optic flow be sensed? There are several ways of doing it. Hassenstein and Reichardt [24] have proposed a model called elementary motion detector (EMD) to explain how local optic flow detection could be achieved in the fly visual system (see Chap. 4, Sect. 4.3 for a review).
However, the response of EMDs depends not only on image velocity (which would allow them to more accurately represent the motion field) but also on the contrast and the spatial frequency content of the scene. In computer vision, various algorithms have been proposed, which take as input a time series of visual frames from standard cameras (see [2] for a review). These algorithms generally produce outputs linearly following the image motion and are therefore much closer to the actual motion field than EMDs. However, they tend to be computationally expensive. Quite recently, people have been working on faster algorithms [51, 10],
which are better suited to real-time computation and embedded processors. An alternative approach consists of using optic flow sensors such as optical mouse sensors (see Chap. 9) or custom-designed analog detectors (see Chap. 8), which feature fast, on-chip optic flow detection. To sum up, optic flow contains information on distance to obstacles and can be sensed using either dedicated sensors or a camera connected to a processor running one of the various existing algorithms. However, distance information in optic flow is mixed with other factors such as rotation speed and orientation, translation speed and orientation, and looking direction with respect to the translation direction. In brief, if rotation and velocity can be assumed to remain constant or estimated using alternative sensors such as inertial or airflow sensors, derotated optic flow can be directly interpreted as a proximity indication and thus serve as input to flight control and obstacle avoidance.
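To make these two operations concrete, the sketch below first removes the rotational component of a measured flow vector using gyro readings, following Eq. (6.1) (the software counterpart of gaze stabilisation), and then inverts the motion parallax relation (6.2) to recover distance. This is an illustrative sketch only, not the authors' implementation; all numerical values are hypothetical.

```python
import numpy as np

def derotate(p_measured, rot, d):
    """Remove the rotational component of a local optic flow vector.

    p_measured : measured optic flow at viewing direction d (3-vector,
                 tangential to the unit sphere)
    rot        : rotation rate vector R from the rate gyros (rad/s)
    d          : unit viewing direction vector d(Psi, Theta)
    Following Eq. (6.1), RotOF = -R x d; subtracting it leaves TransOF.
    """
    rot_of = -np.cross(rot, d)
    return p_measured - rot_of

def distance_from_parallax(p_trans, speed, alpha):
    """Invert Eq. (6.2): D(alpha) = ||T|| * sin(alpha) / p(alpha)."""
    return speed * np.sin(alpha) / p_trans

# Hypothetical numbers: flying at 1.5 m/s, a derotated flow detector at
# 45 deg eccentricity measuring a flow amplitude of 0.53 rad/s sees an
# object at roughly 1.5 * sin(45 deg) / 0.53, i.e. about 2 m.
D = distance_from_parallax(0.53, 1.5, np.radians(45))
```

Note that the derotation step only needs the gyro-derived rotation vector and the (fixed) viewing direction of each detector, which is why it fits on a small microcontroller.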
6.4 Control Strategy

Several researchers have considered what can be classified as 2D optic flow-based control strategies. They were working either with robots moving on flat surfaces (see Chap. 3 and [50, 14, 44, 53, 48]) or with constrained flying robots that were either tethered helicopters (see Chap. 3 or [43]) or horizontally flying agents (see Chap. 5 and [42, 39]). Here instead, we aim at controlling aircraft moving in 3D and relying on roll and pitch movements in order to steer. Airplanes and helicopters in translational flight (as opposed to near-hover or aerobatic flight) indeed use rolling and pitching movements to alter their trajectory (see Fig. 6.6).5 Rolling allows them to preset the direction in which a pitching movement will reorient the aircraft translation vector, which corresponds to its flight direction. This can be observed in any standard turn, where a translating aircraft first rolls by a few degrees before pitching up to curve its trajectory. It is interesting to note that these steering dynamics do not seem to be shared by flying insects, which tend to generate quite significant side-slip movements and are able to alter their trajectory while keeping their head orientation stable [46]. However, most existing flying platforms (especially fixed-wing aircraft) are unable to display such capabilities typically found in insects and are therefore well described by this translating aircraft model.

Fig. 6.6 Idealised translating aircraft (a winged airplane or a rotorcraft) can steer by rotating around their roll and pitch axes. The yaw rate is passively or actively controlled so that no lateral acceleration occurs (coordinated turns). The lift produced by the wings or the rotor is assumed to exactly compensate for the gravity and centripetal forces. Under these conditions, the translation vector (flight direction) remains fixed with respect to the aircraft body

5 We deliberately leave aside the yawing movements in this discussion because in standard flight they can be seen as a byproduct of the steering mechanism. Indeed, yawing is usually employed in aircraft as a way of cancelling any lateral acceleration to achieve what is known as “coordinated turns” [56].

Translating aircraft can be assumed to have a translation vector that is always aligned with their main axis. This is of utmost importance because it greatly simplifies the interpretation of TransOF, since the eccentricity angle α (Fig. 6.4) can be assumed to be constant for any given viewing direction. Furthermore, assuming that the velocity of the aircraft is kept constant, the output of derotated optic flow detectors (DOFD) can be directly interpreted as a proximity signal. Note that if the assumption of constant speed does not hold, the TransOF signals will still provide estimates of the time to contact (instead of proximity), which is perfectly suitable as control input in most cases. Aiming at a simple control strategy that can fit any small microcontroller, we propose to follow a reactive paradigm where perception is directly linked to action without intermediary cognitive layers [20, 21, 9, 15]. Since optic flow can be turned into proximity information as seen above, the simplest way of achieving an efficient reactive behaviour of an aircraft is to linearly combine a series of DOFDs to command its rolling and pitching rates. Since no assumption can be made regarding the orientation of the aircraft with
respect to the surrounding environment, the only policy that can be adopted is to react to any sensed object in the simplest way: steering away from it with a magnitude that is proportional to its proximity. When nothing is sensed, the aircraft simply keeps moving straight ahead. Braitenberg’s vehicles [9] are good examples of reactive systems using such a control strategy to directly map sensor activations into actuator commands. Many wheeled robots use such a scheme for low-level obstacle avoidance (e.g. [18]). Here, however, we need to adapt it to 3D-moving systems featuring the specific dynamics of translating aircraft. By using four DOFDs looking left/right and up/down with some eccentricity angle with respect to the flight direction (Fig. 6.7), a very simple, Braitenberg-like control strategy can be devised to
produce reactive collision avoidance. The left/right detectors should be linked to the roll rate control in such a way that if an obstacle is detected on one side, the aircraft rolls away from it. Similarly, the up/down sensors should control the pitch rate so as to steer away from objects sensed in the upper or lower regions of the field of view. This reactive control strategy has several advantages. First of all, the fact that the optic flow is inversely proportional to the distance means that far objects will have little impact on the controls, whereas very close obstacles will induce strong reactions. Second, this control strategy does not require any kind of inclinometer in order to fly straight over flat surfaces. As soon as the aircraft inclines left or right, it will sense the proximity of the ground on that side and directly correct towards the opposite side. In some sense, the attitude control commonly seen in classical autopilots here comes for free. Third, no assumption needs to be made on the layout of the environment. In principle, this strategy works as well in open terrain as in cluttered environments, the only limitation being that the objects need to be large enough to be perceived early enough by the optic flow detectors. Note that if the environment has no, or high, ceilings and gravity attracts the aircraft towards the ground at all times, the top detector may be useless. The same holds true for outdoor environments where obstacles hanging in the sky are very rare, if the aircraft does not need to fly in tunnels or under bridges. Note that this way of directly connecting translational optic flow to controls using (weighted) lines is reminiscent of the structure of the visuomotor pathway of insects (Chap. 4, Sect. 3). Tangential cells basically combine local optic flow signals and connect to descending cells, which modulate the wing kinematics in order to steer.

Fig. 6.7 A 3D Braitenberg-like vehicle using optic flow to control its flight and to steer away from obstacles. The underlying vehicle is a translating aircraft with the only possibility to roll and pitch to alter its trajectory

Fig. 6.8 The 7 × 6-m test room features eight projectors on the ceiling, each projecting on half of the opposite wall. This system permits an easy modification of the textures on the walls. The ground is covered by a randomly textured carpet in order to provide enough detectable contrasts even for a low-performance vision system
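The linear DOFD-to-rate mapping described above can be sketched in a few lines. The gains, saturation limit and sign conventions below are hypothetical choices for illustration, not values used on any of the platforms discussed.

```python
# Reactive, Braitenberg-like mapping from four derotated optic flow
# detectors (DOFDs) to roll and pitch rate commands. Each DOFD output is
# treated as a proximity signal (larger value = closer obstacle).
# Gains and limits are hypothetical.

def steer(of_left, of_right, of_up, of_down, k_roll=1.0, k_pitch=1.0, limit=1.0):
    """Return (roll_rate_cmd, pitch_rate_cmd).

    With this (arbitrary) sign convention, an obstacle on the left
    (of_left large) commands a positive roll rate, i.e. a roll away to
    the right; an obstacle below (of_down large) commands a pitch-up.
    Far objects contribute little; close obstacles dominate.
    """
    def clamp(x):
        return max(-limit, min(limit, x))

    roll_rate_cmd = clamp(k_roll * (of_left - of_right))
    pitch_rate_cmd = clamp(k_pitch * (of_down - of_up))
    return roll_rate_cmd, pitch_rate_cmd

# Obstacle sensed on the left and below: roll right and pitch up.
roll, pitch = steer(of_left=0.8, of_right=0.2, of_up=0.1, of_down=0.5)
```

When nothing is sensed all four signals are small, both commands are near zero and the aircraft keeps flying straight, which is exactly the default behaviour called for above.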
6.5 Application to a 10-g Indoor Microflyer As a first step towards the realisation of minimalist, but completely autonomous free-flying systems, we looked at indoor flight. More precisely, we decided to test whether the previously described control strategy could work in the office-sized room shown in Fig. 6.8. Flying indoors requires slow motion, high manoeuvrability and small size, which calls for ultra-light overall weight, resulting in tremendous constraints in terms of sensors and computational power. In addition, flying indoors means that the aircraft needs to constantly steer in order to avoid hitting the ground or the walls.
6.5.1 Platform The prototype we developed [68], called MC2 (Fig. 6.9), consists mainly of carbon-fibre rods and thin Mylar plastic films. The wing and the battery are connected to the frame by small magnets such that they can easily be taken apart. The propulsion is ensured by a 4-mm brushed DC motor, which transmits its torque to a lightweight carbon-fibre propeller via a
6 Optic Flow to Steer and Avoid Collisions in 3D
1:12 gearbox. The rudder and elevator are actuated by two magnet-in-a-coil actuators. Note that no ailerons are present on the main wing, but the action of the rudder has a direct impact on the roll rate, as on most high-winged aircraft. The propeller has been placed unusually close to the wing leading edge in order to free the frontal field of view for the vision system. The total weight of the MC2 reaches 10.3 g. The airplane is capable of flying in reasonably small spaces (about 3 × 3 m) at low velocity (around 1.5 m/s). The average power consumption of the entire system is on the order of 1 W and the on-board 65 mAh lithium-polymer battery provides an endurance of about 10 min. Regarding the sensor suite, the same sensory modalities as in flies are implemented on the MC2. Due to the limited available payload, the vision system is made of two wide-FOV linear cameras of 102 pixels each (as shown in the bottom part of Fig. 6.9). Only three segments of 20 pixels out of these two cameras are selected for optic flow extraction in three specific directions: left, right and down. Figure 6.10 shows the regions covered by the two cameras and the zones in them where optic flow is extracted. In addition, two MEMS gyros are used to sense pitching and yawing rates in order to derotate the optic flow signals (further details about how this is achieved can be found in [67]). Note that no roll gyro is present on the plane
Fig. 6.9 The 10-g MC2 microflyer. The on-board actuators and electronics consist of (a) a 4-mm geared motor with a lightweight carbon-fibre propeller, (b) two magnet-in-a-coil actuators controlling the rudder and the elevator, (c) a microcontroller board with a Bluetooth wireless communication module and a ventral camera with its pitch rate gyro, (d) a front camera with its yaw rate gyro, (e) an anemometer and (f) a 65 mAh lithium-polymer battery
Fig. 6.10 An azimuth-elevation graph displaying the typical optic flow field experienced during straight motion. The zones covered by the cameras mounted on the MC2 are represented by the two thick rectangles. By carefully defining the regions where the optic flow algorithm is applied (grey zones within the thick rectangles), three radial optic flow detectors (OFD) can be implemented at an equal eccentricity of 45° with respect to the flight direction. Note that this angle is not only chosen because it fits the available cameras but also because the maximum OF values occur at α = 45° when approaching a fronto-parallel surface. These OFDs are prefixed with L, B and R for left, bottom and right, respectively. A fourth OFD could have been located in the top region (also at 45° eccentricity), but since the airplane never flies inverted and gravity attracts it towards the ground, there is no need for sensing obstacles in this region
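The claim that the maximum optic flow occurs at 45° eccentricity when approaching a fronto-parallel surface can be checked numerically. Assuming the standard translational optic flow relation OF = (‖T‖/D) sin α (the chapter's Eq. (6.3) involves ‖T‖ in this way, but the exact equation is not reproduced in this excerpt), and noting that the distance along a gaze ray at eccentricity α to a wall at perpendicular distance d is D = d/cos α, the perceived flow is proportional to sin α cos α, which peaks at α = 45°:

```python
import math

def translational_of(speed, d_wall, alpha_deg):
    """Optic flow magnitude seen at eccentricity alpha_deg from the flight
    direction while approaching a fronto-parallel wall at perpendicular
    distance d_wall, assuming OF = (|T|/D) * sin(alpha), where
    D = d_wall / cos(alpha) is the distance along the gaze ray."""
    a = math.radians(alpha_deg)
    d_ray = d_wall / math.cos(a)            # gaze-ray distance to the wall
    return (speed / d_ray) * math.sin(a)    # = (speed/d_wall)*sin(a)*cos(a)

# Sweep eccentricities for an MC2-like approach (1.5 m/s, wall 3 m ahead):
flows = {a: translational_of(1.5, 3.0, a) for a in range(1, 90)}
best = max(flows, key=flows.get)            # eccentricity with the peak flow
```

Since sin α cos α = sin(2α)/2, the peak falls exactly at 45°, which is why the OFDs of Fig. 6.10 are placed at that eccentricity.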
because rolling motion cannot be detected by radially oriented lines of pixels. Of course, rotations about the roll axis can produce spurious radial optic flow signals, but these can be treated as noise. Finally, a small custom-made anemometer provides airflow sensing and thus a rough estimate of the airspeed. These sensors are connected to the on-board 8-bit microcontroller, which processes image sequences to extract optic flow in the three viewing directions using an image interpolation algorithm [51, 67]. These optic flow signals are then derotated by subtracting the corresponding rate gyro signal [66]. In effect, these processing steps implement three derotated optic flow detectors (DOFDs).
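The derotation step itself can be sketched in a few lines. The signal values below are illustrative only; the real implementation runs on the 8-bit microcontroller and operates on raw image-interpolation outputs:

```python
def derotate(of_raw, gyro_rate):
    """Derotated optic flow detector (DOFD): subtract the rotation-induced
    flow component, as measured by the corresponding rate gyro, from the raw
    optic flow signal. What remains is (ideally) the translational flow,
    which is inversely proportional to obstacle distance and can therefore
    serve as a proximity signal."""
    return of_raw - gyro_rate

# A rotation produces optic flow but carries no proximity information:
trans_flow = 20.0   # deg/s, flow due to forward translation near an obstacle
rot_flow = 50.0     # deg/s, additional flow induced by a yaw rotation
gyro = 50.0         # deg/s, yaw rate measured by the MEMS gyro

dofd = derotate(trans_flow + rot_flow, gyro)   # recovers the translational part
```

This is why the MC2 carries pitch and yaw gyros matched to its viewing directions: without the subtraction, the aircraft's own rotations would masquerade as nearby obstacles.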
6.5.2 Control Strategy
Equipped with such DOFDs, which act as proximity sensors, the implementation of the presented control strategy is straightforward. If an obstacle is detected on one side, the airplane should roll away using its rudder.6 If the proximity signal increases in the ventral part of the FOV, the airplane should pitch up using its elevator. As suggested by Braitenberg, this is achieved through direct connections between the DOFDs and the control surfaces (Fig. 6.11). In practice, some gains and thresholds should be implemented on the connections in order to be able to tune the resulting behaviour. In Fig. 6.11, these parameters are hidden in the transfer functions Ω. In order to maintain airspeed in a reasonable range (above stall and below over-speed), the anemometer signal is compared to a given set point before being used to proportionally drive the propeller motor. Note that this airspeed control process also ensures a reasonably constant ||T|| in Eq. (6.3).
Fig. 6.11 The control strategy allowing for autonomous operation of the MC2. The three OFDs are prefixed with D to indicate that they are filtered and derotated (this process is not explicitly shown in the diagram). The signals produced by the left and right DOFDs, i.e. LDOFD and RDOFD, are basically subtracted to control the rudder, whereas the signal from the bottom DOFD, i.e. BDOFD, directly drives the elevator. The anemometer is compared to a given set point to output a signal that is used to proportionally drive the thruster. The Ω ellipses indicate that a transfer function is used to tune the resulting behaviour. These are usually simple multiplicative factors or combinations of a threshold and a factor
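Under the assumptions of Fig. 6.11, one sensory-motor cycle can be sketched as follows. The gains, threshold and set point are hypothetical placeholders standing in for the Ω transfer functions, not the values used on the MC2:

```python
def control_step(ldofd, rdofd, bdofd, anemometer,
                 k_rudder=1.0, k_elevator=1.0, elev_threshold=10.0,
                 k_thrust=0.5, airspeed_setpoint=50.0):
    """One sensory-motor cycle of the Fig. 6.11 controller (sketch).

    - rudder: driven by the left/right DOFD difference, so the airplane
      rolls away from the side with the higher proximity signal;
    - elevator: driven by the bottom DOFD through a simple
      threshold-plus-gain transfer function (one possible Omega);
    - thruster: proportional to the airspeed error from the anemometer.
    """
    rudder = k_rudder * (ldofd - rdofd)
    elevator = k_elevator * max(0.0, bdofd - elev_threshold)  # pitch up near ground
    thruster = k_thrust * (airspeed_setpoint - anemometer)
    return rudder, elevator, thruster

# Wall close on the right, ground rising, airspeed slightly low:
rudder, elevator, thruster = control_step(5.0, 40.0, 60.0, 45.0)
```

With these inputs the sketch commands a turn away from the right-hand wall (negative rudder), a pitch-up (positive elevator) and a small thrust increase, mirroring the qualitative behaviour described in the text.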
6.5.3 Results After some tuning of the parameters included in the Ω transfer functions, the airplane could be launched by hand and flew completely autonomously in its test arena (Fig. 6.8).7 Figure 6.12 shows the data recorded during such a flight over a 90-s period. In the first row, the higher right DOFD signal indicates that the airplane was launched closer to a wall on its right, which produced a leftward reaction (indicated by the negative yaw gyro signal) that was maintained throughout the trial. Note that in this environment there is no good reason to modify the initial turning direction, since flying in circles close to the walls is more efficient than flying figures of eight, for instance. However, this first graph clearly shows that the controller does not simply hold a constant turning rate. The rudder deflection is continuously adapted according to the DOFD signals, which leads to a continuously varying yaw rotation
6 On lightweight airplanes, the ailerons on the wings are often replaced by a rudder (a vertical control surface located at the rear of the airplane), which produces an asymmetry in lift between the left and the right wings by introducing a momentary sideslip angle. As a result, the airplane rolls, one wing dipping and the other lifting.
7 A video of this experiment is available for download at http://lis.epfl.ch/microflyers.
Fig. 6.12 A 90-s autonomous flight with the MC2 in the test arena. The first row shows lateral OF signals together with the yaw rate gyro. The second row plots the ventral OF signal together with the pitch rate gyro. The third graph displays the evolution of the anemometer value together with the motor setting. Flight data are sampled every 50 ms, corresponding to the sensory-motor cycle duration
rate. The average turning rate of approximately 80◦ /s indicates that a full rotation around the room is accomplished every 4–5 s. Therefore, a 90-s trial corresponds to approximately 20 circumnavigations of the test arena. The second graph shows that the elevator actively reacts to the bottom DOFD signal, thus continuously affecting the pitch rate. The non-zero mean of the pitch gyro signal is due to the fact that the airplane needs to bank for turning. Therefore, the pitch rate gyro also measures a component of the overall circling behaviour. It is interesting to note that the elevator actions are due to the proximity not only of the ground but also of the walls. When the airplane detects the proximity of a wall on its right, the rudder action increases its leftward bank angle. In this case the bottom DOFD is oriented directly towards the close-by wall and no longer towards the ground. In most cases, this would result in a quick increase in the bottom DOFD signal and thus trigger a pulling action of the elevator. This reaction is highly desirable since the absence of a pulling action at steep roll angles would result in an immediate loss of altitude. The bottom graph shows that the motor power is continuously adapted according to the anemometer value. In fact, as soon as the controller steers up due to a high ventral optic flow, the airspeed quickly drops,
which needs to be counteracted by a prompt increase in power.
6.6 Conclusions and Outlook Considering the typical motion constraints of translating aircraft on the one hand, and the properties of optic flow on the other, allowed us to develop a generic 3D control strategy for autonomous flight in the presence of obstacles. We showed that a simple reactive approach à la Braitenberg can efficiently solve this problem while relying on extremely lightweight and low-power sensors, as insects do. Furthermore, neither an explicit measurement of aircraft attitude nor active gaze stabilisation is required. The proposed control strategy steers aircraft in a very natural way, by simply rolling and pitching away from any encountered object. If the only obstacle present is a flat ground, the behaviour results in straight and level flight over it. In confined or cluttered environments, the aircraft will constantly steer by rolling and pitching in order to avoid collisions. It is interesting to note that the natural behaviours described in Chap. 3, i.e. terrain following, adapting altitude in the case of tail or head wind and achieving smooth landing when decreasing the forward speed, are also byproducts of our control strategy.
Since translating aircraft cannot remain level at all times, the problem of controlling their flight cannot be decoupled into vertical altitude control on the one hand and horizontal steering on the other. The ventral part of such an aircraft is not always oriented towards the ground, and any turning action requires rolling and pitching, which is fundamentally different from steering a robot constrained to 2D motion. Of course, the proposed control strategy has some limitations. Leaving aside the limitations related to sensor performance and optic flow detection, situations can still be encountered where hesitations occur because all proximity signals reach similar levels. These situations typically arise when perpendicularly approaching a flat surface. However, a work-around exists. By monitoring the sum of all detector signals, one can tell whether an object is really close in front of the aircraft. If this sum reaches a given threshold, an emergency action should be taken. For instance, such an action could be to pitch up for a given period of time or to initiate a left/right turn by rolling and pitching. Such a safety action would very much resemble the saccadic movements that are so typical of flies [13, 61, 46]. Saccades could be used more widely within the scope of the proposed approach. For instance, a UAV can be controlled so as to fly straight most of the time and enter a saccadic turn-away (lateral or vertical, depending on the closest obstacle) only if the monitored optic flow signals reach a given threshold [67, 5]. Such a saccadic behaviour presents advantages and disadvantages. On the positive side, optic flow extraction is usually more accurate if the aircraft is not rotating all the time, and flight efficiency can be improved by flying straight instead of continuously steering in response to far objects producing small amounts of optic flow.
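The sum-of-signals work-around described above can be sketched as a small check running alongside the reactive controller. The threshold value and action labels are illustrative, not taken from the chapter:

```python
def check_emergency(dofds, sum_threshold=120.0):
    """Monitor the summed proximity signals. If the sum exceeds the
    threshold (e.g. when perpendicularly approaching a flat surface, where
    the left, right and bottom DOFDs all rise together while roughly
    cancelling each other in the steering law), trigger a saccade-like
    emergency action away from the strongest signal."""
    total = sum(dofds.values())
    if total < sum_threshold:
        return None                      # keep normal reactive control
    closest = max(dofds, key=dofds.get)  # direction of the nearest surface
    return 'turn-away-from-' + closest

# Frontal wall: all proximity signals are similar and high, so the
# differential steering law stalls but the emergency check fires.
action = check_emergency({'left': 45.0, 'right': 50.0, 'bottom': 48.0})
```

In a real system the returned action would be an open-loop manoeuvre of fixed duration, which is exactly where the drawbacks discussed next come in.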
However, the drawback of producing tight turns is that optic flow usually cannot be estimated reliably during saccades. Therefore, the aircraft would need to fly blindly during the saccade and may end up facing another obstacle immediately after the end of the manoeuvre, which would induce a second saccade, and so on. In addition, it is not trivial to decide on the angle or the duration of the saccade. Also, because of their relatively larger inertia, these turning actions may last significantly longer in UAVs than in flies. Finally, the presented control strategy is a minimalist one, essentially in terms of vision sensors (number and quality). By increasing the number of optic flow detectors, one could easily improve the robustness and the smoothness of the resulting trajectories. Current work in our lab aims at better understanding the effects of increasing the number of optic flow detectors as well as optimising their orientations all around the flight direction [6]. Another line of research concerns the integration of such a reactive collision-avoidance autopilot with higher-level control strategies such as waypoint navigation.
Acknowledgements We wish to thank Jean-Daniel Nicoud, Adam Klaptocz and André Guignard for their significant contributions to the development of the hardware and the electronics of the MC2. Many thanks also to Tim Stirling for his help in improving this manuscript. This project is funded by EPFL and by the Swiss National Science Foundation, grant number 200021-105545/1.
References
1. Barber, D., Griffiths, S., McLain, T., Beard, R.: Autonomous landing of miniature aerial vehicles. AIAA Infotech@Aerospace (2005)
2. Barron, J., Fleet, D., Beauchemin, S.: Performance of optical flow techniques. International Journal of Computer Vision 12(1), 43–77 (1994)
3. Barrows, G., Chahl, J., Srinivasan, M.: Biomimetic visual sensing and flight control. Bristol Conference on UAV Systems (2002)
4. Barrows, G., Neely, C., Miller, K.: Optic flow sensors for MAV navigation. In: T.J. Mueller (ed.) Fixed and Flapping Wing Aerodynamics for Micro Air Vehicle Applications, Progress in Astronautics and Aeronautics, vol. 195, pp. 557–574. AIAA (2001)
5. Beyeler, A., Zufferey, J., Floreano, D.: 3D vision-based navigation for indoor microflyers. IEEE International Conference on Robotics and Automation (ICRA’07) (2007)
6. Beyeler, A., Zufferey, J., Floreano, D.: Vision-based control of near-obstacle flight. Autonomous Robots. In press (2010)
7. Borst, A., Bahde, S.: Spatio-temporal integration of motion. Naturwissenschaften 75, 265–267 (1988)
8. Borst, A., Egelhaaf, M., Seung, H.S.: Two-dimensional motion perception in flies. Neural Computation 5(6), 856–868 (1993)
9. Braitenberg, V.: Vehicles – Experiments in Synthetic Psychology. The MIT Press, Cambridge, MA (1984)
10. Camus, T.: Calculating time-to-contact using real-time quantized optical flow. Tech. Rep. 5609, National Institute of Standards and Technology NISTIR (1995)
11. Chahl, J., Srinivasan, M., Zhang, H.: Landing strategies in honeybees and applications to uninhabited airborne vehicles. The International Journal of Robotics Research 23(2), 101–110 (2004)
12. Chapman, R.: The Insects: Structure and Function, 4th edn. Cambridge University Press (1998)
13. Collett, T., Land, M.: Visual control of flight behavior in the hoverfly, Syritta pipiens. Journal of Comparative Physiology 99, 1–66 (1975)
14. Coombs, D., Herman, M., Hong, T., Nashman, M.: Real-time obstacle avoidance using central flow divergence and peripheral flow. International Conference on Computer Vision, pp. 276–283 (1995)
15. Duchon, A., Warren, W.H., Kaelbling, L.: Ecological robotics. Adaptive Behavior 6, 473–507 (1998)
16. Egelhaaf, M., Kern, R., Krapp, H., Kretzberg, J., Kurtz, R., Warzechna, A.: Neural encoding of behaviourally relevant visual-motion information in the fly. Trends in Neurosciences 25(2), 96–102 (2002)
17. Fermüller, C., Aloimonos, Y.: Primates, bees, and UGVs (unmanned ground vehicles) in motion. In: M. Srinivasan, S. Venkatesh (eds.) From Living Eyes to Seeing Machines, pp. 199–225. Oxford University Press (1997)
18. Floreano, D., Mondada, F.: Automatic creation of an autonomous agent: Genetic evolution of a neural-network driven robot. From Animals to Animats 3, 421–430 (1994)
19. Franceschini, N., Pichon, J., Blanes, C.: From insect vision to robot vision. Philosophical Transactions of the Royal Society B 337, 283–294 (1992)
20. Gibson, J.: The Perception of the Visual World. Houghton Mifflin, Boston (1950)
21. Gibson, J.: The Ecological Approach to Visual Perception. Houghton Mifflin, Boston (1979)
22. Green, W., Oh, P., Barrows, G.: Flying insect inspired vision for autonomous aerial robot maneuvers in near-earth environments. Proceedings of the IEEE International Conference on Robotics and Automation, vol. 3, pp. 2347–2352 (2004)
23. Griffiths, S., Saunders, J., Curtis, A., McLain, T., Beard, R.: Obstacle and Terrain Avoidance for Miniature Aerial Vehicles. Intelligent Systems, Control and Automation: Science and Engineering, vol. 33, chap. I.7, pp. 213–244. Springer (2007)
24. Hassenstein, B., Reichardt, W.: Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus. Zeitschrift für Naturforschung 11b, 513–524 (1956)
25. Hildreth, E.: The Measurement of Visual Motion. MIT Press, Cambridge (1984)
26. Horn, B.: Robot Vision. MIT Press (1986)
27. Horridge, A.: Insects which turn and look. Endeavour 1, 7–17 (1977)
28. Hrabar, S., Sukhatme, G.S., Corke, P., Usher, K., Roberts, J.: Combined optic-flow and stereo-based navigation of urban canyons for UAV. IEEE International Conference on Intelligent Robots and Systems, pp. 3309–3316. IEEE (2005)
29. Kern, R., van Hateren, J., Egelhaaf, M.: Representation of behaviourally relevant information by blowfly motion-sensitive visual interneurons requires precise compensatory head movements. Journal of Experimental Biology 206, 1251–1260 (2006)
30. Koenderink, J., van Doorn, A.: Facts on optic flow. Biological Cybernetics 56, 247–254 (1987)
31. Krapp, H.: Neuronal matched filters for optic flow processing in flying insects. In: M. Lappe (ed.) Neuronal Processing of Optic Flow, pp. 93–120. Academic Press, San Diego (2000)
32. Krapp, H., Hengstenberg, B., Hengstenberg, R.: Dendritic structure and receptive-field organization of optic flow processing interneurons in the fly. Journal of Neurophysiology 79, 1902–1917 (1998)
33. Krapp, H., Hengstenberg, R.: Estimation of self-motion by optic flow processing in single visual interneurons. Nature 384, 463–466 (1996)
34. Land, M.: Visual acuity in insects. Annual Review of Entomology 42, 147–177 (1997)
35. Lichtensteiger, L., Eggenberger, P.: Evolving the morphology of a compound eye on a robot. Proceedings of the Third European Workshop on Advanced Mobile Robots (Eurobot ’99), pp. 127–134 (1999)
36. Mallot, H.: Computational Vision: Information Processing in Perception and Visual Behavior. The MIT Press (2000)
37. Marr, D.: Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W.H. Freeman and Company, New York (1982)
38. Mura, F., Franceschini, N.: Visual control of altitude and speed in a flying agent. From Animals to Animats III, pp. 91–99. MIT Press (1994)
39. Muratet, L., Doncieux, S., Brière, Y., Meyer, J.: A contribution to vision-based autonomous helicopter flight in urban environments. Robotics and Autonomous Systems 50(4), 195–209 (2005)
40. Nalbach, G.: The halteres of the blowfly Calliphora. I. Kinematics and dynamics. Journal of Comparative Physiology A 173(3), 293–300 (1993)
41. Nelson, R., Aloimonos, Y.: Obstacle avoidance using flow field divergence. IEEE Transactions on Pattern Analysis and Machine Intelligence 11(10), 1102–1106 (1989)
42. Neumann, T., Bülthoff, H.: Behavior-oriented vision for biomimetic flight control. Proceedings of the EPSRC/BBSRC International Workshop on Biologically Inspired Robotics, pp. 196–203 (2002)
43. Ruffier, F., Franceschini, N.: Optic flow regulation: the key to aircraft automatic guidance. Robotics and Autonomous Systems 50(4), 177–194 (2005)
44. Santos-Victor, J., Sandini, G., Curotto, F., Garibaldi, S.: Divergent stereo for robot navigation: A step forward to a robotic bee. International Journal of Computer Vision 14, 159–177 (1995)
45. Scherer, S., Singh, S., Chamberlain, L., Saripalli, S.: Flying fast and low among obstacles. Proceedings of the 2007 IEEE Conference on Robotics and Automation, pp. 2023–2029 (2007)
46. Schilstra, C., van Hateren, J.: Stabilizing gaze in flying blowflies. Nature 395, 654 (1998)
47. Schuppe, H., Hengstenberg, R.: Optical properties of the ocelli of Calliphora erythrocephala and their role in the dorsal light response. Journal of Comparative Physiology A 173, 143–149 (1993)
48. Serres, J., Ruffier, F., Viollet, S., Franceschini, N.: Toward optic flow regulation for wall-following and centring behaviours. International Journal of Advanced Robotic Systems 3(27), 147–154 (2006)
49. Sobel, E.: The locust’s use of motion parallax to measure distance. Journal of Comparative Physiology A 167, 579–588 (1990)
50. Sobey, P.: Active navigation with a monocular robot. Biological Cybernetics 71, 433–440 (1994)
51. Srinivasan, M.: An image-interpolation technique for the computation of optic flow and egomotion. Biological Cybernetics 71, 401–416 (1994)
52. Srinivasan, M., Chahl, J., Nagle, M., Zhang, S.: Embodying natural vision into machines. In: M. Srinivasan, S. Venkatesh (eds.) From Living Eyes to Seeing Machines, pp. 249–265 (1997)
53. Srinivasan, M., Chahl, J., Weber, K., Venkatesh, S., Zhang, H.: Robot navigation inspired by principles of insect vision. In: A. Zelinsky (ed.) Field and Service Robotics, pp. 12–16. Springer-Verlag (1998)
54. Srinivasan, M., Lehrer, M., Kirchner, W., Zhang, S.: Range perception through apparent image speed in freely-flying honeybees. Visual Neuroscience 6, 519–535 (1991)
55. Srinivasan, M., Zhang, S., Chahl, J., Barth, E., Venkatesh, S.: How honeybees make grazing landings on flat surfaces. Biological Cybernetics 83, 171–183 (2000)
56. Stevens, B., Lewis, F.: Aircraft Control and Simulation, 2nd edn. Wiley (2003)
57. Strausfeld, N.: Atlas of an Insect Brain. Springer (1976)
58. Tammero, L., Dickinson, M.: The influence of visual landscape on the free flight behavior of the fruit fly Drosophila melanogaster. The Journal of Experimental Biology 205, 327–343 (2002)
59. Taylor, G., Krapp, H.: Sensory systems and flight stability: What do insects measure and why. Advances in Insect Physiology 34, 231–316 (2008)
60. Thakoor, S., Morookian, J., Chahl, J., Hine, B., Zornetzer, S.: BEES: Exploring Mars with bioinspired technologies. Computer 37(9), 38–47 (2004)
61. Wagner, H.: Flight performance and visual control of flight of the free-flying housefly (Musca domestica L.). I. Organization of the flight motor. Philosophical Transactions of the Royal Society B 312, 527–551 (1986)
62. Weber, K., Venkatesh, S., Srinivasan, M.: Insect inspired behaviours for the autonomous control of mobile robots. In: M.V. Srinivasan, S. Venkatesh (eds.) From Living Eyes to Seeing Machines, pp. 226–248. Oxford University Press (1997)
63. Wehner, R.: Matched filters – neural models of the external world. Journal of Comparative Physiology A 161, 511–531 (1987)
64. Whiteside, T., Samuel, G.: Blur zone. Nature 225, 94–95 (1970)
65. Zeil, J., Boeddeker, N., Hemmi, J.: Vision and the organization of behaviour. Current Biology 18(8), 320–323 (2008)
66. Zufferey, J.C.: Bio-inspired Flying Robots: Experimental Synthesis of Autonomous Indoor Flyers. EPFL/CRC Press (2008)
67. Zufferey, J.C., Floreano, D.: Fly-inspired visual steering of an ultralight indoor aircraft. IEEE Transactions on Robotics 22, 137–146 (2006)
68. Zufferey, J.C., Klaptocz, A., Beyeler, A., Nicoud, J.D., Floreano, D.: A 10-gram vision-based flying robot. Advanced Robotics, Journal of the Robotics Society of Japan 21(14), 1671–1684 (2007)
Chapter 7
Visual Homing in Insects and Robots Jochen Zeil, Norbert Boeddeker, and Wolfgang Stürzl
Abstract Insects use memorised visual representations to find their way back to places of interest, like food sources and nests. They acquire these visual memories during systematic learning flights or walks on their first departure and update them whenever approaches to the goal have been difficult. The fact that small insects master such localisation tasks with apparent ease has attracted the attention of engineers interested in developing and testing methods for visual navigation on mobile robots. We briefly review here (1) homing in insects; (2) what is known about the content of insect visual memories; (3) recent robotics advances in view-based homing; (4) conditions for view-based homing in natural environments and (5) issues concerning the acquisition of visual representations for homing.
7.1 Homing in Insects The ability of animals to recognise places of significance and to revisit them is fundamental to life on Earth. Without this navigational skill, for instance, many flowering plants would not be pollinated by insects and animals, in general, would be unable to provide for their offspring. There is ample evidence showing that animals including insects use memorised
J. Zeil () ARC Centre of Excellence in Vision Science and Centre for Visual Sciences, Research School of Biology, The Australian National University, Biology Place, Canberra, ACT 2601, Australia e-mail:
[email protected]
visual representations to pinpoint a goal location. The goal can be the nest location as in bees, wasps and ants (e.g. [2, 63, 68, 72], reviewed in [15]), the location of food as in bees (e.g. [9]) or hovering stations in flies and bees [16, 38, 39]. Visual spatial memories are crucial for local navigation but can also guide the animal during long-range navigation: routes can be formed from sequences of multiple stored views [14, 36]. Moving between these views, insects make use of compass and odometric information (see Chaps. 2 and 9). We will, in the following, focus on local homing methods that allow animals and robots to pinpoint a goal. It has been clear since Tinbergen’s seminal experiments [63] that distinct objects in the vicinity of a goal location can act as landmarks and guide an insect’s return path. What constitutes a landmark under natural conditions, however, is still an open question. From a functional point of view, the following properties of objects are likely to make them useful as landmarks (see, e.g. [18, 26]): salience – landmarks should be unique and easy to distinguish from other parts of the scene; permanence or reliability – landmarks and their position should be constant over time; relevance – a landmark should help to recognise important places or decision points. However, when a homing agent has to acquire a visual representation of a place it wishes to return to, it has – with the exception of salience – no obvious access to all these crucial, task-related properties of objects in the environment. How, for instance, is an insect or a robot to decide whether a particularly salient object it sees is permanent, reliable and relevant enough for the subsequent task of pinpointing that particular location? For the purpose of this chapter, we thus identify a number of open questions that should
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_7, © Springer-Verlag Berlin Heidelberg 2009
be of interest to both biologists and robotics engineers: What are the contents of the visual memories used by homing insects? What are the conditions for homing under natural conditions? What are the rules for acquiring visual representations for homing? And how can a homing agent acquire information on the salience, the permanence, the reliability and the relevance of visual features that identify a goal location?
7.2 Probing the Content of Insect Location Memories The basic design of homing experiments is to allow insects to become accustomed to seeing distinct objects close to a place of interest, be it a nest or a feeding site, and then to displace or modify these objects with the aim of observing where the insect searches for the goal. This kind of approach allows the experimenter to identify the visual cues that are relevant for homing. One such example is the experiment on a homing wasp shown in Fig. 7.1. The wasp had become accustomed to finding its nest hole in the ground a few centimetres east of a small cylindrical landmark. It had performed learning and updating flights on several departures from the
Fig. 7.1 Landmark orientation in ground-nesting wasps (Cerceris; see bottom centre photograph). A wasp has learnt to associate her nest location (at the intersection of white lines) with a 5 cm diameter cylindrical landmark (white circle) during learning flights on departure (top left panel). On her return, the nest was hidden and the landmark was displaced in different directions away from the nest. The diagrams show 30 s search density distributions of the wasp for different landmark displacements. Photograph shows a wasp carrying prey to her nest (courtesy of Waltraud Pix)
J. Zeil et al.
nest throughout the day (Fig. 7.1, top left panel). The search distributions in Fig. 7.1 show where the returning wasp searched for her hidden nest entrance relative to the landmark that was displaced in different directions and at one period was even removed (centre panel). This experiment clearly indicates that the goal is predominantly defined by an object that acts as a landmark, not by olfactory or other geocentric cues: the returning wasp searches at the right distance and direction from the landmark where the goal would be found, had the landmark not been displaced. Similar results have been found in experiments with bees and ants (reviewed in [18]). Interestingly, when the landmark is removed, the insect still searches in the general area, indicating that more than just the individual landmark is being remembered (Fig. 7.1, search distribution at the centre of the graph). When the size of a familiar landmark is changed during a test, some insects search for the goal further away from a larger and closer to a smaller landmark (see Fig. 7.2 and [68, 8]). The insects thus appear to judge how far away the goal is from a landmark by the landmark’s memorised apparent size. Honeybees, ground-nesting bees and wasps are also able to acquire information on the absolute distance to landmarks, independent of their apparent size (see Fig. 7.3 and [73, 45, 4]). This suggests that these insects are guided by motion parallax as a cue to distance [67, 57] (see also Chap. 4). The selection of cues appears to depend on learning history: after their first learning flights in the morning, the search of ground-nesting wasps is predominantly driven by distance rather than angular size cues [73]. Lehrer and Collett [45], who analysed the development of search and return flights over time in honeybees, found that absolute distance dominated the bees’ behaviour in the initial phase of learning while apparent size did so later on. Homing insects thus memorise both pictorial, purely image-based information about the goal environment and derived aspects, like image motion cues that provide information on the distance of landmarks. Honeybees at least have been shown to remember several visual attributes of a scene for identifying targets: they can detect food sources that are identified visually by contour orientation (e.g. [30]), by colour (e.g. [37]), by shape (e.g. [33]), by complex image properties (e.g. [22]) and by relative image motion (e.g. [58]). How does insect visual pattern memory relate to their ability to pinpoint a goal location, which itself may be very inconspicuous? We will return to the
Fig. 7.2 Ants remember the apparent size of landmarks. Desert ants searching for their nest halfway between two cylindrical landmarks. Search paths on the left, search density histograms on the right (from [48]; data replotted from [68]). Top row: original landmarks at training distance from the nest. Middle row: original landmarks placed at double the original distance apart. Bottom row: landmarks of double the original size placed at double the original distance apart. Modified after [48]
question what identifies a location in the natural world later, but would like to make the point here that for homing insects the salience, reliability and the relevance of landmarks is determined by the patterns of visual stimulation they experience both during learning and during their return to the goal, and by the efficiency with which they can relocate the goal. For instance, in some situations, given the choice between flat patterns on the ground and three-dimensional objects, insects appear to pay less attention to flat patterns on the ground [64]. However, when these patterns are large (salient), compared to the apparent size of a three-dimensional object, they can dominate search (see Fig. 7.4). Equally, many experiments have shown that homing insects pay particular attention to landmarks that are close to the goal (e.g. [11, 73]) and that are therefore particularly relevant for the purpose of pinpointing its location. And finally, the reliability of visual features associated with the task of locating a goal will both depend on their salience (e.g. size, contrast, colour) – which may change depending on the direction of illumination, and the intensity and the spectral composition of both illumination and background – and their constancy in location, in their reflectance properties and their shape. Honeybees, for instance, learn the colour of landmarks [12], a property
J. Zeil et al.
Fig. 7.3 Ground-nesting bees search for their nest at the absolute distance from a landmark, independent of the landmark's apparent size. Bees learnt to see one of a number of differently sized cylindrical landmarks placed in such a way that they all had the same angular size as seen from the nest entrance (bottom left) and were tested with landmarks of different sizes on their return to the nest. The three columns show search histograms of bees searching for their nest entrance in the presence of differently sized landmarks. Training landmark size is shown in black and training distance is marked by a dotted line. Modified after [4]
that allows them to remain recognisable even under changing light conditions, because bees possess colour constancy [70]. Homing insects also monitor the reliability of what they have learnt: depending on the level of difficulty they have in relocating a goal, they update their visual representation by repeating their learning flights (e.g. [35, 72, 69]). Even while performing their learning flights, which we will discuss in detail later,
the insects have the opportunity to test the reliability of their visual representation [76, 74]. In many homing experiments, insects have been shown to associate individual landmark objects with the goal, and their return paths or search paths are clearly determined by the location of such individual landmarks. The fact that homing insects search for the goal not only at the right distance from a landmark
Fig. 7.4 How ground-nesting wasps (Cerceris) search for their nest entrance in the presence of dissociated visual features. On departure, the wasp was used to seeing a large flat pattern around her nest entrance and a cylindrical landmark to the south of the nest entrance (image on top centre). The returning wasp was confronted with the pattern and the landmark displaced to the east and to the west of the hidden nest entrance (e.g. top left image), or with the displaced landmark on its own (image on top right). Diagrams: Left column: search distributions of the wasp with the landmark east of the nest and the pattern west of the nest. Middle column: pattern east of the nest and landmark west of the nest. Right column: landmark east of the nest and pattern removed
but also at the right compass bearing, however, indicates that they remember more than just the appearance of that object: visual representations contain, or are associated with, compass information. Honeybees, for instance, become unable to locate a goal when the landmark constellation that helps them identify the goal is rotated by more than 30° [9]. There are many compass cues that may be involved, including celestial cues (e.g. [31]), magnetic cues (e.g. [25]) and the full visual panorama (e.g. [75, 62]). Evidence that landmarks are remembered in their wider visual context comes from experiments in which goal- or route-defining landmarks were removed
during a test phase. Ground-nesting wasps then continue to search for their nest in the general area (see Fig. 7.1), and ants that used to walk along a curved path past a landmark towards a feeder in a room continue to follow the original curved path when the landmark is removed or displaced (see Fig. 7.5 and [28]). In their natural habitat, the wasps thus had information on the nest location that was independent of the dominant landmark, and the ants must have remembered the view transformations along their usual path and were now guided by landmark-independent cues in the room.
Fig. 7.5 Landmark guidance and visual context. Graham et al. [28] trained ants to a feeder (F) past a landmark (black circle) in an indoor arena. The ants tended to walk first towards that landmark and then towards the feeder. When the landmark is removed (open circle in centre panel) or displaced (panel on right), the ants still walk along the original curved path that had led them past the landmark in its original location. This indicates that the ants had learned the view transformations along the path with the aid of distant cues in the room. Modified after [28]
7.3 Modelling Homing: Computer Simulations and Robotics Experiments
Computer simulations and, in particular, mobile robots that can move in real environments allow models of animal navigation strategies to be tested in closed loop. Interestingly, most visual homing algorithms implemented on mobile robots make use of a panoramic imaging system mimicking the large field of view of insect eyes. Homing algorithms can be classified into methods that establish correspondences between image features and global methods that use the similarity between images. Correspondence-based algorithms extract features from the raw images and compute motion commands directly from paired features.¹ They differ in the type and number of features extracted from the images, in the way in which correspondences between features are established (e.g. nearest-neighbour or feature similarity measures) and in whether both translations and rotations are determined. Examples of correspondence-based algorithms are the "snapshot model" (see Fig. 7.6 and [9]) and the optical flow-based homing methods of Vardy and Möller [65]. Goedemé et al. [27] use a feature-based visual homing algorithm for topological navigation.

The average landmark vector (ALV) model [42, 47] uses a very parsimonious spatial representation: a single vector computed from the viewing directions to surrounding landmarks. It is special insofar as it extracts features (vertical edges) from the visual input without establishing correspondences between individual features; the features belong collectively to all landmarks that identify the goal. Each step on the way home is calculated from the difference of two ALVs: the ALV at the current position and the ALV at the goal position. The ALV algorithm, however, is extremely sensitive to slight differences in orientation, because the reference frames of both ALVs must have the same orientation.

The second class of algorithms does not solve the correspondence problem explicitly. These approaches usually estimate the movement parameters by minimising the difference between (possibly processed) images,²
d(P[Ih], P[It]),

¹ If the coordinates of at least three corresponding features in two images are known, the rotation and direction of translation between the two camera positions can be estimated (see [29, 32]). Without knowing the distance to or between features, only the direction of translation and not its absolute size can be determined [5].

² We avoid the term "image distance" because it can be easily confused with metric three-dimensional distance, and use "image difference" instead.
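As a concrete illustration, the ALV scheme described above can be simulated in a few lines. The landmark layout, the goal position and the fixed step rule below are illustrative assumptions; a real implementation would extract landmark bearings from segmented visual features and would need an external compass to keep both ALVs in the same reference frame:

```python
import math

def alv(pos, landmarks):
    """Average landmark vector: the mean of the unit vectors pointing
    from `pos` to each landmark, expressed in a world-aligned
    (compass) frame, as the model requires."""
    n = len(landmarks)
    vx = vy = 0.0
    for lx, ly in landmarks:
        d = math.hypot(lx - pos[0], ly - pos[1])
        vx += (lx - pos[0]) / d
        vy += (ly - pos[1]) / d
    return vx / n, vy / n

# Hypothetical landmark layout and goal position.
landmarks = [(0.0, 1.0), (0.0, -1.0), (2.0, 2.0)]
goal = (1.0, 0.0)
goal_alv = alv(goal, landmarks)          # single vector stored at the goal

# Each homing step is the difference of two ALVs:
# the current ALV minus the ALV remembered at the goal.
pos = (4.0, 3.0)
for _ in range(300):
    cx, cy = alv(pos, landmarks)
    pos = (pos[0] + (cx - goal_alv[0]), pos[1] + (cy - goal_alv[1]))
print("final position:", pos)
```

Because the ALV is the (negative) gradient of the mean distance to the landmarks, the difference field has a zero at the goal and the agent settles there; the parsimony is striking, since only two numbers are stored at the goal.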
Fig. 7.6 The snapshot model of homing [9]. At the goal location (2) a snapshot template is recorded. The mismatch between the remembered snapshot and the current retinal image at a location some distance away from the goal (1) can be used to generate goal-seeking movement instructions by finding the closest matching contours and generating for each a rotation that would align them. Averaging over the rotations needed to align contours and over the translations needed to minimise the mismatch in the apparent size of landmarks generates a command for the direction in which to move next. This procedure brings the agent back to the goal location. Modified after [10]
where d( · ) is the metric (e.g. the sum of squared differences, SSD), P[ · ] the image processing operations (e.g. low-pass filtering, corner detection or Fourier transformation), Ih the memorised image at the goal position³ and It the current image (at time t). The basic idea is that the physical distance to the goal is reduced by moving in the direction that increases the similarity of the images. While Zeil et al. [75] use actual movements in space to estimate the gradient of image differences, Möller and Vardy [51] employ – similar to Labrosse [41] – matched filters derived from two translational optic flow fields to approximate the gradient,⁴ thus avoiding the need for test steps. The image warping method of Franz et al. [24] computes image changes for several hypothetical translations and rotations based on assumptions about the distance distribution
of objects in the scene in order to find the direction that reduces image differences. Although this approach uses only one-dimensional pixel arrays along the horizon, its homing performance is quite remarkable [65]. Stürzl and Mallot [61] describe a fast and memory-efficient implementation of this warping method based on the Fourier transformation of panoramic images. Recently, Möller [50] extended the warping algorithm to two-dimensional images, improving its homing performance further.

All approaches based on image differences described so far operate on raw images, on low-pass filtered images or on their Fourier-transformed equivalents. These homing methods are thus susceptible to changes in illumination, which is of special significance in natural environments (see below and [60, 75, 62]). Image differences that develop due to changes in illumination can be minimised by lateral inhibition and local contrast normalisation [62]. A further possibility for reducing the effect of illumination changes may be to use, in addition, information on the depth structure of a scene, which could be estimated by motion parallax or stereo computation. For correspondence-based approaches, the use of features with high illumination tolerance may offer a solution to homing in time-varying scenes. Möller [49] suggested that a colour-opponent representation can make visual representations immune to changes in illumination by emphasising the contrast between terrestrial objects and the sky. The problem here is that such a procedure is likely to emphasise large, distant objects that do not allow precise localisation.

Some recent vision-based techniques aim at integrating local visual representations and homing methods into a large-scale navigation scheme. For instance, the purely vision-based scheme of Franz et al. [23] connects local visual representations (panoramic one-dimensional snapshots) in a graph model of the environment without metric information. The agent is able to move between the locations at which individual snapshots were acquired (snapshot positions) by means of a local visual homing method. Similar vision-based topological maps are used in several robotics experiments and implementations (e.g. [27, 3]). Metric information about distances and angles between snapshot positions can be obtained, for instance, by path integration and is easily integrated into a graph-like representation (e.g. [34]). Davison et al. [20, 21] use an extended Kalman filter (EKF) for the real-time estimation of camera position, orientation and velocity, as well as of the three-dimensional positions of features in a Cartesian reference frame, from the video stream of a single moving camera. Sparse vision-based three-dimensional maps of this type are now used in a number of robotics applications (e.g. [55, 56, 5]), but also as a new approach to understanding the "knowledge base" of homing insects. Recently, Baddeley and Philippides [1] applied the EKF approach to simulated learning flights of bees. They showed that learning flights (see below) actually improve a simulated agent's ability to localise itself with the help of a landmark.

³ Instead of the raw image Ih, the processed image P[Ih] can be stored.

⁴ Assuming equal distances to surrounding objects and movements that are small compared to object distances, first-order approximations of the pixel shifts can be estimated for movements in the x or y direction.
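To make the image-difference scheme concrete, the following sketch stores a snapshot at the goal and then repeatedly takes small test steps, moving to whichever neighbouring position reduces the sum of squared pixel differences to the snapshot, in the spirit of the gradient estimation by test movements described above. The toy panoramic renderer, the landmark layout and all parameter values are illustrative assumptions, not any published implementation:

```python
import math

def panorama(pos, landmarks, n_pixels=60):
    """Toy one-dimensional panoramic 'image': each pixel's brightness
    falls off with the angular distance between its viewing direction
    and the bearing of each landmark (a stand-in for a real camera)."""
    img = []
    for p in range(n_pixels):
        phi = 2.0 * math.pi * p / n_pixels
        b = 0.0
        for lx, ly in landmarks:
            bearing = math.atan2(ly - pos[1], lx - pos[0])
            # Angular difference wrapped to (-pi, pi].
            delta = math.atan2(math.sin(phi - bearing), math.cos(phi - bearing))
            b += math.exp(-(delta / 0.3) ** 2)
        img.append(b)
    return img

def image_difference(a, b):
    """d(.,.): sum of squared pixel differences (SSD)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

landmarks = [(0.0, 2.0), (3.0, -1.0), (-2.0, -2.0)]
goal = (0.0, 0.0)
snapshot = panorama(goal, landmarks)     # memorised at the goal

pos, step = (0.8, 0.5), 0.05
moves = [(0.0, 0.0), (step, 0), (-step, 0), (0, step), (0, -step),
         (step, step), (step, -step), (-step, step), (-step, -step)]
for _ in range(150):
    # Test steps: move to whichever neighbouring position (or stay put)
    # most reduces the difference to the stored snapshot.
    pos = min(((pos[0] + dx, pos[1] + dy) for dx, dy in moves),
              key=lambda c: image_difference(panorama(c, landmarks), snapshot))
print("final position:", pos)
```

In this toy world the image depends only on landmark bearings, so the difference is zero exactly at the goal and the agent descends into that minimum, settling within roughly a step length of it.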
7.4 Homing in Natural Environments

As we have seen, insects use salient objects during homing. In most cases in which this has been documented, the landmarks were artificial, high-contrast objects. In the natural habitat of an insect, however, it is often not clear which objects in the environment should be selected as guideposts for homing. It is interesting
therefore to ask how well locations in the natural world are identified by the panoramic views taken from them, without segmenting the scene into distinct landmark objects. The question can be answered by analysing how the mean pixel difference between images develops with distance from a reference location. Surprisingly, the resulting "image difference functions" (IDFs) are cusp-shaped and smooth, without pronounced local minima [75, 62]. In outdoor scenes, panoramic snapshots thus have useful catchment areas for a homing agent that is sensitive to image differences. Once the edge of such a catchment is reached, simple gradient descent methods can be employed to reach the minimum in image differences, which coincides with the location at which the reference image was taken (e.g. [75]). The range over which a given snapshot can be used, or, equivalently, the width of the IDF at any one location, depends on the way in which objects are distributed in depth: in the presence of nearby objects, IDFs are narrow and steep, while in open habitats they are wide and shallow. Importantly, since purely rotational IDFs are usually narrow and steep, they can be used to recover the orientation of the reference image, even without additional compass information [75]. This continues to be true at some distance from the reference location, where the minimum of the rotational IDF takes on the value of the translational IDF at that location [75]. The implication is that a homing agent should, at any point along its return path, minimise first the rotational and then the translational IDF, as suggested by Cartwright and Collett [10].

As we have pointed out before, the visual appearance of natural scenes can be very dynamic. Over time, the image difference of a reference view at the same location (the temporal IDF) slowly increases due to changes in the direction and the spectral composition of illumination as the Sun moves across the sky (Fig. 7.7, see also [75]).
Superimposed on this slow increase of the temporal IDF are rapid fluctuations that are due to the movement of clouds, which changes the intensity and the spectral composition of illumination. The variability in the presence and location of shadow contours associated with these illumination changes in natural environments is likely to cause serious problems for view-based homing under real-life conditions. However, as discussed above, visual representations can be made quite robust against these changes in illumination, most simply by pre-processing the images (see [62]). A second
Fig. 7.7 Under natural conditions, the view of a scene changes due to the movement of the Sun and the movement of clouds. Panoramic images were recorded for 1 h close to the ground in a ground-nesting wasp colony by pointing a video camera down onto a reflective cone (see inset example pictures and [75] for details of the recording technique). Root mean squared pixel differences over the whole image were calculated every 10 s relative to a reference image at the beginning of the sequence, separately for the red (black line), green (dark grey line) and blue (light grey line) channels, for 65 min from 1200 to 1305 hours
source of noise in visual scene representations still awaits a detailed quantitative analysis in the context of homing: the effect of environmental motion – the wind-driven movement of plants – which changes the appearance of scenes on shorter timescales than changes of illumination do (e.g. [71, 54]).

Although the "information content of panoramic images" with respect to the homing task is now fairly well understood, both in natural and in experimental spaces (e.g. [59]), it is still unclear whether homing insects actually memorise and make use of the full visual panorama. In Sect. 7.2 we provided two examples of experimental evidence suggesting that the wider visual scene is remembered by homing insects, but experiments specifically addressing this question are still lacking. Recent evidence at least demonstrates that insects are able to recognise and discriminate between visual patterns that resemble natural scenes in all spatial aspects except the distribution of objects in depth [22]. It is also not clear at present to what extent homing insects are affected by limited storage and processing capacity for visual patterns. In the fruitfly Drosophila, at least two pattern properties, elevation and contour orientation, are stored in two distinct brain regions when pattern memory is required [46]. However, we do not know whether such storage areas are modified, both qualitatively and quantitatively, in homing insects.
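The use of the rotational IDF to recover the orientation of the reference image can be sketched as follows. The panoramic image is reduced to a one-dimensional ring of toy brightness values (an arbitrary deterministic pattern), and the agent's current view is the reference view rotated by seven pixels:

```python
def rotational_idf(current, reference):
    """RMS pixel difference between the current panoramic image and
    every circular shift of the reference image."""
    n = len(reference)
    out = []
    for s in range(n):
        shifted = reference[s:] + reference[:s]
        out.append((sum((c - r) ** 2 for c, r in zip(current, shifted)) / n) ** 0.5)
    return out

# Toy 36-pixel panorama with an arbitrary but aperiodic pattern,
# and the same panorama "seen" after the agent rotated by 7 pixels.
reference = [((p * 37) % 11) / 10.0 for p in range(36)]
current = reference[7:] + reference[:7]

idf = rotational_idf(current, reference)
best = min(range(len(idf)), key=idf.__getitem__)
print("recovered rotation (pixels):", best)
```

Because the rotational IDF of such a view is narrow and steep, the index of its minimum recovers the rotation directly; at some distance from the reference location the minimum would no longer reach zero but instead take on the value of the translational IDF, as described above.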
7.5 Acquisition and Use of Visual Representations

Few theoretical studies (e.g. [19, 1]) and robotics experiments (e.g. [44]) have considered systematic ways in which an agent could actively acquire and test robust visual representations of a goal environment. Yet many homing insects appear to do exactly that: upon leaving a significant location (such as a food source or a nest) for the first time, they move in highly stereotyped ways that have come to be called orientation or learning flights and walks ([35, 66, 43, 72, 73, 17, 45, 13, 52], reviewed in [76]). These learning routines are crucial for subsequent successful homing, as every beekeeper knows. When bee hives are shifted, various precautions have to be taken to force experienced foragers to perform an orientation flight at the new site (e.g. [2, 7]); otherwise the bees are lost, or return to the old hive location if the new site lies within their familiar foraging range. Equally, ground-nesting wasps need to be allowed to perform a learning flight on departure in order to learn about changes in their nest environment (e.g. [63]).

The behaviour of insects during these learning routines is surprisingly similar across species: the insects turn to face the goal and then back away from it in ever-increasing arcs (Fig. 7.8, [72, 17, 45, 13],
Fig. 7.8 The organisation of learning flights. Left: Learning flight path of a ground-nesting wasp (Cerceris) as seen from above. The nest entrance is marked by a circle. Right: Time course of gaze direction (thick black line), retinal position of the nest entrance (dotted line) and angular bearing relative to the nest entrance (grey line) for the same flight. Data were recorded at 250 fps with a Redlake high-speed digital camera
reviewed in [76]). After this initial sequence, honeybees at least circle the area at some height and then fly off on an initial orientation flight lasting a few minutes (e.g. [2, 6, 7]).

The fact that the insects visually fixate the goal can be used to show that they specifically acquire visual information on the location of the goal during these flights (see also [53]). In ground-nesting wasps, a high-contrast collar can be placed around the nest entrance with a hole in the centre through which the insects can emerge. Once a wasp has started to perform her learning flight, the collar can be carefully shifted a small distance to the side of the nest entrance, while the insect continues to fly along arcs centred on the collar. The search distributions of returning wasps that had been shifted to the left or to the right of the nest entrance during their learning flights are shifted accordingly with respect to the true nest location and the surrounding landmarks (Fig. 7.9 and [72]).

The relationship between the organisation of learning flights and the control of flight when insects return to the goal is thus of great interest if we are to understand the type of visual representations and algorithms insects employ during homing. The first point to note is that the insects actually have to move during learning, although – as we have seen in Sect. 7.4 – a single panoramic snapshot of the goal environment is in principle sufficient to uniquely identify the location of the goal. The specific mode and pattern of movement suggests at least five reasons for the need to move:
(1) Insects like wasps and bees, with their close-set eyes, have to move in order to generate the only distance information available to them, namely motion
Fig. 7.9 Learning flights serve to acquire information about location. A ground-nesting wasp was drawn to the east (inset top left) or to the west (inset top right) of the nest entrance during her learning flight by shifting a small moveable collar placed around the nest entrance. Depending on the direction of the shift during learning, the wasp subsequently searched slightly to the east (histogram with filled circles) or to the west (histogram with open circles) of the hidden nest entrance on returning to the nest area. Black circles in the insets mark the positions of four landmark cylinders. Modified after [72]
parallax information. One reason for learning flights may thus be the need for foreground–background separation, to identify close landmarks, and for landmark distance information that is independent of apparent size. Close landmarks are particularly relevant for precise localisation (e.g. [75, 62]), and recording the distance of landmarks through motion parallax may help to make visual representations more robust against changes in illumination and the presence of shadows (e.g. [73]). The insects move along arcs at a constant pivoting velocity (e.g. [76, 13]), but do so in a saccadic manner, flying along straight flight segments during which gaze direction is kept constant. At the end of such a segment, the insects change gaze direction with a rapid head saccade that is followed by a change in flight direction for the next segment of purely translational movement [74]. During their learning flights, wasps and bees thus produce a sequence of translational optic flow fields while moving along these arcs, perpendicular to the line of sight to the goal. The visual consequences are equivalent to those of the peering movements of some insects, which have been shown to provide them with distance information (reviewed in [40]).

(2) The insects may have to move in order to perform a "quality check" on what they have already learnt. It would seem that, in the process of leaving a place of significance – especially one in which a lot of resources have been invested, like a nest – it is crucial for an animal to be certain that it has all the navigational information necessary for a speedy and safe return.
Some aspects of learning flights indicate that this need to check the robustness of the visual representation of the goal environment may be an important aspect of their organisation: learning flights have a repetitive structure, with insects moving along a series of arcs during which certain orientations and vantage points are systematically re-visited, a prerequisite for checking the validity of what has been seen and learnt before. This regularity may also provide a clue as to how insects control these flights: the directions in which the insects face just before they begin a new arc are in some species perfectly aligned [17], and in others are clearly related to the direction in which the insect faced at the end of the previous arc on the same side (Fig. 7.8). It is possible, therefore, that the choreography of learning flights reflects a continuous process of image
comparison, with new arcs being initiated whenever a previously memorised view is encountered again.

(3) Yet another reason why insects have to move during learning, and why they may need to continuously compare what they see with what they have seen before, is to organise acquisition according to how fast the scene changes. For instance, a sensible way of learning a visual representation for homing may be to store a view and then move away from that location while continuously monitoring the increasing image differences that develop with distance from that reference location. The next image would be stored when these differences reach a certain threshold value (e.g. [10]). Such a procedure would ensure that the catchments of successive snapshots are contiguous and that a successful gradient descent on one would trigger the descent on the next.

(4) The spatio-temporal regularities in learning flights may also reflect the need to acquire an ordered sequence of representations at different distances from the goal (or at different spatial scales). The catchment areas of panoramic snapshots, for instance, depend on the depth structure of scenes (e.g. [62]). When objects are very close, these catchment areas are very narrow and may consequently be easily missed on return to the goal. There are a number of ways in which catchment areas can be broadened, including low-pass filtering the memorised images (e.g. [61]) or filtering out close contours (e.g. [10]). Yet another way of increasing the catchment or the active space of a visual representation – while at the same time preserving the accuracy with which it allows the insect to pinpoint the goal – would be to acquire a sequence of representations at systematically increasing spatial scales or at increasing distances from the goal (e.g. [17]). In this case, the sequence in which these representations are acquired during departure and used during homing might be of utmost importance.
(5) The oscillating nature of learning flights, the aligned bearings at the ends of arcs and the fact that the average orientation of returning insects closely matches the average orientation they had during learning (e.g. [72, 13]) also suggest that the insects may learn the borders of a V-shaped flight corridor with its apex at the goal. The return flight may then be guided not primarily by attraction to the goal, but by repulsion from the borders of the flight corridor [76].
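The acquisition rule suggested under point (3) can be sketched directly: store a view, move away, and store the next view once the image difference to the last stored view exceeds a threshold. The toy view function, the path and the threshold value are illustrative assumptions:

```python
import math

def view(x):
    """Toy one-dimensional 'view' from position x: a handful of
    features whose appearance changes smoothly with position."""
    return [math.sin(0.8 * x + k) / (1.0 + 0.2 * abs(x - k)) for k in range(8)]

def image_difference(a, b):
    """Sum of squared pixel differences between two views."""
    return sum((p - q) ** 2 for p, q in zip(a, b))

def acquire_snapshots(path, threshold):
    """Store a new snapshot whenever the current view differs from the
    most recently stored one by more than `threshold`, so that the
    catchment areas of successive snapshots remain contiguous."""
    snapshots = [(path[0], view(path[0]))]
    for x in path[1:]:
        if image_difference(view(x), snapshots[-1][1]) > threshold:
            snapshots.append((x, view(x)))
    return snapshots

# Departure path leading away from the goal at x = 0.
path = [0.05 * i for i in range(201)]          # positions 0 .. 10
stored = acquire_snapshots(path, threshold=0.5)
print("snapshots stored at:", [round(x, 2) for x, _ in stored])
```

By construction, consecutive stored snapshots differ by just over the threshold, so the catchment of each snapshot still covers the position at which the next one was taken; on the return journey, a successful descent on one snapshot hands over to the next.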
7.6 Outlook

Homing insects are clearly guided by the appearance of individual objects in the goal environment, provided these are visually salient. They use such objects as beacons, but also memorise their appearance as seen from the goal and as features along the route [15]. What decides whether an object is memorised as a beacon, a goal landmark or a route landmark? Much evidence indicates that insects constantly monitor the view transformations they experience on departure from and on return to a goal, and that they remember the visual signatures of objects along a number of dimensions, including their colour, their shape and the image motion signals they generate. These visual memories are associated with compass information, with motor commands and with information from the path integration system.

For the task of homing, visual representations of the goal environment are acquired in a systematic fashion during learning flights and walks, the organisation and functional significance of which we still do not fully understand. Equally, little attention has been paid to the flexibility of view-based homing behaviour in insects: in experiments in which a nest entrance, for instance, has been hidden and the landmark array altered, returning insects approach the location and, after a short period of local search, fly off and repeatedly approach again from some distance away. What makes an insect decide to abort an approach or a search, and what determines the distance and direction from which it tries to home again? In ants, the transition from a directed homing run to search behaviour is controlled by the state of the path integrator. Path integration does not seem to offer ants the option of aborting the search at some stage in favour of an alternative strategy. Insects that employ a visual homing mechanism, however, are able to repeat an approach by flying back along the route to where the scene has not been disturbed and trying again.
References

1. Baddeley, B., Philippides, A.: Improving agent localisation through stereotypical motion. In: F.A. de Costa, L.M. Rocha, E. Costa, I. Harvey, A. Coutinho (eds.) 9th European Conference on Artificial Life, LNAI, vol. 4648, pp. 335–344 (2007)
2. Becker, L.: Untersuchungen über das Heimfindevermögen der Bienen. Zeitschrift für Vergleichende Physiologie 41, 1–25 (1958)
3. Booij, O., Terwijn, B., Zivkovic, Z., Kröse, B.: Navigation using an appearance based topological map. In: S. Hutchinson (ed.) IEEE International Conference on Robotics and Automation, pp. 3927–3932 (2007)
4. Brünnert, U., Kelber, A., Zeil, J.: Ground-nesting bees determine the location of their nest relative to a landmark by other than angular size cues. Journal of Comparative Physiology A 175, 363–369 (1994)
5. Burschka, D., Hager, G.: V-GPS (SLAM): Vision-based inertial system for mobile robots. In: M. Meng (ed.) IEEE International Conference on Robotics and Automation, vol. 1, pp. 409–415 (2004)
6. Capaldi, E., Dyer, F.: The role of orientation flights on homing performance in honeybees. Journal of Experimental Biology 202, 1655–1666 (1999)
7. Capaldi, E.A., Smith, A.D., Osborne, J.L., Fahrbach, S.E., Farris, S.M., Reynolds, D.R., Edwards, A.S., Martin, A., Robinson, G.E., Poppy, G.M., Riley, J.R.: Ontogeny of orientation flight in the honeybee revealed by harmonic radar. Nature 403, 537–540 (2000)
8. Cartwright, B.A., Collett, T.S.: How honey-bees know their distance from a near-by visual landmark. Journal of Experimental Biology 82, 367–372 (1979)
9. Cartwright, B.A., Collett, T.S.: Landmark learning in bees. Journal of Comparative Physiology A 151, 521–543 (1983)
10. Cartwright, B.A., Collett, T.S.: Landmark maps for honeybees. Biological Cybernetics 57, 85–93 (1987)
11. Cheng, K., Collett, T.S., Pickhard, A., Wehner, R.: The use of visual landmarks by honeybees: Bees weight landmarks according to their distance from the goal. Journal of Comparative Physiology A 161, 469–475 (1987)
12. Cheng, K., Collett, T.S., Wehner, R.: Honeybees learn the colours of landmarks. Journal of Comparative Physiology A 159, 69–73 (1986)
13. Collett, T.S.: Making learning easy: the acquisition of visual information during the orientation flights of social wasps. Journal of Comparative Physiology A 177, 737–747 (1995)
14. Collett, T.S.: Insect navigation en route to the goal: multiple strategies for the use of landmarks. Journal of Experimental Biology 199, 227–235 (1996)
15. Collett, T.S., Graham, P., Harris, R.A., de Ibarra, N.H.: Navigational memories in ants and bees: Memory retrieval when selecting and following routes. Advances in the Study of Behavior 36, 123–172 (2006)
16. Collett, T.S., Land, M.F.: Visual spatial memory in a hoverfly. Journal of Comparative Physiology A 100, 59–84 (1975)
17. Collett, T.S., Lehrer, M.: Looking and learning: A spatial pattern in the orientation flight of the wasp Vespula vulgaris. Proceedings of the Royal Society London B 252, 129–134 (1993)
18. Collett, T.S., Zeil, J.: Places and landmarks: An arthropod perspective. In: S. Healy (ed.) Spatial representation in animals, pp. 18–53. Oxford University Press (1998)
19. Dale, K., Collett, T.S.: Using artificial evolution and selection to model insect navigation. Current Biology 11, 1305–1316 (2001)
7
Visual Homing in Insects and Robots
20. Davison, A.J., Murray, D.W.: Simultaneous localization and map-building using active vision. IEEE Transactions on Pattern Analysis and Machine Intelligence 24, 865–880 (2002) 21. Davison, A.J., Reid, I.D., Molton, N.D., Stasse, O.: Monoslam: Real-time single camera slam. Pattern Analysis and Machine Intelligence, IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 1052–1067 (2007) 22. Dyer, A.G., Rosa, M.G.P., Reser, D.H.: Honeybees can recognise images of complex natural scenes for use as potential landmarks. Journal of Experimental Biology 211, 1180–1186 (2008) 23. Franz, M.O., Schölkopf, B., Mallot, H.A., Bülthoff, H.H.: Learning view graphs for robot navigation. Autonomous Robots 5, 111–125 (1998) 24. Franz, M.O., Schölkopf, B., Mallot, H.A., Bülthoff, H.H.: Where did I take that snapshot? Scene-based homing by image matching. Biological Cybernetics 79, 191–202 (1998) 25. Frier, H., Edwards, E., Smith, C., Neale, S., Collett, T.: Magnetic compass cues and visual pattern learning in honeybees. Journal of Experimental Biology 199, 1353–1361 (1996) 26. Gillner, S., Mallot, H.A.: These maps are made for walking task hierarchy of spatial cognition. In: M. Jefferies, W.K. Yeap (eds.) Robotics and Cognitive Approaches to Spatial Mapping (Springer Tracts in Advanced Robotics), pp. 181–201. Springer (2008) 27. Goedemé, T., Nuttin, M., Tuytelaars, T., Gool, L.V.: Omnidirectional vision based topological navigation. International Journal of Computer Vision 74, 219–236 (2007) 28. Graham, P., Fauria, K., Collett, T.S.: The influence of beacon-aiming on the routes of wood ants. Journal of Experimental Biology 206, 535–541 (2003) 29. Haralick, B.M., Lee, C.N., Ottenberg, K., Nölle, M.: Review and analysis of solutions of the three point perspective pose estimation problem. International Journal of Computer Vision 13, 331–356 (1994) 30. van Hateren, J.H., Srinivasan, M.V., Wait, P.B.: Pattern recognition in bees: orientation discrimination. 
Journal of Comparative Physiology A 167, 649–654 (1990) 31. Homberg, U.: In search of the sky compass in the insect brain. Naturwissenschaften 91, 199–208 (2004) 32. Horn, B.K.P.: Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A 4, 629 (1987) 33. Horridge, A.: Visual processing of pattern. In: E. Warrant, D.E. Nilsson (eds.) Invertebrate Vision, pp. 494–525. Cambridge University Press (2006) 34. Hübner, W., Mallot, H.A.: Metric embedding of viewgraphs. Autonomous Robots 23, 183–196 (2007) 35. van Iersel, J.J.A., van den Assem, J.: Aspects of orientation in the digger wasp Bembix rostrata. Animal Behaviour Suppl 1, 145–162 (1964) 36. Judd, S.P.D., Collett, T.S.: Multiple stored views and landmark guidance in ants. Nature 392, 710–714 (1998) 37. Kelber, A.: Invertebrate colour vision. In: E. Warrant, D.E. Nilsson (eds.) Invertebrate Vision, pp. 250–290. Cambridge University Press (2006) 38. Kelber, A., Zeil, J.: A robust procedure for visual stabilisation of hovering flight position in guard bees of Trigona
99
39.
40.
41.
42.
43. 44.
45.
46.
47.
48.
49.
50.
51.
52.
53.
54.
55.
56.
(Tetragonisca) angustula (Apidae, Meliponinae). Journal of Comparative Physiology A 167, 569–577 (1990) Kelber, A., Zeil, J.: Tetragonisca guard bees interpret expanding and contracting patterns as unintended displacement in space. Journal of Comparative Physiology A 181, 257–265 (1997) Kral, K.: Behavioural-analytical studies of the role of head movements in depth perception in insects, birds and mammals. Behavioural Processes 64, 1–12 (2003) Labrosse, F.: Short and long-range visual navigation using warped panoramic images. Robotics and Autonomous Systems 55, 675–684 (2007) Lambrinos, D., Möller, R., Labhart, T., Pfeifer, R., Wehner, R.: A mobile robot employing insect strategies for navigation. Robotics and Autonomous Systems 30, 39–64 (2000) Lehrer, M.: Why do bees turn back and look? Journal of Comparative Physiology A 172, 549–563 (1993) Lehrer, M., Bianco, G.: The turn-back-and-look behaviour: bee versus robot. Biological Cybernetics 83, 211–229 (2000) Lehrer, M., Collett, T.S.: Approaching and departing bees learn different cues to the distance of a landmark. Journal of Comparative Physiology A 175, 171–177 (1994) Liu, G., Seiler, H., Wen, A., Zars, T., Ito, K., Wolf, R., Heisenberg, M., Liu, L.: Distinct memory traces for two visual features in the Drosophila brain. Nature 439, 551– 556 (2006) Möller, R.: Insect visual homing strategies in a robot with analog processing. Biological Cybernetics 83, 231–243 (2000) Möller, R.: Do insects use templates or parameters for landmark navigation? Journal of Theoretical Biology 210, 33–45 (2001) Möller, R.: Insects could exploit UV-green contrast for landmark navigation. Journal of Theoretical Biology 214, 619–631 (2002) Möller, R.: Local visual homing by warping of twodimensional images. Robotics and Autonomous Systems 57, 87–101 (2009) Möller, R., Vardy, A.: Local visual homing by matchedfilter descent in image distances. 
Biological Cybernetics 95, 413–430 (2006) Nicholson, D.J., Judd, S.P., Cartwright, B.A., Collett, T.S.: Learning walks and landmark guidance in wood ants (Formica rufa). Journal of Experimental Biology 202, 1831–1838 (1999) Opfinger, E.: Über die Orientierung der Biene an der Futterquelle. Zeitschrift für Vergleichende Physiologie 15, 431–487 (1931) Peters, R., Hemmi, J., Zeil, J.: Image motion environments: background noise for movement-based animal signals. Journal of Comparative Physiology A 194, 441–456 (2008) Se, S., Lowe, D.G., Little, J.J.: Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks. International Journal of Robotics Research 21, 735–758 (2002) Se, S., Lowe, D.G., Little, J.J.: Vision-based global localization and mapping for mobile robots. IEEE Transactions on Robotics 21, 364–375 (2005)
100 57. Sobel, E.C.: The locust’s use of motion parallax to measure distance. Journal of Comparative Physiology A 167, 579– 588 (1990) 58. Srinivasan, M.V., Lehrer, M., Horridge, G.A.: Visual figureground discrimination in the honeybee: The role of motion parallax at boundaries. Proceedings of the Royal Society London B 238, 331–350 (1990) 59. Stürzl, W., Cheung, A., Cheng, K., Zeil, J.: The information content of panoramic images I. The rotational errors and the similarity of views in rectangular experimental arenas. Journal of Experimental Psychology: Animal Behavior Processes 34, 1–14 (2008) 60. Stürzl, W., Mallot, H.A.: Vision-based homing with a panoramic stereo sensor. In: H.H. Bülthoff, S.W. Lee, T.A. Poggio, C. Wallraven (eds.) Biologically Motivated Computer Vision, LNCS, vol. 2525, pp. 620–628 (2002) 61. Stürzl, W., Mallot, H.A.: Efficient visual homing based on Fourier transformed panoramic images. Robotics and Autonomous Systems 54, 300–313 (2006) 62. Stürzl, W., Zeil, J.: Depth, contrast and view-based homing in outdoor scenes. Biological Cybernetics 96, 519–531 (2007) 63. Tinbergen, N.: Über die Orientierung des Bienenwolfes (Philanthus triangulum Fabr.). Zeitschrift für vergleichende Physiologie 16, 305–334 (1932) 64. Tinbergen, N., Kruyt, W.: Über die Orientierung des Bienenwolfes (Philanthus triangulum Fabr.). III. Die Bevorzugung bestimmter Wegmarken. Zeitschrift für vergleichende Physiologie 25, 292–334 (1938) 65. Vardy, A., Möller, R.: Biologically plausible visual homing methods based on optical flow techniques. Connection Science 17, 47–89 (2005) 66. Vollbehr, J.: Zur Orientierung junger Honigbienen bei ihrem 1. Orientierungsflug. Zoologisches Jahrbuch Physiologie 79, 33–69 (1975)
J. Zeil et al. 67. Wallace, G.K.: Visual scanning in the desert locust Schistocerca gregaria Foskal. Journal of Experimental Biology 36, 512–525 (1959) 68. Wehner, R., Räber, F.: Visual spatial memory in desert ants, Cataglyphis bicolor (Hymenoptera: Formicidae). Experientia 35, 1569–1571 (1979) 69. Wei, C.A., Rafalko, S.L., Dyer, F.C.: Deciding to learn: modulation of learning flights in honeybees, Apis mellifera. Journal of Comparative Physiology A 188, 725–37 (2002) 70. Werner, A., Menzel, R., Wehrhahn, C.: Color constancy in the honeybee. The Journal of Neuroscience 8, 156–159 (1988) 71. Zanker, J.M., Zeil, J.: Movement-induced motion signal distributions in outdoor scenes. Network: Computation in Neural Systems 16, 357–376 (2005) 72. Zeil, J.: Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera): I. Description of flight. Journal of Comparative Physiology A 172, 189–205 (1993a) 73. Zeil, J.: Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera): II. Similarities between orientation and return flights and the use of motion parallax. Journal of Comparative Physiology A 172, 207–222 (1993b) 74. Zeil, J., Boeddeker, N., Hemmi, J., Stürzl, W.: Going wild: Toward an ecology of visual information processing. In: G. North, R. Greenspan (eds.) Invertebrate Neurobiology, pp. 381–403. Cold Spring Harbor (2007) 75. Zeil, J., Hofmann, M.I., Chahl, J.S.: Catchment areas of panoramic snapshots in outdoor scenes. Journal of the Optical Society of America A 20, 450–469 (2003) 76. Zeil, J., Kelber, A., Voss, R.: Structure and function of learning flights in ground-nesting bees and wasps. Journal of Experimental Biology 199, 245–252 (1996)
Chapter 8
Motion Detection Chips for Robotic Platforms

Rico Moeckel and Shih-Chii Liu
Abstract The on-board requirements for small, light, and low-power sensors and electronics on autonomous micro-aerial vehicles limit the computational power and speed available for processing sensory signals. The sensory processing on these platforms is usually inspired by the sensory information that insects extract from their world, in particular optic flow. This information is also useful for estimating the distance of the vehicle from objects in its path. Custom Very Large Scale Integration (VLSI) sensor chips which perform focal-plane motion estimation are beneficial for such platforms because of properties including compactness, continuous-time operation, and low power dissipation. This chapter gives an overview of the various monolithic analog VLSI motion detection/optic flow chips that have been designed over the last two decades. We contrast the pros and cons of the different algorithms that have been implemented and identify promising chip architectures that are suitable for flying platforms.
8.1 Introduction

When considering sensors and electronics for small mobile platforms including microflyers, one has to face the limited power and payload constraints of such platforms. The on-board electronic devices and sensors have to be small and low power, with real-time
R. Moeckel (✉) Institute of Neuroinformatics, University of Zürich and ETH Zürich, Zürich, Switzerland. e-mail: [email protected]
responses that match the speed of the platform. Custom very large-scale integrated (VLSI) chips which perform focal-plane sensory processing such as motion detection match the requirements of these platforms. The chips incorporate networks with a parallel architecture; the transistors are operated in the subthreshold regime, leading to low power dissipation; and the computation is continuous time, allowing fast response times. Silicon technology allows a large number of transistors to be placed within a small area, leading to small devices on the order of millimeters. VLSI chips that extract motion information are useful for autonomous mobile platforms that navigate in uncontrolled surroundings. This is especially true of micro-aerial vehicles, which depend on optic flow information to avoid obstacles in their world, similar to their biological counterparts (insects), where optic flow estimates are extracted by the direction-selective cells in the visual neuropile [19, 10, 27] (see Chap. 4). Over the past 20 years, various analog VLSI (aVLSI) motion detection chips have been reported in the literature. These chips implement various algorithms; many of them are based on biological models of motion processing, for example, the optic flow estimate extracted by cells in the insect visual pathway [10]. These implementations should be contrasted with machine vision implementations of motion algorithms on many mobile platforms. The latter primarily use the outputs of clocked frame-based imagers from which motion is extracted. Either this computation is performed on the on-board microcontroller or the imager data are transmitted to a remote computer on which the motion computation is done. By using an aVLSI motion chip, the microcontroller is freed from this processing and on-board sensory-motor controllers can be considered. The frame-based
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_8, © Springer-Verlag Berlin Heidelberg 2009
motion computation methods lead to motion outputs that are only available at discrete sampling times, in contrast to the continuous-time analog motion outputs that are available from the analog VLSI motion chips. These chips have been used to guide robotic platforms [17, 37] in a similar spirit to the fly robotic platform setup by Franceschini et al. [13]. In the following sections, we will expound on the various algorithms implemented on the analog VLSI motion chips and the trade-offs that have to be made in these implementations. We briefly touch on three main issues that designers face when considering the aVLSI circuits for the implemented forms – fill-factor, circuit mismatch, and fidelity. Fill factor: Because a motion pixel includes both the phototransduction circuits and the motion computing circuits, the pixel fill-factor (percentage of area occupied by the phototransduction site in the pixel) of a motion chip depends on the complexity of the algorithm and the subsequent circuits to implement the algorithm. A complex motion algorithm can result in a pixel with a small fill-factor, which in turn limits the number of pixels that can be placed in a fixed chip area and lowers the spatial sampling frequency of the motion array. The pixel fill-factor of these motion chips can vary between 2.5 and 10%. Circuit mismatch: Analog circuit designers also have to tackle the effect of transistor mismatch on the motion pixel responses. Unlike computer simulations, any two pixels on the same chip will not produce exactly the same analog output response to the same stimulus. This transistor mismatch phenomenon is due to the fabrication process of the transistors. This mismatch is one source of the observed differences between the outputs of the hardware-implemented algorithm and the software simulation of the same algorithm.
For a chip to have a small output variance (low mismatch) across the pixels of the array, the designer has to trade off the sizes of the transistors in the analog circuits against the amount of mismatch that can be tolerated [41]. Fidelity: A mathematical operation in an algorithm, such as a multiplication, is not always easily converted into analog VLSI form. In many cases, the operation is precise only for a limited range of input voltages. Hence the selection of circuits that ensure the closest fidelity to the algorithm is important.
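The area-versus-mismatch trade-off cited above [41] is commonly summarized by Pelgrom's model, in which the standard deviation of the threshold-voltage difference between two nominally identical transistors scales inversely with the square root of the gate area. A minimal numerical sketch of this trade-off follows; the matching coefficient used is an illustrative placeholder, not a value from this chapter or any specific process:

```python
import math

def vt_mismatch_sigma(width_um, length_um, a_vt_mv_um=5.0):
    """Pelgrom model: sigma(dVt) = A_VT / sqrt(W * L).

    a_vt_mv_um is a process-dependent matching coefficient in
    mV*um; the default here is purely illustrative.  Returns the
    expected standard deviation of the threshold-voltage
    difference between two identical transistors, in mV.
    """
    return a_vt_mv_um / math.sqrt(width_um * length_um)

small = vt_mismatch_sigma(1.0, 1.0)  # 1 um x 1 um device
large = vt_mismatch_sigma(2.0, 2.0)  # 4x the gate area
# Quadrupling the gate area halves the expected mismatch,
# at the cost of pixel area and hence fill-factor.
```

This is why low-mismatch analog motion pixels tend to use larger transistors, which directly competes with the fill-factor concern above.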
Because of the different circuit techniques used by the individual circuit designers and the fact that the reported motion chips have been fabricated in a variety of processes, we have refrained from making explicit comparisons between the different implementations in Fig. 8.1. We will rather concentrate on a comparison between the implemented motion algorithms and show their corresponding limitations in the implemented aVLSI form.
8.2 Algorithms for aVLSI Motion Detection Chips and Their Implementations

Since the first aVLSI motion chip, designed for use in the Xerox optical mouse in 1981 [30], more than 20 different motion chip implementations have been published. Many of them were designed with the goal of guiding robots in uncontrolled lighting environments. Some major contributions are shown in the timeline chart of Fig. 8.1. This chapter focuses only on aVLSI motion processors that implement both the phototransduction circuits and the motion detection circuits on a single chip. As shown in Fig. 8.2, the algorithms implemented on-chip fall into two primary groups: (1) intensity-based algorithms and (2) token-based algorithms. (1) Intensity-based algorithms estimate optic flow directly from the image brightness. Motion is estimated based on either (a) the gradient of the image brightness or (b) the correlation of the image brightness of neighboring pixels. The image brightness in the implementations could be the equivalent of the output of some pre-processing on the visual input: for example, the absolute intensity, the log intensity, or the output of a retina. (2) Token-based algorithms estimate motion by tracking detected features or tokens across space and time. The term token comes from the field of computer vision, where tokens are defined either as low-level features like edges or corners or high-level features like objects [47]. On the chips, the token usually consists of a binary pulse which is generated when the contrast of an edge exceeds a threshold. These algorithms can be further divided into two major sub-categories:
Fig. 8.1 Timeline of single-chip VLSI motion implementations. We show only major contributions of different authors. For example, publications based on one design are represented using the date of the first publication
(a) motion is estimated through the correlation of edge patterns and (b) motion is estimated by measuring the time taken for a feature to travel between two adjacent pixels. The latter are also called time-of-travel algorithms. Intensity-based versus token-based: The advantage of intensity-based algorithms is that the motion computation is continuous, whereas token-based algorithms update the motion values only if a predefined feature is detected in the image. The different update dynamics mean that a host system which regularly samples the motion values will always receive the instantaneous motion value from an intensity-based motion chip, but will most likely receive a motion value that was computed some time in the past from a token-based motion chip. However, the readout of token-based motion chips could be an advantage if the feature detection signal or token is
Fig. 8.2 Overview of motion algorithms implemented in aVLSI
used to signal the host system when a new value is computed. Token-based algorithms can be more robust against noise since only reliably detected features are used to determine the local image speeds. Their disadvantage lies in the fact that the accuracy of the computed motion is determined by the accuracy of the feature detectors. For example, if a feature is detected in a particular pixel but not detected in the neighboring pixel, those algorithms will produce an incorrect motion value. Hence, the interpretation of the motion value by the host system is more challenging. The last updated motion value can either be allowed to decay slowly over time at a predefined rate or be reset after a set time interval. Token-based algorithms also have the advantage during implementation that they allow easy partitioning into phototransduction, feature detection, and motion detection blocks. This partitioning can simplify the design of the subsequent aVLSI circuits since each circuit can be optimized for its particular task. Aperture problem: Algorithms that detect two-dimensional velocities have to address the inherent ambiguity in determining the correct motion from the measured motions along the two axes at a single pixel, which has only a limited field of view or aperture. The local motion values are compatible with a family of possible velocities of an edge through a pixel, thus setting up a constraint line in velocity space that has the same orientation as this edge. Through the intersections of the constraint lines from different pixels at the boundaries of a moving object, a unique velocity can be determined for this object. We describe in some detail the intensity-based algorithms in Sects. 8.2.1 and 8.2.2 and the token-based algorithms in Sects. 8.2.3 and 8.2.4.
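The intersection-of-constraints idea can be sketched numerically: each pixel's brightness gradient contributes one constraint line in velocity space, and a least-squares solve over several boundary pixels picks out the unique velocity. This is an illustrative sketch (function name and test values are our own), not any specific chip's computation:

```python
import numpy as np

def intersect_constraints(gradients):
    """Least-squares intersection of motion constraint lines.

    gradients: list of (Ix, Iy, It) triples measured at pixels on
    the boundary of one moving object.  Each triple defines a
    constraint line Ix*vx + Iy*vy + It = 0 in velocity space; two
    lines with different orientations (differently oriented
    edges) fix a unique velocity (vx, vy).
    """
    A = np.array([[ix, iy] for ix, iy, _ in gradients], dtype=float)
    b = np.array([-it for _, _, it in gradients], dtype=float)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (vx, vy)

# Two edges of the same object with different orientations, both
# consistent with a true velocity of (1.0, 2.0), since
# It = -(Ix*vx + Iy*vy):
v = intersect_constraints([(1.0, 0.0, -1.0),   # vertical edge
                           (0.0, 1.0, -2.0)])  # horizontal edge
```

With a single constraint line the system is underdetermined, which is exactly the aperture problem; the least-squares solve only becomes unique once edges of at least two orientations contribute.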
8.2.1 Gradient-Based Intensity Algorithm

The gradient-based algorithm for computing optical flow was one of the first algorithms implemented on an aVLSI chip. This algorithm is also often used in the computer/machine vision community. The optical flow equation is derived from the brightness constancy assumption that the image brightness, I, at a point (x,y) and at a time t stays constant. Taking the derivative of I with respect to time leads to (d/dt) I(x,y,t) = 0. Using the chain rule for differentiation we get an expression that relates the image flow components in the x- and y-directions, νx = dx/dt and νy = dy/dt, and the partial change of the brightness ∂I:

Ix νx + Iy νy + It = 0    (8.1)
where Ix = ∂I/∂x, Iy = ∂I/∂y, and It = ∂I/∂t. Equation (8.1) is underconstrained since only one independent measure of the image brightness I(x,y,t) at a point in time and space is available while the optical flow velocity has two components νx and νy. By combining estimated flows spatially across pixels, one can arrive at a unique solution of the equation. Two main implementations of optical flow models have been proposed in the literature: True flow: Horn and Schunck proposed a smoothness constraint for the optical flow field [23] such that the flow field varies smoothly across space. Using both constraints, the local optical flow vectors ν = (νx, νy) can be combined to arrive at an optimal solution for the optical flow estimate by minimizing a cost function:

HOF(ν) = Σij [ (Ix νx + Iy νy + It)² + ρ ( (Δx νx)² + (Δy νx)² + (Δx νy)² + (Δy νy)² ) ]    (8.2)
where the global constant ρ describes the amount of spatial smoothing and Δx and Δy represent the discrete derivative operators in the x and y directions, respectively, for each i,j-th pixel in the array. The first optical flow chip that implemented an energy minimization equation was proposed by Tanner and Mead [46], with a subsequent higher-performance version described by Stocker and Douglas [44]. The Tanner and Mead chip is one of the first aVLSI motion detection chips and is often cited as the first single-chip motion processor that did not depend on a specific visual pattern environment. The chip contained an array of 8 × 8 pixels and produced a global velocity value over a limited range of input contrasts. It did not implement the smoothness constraint term in Eq. (8.2). The chip by Stocker and Douglas implements this term and provides smooth optical flow values. However, the smoothness constraint term does not take into account object boundaries; thus it blurs the motion values around the object boundaries. To overcome the constant smoothing across the whole chip, a second chip by Stocker [45] implements a network which locally breaks the smoothness constraint at object boundaries. Motion segmentation is included in the computation by minimizing a modified cost function:

HOF(ν) = Σij [ (Ix νx + Iy νy + It)² + ρij^x ( (Δx νx)² + (Δx νy)² ) + ρij^y ( (Δy νx)² + (Δy νy)² ) + σ ( (νx − νx^ref)² + (νy − νy^ref)² ) ]    (8.3)

where ρij^x and ρij^y are local variables that set the amount of smoothing. These variables are allowed to adapt locally or are set to zero in the case of an object boundary. The additional bias term in Eq. (8.3) allows the components of the local optical flow vectors νx and νy to be compared against known motion priors νx^ref and νy^ref, respectively. The advantage of combining the brightness constancy and spatial smoothness constraints lies in the robustness of the algorithm against noisy perturbations in the image. The smoothing operation across local motion outputs can average out this noise. The minimization of energy equations is attractive for aVLSI implementation because it does not require division circuits and no threshold operation is
needed to avoid the zero-division case. However, they do require high-precision multipliers with a wide input linear range as well as linear resistive networks with variable resistance that perform the smoothing operation. Since these circuits are difficult to implement in VLSI, both the implementations in [46, 44] showed a strong dependence of the motion output on image contrast and can only produce reliable measurements with high-contrast stimuli. Normal optical flow: Yet another way of estimating optical flow is to compute normal flow. This computation is based on the assumption that, of all possible flow vectors measured at a pixel, the correct flow is the one that is perpendicular to the edge orientation at each pixel. Circuits for implementing the normal flow model are described in [9, 31, 15]. Mathematically, this constraint is set up as follows:

Ix / νx = Iy / νy    (8.4)

Substituting this constraint equation into Eq. (8.1) leads to

νx = −Ix It / (Ix² + Iy²),  νy = −Iy It / (Ix² + Iy²)    (8.5)
The implementation of these equations in aVLSI circuits is challenging because the denominator in Eq. (8.5) includes partial derivatives of the local brightness which are highly dependent on contrast. Especially for low-contrast images, one has to divide a small value in the numerator by another small number in the denominator. This division process is very susceptible to noisy estimates of the numerator and the denominator. Many aVLSI motion detection chips include a threshold for Ix or Iy so that the motion is not computed when Ix or Iy are below this threshold. To avoid the zero-division problem, one implementation did not include the denominator term [9]. Although this simplification reduces the complexity of their circuits, the optical flow output of the chip can be ambiguous and is highly dependent on contrast. Implementations of simplified forms of the normal optical flow equations, νx = −It/Ix and νy = −It/Iy, decrease the complexity of the necessary circuits [9, 31]. However, this simplification does not solve the zero-division problem. Another approach is to compute only the local spatial and temporal derivatives Ix, Iy, and It and to perform the division off-chip [15].
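A software sketch of the normal-flow computation of Eq. (8.5), including the gradient threshold that many chips use to sidestep the zero-division problem, follows. The function name and the threshold value are illustrative choices, not parameters of any reported chip:

```python
def normal_flow(ix, iy, it, grad_threshold=1e-3):
    """Normal optical flow per Eq. (8.5), with a threshold trick:
    when the squared gradient magnitude Ix^2 + Iy^2 falls below
    grad_threshold, no motion value is computed (returns (0, 0))
    instead of dividing by a noisy, near-zero denominator.
    """
    denom = ix * ix + iy * iy
    if denom < grad_threshold:
        return 0.0, 0.0
    return -ix * it / denom, -iy * it / denom

# A vertical edge (Iy = 0) moving right at 2 pixels per time
# step: brightness constancy gives It = -Ix * vx, so passing
# It = -2 with Ix = 1 should recover vx = 2, vy = 0.
vx, vy = normal_flow(ix=1.0, iy=0.0, it=-2.0)
```

Below the threshold the chip simply reports no motion, which is exactly the contrast-dependent blind spot of normal-flow implementations discussed above.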
8.2.2 Intensity-Based Correlation

The intensity-based correlation algorithms are based on spatio-temporal frequency-based models of motion processing in flies [36] and primates [40]. The primary spatio-temporal frequency-based motion algorithms that have been implemented in aVLSI are the Hassenstein–Reichardt model [18], the Barlow–Levick model [3], and the Adelson–Bergen motion energy model [1]. Many implementations of spatio-temporal frequency motion algorithms in single-chip aVLSI systems have been reported, for example, by Andreou et al. [2], Gottardi and Yang [14], Delbrück [6], Meitzler et al. [33], Harrison and Koch [16, 17], Liu [29], and Higgins et al. [20, 21, 35]. Intensity-based correlation algorithms operate directly on the image brightness values, unlike token-based algorithms that compute motion from detected features in the image. The motion outputs of the former algorithms have a profile that depends on the spatial and temporal frequency components of the local image patches in the field of view. Although the motion output amplitude is dependent on the square of the contrast of the input signals, these algorithms provide a motion output even for low-contrast inputs that fall below the threshold of a token-based algorithm. The fact that the outputs of these models depend on both contrast and temporal frequency means that the readout is ambiguous: it is impossible to determine the speed of a visual patch from a single filter. To solve this ambiguity problem, elaborated versions of these models use a bank of filters with different time constants [38]. These versions use a place code, where the activation of a filter output codes for the presence of a particular spatio-temporal frequency component in the image patch, instead of the value code used for gradient and time-of-travel algorithms.
The place code helps, for example, in situations where transparent objects are moving on top of one another, allowing several spatio-temporal frequency filters to be activated at the same time. A value code would have to settle on a particular movement or would just give an average response to the movements in the scene. The place coding of motion using several filter banks is an advantage of spatio-temporal frequency-based motion algorithms but is also a drawback if
implemented in aVLSI because of the following reasons: (a) since several filters are needed for each motion pixel, the pixel fill-factor will be greatly reduced and (b) the readout and integration of the filter outputs is non-trivial even without considering mismatch. Mismatch between the filter circuits in different pixels makes the readout very difficult because the time constants of the filters across pixels will not match.
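For concreteness, here is a minimal discrete-time software sketch of the Hassenstein–Reichardt correlator that underlies these chips, using a first-order low-pass filter as the delay element. The filter coefficient and test stimulus are illustrative choices, not parameters of any reported chip:

```python
import math

def reichardt_emd(left, right, alpha=0.7):
    """Discrete-time Hassenstein-Reichardt elementary motion
    detector.  left/right: luminance sequences from two
    neighboring photoreceptors.  Each channel is low-pass
    filtered (coefficient alpha) to provide the delay; the
    delayed signal of one arm is multiplied with the undelayed
    signal of the other, and the opponent subtraction of the two
    products yields a signed, direction-selective output
    (positive for left-to-right motion with these conventions).
    """
    lp_left = lp_right = 0.0
    out = []
    for l, r in zip(left, right):
        lp_left += alpha * (l - lp_left)    # delayed left channel
        lp_right += alpha * (r - lp_right)  # delayed right channel
        out.append(lp_left * r - lp_right * l)
    return out

# A sinusoidal grating drifting from the left receptor to the
# right one (right lags left by a quarter period):
n, period = 200, 20
left = [math.sin(2 * math.pi * t / period) for t in range(n)]
right = [math.sin(2 * math.pi * (t - period / 4) / period) for t in range(n)]
mean_response = sum(reichardt_emd(left, right)[50:]) / (n - 50)
# mean_response is positive for this preferred direction;
# swapping the inputs flips its sign.
```

Note that the mean output scales with the product of the two input amplitudes (contrast squared) and with the filter's phase lag at the stimulus temporal frequency, which is precisely the contrast and temporal-frequency ambiguity discussed in the text.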
8.2.3 Token-Based Correlation

Token-based correlation algorithms start with an initial step of detecting a particular token or feature in the image. Common to all implementations is the use of a non-linear thresholding circuit for extracting local temporal contrast edges and/or spatial contrast edges. Contrast edges exceeding a threshold produce binary pulses. We describe only two of the many published correlation algorithms here: (a) correlation of a detected edge at one pixel with a delayed version of this edge at an adjacent pixel, with the help of delay lines, and (b) correlation between adjacent pixels using binary correlation filters inspired by the original Reichardt correlator. Delay line correlation: The motion detection chips that perform correlation based on delay lines [22] are inspired by the coincidence detector model in the auditory system of the barn owl. A temporal ON edge triggers a binary pulse of fixed width at each pixel. By propagating the pulses from neighboring pixels through two parallel delay lines from opposite directions, the image velocity can be determined from the intersection point of the pulses. This motion chip highlights the following concerns: (a) aVLSI implementations of delay lines are generally costly in silicon area and (b) only a limited range of velocities can be detected for a particular delay setting. Binary Reichardt correlator: The binary motion correlator chip by Sarpeshkar et al. [39] is inspired by the Hassenstein–Reichardt correlator. Each edge that is detected by a motion pixel triggers a pulse of fixed width. This pulse is correlated with the delayed pulse of the neighboring pixel. The correlation circuit is very simple since the output is determined by the overlap of the two pulses. However, the pulse width determines
the range of detected speeds, and the edge detection circuit is not sensitive to low-contrast stimuli. Similar to the intensity-based correlation algorithms, the performance of the token-based implementations is limited by the fixed time constants of the correlation filters. To detect a wider range of stimulus velocities, filter banks with different time constants or correlation filters with self-adjustable delays would be necessary. These solutions would lead to even less compact designs and lower pixel fill-factors. The clear advantage of token-based over intensity-based correlation algorithms lies in the non-linear edge detection circuits that make them less dependent on contrast, because each edge is simply represented by a binary event independent of contrast. However, where token-based correlation algorithms fail because the signal is below the threshold, intensity-based correlation implementations are often still able to report at least the correct direction of motion.
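The pulse-overlap principle of the binary correlator can be sketched in software as follows. Thresholds, pulse widths, and delays here are illustrative, and real chips implement these stages with analog circuits rather than sample-by-sample loops:

```python
def edge_pulses(samples, threshold, width):
    """Token-based front end: turn a temporal-contrast signal
    into binary pulses of fixed width.  A pulse starts whenever
    the sample-to-sample contrast crosses the threshold."""
    pulses = [0] * len(samples)
    for t in range(1, len(samples)):
        if samples[t] - samples[t - 1] > threshold:
            for k in range(t, min(t + width, len(samples))):
                pulses[k] = 1
    return pulses

def binary_correlation(pulses_a, pulses_b, delay):
    """Overlap (logical AND) of pixel A's pulse train with a
    delayed copy of pixel B's.  A nonzero output means an edge
    moved from B to A at a speed matched to the delay and pulse
    width, in the spirit of the binary Reichardt correlator."""
    out = 0
    for t in range(delay, len(pulses_a)):
        out += pulses_a[t] & pulses_b[t - delay]
    return out

# A step edge reaching pixel B at t=10 and pixel A at t=14:
sig_b = [0.0] * 10 + [1.0] * 20
sig_a = [0.0] * 14 + [1.0] * 16
pa = edge_pulses(sig_a, threshold=0.5, width=5)
pb = edge_pulses(sig_b, threshold=0.5, width=5)
overlap = binary_correlation(pa, pb, delay=4)  # matched delay: overlap
null = binary_correlation(pb, pa, delay=4)     # opposite direction: none
```

The fixed pulse width and delay jointly define the detectable speed band: edges that travel much faster or slower than the matched delay produce little or no overlap.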
8.2.4 Time-of-Travel

Time-of-travel algorithms directly measure the time taken by a contrast edge to travel between two adjacent pixels. Five major algorithms have been reported in the literature: facilitate-and-inhibit, facilitate-and-trigger, facilitate-trigger-and-inhibit, facilitate-and-sample, and facilitate-and-compare. An overview of the different algorithms is given in Fig. 8.3. One general advantage of time-of-travel motion detection chips is that they do not require any high-precision dividers, multipliers, or differentiators. Thus they do not suffer from zero-division problems like the gradient algorithms and allow for more compact designs than many correlation algorithms. The time of travel of a contrast edge is usually measured using voltage or current pulses, which are robust against noise perturbations. Due to the non-linear high-gain stages in the edge detection circuits, the motion outputs depend only on the stimulus speed and are independent of the stimulus contrast down to very low contrast values. All time-of-travel algorithms except the facilitate-and-sample algorithm that uses a log code (Fig. 8.3e) produce outputs that are linear in the time of travel t_tof of a contrast edge. Thus the computed outputs are inversely proportional to the edge
velocity v, that is, v = Δx/t_tof, where Δx corresponds to the interpixel distance. This inverse relationship means that the sensitivity of the algorithm to different speed ranges is non-uniform. This property is common to all time-of-travel algorithms. It does not necessarily put these algorithms at a disadvantage, because the chips can be tuned so that the sensitivity is approximately linear in the expected optical flow range. Facilitate-and-inhibit: In the facilitate-and-inhibit (FI) algorithm (Fig. 8.3a), an edge detected at a pixel triggers an output pulse which is reset when the edge reaches the neighboring pixel. The width of this pulse increases linearly with the time of travel of the stimulus. The FI algorithm has its roots in the neural direction selectivity model of the rabbit retina. The first silicon design that implemented this model [4] uses temporal ON contrast edges to trigger pulses that are terminated by inhibition from neighboring photoreceptor pixels. In contrast to the motion chips described in the rest of this section, the implementation in [4] did not include a circuit that generates a binary pulse when an edge is detected. The output pulse amplitude of this circuit depends on the contrast of the temporal edge, thus making the chip usable only for limited contrast and velocity ranges. An implementation that uses pulses generated from the detection of a spatial edge with a spatial difference-of-Gaussians filter was proposed by Etienne-Cummings et al. [11]. The time of travel is determined by measuring the time between an edge appearing at a pixel and disappearing at the neighboring pixel. Facilitate-and-trigger: In the facilitate-and-trigger (FT) algorithm (Fig. 8.3b), contrast edges trigger pulses of a fixed width. When an edge travels across the image plane it triggers two pulses at neighboring pixels that are delayed by the time of travel.
Thus a simple Boolean AND operation is sufficient to determine the temporal overlap of the pulses, which decreases linearly with the time of travel of the edge. The FT implementation by Kramer et al. [26] can detect speeds from 2 to 120 mm/s, measured through image projections onto the chip. However, as shown in Fig. 8.3b, the motion output saturates for higher image speeds. The general problem of the FT algorithm lies in the fixed width of the triggered pulses. If the pulse width is too small, slow stimuli cannot be detected since the pulses will not overlap. On the other hand, a large pulse width
R. Moeckel and S.-C. Liu
Fig. 8.3 Comparison between time-of-travel motion algorithms. We show the block diagrams of the circuitry (top), the time traces of the internal signals and of the motion output that prefers a stimulus traveling from a photoreceptor on the left to its right neighbor (middle), as well as the dependence of the output signal on the time of travel and the velocity of the traveling stimulus (bottom). Note that while the FI, FT, and FTI algorithms encode the stimulus velocity in the width of an output pulse, the FS and FC algorithms output a value code. F: facilitate, T: trigger, I: inhibit, L: motion output that prefers edges traveling from right to left, R: motion output that prefers edges traveling from left to right, P_w: pulse width, τ: time constant of the fixed-width pulse
can lead to an output that constantly remains “high,” for example, when pulses are triggered periodically by a highly textured and fast stimulus. In this case no distinct output pulse can be detected by the system. Facilitate-trigger-and-inhibit: To overcome the fixed pulse width limitation of the FT algorithm, Kramer introduced the facilitate-trigger-and-inhibit
(FTI) algorithm [24]. The FTI algorithm (Fig. 8.3c) integrates signals from three adjacent pixels and outputs a binary pulse whose width is inversely related to the velocity of contrast edges traveling across the image plane. A contrast edge moving from left to right causes the edge detection circuit at pixel 1 to output a pulse, called the facilitate signal F1. The signal F1 enables the motion
detection circuit to generate the trigger signal T2 that is caused by the same edge traveling across pixel 2. The signal T2 causes the motion circuit to output a pulse R that lasts until the edge reaches pixel 3. Once the edge is detected by pixel 3, it sends an inhibition signal I3 which terminates the output pulse R. The detectable image speed reported in [24] ranges between 0.034 and 60 mm/s. For velocities above 60 mm/s the circuit also shows a motion output in the non-preferred direction due to the limited rise time of the inhibition signal. Facilitate-and-sample: Facilitate-and-sample (FS) aVLSI motion circuits, first described by Kramer et al. [25, 26], use a contrast edge detected by a pixel (Fig. 8.3d,e) to trigger both a small sampling pulse (S1, S2) and a facilitate signal (F1, F2) that decays logarithmically over time. The time of travel across two pixels is determined by the value of F1 sampled by S2 for stimuli moving from left to right, and by the value of F2 sampled by S1 for stimuli moving in the opposite direction. A similar token-based algorithm implemented using discrete components was reported earlier by Franceschini et al. [5, 12]. The FS motion detection chip of Kramer et al. uses a logarithmic encoding for the time of travel (Fig. 8.3e). This encoding allows the motion circuit to represent a time-of-travel range of more than seven orders of magnitude for fixed time constants. This performance is the best reported in the literature so far. However, these motion measurements were done using electronically generated pulses which bypassed the outputs of the pixel edge detection circuits. In addition, due to the highly compressed encoding of the motion output, where a change of 200 mV in the output corresponds to one order of magnitude of change in the time of travel of an edge, 20 mV of noise will lead to a relative error of 10% in the decoding of the time of travel.
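The logarithmic code and its noise sensitivity can be made concrete with a small numerical sketch (the 200 mV-per-decade figure is taken from the text; the 1 ms reference time and the 50 µm interpixel distance are hypothetical round numbers):

```python
import math

V_PER_DECADE = 0.2  # 200 mV of output change per decade of time of travel
T_REF = 1e-3        # hypothetical reference time of travel (1 ms)

def fs_encode(t_tof):
    """Log-compressed FS motion output (volts) for a time of travel t_tof."""
    return V_PER_DECADE * math.log10(t_tof / T_REF)

def fs_decode(v_out):
    """Invert the log code to recover the time of travel."""
    return T_REF * 10 ** (v_out / V_PER_DECADE)

def velocity(t_tof, dx=50e-6):
    """v = dx / t_tof, with dx an assumed interpixel distance of 50 um."""
    return dx / t_tof

# Seven decades of t_tof fit into only 1.4 V of output swing:
swing = fs_encode(1e4 * T_REF) - fs_encode(1e-3 * T_REF)

# A 20 mV offset on the output shifts the decoded t_tof by 0.1 decade,
# i.e. it multiplies the decoded time of travel by 10**0.1:
error_factor = fs_decode(fs_encode(1e-2) + 0.02) / 1e-2
```

The compression that buys the enormous dynamic range is exactly what makes the decoded time of travel sensitive to small voltage errors on the output node.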
The FS algorithm is advantageous over the FI, FT, and FTI algorithms because its motion output can be held on a capacitor for a few milliseconds, while the latter algorithms produce temporally coded motion outputs that need to be constantly monitored. Facilitate-and-compare: The facilitate-and-compare (FC) algorithm is inspired by the FS algorithm [8]. The implemented algorithm (Fig. 8.3f) does not require sample-and-hold circuits to store the sampled values of the facilitate signals. The motion output is computed continuously by subtracting the linearly decaying facilitate signals of adjacent pixels. This difference codes the speed of the stimulus.
This algorithm leads to simpler FC circuits, but the implemented circuits suffer from a trade-off between high sensitivity settings (the facilitate signals decay quickly to ground) and mismatch (the motion output varies over time if the decay rates of the facilitate signals differ). The FC implementation in [8] can measure image velocities in the range from 0.3 to 40 mm/s.
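A minimal simulation of the FC principle (decay rate and signal amplitude are arbitrary illustrative values) shows why the difference of two linearly decaying facilitate signals codes the time of travel, and why mismatched decay rates make the output drift:

```python
def facilitate(t, t_edge, v0=1.0, rate=10.0):
    """Linearly decaying facilitate signal, started when an edge is detected."""
    if t < t_edge:
        return 0.0
    return max(0.0, v0 - rate * (t - t_edge))

def fc_output(t, t_first, t_second, v0=1.0, rate=10.0):
    """Continuous FC motion output: the difference of the facilitate signals
    of two adjacent pixels.  While both signals are decaying, the output
    equals rate * t_tof, i.e. it is inversely related to stimulus speed."""
    return facilitate(t, t_second, v0, rate) - facilitate(t, t_first, v0, rate)

# An edge reaches pixel 1 at t = 0 s and pixel 2 at t = 0.02 s (t_tof = 20 ms);
# the output is constant (rate * t_tof = 0.2) as long as both signals decay:
out_a = fc_output(0.05, 0.0, 0.02)
out_b = fc_output(0.07, 0.0, 0.02)
```

If the two decay rates differ, the two slopes differ and the difference is no longer constant over time, which is the mismatch-induced drift noted above.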
8.3 Promising Motion Chip Architectures

In our opinion, the most interesting motion detection chips so far are the normal flow implementations [32], the gradient-based implementations [44], and our time-of-travel implementation [34]. The normal optic flow implementations [31, 32] make use of front-end APS pixels (similar to the pixels in imagers) for sensing. These clock-based APS circuits allow one to sequentially read out the local intensity values of a row of pixels over a global data bus. The computation of the motion values is done on the periphery of the APS array, so very dense two-dimensional motion arrays can be achieved. The authors report an array size of 120 × 80 APS pixels, a fill-factor of 10%, and an equivalent frame rate of 20 frames per second at a power consumption of 34.5 mW. Note that the normal flow estimation in this implementation is done in a line-sequential way. The mismatch between the outputs of the different APS pixels (also called fixed pattern noise in the imaging literature) is removed by the correlated double sampling (CDS) technique, and since the motion values are computed sequentially row by row, the circuits that perform the motion computation can be shared, thus eliminating another source of mismatch. However, this implementation has drawbacks that are caused partly by the APS implementation and partly by the normal flow algorithm itself. Since the operation of the APS requires a clock, this implementation is susceptible to temporal aliasing, unlike the continuous-time motion detection chips. In addition, the sequential readout of the APS array for computing the motion can become a bottleneck that prevents the detection of fast-moving stimuli. A general problem of the normal optic flow implementation is the need for high-precision division circuits whose output is imprecise for low-contrast
stimuli, especially in the presence of mismatch and noise. These chips cannot produce reliable measurements in low-contrast environments, and a thresholding circuit is needed to eliminate low-contrast signals from the flow computation. Moreover, APS pixels do not have local gain control properties like the adaptive photoreceptor circuit [7] and the one with selectable adaptation time constants [28], which allow circuit operation over a wide range of background lighting conditions. These photoreceptor circuits are widely used as the front end in many motion detection chips, including the remaining two implementations described in this section. The gradient-based motion implementation in [45] provides local motion values and uses the spatial integration of these local values to address the aperture problem. It allows the local adaptation of the smoothing across motion pixels, thus supporting motion segmentation. However, the need for linear multiplication circuits makes the chip vulnerable to noise and mismatch. Due to the strong dependence of the motion output on contrast for values below 50%, the chip is not usable in low-contrast environments. The smoothing and multiplication circuits also require many transistors, leading to a reported pixel fill-factor of 4%. However, the smoothing and segmentation properties of the motion computation make this an interesting network with cooperation and competition properties.
In our time-of-travel implementation, we combined the FS algorithm with a novel implementation of a front-end two-threshold edge detection circuit that allows the robust detection of features with contrast down to 2.5%. In contrast to the logarithmic encoding of the motion output used by Kramer et al., we use a linear encoding for the motion values [34]. This linear code is obtained by letting the facilitate signals F1 and F2 decay linearly over time (Fig. 8.3d). Figure 8.4a shows the measured motion output of all 24 motion pixels in response to a drifting sinusoidal grating with a contrast C of 10% displayed on a passive LCD. The stimulus contrast is defined as C = (Imax − Imin)/(Imax + Imin) × 100%. As predicted theoretically, the linear encoding of the speeds leads to a saturation of the signal for high stimulus speeds. The chip is sensitive to speed values over a range of approximately two orders of magnitude (approximately 0.1–10 mm/s) for a given time constant. The global decay rate of the facilitate signals can be adjusted in real time, for example, through an off-chip controller, to match the motion speeds in the image. Figure 8.4a also shows the mismatch profile of the motion detection pixels. This mismatch originates in the fabrication of the chip. However, the offset and decay rate differences in the motion output across pixels due to this mismatch can easily be removed off-chip. As mentioned above, this implementation uses the adaptive photoreceptor circuit by Delbrück and
Fig. 8.4 Motion output of our facilitate-and-sample motion detection chip [34]. (a) Average motion output of all 24 motion pixels for different stimulus velocities for a particular setting of the decay rate of the facilitate signal. (100°/s is equivalent to 10 mm/s for a 6 mm focal length lens.) For this particular decay rate, the range of velocities detected is less than the maximum possible range of two orders of magnitude achievable with a slower decay rate. The motion outputs across the array for a set of fixed stimulus
velocities (see projected lines on the x–y plane) show the typical sinusoidal mismatch pattern caused by the aVLSI fabrication process. (b) Motion output for different contrast values of the stimuli for a single pixel. While the peak-to-peak value of the photoreceptor output decreases for lower contrast stimuli, the average motion output remains constant for contrast values down to 2.5%
Mead [7]. This circuit adapts locally over six decades of background light intensity, thus allowing reliable coding of the motion outputs under conditions ranging from moonlight to sunlight. This adaptation property is of great benefit when the chip is operated in natural surroundings with a wide spatial distribution of light intensities, for example, when sunlight streams into part of a room and the remaining areas are in shadow. To operate reliably in low-contrast environments and to provide a motion output that is invariant over a large range of spatial and temporal frequencies, we combined the circuit originally proposed by Liu [29] to model the laminar monopolar cell (LMC) in the fly visual system with a novel two-threshold edge detection circuit. Instead of a single threshold, where any signal above this threshold is considered a contrast edge, we require that the signal move across two thresholds which are about 100 mV apart, thus ensuring that contrast edges are not randomly generated by a noisy signal hovering around the threshold. The output of the LMC circuit, which is a high-gain, high-pass filtered version of the photoreceptor output, is compared against two adjustable thresholds. By adjusting these thresholds, we are able to reliably detect low-contrast edges and to filter out noise from, for example, light flicker. The adjustable distance between the thresholds allows us to trade off between the detection of low-contrast stimuli and the filtering out of noise. A small distance between the thresholds supports the detection of low-contrast stimuli but is more affected by noise, while a larger distance results in a system that is more robust against noise but less sensitive to low-contrast stimuli. In our measurements, we placed the thresholds just far enough apart to ensure that noise alone does not trigger an output pulse at the edge detection circuit.
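In software terms, the two-threshold scheme behaves like a Schmitt trigger with hysteresis; the following sketch (thresholds in arbitrary units, not the roughly 100 mV spacing of the chip) illustrates why a signal jittering around a single level cannot fire it:

```python
def two_threshold_edges(signal, v_low, v_high):
    """Report an edge event only when the signal traverses the whole band
    between the two thresholds; it must fall below v_low again before a
    new event can be triggered (hysteresis)."""
    events = []
    armed = False  # armed only after the signal has been below v_low
    for i, v in enumerate(signal):
        if armed and v > v_high:
            events.append(i)
            armed = False
        elif v < v_low:
            armed = True
    return events

clean = [0.0, 0.05, 0.2, 0.05, 0.0]           # sweeps across both thresholds
noisy = [0.09, 0.11, 0.09, 0.11, 0.09, 0.11]  # jitters around a single level
```

With v_low = 0.08 and v_high = 0.12, the clean sweep yields exactly one event, while the noisy trace, which would repeatedly cross a single threshold placed at 0.1, yields none; narrowing the band restores sensitivity but lets noise fire events, exactly the trade-off described above.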
Figure 8.4b shows both the peak-to-peak photoreceptor output and the motion output of a single motion pixel in response to a sinusoidal drifting grating of constant speed and different contrast values presented on an LCD screen. While the peak-to-peak photoreceptor output monotonically decreases with lower stimulus contrast, the motion output stays constant. Due to the limitations of the LCD screen, we could not reliably decrease the stimulus contrast below 2.5%. The chip provides real-time computation of one-dimensional optical flow and has high temporal and spatial resolution. The printed circuit board holding
this chip weighs less than a gram and consumes only a few milliwatts of power. As with many of the motion chip implementations, a drawback of the current implementation is that it is a linear array, which means that the extracted motion values are susceptible to the aperture problem. However, this one-dimensional optical flow can be sufficient for steering a micro-aerial vehicle [48].
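As a purely illustrative sketch (not the controller used in [48]), a one-dimensional flow array such as this one can drive a simple flow-balance steering rule: turn away from the side that sees more optic flow, since nearby obstacles generate larger translational flow. The 24-pixel array size matches the chip described above; the gain and the balance strategy are hypothetical.

```python
def yaw_command(flow, gain=0.5):
    """Balance the average optic flow magnitude seen by the left and right
    halves of a linear motion-pixel array; a positive command steers right,
    away from the side with stronger (i.e. closer) flow."""
    half = len(flow) // 2
    left = sum(abs(f) for f in flow[:half]) / half
    right = sum(abs(f) for f in flow[half:]) / (len(flow) - half)
    return gain * (left - right)

# A wall close on the left produces stronger flow in the left half:
cmd = yaw_command([2.0] * 12 + [1.0] * 12)
```

Averaging over each half of the array also mitigates the aperture problem somewhat, since only the balance of flow magnitudes, not the exact local flow vectors, enters the command.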
8.4 Summary

Over the last two decades, more than 20 different aVLSI monolithic motion detection chips have been reported. The algorithms implemented on these chips can be divided into more than 14 sub-categories. Because of the wide range of algorithms and implementations, an unbiased comparison to determine the best motion chip to use on a robotic platform is difficult. However, there are several points to consider when designing or using these chips as a sensor on a mobile robot: (a) to operate in natural environments and without needing high-contrast textures, the chip should respond to low-contrast stimuli reliably and unambiguously; (b) the motion chip outputs should have low mismatch, or the mismatch must at least be easily removable in the readout, for example, through calibration; (c) the chip should operate reliably in the presence of noise, for example, noise on the power supply lines as well as noise in the visual stimuli such as light flicker; (d) the pixel design should be compact and should support a large two-dimensional spatial array; and (e) finally, the chip should work robustly under different background light intensities. To decide on a particular motion algorithm and its subsequent implementation, one has to clearly identify the advantages and disadvantages that come from the motion algorithm itself and those that come from the VLSI circuit implementation. For example, most of the motion detection chips reported so far show a high dependence of the motion output on the stimulus contrast. While this contrast sensitivity can easily be explained by the algorithm itself for intensity-based correlation implementations, there is no theoretical reason why an FT implementation should respond more ambiguously to low-contrast features than an FS implementation. In this case, the ambiguous response of the FT motion chip stems from the implementation of the edge detection circuits.
The compactness of a motion detection chip depends on both the underlying algorithm and the implementation. For most analog implementations a trade-off has to be made between design compactness and mismatch. By increasing the area of the transistors the matching can be improved, because the circuits are less affected by variations in the fabrication process. On the other hand, an increase in transistor area leads to a reduction of the pixel fill-factor and a less compact design. In general it is very difficult to compare the circuit compactness of different motion detection chips. In the literature, specifications of the fill-factor, pixel area, and transistor count of a design are usually used for comparison. However, these numbers have to be treated with great care, since the transistor sizes can vary greatly between implementations, not only because of matching considerations but also because the chips were fabricated in different processes. The comparison of designs is more meaningful if the chips are fabricated in the same process. A better measure for comparing compactness might be the number of circuit elements needed for the implementation. However, even the number of transistors and capacitors can only give a rough basis for comparison, since the size of the transistors varies considerably depending on how they are operated, that is, whether they are “digital” or “analog” transistors; and the capacitors vary greatly in size depending on the required time constant and the process in which they are fabricated. Furthermore, there are a number of circuits that implement the same basic function but with different dynamic ranges for the input and/or output. As an example, a circuit designer can implement a voltage buffer as a 2-transistor source follower circuit, a 5-transistor operational amplifier circuit, or a 10-transistor wide-range linear amplifier, which has the largest input linear range of the three possibilities.
A comparison of the mismatch characteristics in the motion outputs of different chips in the literature is often not possible because these data are usually not available. Hence, one can only speculate on the performance of a chip in the presence of fixed pattern noise. In general, one prefers motion chip implementations with low mismatch, which can be achieved by using, for example, on-chip compensation techniques. Another way to deal with mismatch is to use circuits where the mismatch appears only as an offset or gain shift in the output transfer function of the motion circuits. This
offset and gain mismatch can then easily be compensated off-chip. To achieve robustness to noise, a circuit designer has to consider the different possible noise sources. For example, we use the low-pass characteristics of the photoreceptor circuit to remove lighting flicker noise. The presence of noise also affects the performance of motion algorithms that require high-precision analog computations like division, multiplication, and taking derivatives. Implementations that compute motion based on pulses, like time-of-travel algorithms and token-based correlation algorithms, have shown good robustness against noise. Based on the considerations discussed above, we identified three promising monolithic motion chip architectures for flying platforms in Sect. 8.3. Although we have focused only on monolithic chip implementations in this chapter, some of the motion algorithms can also be implemented using a two-chip system consisting of an imager and a processor, albeit at the cost of speed and weight, for example, the systems on the robotic platforms in Chaps. 3, 5, 6, and 9. As mentioned before, many of the chips implemented so far have a linear architecture, which makes the motion outputs susceptible to the aperture problem. Future solutions like the use of three-dimensional technology, a two-chip system, or implementations of alternative methods like the image interpolation methods proposed by Srinivasan [42, 43] can help to alleviate this problem. We believe that aVLSI motion chips have important properties that allow them to provide useful optic flow information, even under low-contrast and noisy conditions, for the navigation of autonomous flying vehicles. The technology for these circuits is improving fast, and we will likely soon see these chips used on micro-aerial vehicles, leading to a new generation of small autonomously flying robots with vision.
References

1. Adelson, E., Bergen, J.: Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A 2, 284–299 (1985)
2. Andreou, A., Strohbehn, K., Jenkins, R.: Silicon retina for motion computation. IEEE International Symposium on Circuits and Systems pp. 1373–1376 (1991)
3. Barlow, H.B., Levick, W.R.: The mechanism of directionally selective units in rabbit’s retina. Journal of Physiology 178, 477–504 (1965)
4. Benson, R., Delbruck, T.: Direction selective silicon retina that uses null inhibition. Advances in Neural Information Processing Systems 4, 756–763 (1992)
5. Blanes, C.: Appareil visuel pour la navigation vue d’un robot mobile. MS thesis in neuroscience, Univ. of Aix-Marseille II, Marseille, France (1986)
6. Delbruck, T.: Silicon retina with correlation-based, velocity-tuned pixels. IEEE Transactions on Neural Networks 4, 529–541 (1993)
7. Delbruck, T., Mead, C.: Adaptive photoreceptor circuit with wide dynamic range. IEEE International Symposium on Circuits and Systems IV, 339–342 (1994)
8. Deutschmann, R.A., Koch, C.: Compact analog VLSI 2-D velocity sensor. IEEE International Conference on Intelligent Vehicles 1, 359–364 (1998)
9. Deutschmann, R.A., Koch, C.: Compact real-time 2-D gradient based analog VLSI motion sensor. Proceedings of SPIE International Conference on Advanced Focal Plane Array and Electronic Cameras II 3410, 98–108 (1998)
10. Egelhaaf, M., Borst, A.: Motion computation and visual orientation in flies. Comparative Biochemistry and Physiology 104A, 659–673 (1993)
11. Etienne-Cummings, R., Fernando, S., Takahashi, N., Shtonov, V., Van der Spiegel, J., Mueller, P.: A new temporal domain optical flow measurement technique for focal plane VLSI implementation. IEEE Proceedings on Computer Architectures for Machine Perception pp. 241–250 (1993)
12. Franceschini, N., Blanes, C., Oufar, L.: Appareil de mesure passif et sans contact de la vitesse d’un objet quelconque. Dossier technique Nb 51549, ANVAR/DVAR, Paris (1986)
13. Franceschini, N., Pichon, J.M., Blanes, C.: From insect vision to robot vision. Philosophical Transactions of the Royal Society of London, Series B 337, 283–294 (1992)
14. Gottardi, M., Yang, W.: A CCD/CMOS image motion sensor. International Solid State Circuits Conference pp. 194–195 (1993)
15. Gruev, V., Etienne-Cummings, R.: Active pixel sensor with on-chip normal flow computation on the read out. IEEE International Conference on Electronics, Circuits, and Systems pp. 215–218 (2004)
16. Harrison, R.R.: A biologically-inspired analog IC for visual collision detection. IEEE Transactions on Circuits and Systems I 52, 2308–2318 (2005)
17. Harrison, R.R., Koch, C.: A robust analog VLSI motion sensor. Autonomous Robots 7, 211–224 (1999)
18. Hassenstein, B., Reichardt, W.: Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus. Zeitschrift für Naturforschung 11b, 513–524 (1956)
19. Hausen, K.: Motion sensitive interneurons in the optomotor system of the fly. II. The horizontal cells – Receptive field organization and response characteristics. Biological Cybernetics 46(1), 67–79 (1982)
20. Higgins, C., Korrapti, S.: An analog VLSI motion energy sensor based on the Adelson-Bergen algorithm.
International ICSC Symposium on Biologically-Inspired Systems (2000)
21. Higgins, C.M., Pant, V., Deutschmann, R.: Analog VLSI implementation of spatio-temporal frequency tuned visual motion algorithms. IEEE Transactions on Circuits and Systems I 52(3), 489–502 (2005)
22. Horiuchi, T., Lazzaro, J., Moore, A., Koch, C.: A delay line based motion detection chip. Advances in Neural Information Processing Systems 3, 406–412 (1991)
23. Horn, B., Schunck, B.: Determining optical flow. Artificial Intelligence 17, 185–203 (1981)
24. Kramer, J.: Compact integrated motion sensor with three-pixel interaction. IEEE Transactions on Pattern Analysis and Machine Intelligence 18, 455–460 (1996)
25. Kramer, J., Sarpeshkar, R., Koch, C.: An analog VLSI velocity sensor. IEEE International Symposium on Circuits and Systems pp. 413–416 (1995)
26. Kramer, J., Sarpeshkar, R., Koch, C.: Pulse-based analog VLSI velocity sensors. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 44(2), 86–101 (1997)
27. Krapp, H., Hengstenberg, R.: Estimation of self-motion by optic flow processing in single visual interneurons. Nature 384, 463–466 (1996)
28. Liu, S.C.: Silicon photoreceptors with controllable adaptive filtering properties. In: G. Cauwenberghs, M. Bayoumi (eds.) Learning on Silicon, pp. 67–78. Kluwer Academic Publishers, Norwell, MA (1999)
29. Liu, S.C.: A neuromorphic aVLSI model of global motion processing in the fly. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 47(12), 1458–1467 (2000)
30. Lyon, R.: The optical mouse, and an architectural methodology for smart digital sensors. CMU Conference on VLSI Structures and Computations pp. 1–19 (1981)
31. Mehta, S., Etienne-Cummings, R.: Normal optical flow chip. IEEE International Symposium on Circuits and Systems 4, 784–787 (2003)
32. Mehta, S., Etienne-Cummings, R.: A simplified normal optical flow CMOS camera. IEEE Transactions on Circuits and Systems I 53(6), 1223–1234 (2006)
33. Meitzler, R.C., Strohbehn, K., Andreou, A.G.: A silicon retina for 2-D position and motion computation. IEEE International Symposium on Circuits and Systems III, 2096–2099 (1995)
34. Moeckel, R., Liu, S.C.: Motion detection circuits for a time-to-travel algorithm. IEEE International Symposium on Circuits and Systems pp. 3079–3082 (2007)
35. Pant, V., Higgins, C.M.: A biomimetic focal plane speed computation architecture. Proceedings of the Computational Optical Sensing and Imaging Conference (2007)
36. Reichardt, W.: Autocorrelation, a principle for the evaluation of sensory information by the central nervous system. Sensory Communication pp. 303–317 (1961)
37. Reichel, L., Liechti, D., Presser, K., Liu, S.C.: Range estimation on a robot using neuromorphic motion sensors. Robotics and Autonomous Systems 51(2–3), 167–174 (2005)
38. van Santen, J., Sperling, G.: A temporal covariance model of motion perception. Journal of the Optical Society of America A 1, 451–473 (1984)
39. Sarpeshkar, R., Bair, W., Koch, C.: Visual motion computation in analog VLSI using pulses. Advances in Neural Information Processing Systems 5, 781–788 (1993)
40. Schiller, P.H., Finlay, B.L., Volman, S.F.: Quantitative studies of single-cell properties in monkey striate cortex. I. Spatiotemporal organization of receptive fields. Journal of Neurophysiology 39, 1288–1319 (1976)
41. Serrano-Gotarredona, T., Linares-Barranco, B.: A new 5-parameter MOS transistor mismatch model. IEEE Electron Device Letters 21(1), 37–39 (2000)
42. Srinivasan, M.V.: Generalized gradient schemes for the measurement of two-dimensional image motion. Biological Cybernetics 63, 421–431 (1990)
43. Srinivasan, M.V.: An image-interpolation technique for the computation of optic flow and egomotion. Biological Cybernetics 71, 401–416 (1994)
44. Stocker, A., Douglas, R.: Computation of smooth optical flow in a feedback connected analog network. Advances in Neural Information Processing Systems 11, 706–712 (1999)
45. Stocker, A.A.: An improved 2D optical flow sensor for motion segmentation. IEEE International Symposium on Circuits and Systems 2, 332–335 (2002)
46. Tanner, J., Mead, C.: An integrated analog optical motion sensor. VLSI Signal Processing 2, 59–76 (1986)
47. Ullman, S.: Analysis of visual motion by biological and computer systems. IEEE Computer 14, 57–69 (1981)
48. Zufferey, J., Klaptocz, A., Beyeler, A., Nicoud, J., Floreano, D.: A 10-gram microflyer for vision-based indoor navigation. IEEE/RSJ International Conference on Intelligent Robots and Systems pp. 3267–3272 (2006)
Chapter 9
Insect-Inspired Odometry by Optic Flow Recorded with Optical Mouse Chips Hansjürgen Dahmen, Alain Millers, and Hanspeter A. Mallot
Abstract Inspired by investigations of the eye structure and visually controlled behaviour of water striders (Gerridae), and by subsequent simulations of self-motion estimation from optic flow, a device is presented that extracts self-motion parameters exclusively from flow. Optical mouse chips fitted with suitable lenses serve as motion sensors. Pairs of sensors with opposite lines of sight are mounted on a sensor head, and the optical axes of the sensor pairs are distributed over the largest possible solid angle. The device is fast, cheap and light. The calibration procedure and tests of the precision of self-motion estimates in outdoor experiments are reported.
9.1 Introduction Optic flow (OF) is used in many insects for flight control [27]. Bees control landing, flight speed and travelling distance by OF [1, 24, 23, 25]. There is a vast literature on the influence of OF in various species of flies on the control of flight speed, chasing behaviour and turning responses [8, 9, 13, 10]. The present work was inspired by findings on the visually controlled behaviour of water striders (Gerris lacustris, Gerris paludum). Water striders compensate precisely for self-rotation and self-translation by separate responses [16, 17]. Their eye is divided into three morphologically and functionally distinguished zones, a dorsal, horizontal and ventral one [5]. The
H. Dahmen () Cognitive Neurosciences, University of Tübingen, Germany e-mail:
[email protected]
self-rotation response is limited to the dorsal part, which comprises about 30% of the ca. 800 ommatidia of one eye. In the dorsal eye we find relatively large interommatidial angles of more than 10°. The good self-rotation compensation led us to investigate precision limits for the estimation of self-motion parameters from OF in a static surround under various environmental conditions using different parts of the visual field [6, 7]. In these studies the algorithm proposed by Koenderink and van Doorn [18] for spherical wide-field eyes was used. The result was that self-motion can be extracted from flow to a surprisingly high precision if flow can be observed by detector pairs which look along opposite lines of sight and if these pairs are distributed over a large solid angle. Under these conditions only a few properly combined flow observations are necessary. The extraction of self-motion from flow by matched filters, inspired by observations on wide-field neurons in flies, has been investigated by Franz and Krapp [12]. Besides the interest in the exploitation of OF in animals, much work in robotics is devoted to revealing the structure of the environment from flow induced by self-motion and/or to determining and controlling self-motion by OF [28, 11]. For the extraction of self-motion from flow, Baker and Pless showed that an omnidirectional view helps a lot to eliminate ambiguities in the evaluation of self-rotation and self-translation [2, 21]. Various catadioptric systems have been developed to realize omnidirectional vision with a single camera [4, 19, 15]. The approach most similar to ours is the so-called argus eye [2, 3]. In this system several cameras with non-overlapping visual fields looking into opposite directions cooperate to reveal self-motion parameters and the structure of the environment. Besides the difficulty of calibrating such a system, one has to deal
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_9, © Springer-Verlag Berlin Heidelberg 2009
with the integration of the output of several cameras [21]. We wanted to test a much simpler and faster setup monitoring flow by commercially available dedicated flow detectors. With the development of optical mouse chips cheap (2.5 Euro), light (0.5 g) and fast (response time < 1 ms) flow detectors have become available. Here we present a hardware realization of an odometer driven solely by flow measurements along a few lines of sight in space.
9.2 Hardware Implementations Two prototypes of sensor heads have been constructed so far. Head A (Fig. 9.1) is designed for ground-moving robots whose distance to the ground does not change much. These vehicles are assumed to have only 2 degrees of freedom (DOF) of self-motion: yaw and
Fig. 9.1 Head A contains eight optical mouse sensors (ADNS2620, Avago) looking down to the ground at about −45° relative to the horizon. Each sensor is equipped with an adjustable plastic collimator lens (CAY046, Philips) of f = 4.6 mm focal length which images the floor onto the light-sensitive area of the sensor
H. Dahmen et al.
translation along their long axis. Head B (Fig. 9.2) is intended to extract all the possible information about self-motion that can be obtained from optic flow: 3 DOF of rotation, (as is well known, see Sect. 9.4) only the direction of translation (2 DOF) but not its absolute size, and the 'relative nearness' of the environment seen by each sensor. The mouse sensors ADNS2620 sample the light intensity on their 1 × 1 mm array of 18 × 18 light-sensitive diodes about 1500 times/s. The focal length of the lens (f = 4.6 mm) and the size of the diode array determine the angular size of the visual field of each sensor: 12.4° × 12.4°. A fast on-chip digital signal processor (DSP) correlates the patterns of two consecutive samplings and evaluates the displacement between them. In order to avoid too large displacements between two images, the maximum allowed speed of the pattern on the chip's light-sensitive surface is limited to 30 cm/s. This limits the maximum rotation speed of the sensors to (300 × 180/π)/f [°/s] (i.e. 3737 °/s). When the viewing distance to the ground is D [cm], the maximum translational speed is (300/f)·D = 65.22·D [cm/s] (i.e. about 980 cm/s for D = 15 cm). The minimal displacement detectable on the sensor surface is 1/16 mm. With the focal length of 4.6 mm this leads to an angular resolution of 0.778° and, at a distance of 15 cm, to a minimal detectable displacement of 2.04 mm. A microprocessor (μP) (CY7C68013A-56P, Cypress) continuously reads information from all sensors in parallel and synchronously (Fig. 9.3). The information consists of three bytes, in the order dY, dX, SQ. dY and dX are the pattern displacements along each sensor's Y/X-axis since the last reading; SQ is a 'quality' byte which indicates the 'number of features' detected in the sensor image while correlating. If SQ falls below a selectable threshold, dY and dX may be unreliable and are discarded.
Reading the information from all sensors (strictly in parallel) and transferring them via an USB1.1 bulk transfer to the PC costs less than 2 ms.
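The optical geometry above fixes all the performance figures quoted in this section. The following short plain-Python check (all values taken from the text) reproduces the field of view, the angular resolution and the speed limits from the focal length, the array size and the minimum detectable displacement:

```python
import math

f_mm = 4.6            # focal length of the collimator lens
array_mm = 1.0        # side length of the 18 x 18 photodiode array
min_disp_mm = 1 / 16  # smallest displacement detectable on the chip
v_max_mm_s = 300.0    # maximum allowed pattern speed on the chip (30 cm/s)

# Angular field of view (small-angle approximation): array size / f
fov_deg = math.degrees(array_mm / f_mm)            # ~12.4 deg

# Angular resolution: minimum detectable displacement / f
res_deg = math.degrees(min_disp_mm / f_mm)         # ~0.778 deg

# Maximum rotation speed: pattern speed limit / f
rot_max_deg_s = math.degrees(v_max_mm_s / f_mm)    # ~3737 deg/s

# Translational limits scale with the viewing distance D
D_cm = 15.0
trans_max_cm_s = (v_max_mm_s / f_mm) * D_cm        # ~980 cm/s
min_ground_disp_mm = (min_disp_mm / f_mm) * D_cm * 10.0  # ~2.04 mm

print(fov_deg, res_deg, rot_max_deg_s, trans_max_cm_s, min_ground_disp_mm)
```

Note that the translational limits are simply the angular limits multiplied by the viewing distance, which is why the maximum translational speed and the minimum detectable ground displacement both grow linearly with D.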
9.3 Calibration In order to use these heads for odometry, the line of sight, the X/Y-orientation relative to the head coordinate system and the sensitivity of each sensor have to
9
Odometry from Flow by Mouse Chips
Fig. 9.2 Head B contains two sets of eight sensors each, one above and the other below the aluminium plate in (a). The eight sensors of the upper set look toward the horizon, about 45° of azimuth apart; only four can be seen in (b). The lower set looks down to the ground in a similar way as in head A (c). The eight horizontally looking sensors are equipped with the same, but non-adjustable, lenses as the down-looking sensors
be determined. In order to reveal these parameters the heads have to undergo a calibration procedure.

Fig. 9.3 Circuit diagram of the odometer. A microprocessor (μP) (CY7C68013A-56P, Cypress) reads information continuously from all sensors in parallel via two serial lines to each sensor: a clock line and a data line. The clock line is common to all sensors; the data line of each sensor is connected to an individual I/O pin on the μP. The μP is connected to a PC via USB

9.3.1 Head A

For the eight-sensor head A the task is very simple: The head is moved along a defined straight line or rotated around a defined angle and the output of all sensors is registered (Fig. 9.4). From these responses we get for each sensor i 'unit' responses a^i = (a_x^i, a_y^i)′ (′ means transposed) to 1 cm of translation and b^i = (b_x^i, b_y^i)′ to 1° of rotation. The response c^i = (c_x^i, c_y^i)′ to a combined motion of τ cm of translation and ρ degrees of yaw is given by

c^i = τ a^i + ρ b^i   (9.1)

9.3.2 Head B

In order to determine the orientation and sensitivity of each sensor in the coordinate system of head B, the following calibration procedure was proposed by A. Schilling (personal communication). We rotate head B about its X-, Y-, Z-axis by defined angles and read the sensors' output. Let (X, Y, Z) be the sensor head coordinate system and (X_s^i, Y_s^i, Z_s^i) the ith sensor coordinate system, with Z_s^i pointing along the line of sight. Let the rotation matrix R^i describe the rotation of (X, Y, Z) into (X_s^i, Y_s^i, Z_s^i):

(X_s^i, Y_s^i, Z_s^i)′ = R^i (X, Y, Z)′;   R^i = [r_11^i r_12^i r_13^i; r_21^i r_22^i r_23^i; r_31^i r_32^i r_33^i]   (9.2)
A rotation ω_s^i = (ω_xs^i, ω_ys^i, ω_zs^i)′ in the ith sensor's coordinate system leads to a sensor response c^i (there is no response to ω_zs^i):

c^i = (c_x^i, c_y^i)′ = f^i (−ω_ys^i, ω_xs^i)′ = f^i [0 −1 0; 1 0 0] R^i ω   (9.3)

f^i reflects the 'sensitivity' of the sensor i (proportional to the focal length of its lens). The rotation in the
Fig. 9.4 The X/Y-response of the eight sensors of head A to (a) 15 cm of straight translation and (b) a pure 45° counterclockwise rotation. The smaller points near the sensor numbers mark the sensor response at the start and the larger points the response at the end of the movement. From responses to larger translations and rotations we extract for each sensor i 'unit' responses a^i to 1 cm of translation and b^i to 1° of rotation
sensor coordinate system is given by ω_s^i = R^i ω, with ω being the rotation in the head coordinate system. Choosing ω along the X-, Y-, Z-axis, respectively, with rotation angle φ reveals the first two rows of R^i:

ω = φ (1, 0, 0)′ ⇒ c^i = φ f^i (−r_21^i, r_11^i)′
ω = φ (0, 1, 0)′ ⇒ c^i = φ f^i (−r_22^i, r_12^i)′
ω = φ (0, 0, 1)′ ⇒ c^i = φ f^i (−r_23^i, r_13^i)′

In Sect. 9.4 it will be shown that f^i and the third row of R^i are not needed for odometry.

To calibrate head B, it was mounted on a Spindler & Hoyer micro-optical bench. The bench was then fixed on a cylinder which could be rotated by precisely controlled angles about the vertical axis by a stepper motor (1067 steps / 360°) (Fig. 9.5). The head's X-, Y- and Z-axis could be precisely oriented along the rotation axis. The head was surrounded by a contrasted environment. In particular head orientations some of the sensors were always blocked by the bench from seeing the contrasted environment, but there were always other head orientations which allowed monitoring the sensor responses to head rotation and thus calibrating the response of each sensor. Calibration means determining the matrix elements of R^i. The X/Y-response of all sensors (except the blocked ones) versus step count is a straight line, the slope of which reflects the sensitivity of each sensor to the actual rotation. Figure 9.5c,d shows as an example the X/Y-response for the calibration about the Z-axis. The inset shows the enlarged response. Because of the X/Y-orientation of the sensors, rotation around the Z-axis leads only to a very small Y-response (Fig. 9.5d, note the scale). The inset of Fig. 9.5c and Fig. 9.5d demonstrate the monotonic response of the sensors to small monotonic motion. Note the motor step size of 0.337° in comparison to the minimum detectable angular displacement of a single sensor of about 0.778° (see Sect. 9.2). There is no additional noise visible. We obtain (r_23^i, r_13^i) from the slope in Fig. 9.5c,d. In a similar way we evaluate (r_21^i, r_11^i) from responses to rotation about the head's X-axis and (r_22^i, r_12^i) from rotation about the Y-axis.
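The effect of the three calibration rotations can be checked in simulation. The numpy sketch below is an illustration with an arbitrarily chosen example orientation R and sensitivity f (not the authors' calibration code): it generates the responses of Eq. (9.3) for rotations about the three head axes and reads the first two rows of R, scaled by f, back from the slopes.

```python
import numpy as np

def sensor_response(R, f, omega):
    """Response of Eq. (9.3): c = f * [0 -1 0; 1 0 0] @ R @ omega."""
    M = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0]])
    return f * M @ R @ omega

# Example sensor orientation (rotation about the head's Z-axis) and sensitivity
a = np.radians(30.0)
R = np.array([[np.cos(a), np.sin(a), 0.0],
              [-np.sin(a), np.cos(a), 0.0],
              [0.0, 0.0, 1.0]])
f = 4.6                   # 'sensitivity' (proportional to focal length)
phi = np.radians(5.0)     # calibration rotation angle

rows = np.zeros((2, 3))
for k, axis in enumerate(np.eye(3)):   # rotate about X, Y, Z in turn
    c = sensor_response(R, f, phi * axis)
    rows[1, k] = -c[0] / phi           # slope of X-response gives f * r_2k
    rows[0, k] = c[1] / phi            # slope of Y-response gives f * r_1k

# The slopes reproduce the first two rows of R, scaled by the sensitivity f
assert np.allclose(rows, f * R[:2, :])
```

As stated in the text, neither f nor the third row of R can be recovered this way, and neither is needed for the odometry.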
9.4 Odometry Self-motion can be decomposed at each moment uniquely into a self-rotation ω and a self-translation T. From flow alone one can only hope to retrieve the self-rotation ω (3 DOF), the direction of self-translation t (2 DOF) and T/d_i, the ratio of speed over the distance to contrast in the line of sight of each sensor, the so-called 'relative nearness' [18].
Fig. 9.5 Head B was mounted on a Spindler & Hoyer micro-optical bench (b). The bench was mounted on a cylinder which could be rotated about the vertical axis in a contrasted environment by a stepper motor (1067 steps/360°) (a). The head's X-, Y- and Z-axis could be precisely oriented along the rotation axis. The X-response of the 16 sensors to ccw rotation about the head's Z-axis is shown in (c). Some of the sensors were visually blocked by the optical bench in this special head orientation (bhs); hs indicates the X-responses of the horizontally oriented sensors, vs the responses of the sensors oriented downwards. The inset shows the enlarged response. Because of the sensor orientation, rotation around the Z-axis leads only to a very small Y-response (d) (note the scale). The sensor responses in (d) as well as in the inset in (c) demonstrate the monotonic response of the sensors to small monotonic motion. For further explanation see text

9.4.1 Head A

In the case of only 2 DOF the parameters τ of translation along the X-axis, T = (τ, 0, 0)′, and ρ of yaw, ω = (0, 0, ρ)′, can easily be extracted by minimizing E, the sum of squared deviations

E = Σ_i (c^i − τ a^i − ρ b^i)²   (9.4)

(c^i = sensor response, a^i = 'unit' response to 1 cm of translation, b^i = 'unit' response to 1° of yaw). This leads to the best linear fit for τ and ρ:

τ = ( Σ_i a^i·c^i Σ_i (b^i)² − Σ_i a^i·b^i Σ_i b^i·c^i ) / γ   (9.5)

ρ = ( Σ_i b^i·c^i Σ_i (a^i)² − Σ_i a^i·b^i Σ_i a^i·c^i ) / γ   (9.6)

with

γ = Σ_i (a^i)² Σ_i (b^i)² − ( Σ_i a^i·b^i )²   (9.7)
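Equations (9.5), (9.6) and (9.7) are the closed-form least-squares solution of (9.4). The numpy sketch below uses synthetic, randomly chosen 'unit' responses (not the measured responses of head A) and recovers a simulated (τ, ρ) exactly from noise-free responses:

```python
import numpy as np

rng = np.random.default_rng(0)

# 'Unit' responses of 8 sensors: a_i to 1 cm translation, b_i to 1 deg yaw
a = rng.normal(size=(8, 2))
b = rng.normal(size=(8, 2))

tau_true, rho_true = 3.0, 7.0                  # simulated self-motion
c = tau_true * a + rho_true * b                # Eq. (9.1), noise-free

# Eqs. (9.5)-(9.7): best linear fit for tau and rho
Saa = np.sum(a * a)
Sbb = np.sum(b * b)
Sab = np.sum(a * b)
Sac = np.sum(a * c)
Sbc = np.sum(b * c)
gamma = Saa * Sbb - Sab ** 2                   # Eq. (9.7)
tau = (Sac * Sbb - Sab * Sbc) / gamma          # Eq. (9.5)
rho = (Sbc * Saa - Sab * Sac) / gamma          # Eq. (9.6)

assert np.allclose([tau, rho], [tau_true, rho_true])
```

With noisy responses the same formulas return the least-squares estimate instead of the exact values; γ ≠ 0 as long as the translation and rotation response patterns are not proportional to each other.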
9.4.2 Head B

The response c^i of sensor i to a self-motion of combined translation T = (T_x, T_y, T_z)′ and rotation ω = (ω_x, ω_y, ω_z)′ is given by

c^i = (c_x^i, c_y^i)′ = f^i [0 −1 0; 1 0 0] R^i ω + (f^i/d^i) [1 0 0; 0 1 0] R^i T   (9.8)

With

A^i = f^i [1 0 0; 0 1 0] R^i = f^i [r_11^i r_12^i r_13^i; r_21^i r_22^i r_23^i]   (9.9)

and

B^i = f^i [0 −1 0; 1 0 0] R^i = f^i [−r_21^i −r_22^i −r_23^i; r_11^i r_12^i r_13^i]   (9.10)

c^i can be rewritten as

c^i = B^i ω + (1/d^i) A^i T   (9.11)

Minimizing E = Σ_i (c^i − ĉ^i)² (ĉ^i = measured responses) leads to 'best-fit' estimates for ω, T and 1/d^i:

ω = Λ Σ_i B^i′ (ĉ^i − (1/d^i) A^i T)   with   Λ = ( Σ_i B^i′ B^i )^−1   (9.12)

T = K Σ_i (1/d^i) A^i′ (ĉ^i − B^i ω)   with   K = ( Σ_i (1/d^i)² A^i′ A^i )^−1   (9.13)

1/d^i = P^i T′ A^i′ (ĉ^i − B^i ω)   with   P^i = ( T′ A^i′ A^i T )^−1   (9.14)

9.4.3 Fast Estimates

The equations for the best-fit ω, T and 1/d^i can only be solved by an iterative procedure. However, we found that often a simplified estimate comes close to the best fit. For the rotation ω a good fit is

ω = Λ Σ_i B^i′ ĉ^i   (9.15)

summed over the eight horizontal sensors only and omitting the correction term containing T, the so-called 'apparent rotation' induced by the translation. This simplification works well in environments where the distances to objects seen by the horizontally oriented sensors are large enough to induce only a small flow component by T compared to that induced by ω. In addition the horizontal sensors are paired, which reduces the error in ω induced by T. In this context it is of interest that water striders also limit their response to self-rotation to the dorsal eye. For the translation the estimate

T = Λ Σ_i A^i′ (ĉ^i − B^i ω)   (9.16)

(Λ here formed analogously from the A^i), omitting the individual factor 1/d^i for each sensor (i.e. assuming d^i to be constant), produces acceptable estimates for T in environments with a not too inhomogeneous distance distribution. With the above estimates of ω and T, an estimate for 1/d^i is

1/d^i = P^i T′ A^i′ (ĉ^i − B^i ω)   (9.17)
9.5 Tests In order to test the performance of both heads we carried out several experiments. It turned out that, especially for head B in an outdoor environment, good results can be obtained with the fast estimates of self-motion. Outdoors we usually find rich contrast around the horizon and acceptable distributions of 'relative nearnesses' around the sensor head.
9
9.5.1 Head A
In order to get an estimate for the precision of the odometer results, the eight-sensor head A was moved manually 20 times along a straight line for distances τ of 20, 40, 60 and 80 cm and rotated 20 times around the vertical axis by ρ = 90°, 180°, 270° and 360°. We cannot give an error estimate for the hand-guidance error; rotating the curricle on the spot without introducing an X- and/or Y-displacement turned out to be somewhat delicate. Averages and standard deviations of the estimates τ and ρ for these trials are given in Table 9.1.

Table 9.1 Test results for head A

Translation:
τ [cm]    avg τ [cm]    std(τ) [cm]
20        19.8495       0.1072
40        39.875        0.1064
60        59.8995       0.1425
80        79.9285       0.1710

Rotation:
ρ [deg]   avg ρ [deg]   std(ρ) [deg]
90        90.0205       0.4356
180       180.3925      0.6167
270       271.1805      0.5264
360       360.664       0.8132

9.5.2 Head B

9.5.2.1 Test for the Orientation and Size of the Estimated Rotation

For the 16-sensor head B we first wanted to find out the quality of pure rotation results about various axis
orientations. Head B was rotated clock- and counterclockwise on the optical bench by a fixed angle ρ. The orientation α of the axis of rotation ω was varied in the X/Z plane of the head (Fig. 9.6b). The results of fast estimates are indicated in Fig. 9.6a. Deviations between the expected ω and the estimated one are mainly due to errors in the alignment by hand of the ω-axis as can be seen from the errors for
Fig. 9.6 Head B was rotated cw and ccw on the optical bench by a fixed angle ρ about the rotation axis ω. The orientation α of the rotation axis was varied in the X/Z plane of the head. In Fig. 9.6b the thick arrows indicate the lines of sight of the horizontal sensors of head B, the dashed ones those of the sensors looking downwards. ω illustrates the rotation axis. The inclination α of ω was adjusted by hand using a plastic set square. In Fig. 9.6a the tips of the 'fast estimated' vectors ω are indicated as crosses. The lines represent the expected rotation vectors if the adjustment by hand had been correct. R and L indicate ccw and cw rotation, respectively; the numbers indicate α. For a discussion of deviations between expected and estimated ω see text
opposite ω-axes (e.g. for L+45 and R+45). For opposite ω-axes the mechanical adjustment, and therefore the misalignment of the rotation axis, was the same, so the deviation from the expected orientation should also be the same. We conclude that the orientation of the ω-axis is well represented by the estimated one. The size ρ of the rotation (reflected by the length of the lines) is also well represented by the 'fast estimates' of ω (the distance of the crosses from the origin). We conclude from this test that (in the case of zero translation) all coordinates of ω are satisfactorily reproduced by the fast-estimate calculation of ω.
9.5.2.2 Outdoor Test of T- and ω-Distribution Next we wanted to find out the quality of ω and T results for a superimposed rotation and translation in an outdoor environment with more or less constant ‘relative nearnesses’. An outdoor environment under daylight is highly appropriate for the sensors because sufficient contrast with a broad spectrum of spatial frequencies is visible.
Head B was mounted on a rod (Fig. 9.7a) in an outdoor environment and moved by a stepper motor with constant angular velocity along a horizontal circle of 75 cm radius at a height of 40, 61.5 and 90 cm above the ground (1067 steps ≡ 360°). Figure 9.7a shows the hardware setup; Fig. 9.7b depicts the X-response of the 16 sensors for the case of 61.5 cm height of the circle. The orientation of the sensors (seen from above) is indicated in Fig. 9.7c. In order to test the distribution of fast ω- and T-estimates, 38 subsets of 50 successive motor steps (i.e. of 16.82° rotation angle of the rod) were selected along the whole circular head path. For these subsets we expect ω to be oriented along the Z-axis, ω = (0, 0, 16.82°)′, and T to be oriented along the X-axis, T = (Tx, 0, 0)′. Tx can roughly be estimated as 8.44° (Fig. 9.8). Figure 9.9 shows the distribution, the mean and the standard deviation of ωx, ωy, ωz and Tx, Ty, Tz for the 38 subsets. The mean of the various components deviates from the expected values by less than 1°. To test whether our 'fast estimates' are near a minimum of E, a Levenberg–Marquardt iteration (LMI) [22]
Fig. 9.7 Head B was mounted on a rod (Fig. 9.7a) and moved by a stepper motor with constant angular velocity along a horizontal circle of 75 cm radius at a height of 61.5 cm above the ground in an outdoor environment. The head's Y-axis was oriented along the radius of the circle, the X-axis along its circumference; hence the Z-axis was parallel to the rotation axis. The X-response of all sensors versus motor steps is depicted in Fig. 9.7b. The 1067 steps correspond to 360°. The relative orientation of the sensors (seen from above) is shown in Fig. 9.7c. Sensors 1–8 are oriented horizontally, 9–16 look downwards. Because the distances to any contrasted objects in space were comparatively large for sensors 1–8, they respond predominantly to rotation (their responses show about the same constant slope), whereas sensors 9–16 respond to a combination of ω and T. Sensors 14/15 look to the ground near the centre of the circle and hence see hardly any motion, whereas sensors 10/11 look to a point on the ground which is farthest from the rotation centre and hence show the steepest slope in their response
was applied to each of the 38 subsets mentioned above, with the fast estimate as a starting solution. The average solution of the LMI, together with that of the 'fast estimates' and the average E, is shown in Table 9.2.

Fig. 9.8 For the downwards-looking sensor i the distance di to ground is about √2 × 61.5 cm = 87 cm. Hence T/di = tan{(75/87) 16.8°} = tan 14.5°. The average fast estimate of 1/di over all 38 subsets was 1.7187 ± 0.1475. Hence the expected value for Tx is Tx = tan 14.5°/1.7187 = tan 8.44°

9.5.2.3 Test for Evaluations of the Relative Nearness

Finally we wanted to find out whether we can extract something about the 3D structure of the environment, i.e. the 'relative nearnesses' to objects within the field of view of our sensors. In an arrangement similar to the one used in the previous test, the distance to ground was changed from 61 to 31 cm on a segment of 45° of azimuth on the circular head path (Fig. 9.10a). This was achieved by placing an elevated pattern of statistically distributed points on the ground under the circle path of head B. For each sensor, fast estimates of the 'relative nearness' were calculated; they are presented in Fig. 9.10c,d versus step number (i.e. azimuth). Note the various positions at which the sensors detect the distance change on their circle path, according to the particular azimuth at which their line of sight crosses the distance change (for clockwise rotation, e.g., sensors 9 and 16 detect the change first and sensors 12 and 13 last; for counterclockwise rotation the order is reversed). The detection of T/di is possible but noisy.
Fig. 9.9 The distribution of ωx (a), ωy (b), ωz (c), Tx (d), Ty (e), Tz (f) estimated from 38 subsets of 50 subsequent motor steps along the circle path of head B (i.e. of 16.82° rotation angle). The mean and standard deviation are indicated by numbers in each subplot. The expected values of ω and T are ω = (0, 0, 16.8°)′ and T = (−8.44°, 0, 0)′ (see text)
Table 9.2 Fast ω-, T-estimates and LMI results (E = Σ_i F_i²)

                                Rotation                            Translation
                                X          Y           Z           X          Y          Z           E
Fast estimates                  0.0293°    −0.01323°   16.1001°    −8.2102°   −0.3056°   0.0415°     28.43
Marquardt–Levenberg iteration   −0.0385°   −0.1750°    18.0081°    −8.1135°   −0.2934°   −0.1792°    7.59
Fig. 9.10 On an azimuth segment of 45° of the circular head path the distance to ground was changed from 61 to 31 cm. A pattern of statistically distributed points was placed on the ground under the circle path of head B. The pattern was elevated by 30 cm and 45° wide in azimuth (Fig. 9.10a) along the circular head path. The alignment of the down-looking sensors (seen from above) is depicted in Fig. 9.10b. Head B was moved clockwise (in −X direction) and counterclockwise (along the X direction) in the same way as in the previous test. The resulting 'relative nearnesses' T/di are depicted in Fig. 9.10c,d for cw and ccw rotation, respectively, of the rod. Note the various angles of azimuth (i.e. step number) at which the sensors detect the distance change according to their particular line of sight. The baseline for the various plots was lifted to demonstrate the individual response of each sensor. Only for the lowest trace, of sensor 9, can the increase of T/di by a factor of two be appreciated. The horizontal bar indicates 45°
9.6 Discussion For self-motion estimates the advantage of an omnidirectional motion field over the limited field of view of a normal camera has been demonstrated [2, 14, 26]. Various catadioptric systems which take 360° panoramic images of the environment have been proposed [4, 15, 19], as well as various methods to estimate self-motion from omnidirectional flow [14, 20, 26]. We used the algorithm proposed by Koenderink and van Doorn [18] to extract 'best-fit' self-motion parameters from flow seen by an omnidirectional spherical camera with a single centre of projection observing all directions in space. 'Best fit' means that the sum of squared deviations between the flow measurements along various lines of sight and the theoretical flow induced in the same directions by a simultaneous hypothetical rotation and translation is minimized. In a static environment with sufficient contrast (outdoors) we always find a well-defined minimum. We used that algorithm to find self-motion estimates for simulated error-loaded flow fields from all possible combinations of rotation and translation, various optical systems, fields of view and environments [6, 7]. The result of these extensive simulations was that, except for special cases, it is best to look along pairs of opposite lines of sight and to distribute these pairs over as large a solid angle as possible. Only a few of these pairs are necessary to estimate self-motion to a surprisingly high degree of precision. In our head B we implemented a ring of four pairs of sensors looking in opposite (preferably horizontal) directions. The advantages of motion detectors looking into a set of fixed selected directions over a camera are obvious:
1. Motion detection by mouse chips is at least 20 times faster than by a camera. There is no need to wait for a frame to finish; flow need not be extracted in software but is determined by fast dedicated hardware.
2. Motion detection is done in parallel along as many lines of sight as there are sensors (simultaneous distributed flow extraction).
3. The arrangement of the sensors' lines of sight and of the optics can be adjusted to the intended purpose (self-motion estimation while moving over ground, flying, obstacle avoidance, etc.)
4. Sensors are light and cheap and can be attached at various locations on the vehicle.

Disadvantages are:

1. Objects cannot be discriminated.
2. Calibration may be a problem.
3. Enough sensors must see contrast along their lines of sight. However, sensors that do not see enough contrast or respond irregularly may be excluded from the motion estimate.

We demonstrate that with our sensor heads self-rotation vectors can be estimated to a precision of 0.5° in each axis. The direction of self-translation can also be extracted to about 0.5° in each component. A rough overview of the relative distances in the environment can be obtained from the 'relative nearness' estimates. Because the mouse chips are light, cheap, robust, fast, easy to mount on a robot's body and can be adapted in their optical design (distribution of optical axes and focal length of the lenses) to a desired task, it is tempting to think of their use on flying robots. For this purpose it is necessary to replace the PC, i.e. to find a sufficiently light, small and powerful microprocessor board that collects the data synchronously in parallel from all sensors and solves the somewhat complicated task of extracting the self-motion parameters from the sensor data sufficiently fast, without the USB data transfer.
References

1. Baird, E., Srinivasan, M.V., Zhang, S., Lamont, R., Cowling, A.: Visual control of flight speed and height in the honeybee. In: From Animals to Animats 9: Proceedings of the 9th International Conference on Simulation of Adaptive Behaviour (SAB), Rome, pp. 40–51. Springer-Verlag, Berlin Heidelberg (2006)
2. Baker, P., Fermüller, C., Aloimonos, Y., Pless, R.: A spherical eye from multiple cameras (makes better models of the world). Proceedings IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 1, pp. 576–583 (2001)
3. Baker, P., Ogale, A.S., Fermüller, C., Aloimonos, Y.: The argus eye: a new tool for robotics. IEEE Robotics and Automation Magazine, Special Issue on Panoramic Robots 11(4), 31–38 (2004)
4. Chahl, J.S., Srinivasan, M.V.: Reflective surfaces for panoramic imaging. Applied Optics 36(31), 8275–8285 (1997)
5. Dahmen, H.: Eye specialisation in waterstriders: an adaptation to life in a flat world. Journal of Comparative Physiology A 169, 623–632 (1991)
6. Dahmen, H., Franz, M.O., Krapp, H.G.: Extracting egomotion from optic flow: limits of accuracy and neural matched filters. In: Motion Vision, pp. 143–168. Springer-Verlag, Berlin Heidelberg New York (2001)
7. Dahmen, H., Wüst, R.M., Zeil, J.: Extracting egomotion parameters from optic flow: principal limits for animals and machines. In: From Living Eyes to Seeing Machines, pp. 174–198. Oxford University Press (1997)
8. Egelhaaf, M.: The neural computation of visual motion information. In: Invertebrate Vision, pp. 399–461. Cambridge University Press (2006)
9. Egelhaaf, M., Kern, R.: Vision in flying insects. Current Opinion in Neurobiology 12, 699–706 (2002)
10. Egelhaaf, M., Kern, R., Lindemann, J.P., Braun, E., Geurten, B.: Active vision in blowflies: strategies and neuronal mechanisms of spatial orientation. Chapter 4 of this book. Springer-Verlag, Berlin Heidelberg (2009)
11. Franceschini, N., Ruffier, F., Serres, J.: Insect pilots: vertical and horizontal guidance. Chapter 3 of this book. Springer-Verlag, Berlin Heidelberg (2009)
12. Franz, M.O., Krapp, H.G.: Wide-field motion-sensitive neurons and matched filters for optic flow fields. Biological Cybernetics 83, 185–197 (2000)
13. Fry, S.N.: Experimental approaches toward a functional understanding of insect flight control. Chapter 1 of this book. Springer-Verlag, Berlin Heidelberg (2009)
14. Gluckman, J., Nayar, S.K.: Ego-motion and omnidirectional cameras. Proceedings of the 6th International Conference on Computer Vision (ICCV'03) (2003)
15. Grassi, V., Okamoto, J.: Development of an omnidirectional vision system. Journal of the Brazilian Society of Mechanical Sciences and Engineering XXVIII(1), 58–68 (2006)
16. Junger, W.: Waterstriders (Gerris paludum F.) compensate for drift with a discontinuously working visual position servo. Journal of Comparative Physiology A 169, 633–639 (1991)
17. Junger, W., Dahmen, H.: Response to self-motion in waterstriders: visual discrimination between rotation and translation. Journal of Comparative Physiology A 169, 641–646 (1991)
18. Koenderink, J.J., van Doorn, A.J.: Facts on optic flow. Biological Cybernetics 56, 247–254 (1987)
19. Nayar, S.K.: Catadioptric omnidirectional camera. Proceedings IEEE Conference CVPR, pp. 482–488 (1997)
20. Shakernia, O., Vidal, R., Sastry, S.: Omnidirectional egomotion estimation from back-projection flow. IEEE Workshop on Omnidirectional Vision (2003)
21. Pless, R.: Using many cameras as one. Proceedings IEEE CVPR'03, vol. 2, pp. 587–593 (2003)
22. Press, W., Flannery, B., Teukolsky, S., Vetterling, W. (eds.): Numerical Recipes in Pascal. Nonlinear Models, pp. 572–580. Cambridge University Press (1989)
23. Srinivasan, M., Zhang, S., Chahl, J., Barth, E., Venkatesh, S.: How honeybees make grazing landings on flat surfaces. Biological Cybernetics 83(3), 171–183 (2000)
24. Srinivasan, M., Zhang, S., Lehrer, M., Collett, T.: Honeybee navigation en route to the goal: visual flight control and odometry. Journal of Experimental Biology 199(Pt 1), 237–244 (1996)
25. Srinivasan, M.V., Thurrowgood, S., Soccol, D.: From visual guidance in flying insects to autonomous aerial vehicles. Chapter 2 of this book. Springer-Verlag, Berlin Heidelberg (2009)
26. Vassallo, R.F., Santos-Victor, J., Schneebeli, H.J.: A general approach for egomotion estimation with omnidirectional images. Proceedings of the Third Workshop on Omnidirectional Vision, pp. 97–103 (2002)
27. Zeil, J., Boeddeker, N., Stürzl, W.: Visual homing in insects and robots. Chapter 7 of this book. Springer-Verlag, Berlin Heidelberg (2009)
28. Zufferey, J.C., Beyeler, A., Floreano, D.: Optic flow to steer and avoid collisions in 3D.
Chapter 6 of this book. Springer-Verlag Berlin Heidelberg (2009)
Chapter 10
Microoptical Artificial Compound Eyes Andreas Brückner, Jacques Duparré, Frank Wippermann, Peter Dannberg, and Andreas Bräuer
Abstract The cost–benefit ratio of miniaturized single aperture eyes is subject to certain limitations, so that evolution led to the development of multi-aperture eyes in tiny creatures such as invertebrates. Physical constraints, which equally apply to miniaturized artificial imaging systems, make this natural evolutionary path comprehensible. When shrinking down to the sub-millimeter range, the use of parallel imaging with multi-aperture systems becomes crucial. In this domain, microoptical design approaches and fabrication techniques are the solution of choice. This technology allows the realization of cost-efficient miniaturized imaging systems with sub-micron precision by means of photolithography and replication. The approaches proposed here are mainly inspired by insect vision in nature, although they are bound to planar substrates.
10.1 Introduction Vision is by far the most important sense for 3D navigation or guidance of autonomous flying vehicles. However, vision is also a costly sense for miniaturized systems due to the demands of size, weight, power consumption, and speed for the necessary sensor components on board. Even the simplest vision system requires optics to form an image of the environment, an image sensor to convert image illumination into electric current, processors to interpret visual information for guidance, electronics, and a power supply.
A. Brückner () Fraunhofer Institute for Applied Optics and Precision Engineering, Albert-Einstein-Str. 7, D-07745 Jena, Germany e-mail: [email protected]
When examining state-of-the-art miniaturized camera systems, we learn that the lens is usually the bulkiest component, with a minimum size of about 4 mm along each side. In the following section we are going to discuss where this limit originates and why a simple miniaturization of classical optics would drastically reduce the image resolution. But how can this limitation of optics be overcome? A fascinating approach is to look at how nature has successfully solved similar problems in the case of very small creatures like insects, which solely use compound eyes. During the last century, the optical performance of natural compound eyes was analyzed exhaustively [39]. Following the terminology found in the literature, we will distinguish between two major classes of compound eyes: (1) the apposition compound eyes and (2) the superposition compound eyes [31]. Within the scope of this chapter we are going to give a coarse overview of the different optical working principles, their advantages and drawbacks, and how they inspired the development of artificial compound eye optics. Several technical concepts and realizations of imaging sensors using multiple optical channels1 have been presented in the last two decades. However, none of these attempts has led to a breakthrough: the major challenge for a technical adoption of natural compound eyes is the required fabrication and assembly accuracy, yet classical macroscopic fabrication technologies were exploited to manufacture microscopic structures. A more appropriate way to build up the required microstructures with high precision in
1 In the following, the term multi-channel imaging systems will be used.
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_10, © Springer-Verlag Berlin Heidelberg 2009
Fig. 10.1 Scaling of lenses with constant F/# (b) or space-bandwidth product (SBP) (c). (a) Large lens: The size of one image point is proportional to λ F/#. Due to the large focal length f1, the angular projection of an image point into object space ΔΦ1 is very small. The lens has a high angular resolution and a large information capacity caused by the extension of the image field and the small spot size. (b) Small lens with same F/#: The size of one image point is the same as in (a). However, due to the short focal length f2, here the angular projection of one spot Δφ2 is enlarged. Hence the lens has a low angular resolution and a reduced information capacity due to the small image field size. (c) Small lens with same information capacity: The spot size is decreased due to the reduction of the F/#. The same angular resolution as in (b) results as the focal length is decreased with the size of one image point. Consequently, the information capacity in the image plane equals that of the large system but the information is gathered from a larger field of view with reduced angular resolution. Adopted from [46]
a cost-efficient manner is to rely on state-of-the-art microoptics technology adapted from semiconductor fabrication techniques. In this chapter, we present a way to design and fabricate microoptical artificial compound eyes. Although these systems are inspired by the insect eye, they are not intended to be a copy of it. We rather adopted some design rules to achieve ultra-compact optical sensors with major advantages such as small overall size and low weight in combination with a large field of view. These are demands of applications in the fields of machine vision, security, surveillance, and medical imaging. However, they will also benefit the visual guidance systems of flying micro-robots or portable consumer electronics. Compound eye sensors can, for instance, fit into tight spaces like credit cards, stickers, sheets, or displays.
10.2 Miniaturization of Imaging Systems

For understanding the existence of compound eyes, which is at least partly a consequence of the physical limits of miniaturizing optical systems, we will have to glimpse at some well-known principles of optics. As a reward, the following little optical excursion will shed some light on the promises but also on the limits of artificial compound eyes.

From the point of view of an optical design engineer, shrinking the size of a classical optical system2 is fine because geometrical imaging errors (often called aberrations) drastically decrease with the aperture size [41]. From this standpoint we might say that the smaller the size of a vision system, the easier the formation of a good (meaning sharp) image. However, from physical optics we know that light can never be focused to a perfect point image. Even when using an optically ideal lens with a focal length f and an aperture diameter D, the image of a (perfect) point-like object will have a finite size which is proportional to the ratio3 F/# = f/D and the wavelength λ of light. Due to this phenomenon of diffraction, the size of the smallest image feature is the so-called diffraction-limited spot size. As there is a certain smallest image feature size and a certain image field size, the number of image details that are captured by a vision system (its information capacity) is also limited. For that reason, the stop number F/# should either stay constant

2 Due to its single-channel setup including one or a few lenses and at least one aperture, all aligned on a common axis, we will further call this a single aperture optical system.
3 This ratio is called the stop number or F-number F/#.
Fig. 10.2 Different types of natural eye sensors (top) and their technical counterparts (bottom) [46]
or shrink in order to preserve image resolution (Fig. 10.1b) or information capacity (Fig. 10.1c) – referred to as the space-bandwidth product (SBP) – during miniaturization [33]. We may conclude that the miniaturization of a single aperture optical system has to be paid for by a reduction of either the angular resolution or the information capacity. A certain minimum system size is needed for a high image and angular resolution. The evolutionary solution of choice is found in the vision systems of tiny creatures like invertebrates – the compound eye. Here, a large number of extremely small vision systems (called ommatidia) on a curved basis capture the visual information of a large field of view (FOV) in parallel. The use of parallel optical channels breaks the trade-off between focal length and the size of the FOV. Although each ommatidium exhibits a small FOV and thus a small information capacity, the sum of all ommatidia of the compound eye provides a large FOV (often nearly 180°) and a large information capacity, so that accurate and fast flight maneuvers such as those known from flies are possible (details can be found in Chap. 4).
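The scaling argument above can be made concrete with a toy calculation. All numbers below are illustrative assumptions, and the Airy-pattern prefactor (about 1.22) is omitted for clarity; the point is only the proportionality.

```python
import math

# Toy model of Fig. 10.1: the diffraction-limited spot is proportional to
# lambda * F/#; the Airy prefactor (~1.22) is omitted for clarity.
def spot_size(wavelength, f_number):
    return wavelength * f_number

def resolvable_points(image_field, wavelength, f_number):
    """Rough space-bandwidth product along one image dimension."""
    return image_field / spot_size(wavelength, f_number)

lam = 550e-9                                   # green light
n_large = resolvable_points(10e-3, lam, 4)     # 10 mm image field, F/# = 4
n_small = resolvable_points(1e-3, lam, 4)      # shrunk 10x at constant F/#
print(n_large / n_small)                       # capacity drops 10x with size
```

Shrinking the system tenfold at constant F/# leaves the spot size unchanged but shrinks the image field, so the information capacity drops by the same factor of ten, which is exactly the dilemma Fig. 10.1 illustrates.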
The multi-aperture solution is well adapted not only to the external skeleton but also to weight and metabolic energy consumption of small invertebrates. Due to the very short focal length of each microlens, compound eyes have a large depth of focus. Hence there is no need for a focusing mechanism. Compared to the mammalian single aperture eye, the amount of gathered optical information is low which suits the capacity of an insect’s brain. However, the arrangement of the ommatidia on a spherical shell is a major advantage which allows a very large field of view while its total volume remains small. Hence, the main volume of the head is still available for the brain and signal processing. In nature, the lack of high angular resolution is often counterbalanced by high temporal sampling 4 and additional functionalities such as polarization sensitivity, hyperacuity, or fast movement detection [35,45]. Natural compound eyes have been subject to scientific research for more than a century. As this section covers only some essential basics of natural compound eye vision, references [31] and [39] are especially recommended for further reading.
10.3 The Compound Eyes of Insects

Compound eyes can be divided into apposition and superposition compound eyes (Fig. 10.2) [31,39,43].
4 Compared to the temporal sampling of human vision of about 25 frames per second, a fly samples about 10 times faster (approx. 250 Hz) [13].
Fig. 10.3 Natural apposition compound eye. (a) Scanning electron microscope (SEM) image detail of the compound eye of a fruit fly (“Drosophila”). (b) Principle of a natural apposition compound eye which is composed of a large number of ommatidia arranged on a curved basis with radius REYE .
10.3.1 Apposition Compound Eyes

In natural apposition compound eyes, which mainly evolved in day-active insects such as flies (Fig. 10.3a), each microlens is associated with a single photoreceptor in its focal plane [11]. One can find several hundred (water fly) up to tens of thousands (Japanese dragonfly) of these single microlens–receptor units, which are commonly referred to as ommatidia. The intermediate space between adjacent ommatidia is filled with pigments that form opaque walls to prevent light from leaking from one ommatidium into the next, which would cause optical cross-talk [28]. The optical axis5 of each ommatidium points in a different direction (Fig. 10.3b), so that nearly 360° of the visual surroundings is sampled in patches defined by the interommatidial angle

ΔΦ = D/REYE.        (10.1)
Hence, Eq. (10.1) gives the angular offset between the optical axes of two adjacent ommatidia, with D the diameter of the microlens and REYE the radius of the eye. The photoreceptor in each ommatidium accepts light from an angular interval Δφ. In fact, the angular response is approximately a Gaussian centered on the corresponding optical axis [11,16]. The so-called acceptance angle Δφ is its full width at half maximum. It is one of the most important parameters of a compound eye because it determines the trade-off between sensitivity and resolution. The acceptance angle has a geometrical contribution Δρ = d/f which is determined by the photoreceptor diameter d projected into object space (via the focal length f). A second contribution λ/D originates from diffraction at the microlens aperture D for the wavelength λ [42], resulting in

Δφ = √((d/f)² + (λ/D)²)        (10.2)

A large Δφ means that each ommatidium collects light from a large angle; therefore the sensitivity is high but the resolution is low, meaning that two object points which are separated by a small angle would be imaged on a single receptor. Vice versa, a small acceptance angle yields higher resolution but lower sensitivity. For that reason it is insufficient to just increase the number of ommatidia of a compound eye while keeping the acceptance angle constant in order to increase the total resolution. Instead, the ommatidia must increase in both number and size [1], which is the reason why no large compound eyes are found in nature. When a certain eye size is exceeded, a compound eye has a much lower resolution than a single aperture eye of the same size [24]. Many invertebrates developed compound eyes which have areas with a locally higher resolution than elsewhere in the eye. These "acute zones" point into the direction of highest interest, similar to the fovea in the eyes of mammals [15,29].

5 Optical axis denotes the straight line which connects the center of a single microlens with the center of the associated photoreceptor. Thus, "axially" means in the direction along the optical axis.

A special form of apposition compound eye, which can be found in several flies, comprises a set of photoreceptors in each ommatidium, in contrast to a single
Fig. 10.4 Working principle of a neural superposition eye. The optical axes of different photoreceptors from adjacent ommatidia point at the same location on a distant object
one. Here, each object point is imaged by multiple photoreceptors from different ommatidia and the related signals are fused in the first synaptic layers of the eye (see Fig. 10.4). Therefore, this setup is known as the neural superposition type of apposition compound eye [13,25]. This multiple sampling of the same part of the field of view6 enables an increased light sensitivity. The main drawback of the apposition compound eye is its limited sensitivity, which restricts its owners to daytime activity.7 It might be for that reason that evolution came up with the second class of multi-aperture vision systems – the superposition compound eyes – for insects that are active at night.
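Equations (10.1) and (10.2) can be sketched directly in code. The parameter values below are illustrative assumptions for a small apposition eye, not measurements from the chapter:

```python
import math

# Sketch of Eqs. (10.1) and (10.2) with illustrative (not measured) numbers.
def interommatidial_angle(D, R_eye):
    """Eq. (10.1): angular offset between adjacent ommatidia, in radians."""
    return D / R_eye

def acceptance_angle(d, f, wavelength, D):
    """Eq. (10.2): geometric (d/f) and diffraction (lambda/D) contributions
    combined in quadrature."""
    return math.sqrt((d / f) ** 2 + (wavelength / D) ** 2)

D, f, d = 25e-6, 50e-6, 2e-6     # microlens diameter, focal length, receptor
R_eye, lam = 500e-6, 500e-9      # eye radius, wavelength

dPhi = interommatidial_angle(D, R_eye)     # sampling between ommatidia
dphi = acceptance_angle(d, f, lam, D)      # per-ommatidium acceptance
print(math.degrees(dPhi), math.degrees(dphi))
```

With these assumed values both angles come out in the range of a few degrees, which matches the sensitivity-versus-resolution trade-off discussed above: shrinking d or enlarging D narrows Δφ at the expense of collected light.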
10.3.2 Superposition Compound Eyes The superposition compound eye (Fig. 10.5) has primarily evolved in night-active insects and deep water crustaceans due to the low-light conditions found in their habitats. In contrast to the apposition type, the focused light from multiple ommatidia combines on the surface of the photoreceptor layer to form a single
6 In the following this will be called "redundant sampling."
7 Even though the neural superposition type could expand the activities of its owner into the light conditions of dawn and dusk.
real image of the surroundings [31]. The decisive advantage of superposition compound eyes is that they are much more light sensitive: light bundles from one object point traveling through several adjacent ommatidia are deflected toward a common image point (Fig. 10.5b), increasing the light-collecting aperture D to a size which is several times larger than the diameter of an individual ommatidium. This optical performance is not the result of a single microlens per ommatidium but of two axially aligned microlenses that form a microtelescope [34]. In nature, eyes with a small effective F/# (the ratio of the eye's focal length F and the diameter of the superposition aperture D), even smaller than one, can be observed. However, blurring – caused by the imperfect combination of light beams from several ommatidia – leads to a resolution below the diffraction limit [30]. When comparing apposition with superposition eyes, it is evident that the clear zone needed for the fusion of light bundles leads to a larger volume in the case of superposition compound eyes – a price that is paid for their higher sensitivity.
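The light-gathering advantage can be sketched with a simple area argument. The numbers are illustrative, and the sketch assumes that the beams from all facets combine perfectly, which, as noted above, real superposition eyes do not achieve:

```python
# Sketch of the superposition advantage: if k facets lie across the
# superposition aperture and their light bundles combine perfectly (real
# eyes blur, as noted above), the collected light scales with aperture area.
def superposition_gain(facet_diameter, facets_across):
    effective_aperture = facet_diameter * facets_across
    light_gain = facets_across ** 2      # collected light scales with area
    return effective_aperture, light_gain

D_eff, gain = superposition_gain(25e-6, 6)   # 6 facets across the aperture
print(D_eff, gain)                           # 36x more light than one facet
```

This is why an effective F/# below one becomes possible: the effective aperture grows with the number of contributing facets while the focal length of the eye stays fixed.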
10.4 Insect-Inspired Imaging Systems

What can we learn and adapt from nature concerning the miniaturization of imaging systems? First, in the multi-aperture eyes of insects the overall field of view is split into a number of fragments which are imaged by different optical channels in parallel. Each channel has a small information capacity, but in total all channels achieve the capacity of a much larger optical system. Second, as each ommatidium images only a small angular segment around its optical axis, there are few geometrical imaging errors, so that a very simple optical element, such as a single spherical lens, may be used. After the first description of the potential of artificial compound eyes by Sanders and Halford [40], various technical approaches for compact vision systems have exploited the basic principle of apposition compound eyes [9,12,17,26,36,49]. The latest developments in this field led to a novel 3D microfabrication method to develop artificial compound eye optics on a spherical basis [21,32]. However, the problem of recording the image from the curved image field of the eye still remains unsolved.
Fig. 10.5 Natural superposition compound eye. (a) SEM image of head and eye section of the moth "Ephestia kuehniella." (b) Schematic cross section of a natural superposition compound eye
In the case of the superposition compound eyes, only a few technical equivalents have been demonstrated so far. Most remarkably, in 1940 Gabor published a setup of two stacked lens arrays with slightly different pitch, axially separated by the sum of their focal lengths, which acts like a normal (big) lens [10].8 More than 50 years later, this planar derivative of superposition compound eye optics was demonstrated experimentally [14]. But due to the high technological and assembly effort, the setup has not yet found its way into general applications. Systems with unity magnification using gradient-index lens arrays, such as those found in photocopying machines and scanners, are an exception [2,19,20,23]. To date, we have to conclude that only a few multi-aperture imaging systems have found a successful way into application. The reason is that most of the published examples have been fabricated on a macroscopic scale, where assembly misalignments become critical when placing one channel beside the other, and which lacks considerable miniaturization. Microoptical fabrication technologies, instead, are promising for the fabrication of artificial compound eyes with a large number of channels. Fabrication and assembly technologies with high lateral precision may lead to cheap and compact imaging devices because a large number of systems can be manufactured in parallel processes.
8 Such a setup became known as the "Gabor superlens."
10.4.1 Artificial Apposition Compound Eyes

An artificial apposition compound eye in its simplest form consists of a planar 2D microlens array on a transparent substrate positioned on an optoelectronic detector array (Fig. 10.6) [5]. The thickness of the substrate is matched to the focal length f of the individual microlenses so that the detector pixels are located in the focal plane of the microlens array. As technical detector arrays such as CCD or CMOS sensors are fabricated on planar substrates, a difference between the center-to-center spacing of the microlenses pL and that of the pixels pP9 is used to achieve different viewing directions for the individual channels. Thus, the angular sampling scheme of the curved natural apposition compound eye is adapted to a planar setup. Each chan-
9 This is referred to as the pitch difference Δp.
Fig. 10.6 Schematic layout of a planar artificial apposition compound eye. (a) 3D model of the system showing the focusing microlens array, the pixel array in its focal plane, and the tilted optical axes of the specific channels sampling the field of view. (b) Schematic section of the objective. The diameter of the individual microlens is D and d denotes the pixel size. See text for explanations
nel corresponds to one field angle in object space, with the optical axis directed outward. Another way of explaining this is the so-called moiré magnifier [22,44]: behind each microlens, the (same) small image of the distant object is formed. As a result of the pitch difference between the microlens and pixel arrays, Δp = pL − pP, the image is sampled at a slightly different location in each channel. When looking at the entirety of all sampled pixels, a magnified version of the image is obtained from this sampling scheme. The overall image size of the artificial apposition compound eye is determined by the moiré magnification pP/Δp and is thus completely independent of the focal length. This arrangement delivers an image with a much larger magnification than that of a single microlens and, at the same time, has a much shorter system length than a single aperture optical system with the same magnification [5,6]. Analogous to the insect apposition compound eye, we define the two most important system parameters: the angle between two adjacent optical axes (sampling angle), given by
ΔΦ = arctan(Δp/f),        (10.3)

and the acceptance angle Δφ is approximated by Eq. (10.2), where d is now the detector pixel diameter. A further advantage of the artificial apposition compound eye is its large depth of field: as a result of the extremely short focal length of the microlenses, an object is focused independently of the object distance. This means that there is no need for an active focus adjustment.
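The pitch-difference scheme and the moiré magnification can be sketched numerically. All pitch and focal-length values below are illustrative assumptions, not the prototype's actual design data:

```python
import math

# Sketch of the planar layout: the pitch difference between microlens array
# (pL) and pixel array (pP) tilts the channels' axes (Eq. 10.3) and sets the
# moire magnification pP / delta_p. All numbers are illustrative assumptions.
pL = 100.0e-6     # microlens pitch
pP = 99.5e-6      # pixel (channel) pitch
f = 300.0e-6      # microlens focal length = substrate thickness
dp = pL - pP      # pitch difference delta_p

sampling_angle = math.atan(dp / f)   # Eq. (10.3): angle between adjacent axes
moire_mag = pP / dp                  # overall image scale, independent of f
print(math.degrees(sampling_angle), moire_mag)
```

Note that the magnification depends only on the pitch ratio, not on f, which is what allows the system length to shrink without losing image size.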
10.4.1.1 Prototype Demonstration

The conception and optical design of an artificial apposition compound eye start with selecting a suitable detector array. The pixel pitch of the detector array has to be in the range of 50–150 μm while the photodiodes of each pixel should be small (between 2 and 10 μm), which is rather unusual for commercially available arrays. For that reason, standard detector arrays of large size can be used while only a fraction of the whole amount of pixels is read out. Alternatively, a customized image sensor array can be used, which additionally offers the possibility of implementing signal pre-processing within the vacant periphery of each pixel. A recent prototype of an artificial apposition compound eye sensor that has been developed for passenger monitoring in automotive applications is shown in Fig. 10.7; its specifications are given in Table 10.1. Its main purpose is the detection of the seating position of a passenger in order to control the airbag inflation and prevent injuries caused by it in case of an accident. An artificial compound eye suits this application well as it demands a large field of view, a rigid system of low complexity with a large depth of field, and fast processing capabilities. The sensor size and costs are other important side conditions.

Fig. 10.7 (a) Size comparison between commercial VGA camera objective (left) and artificial apposition compound eye. (b) Ultra-thin vision sensor formed by an artificial apposition compound eye attached to customized CMOS image sensor on flexible carrier

Table 10.1 Selected parameters of the artificial compound eye prototype from Fig. 10.7b

Parameter/feature      Value
Number of channels     144 × 96
Thickness              300 μm
Field of view          85° × 51°
F/#                    4
Microlens diameter     50 μm
Pixel size             3 μm
Size of sensor head    10 mm × 10 mm × 1.2 mm

After fabrication, the artificial apposition compound eye objective (shown in Fig. 10.7a) is actively aligned with respect to the image sensor array and finally attached to its surface using transparent glue. The accuracy of the lateral alignment is in the range of 1–2 μm. In order to measure and control the quality of fabrication and assembly steps, images of specific test objects are analyzed with respect to resolution, distortion, sensitivity, and brightness homogeneity. Static properties like the sampling angle and acceptance angle as well as their variance over several channels are measured. Shading effects and fixed-pattern noise are calibrated. For example, a radial star pattern and bar targets of different periods (Fig. 10.8) are well suited to determine the optical resolution limit of such an imaging system.

A critical issue for the performance of artificial apposition compound eyes is the suppression of optical cross-talk. Cross-talk is prevented by using either opaque walls between adjacent channels or by blocking unwanted light in horizontal layers of baffle arrays, which are placed at different axial depths with the aperture size decreasing toward the image plane (scheme in Fig. 10.9). The first method is also found in nature but it is technologically rather challenging. The second method has the advantage that the necessary thin baffle structures can easily be fabricated by photolithography. For the visualization of cross-talk, the radial star pattern was imaged off-axis by a system with and without baffle layers (Fig. 10.9, right part) under the same conditions. For the artificial apposition compound eye without baffles, a ghost image of the part of the star which is outside the objective's field of view appears on the right side of the image in Fig. 10.9. For the artificial apposition compound eye with additional baffle layers, light from outside the field of view is blocked. Hence, ghost images are suppressed, only the original part of the pattern is imaged, and contrast is preserved. The suppression of optical cross-talk is crucial for multi-aperture imaging sensors if they are to be used for imaging arbitrary scenes with a large field of view.

When using a regular microlens array10 for imaging a large field of view, large angles of incidence lead to blurring effects within channels that lie outside the center of the array (the blur grows with the angle of incidence). However, in each channel a sharp focus is needed only for a narrow angular region, independent of the absolute angle of incidence. Thus, the proper-
10 Here, regular means that all lenslets are exactly the same.
Fig. 10.8 Test images as captured by the artificial apposition compound eye sensor. (a) Low-frequency bar target. (b) Scientist viewing out of the lab window. (c) High-frequency bar target with 36 LP/field of view. (d) The logo of the Fraunhofer Institute IOF Jena. (e) A picture of "Image processing Lena." The original size of all images is 144 × 96 pixels covering a field of view of 85° × 51°.

Fig. 10.9 Schematic section of an artificial apposition compound eye objective with additional baffle aperture arrays for cross-talk suppression. Solid lines show useful signal, dashed lines show cross-talk. Right side: Images of a radial star pattern which has been moved out of the field of view. Left image: Original and ghost image of both halves of the star are clearly visible due to cross-talk. Right image: After the introduction of additional baffle arrays the ghost image is suppressed.
ties of each microlens in the array can be tuned so that it focuses almost perfectly for its individual angle of incidence. The properties of the individual microlens thus become a function of its position within the array, which is termed a "chirped" microlens array [8,48]. The net effect is demonstrated in Fig. 10.10 by comparing the images of different test patterns captured through a regular and a chirped microlens array (denoted by rMLA and cMLA, respectively). It can be clearly observed that the resolution in the center of the field of view is independent of whether regular or chirped lens arrays are used. However, with increasing viewing angle the resolution decreases in the case of the regular microlens array, while it remains constant when using the chirped microlens array. Please note that the proposed tuning of the individual microlenses is required for a planar compound eye with a large field of view. Nature automatically achieves constant resolution by using lenses on a spherical base structure – so that the lens normal is always parallel to the direction of incidence.
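The chirp idea amounts to making each lenslet's parameters a function of its channel's chief-ray angle. The sketch below illustrates that dependence only; the cosine-squared curvature correction and all numbers are simplified assumptions for illustration, not the authors' actual design prescription:

```python
import math

# Sketch of the "chirp" idea: lenslet parameters vary with the channel's
# chief-ray angle instead of being identical. The cos^2 curvature correction
# below is a simplified assumption for illustration, not the authors' actual
# design prescription.
def channel_angle(i, j, sampling_angle):
    """Viewing angle of channel (i, j), counted from the array center."""
    return math.hypot(i * sampling_angle, j * sampling_angle)

def chirped_curvature_radius(R0, incidence):
    """Toy tangential-curvature correction for oblique incidence."""
    return R0 * math.cos(incidence) ** 2

R0 = 100e-6                            # assumed on-axis radius of curvature
for i in range(3):                     # a few channels away from the center
    theta = channel_angle(i, 0, math.radians(0.6))
    print(i, chirped_curvature_radius(R0, theta))
```

The central lenslet keeps the nominal curvature, while outer lenslets are progressively modified, mimicking what the spherical base structure of the natural eye achieves for free.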
10.4.1.2 Increased Sensitivity with Artificial Neural Superposition

So far, we have dealt with crucial issues like the suppression of cross-talk and how to achieve a constant resolution throughout a large field of view. But what about the major drawback of natural apposition compound eyes: are artificial apposition compound eyes also prone to low sensitivity? Unfortunately, yes. From nature we already know one way to tackle this problem in the context of apposition compound eyes – the neural superposition type. Adopting the neural superposition type of apposition compound eye is comparatively simple. Instead of a single pixel per channel, a square array of n × n pixels is used in each channel of the artificial apposition compound eye. A schematic section of this setup with n = 3 is shown in Fig. 10.11a. The pitch difference Δp between the microlens array and the centers of the pixel groups is now defined as

Δp = pL − pK,        (10.4)
Fig. 10.10 Bar targets of different spatial frequency and captured images using a chirped microlens array for channel-wise blurring correction under oblique incidence and a regular lens array for comparison. Additionally, a 4 × 1/4 radial star test pattern demonstrates the obtainable resolution in the four image corners as a function of the angle of incidence. Only one quadrant of the entire symmetrical field of view is shown, so that the 0° viewing direction corresponds to the lower left corner of each image. The angle of incidence increases up to a maximum of σmax = 32° in the upper right corner of each picture

Fig. 10.11 Schematic layout of the artificial neural superposition eye for increased sensitivity. Each angular segment in object space is imaged in multiple (three shown) pixels of different neighboring channels. In this drawing an image sensor with a small pixel pitch pPx compared to the size of a single channel pL is used. An additional pinhole array is needed on the backside of the artificial compound eye optics to cover unused pixels
and two different sampling angles result. One is the angle between the optical axes of adjacent channels given by Eqs. (10.3, 10.4). The other is defined between adjacent pixels within each channel by
ΔΦPx = arctan(pPx/f).        (10.5)

By choosing the pitch of the microlens array pL, the sampling angle between adjacent channels ΔΦ can be set to be a defined fraction of the sampling angle between pixels ΔΦPx, so that m · ΔΦ = ΔΦPx holds for integer values of m. As a result, each object point is imaged on n² pixels across n² different channels in the 2D microlens array. The recorded signals that belong to a common point in object space are then accumulated to increase the sensitivity. The sensitivity increases proportionally to the number of pixels involved in the process, i.e., n², whereas the increase of the signal-to-noise ratio is proportional to the square root of the number of pixels. Due to the redundant sampling, this effect is achieved without any
Fig. 10.12 Left: Images of "Lena": (a) recorded with one pixel per channel, (b) each pixel averaged from nine pixels with a common viewing direction (raw image), (c) the same image with normalized gray levels. The image resolution is about 65 × 45 pixels. Right: Measured signal-to-noise ratio (SNR) for a system with (squares) and without (diamonds) artificial neural superposition for different integration times. The signal-to-noise ratio is increased by a factor of three when averaging over nine pixels per field angle
loss of resolution, as demonstrated in Fig. 10.12. However, due to the larger lateral dimensions of the pixel group in each channel and the larger baffle apertures that are then necessary, the suppression of cross-talk in the way proposed earlier (Fig. 10.9) becomes less effective for a large field of view. Artificial neural superposition also enables spectrally selective imaging by coding the redundantly sampled pixels with different spectral filters. Such a scheme is not known from any natural compound eye, but it has been used for color imaging with artificial apposition compound eyes [3].
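The square-root scaling of the SNR with the number of averaged pixels (a factor of three for nine pixels, as measured in Fig. 10.12) can be checked with a small Monte Carlo sketch; the signal and noise levels are illustrative, not measured values:

```python
import random
import statistics

# Monte Carlo sketch of the sqrt(N) SNR scaling behind Fig. 10.12 (right).
random.seed(0)
SIGNAL, SIGMA, TRIALS = 100.0, 10.0, 20000

def snr(pixels_per_point):
    """SNR of the averaged reading when `pixels_per_point` pixels see the same object point."""
    errors = []
    for _ in range(TRIALS):
        readings = [SIGNAL + random.gauss(0.0, SIGMA) for _ in range(pixels_per_point)]
        errors.append(statistics.fmean(readings) - SIGNAL)
    return SIGNAL / statistics.stdev(errors)

snr1, snr9 = snr(1), snr(9)
print(f"one pixel per point  : SNR ~ {snr1:.1f}")
print(f"nine pixels per point: SNR ~ {snr9:.1f} (ratio ~ {snr9 / snr1:.2f}, expected 3)")
```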
10.4.2 Artificial Superposition Compound Eyes

With a thickness of about 0.3 mm, artificial apposition compound eye objectives offer the highest degree of miniaturization for imaging optics. However, they require comparatively large lateral dimensions of the image sensor when aiming for a reasonable image resolution,11 because each channel contributes only one image pixel. A large number of image pixels thus demands the same large number of channels, which, multiplied by the size of a single channel, results in a large sensor. As an example, for 1000 pixels along one dimension of the image with a channel diameter of 50 μm, one would need an image sensor 50 mm long along one edge. For most applications this would cause unacceptably high costs. This is one of the main reasons why the image resolution of planar artificial apposition compound eyes is restricted to a maximum of about 200 × 200 pixels for a reasonably sized image sensor.

A way around this stalemate is offered by the Gabor superlens, which in turn is inspired by the superposition compound eye. Here, a real image is optically formed and can be recorded by a standard image sensor, just as with a classical single-aperture objective. As a result, the image resolution can be much higher when using densely packed pixels of small size, allowing small lateral sensor dimensions. The special type of artificial superposition compound eye demonstrated here consists of three stacked microlens arrays with slightly different pitches, which decrease toward the image plane (Fig. 10.13). The first microlens array focuses light from a distant object directly onto a second, so-called field microlens array. A third microlens array re-images the individual intermediate images of each channel onto the final image plane. The pitch difference between the individual arrays deflects the ray bundles so that the optical axes of the individual channels are tilted with respect to each other and each channel observes a different segment of the field of view.12 This setup can be interpreted as a cluster of single-aperture microcameras

11 Defined by the total number of pixels in the digital image.
12 Compared to the artificial apposition compound eye, the field-of-view segment of an individual channel is considerably larger (e.g., by a factor of ten).
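The sensor-size arithmetic above can be checked in a couple of lines, using the channel diameter and pixel counts quoted in the text:

```python
# Each channel of a planar apposition eye contributes one pixel, so the sensor
# edge length is simply (pixels per edge) x (channel diameter).
CHANNEL_DIAMETER_UM = 50.0  # channel diameter quoted in the text

def sensor_edge_mm(pixels_per_edge):
    return pixels_per_edge * CHANNEL_DIAMETER_UM / 1000.0

print(sensor_edge_mm(1000))  # 50.0 mm -- prohibitively large and expensive
print(sensor_edge_mm(200))   # 10.0 mm -- roughly the practical limit cited
```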
A. Brückner et al.
Fig. 10.13 Working principle of a telescope compound eye imaging system [46]. The arrangement of microlens arrays is similar to that of a Gabor superlens (natural archetype: superposition compound eye). However, internal aperture arrays prevent superposition in favor of the direct stitching of image segments. Note that the individual intermediate images are inverted, whereas after the re-imaging an array of erect image segments is formed.
which have tilted optical axes to obtain a large overall field of view; it is therefore termed "cluster eye" or "CLEY" [47]. The formation of an intermediate image is crucial in order to achieve an array of erect image segments after the re-imaging. Hence, all image segments can be optically stitched together into one single overall image. This feature differs from superposition eyes in insects, where each image segment is so large that it overlaps with several neighboring ones, so that light bundles from a large number of channels contribute to the same image point. What is the advantage of image stitching compared to superposition? The first allows higher resolution, whereas the second achieves higher sensitivity.

For lab demonstration, a 2 mm thin prototype of a CLEY was fabricated and its imaging performance evaluated using various test patterns. A detailed list of specifications may be found in Table 10.2. Figure 10.14 shows the images of a page from a textbook, a picture of "Lena," and the institute logo, captured at about 20 cm distance. They demonstrate that perfect image stitching could not be obtained with this first prototype. Though the image segments are of roughly rectangular shape, they connect or overlap only to some degree (Fig. 10.14a). This causes a considerable intensity modulation even for a homogeneous white object. Nevertheless, the high image contrast and the readability of the text in Fig. 10.14a are promising.
Table 10.2 List of selected parameters of the prototype of a cluster eye [7]

Parameter/feature        Value
Number of channels       21 × 3
Thickness                2 mm
Total field of view      70° × 10°
Total image size         4.5 mm × 0.5 mm
Channel FOV              4.1° × 4.1°
Size of image segments   192 μm²
Fig. 10.14 Experimental demonstration of the imaging capabilities of the cluster eye. (a) Image of a text section of M. F. Land's book "Animal Eyes," Sect. 10.3: "What makes a good eye" [31], with size 10 × 3.7 cm², at a distance of about 20 cm. (b) Image of a picture of "Lena," and (c) image of the institute logo, captured through 6 × 3 channels of the CLEY. Sharp edges and small resolved image features demonstrate the promising imaging capabilities of the cluster eye.
Fig. 10.15 Left: Fabrication of a master tool. A thin resist layer is patterned by binary UV exposure through a mask, and subsequent reflow creates a smooth surface relief, e.g., spherical microlenses. A negative copy of this master structure acts as the replication stamp. Right: Process flow of the fabrication of an artificial apposition compound eye. (a) Spin-on of black-matrix polymer, exposure through a mask, and development yield an absorbing baffle array layer. (b) Stacking of glass wafers; no precise alignment needed. (c) On the front side the same steps as in (a) are performed, now including alignment. Aligned with respect to the center baffle array, a thin metal layer is structured, forming a metal baffle array with aperture diameters down to a few microns. (d) Molding of a polymer spacer on the backside. (e) Spin coating of a thin layer of UV-curable polymer on the front. (f) The replication stamp is brought into contact with the polymer resin and UV light is applied through the transparent stamp in order to cure the polymer material. (g) After curing, the stamp is released and the individual objectives are diced by a wafer saw. This technology is carried out in a mask aligner for producing optical microstructures on wafer scale.
It was demonstrated that one overall image is generated by the transfer of the different image segments through separated channels with a strong demagnification. The parallel transfer of different parts of an overall field of view (different information) by separated optical channels gives the CLEY an overall information capacity equal to the sum of the individual channels' capacities. Consequently, the cluster eye has the potential for a much higher information capacity (in other words, a larger number of image pixels) than the demonstrated artificial apposition compound eye. On the other hand, the complexity of the CLEY is much higher than that of the artificial apposition compound eye, and it is therefore several times thicker (2 mm instead of 0.3 mm for the apposition type). Additionally, stitching errors might be unavoidable because, in the case of a cluster eye, the tolerances of microlens array fabrication and assembly are very tight (on the order of a few microns) and there is no possibility of compensation without either reducing image contrast or degrading the image-segment stitching.
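Because the channels transfer disjoint parts of the field of view, their information capacities simply add. A toy comparison (the per-channel pixel count for the CLEY is hypothetical, chosen only for illustration):

```python
def total_pixels(channels, pixels_per_channel):
    # Disjoint field-of-view segments: channel capacities add up.
    return channels * pixels_per_channel

apposition = total_pixels(21 * 3, 1)       # apposition eye: one pixel per channel
cluster = total_pixels(21 * 3, 48 * 48)    # CLEY with hypothetical 48x48-pixel segments
print(apposition, cluster)  # 63 vs. 145152
```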
10.4.3 Fabrication of Artificial Compound Eye Optics

Established processes and equipment of microtechnology are the key enabler for the fabrication of microoptical artificial compound eyes. This typically involves coating steps on planar substrates, micropattern generation, and processes to modify and/or transfer the microstructures. A sequence of these process steps requires precise lateral as well as axial alignment (corresponding to proper centering and focusing of each channel). Here, photolithography is the key implement to achieve the required precision in the micron or even sub-micron range. As a result, millions of features with sub-micron lateral precision are generated in parallel on a planar circular carrier substrate referred to as a "wafer."13 One major task is the fabrication of a master of the actual microstructure, such as a microlens array (Fig. 10.15, left). This is carried out using a well-established lithography and reflow method [37]. A photoresist material is locally exposed to UV light through a binary mask.14 After post-processing, binary structures (e.g., an array of cylindrical pillars of homogeneous height) remain. These are reflowed so that very smooth and

13 A process chain which is carried out for many components in parallel on these circular carrier substrates, with a diameter of up to 300 mm, is called a "wafer-scale" process.
14 Binary photomasks have apertures allowing the UV light to locally expose the underlying polymer, and opaque regions that block UV light.
precise spherical surface profiles are created by surface tension. Finally, a transparent replication stamp is formed as a negative copy of the wafer which carries the master structures. A simplified flow of an example fabrication process for an artificial compound eye objective is shown on the right side of Fig. 10.15. Thin glass wafers are the substrate material of choice due to their high optical transparency and mechanical stability at low thickness (usually less than 0.2 mm). Two glass wafers are bonded/glued together to form the baffle setup of an apposition compound eye as shown in Fig. 10.9. Subsequently, baffle arrays of different sizes are patterned by photolithography in thin layers of light-absorbing (black-matrix) polymer. The transparent stamp with the negative microstructure is pressed into the polymer using a modified contact mask aligner (see Fig. 10.16) [4, 18]. The polymer hardens when UV light is applied through the stamp, and its microstructure relief remains fixed even after the stamp is removed. Here, the axial precision of about ±5 μm is limited by the bowing of the substrate, tool, and reference backplane as well as by axial (z-axis) positioning and wedge errors, whereas the lateral precision of ±1 μm is similar to that of photolithography. The UV cross-linking of spacer, aperture, and lens polymers provides the chemical and temperature stability which is necessary to combine all the patterning steps in arbitrary order and to ensure compatibility with subsequent processes like bonding/soldering or
Fig. 10.16 Molding of polymer-on-glass microlens arrays on wafer scale using a modified mask aligner. The aligner reference backplane (chuck) fixes the substrate with the mold and eliminates the tilt between them (wedge error compensation, WEC). Lateral alignment is carried out by x,y stages and microscopes, the z-motion controls the polymer thickness (axial alignment). After UV exposure and separation, a polymer replication of the lens array remains on the glass wafer
dicing. In principle, our approach allows the assembly of apposition compound eye objectives with Si-based CMOS or CCD image sensors on a wafer scale and the subsequent separation using a dicing saw. Alternatively, separated optical chips can be aligned and bonded to electrically connected image sensors.
10.4.4 Future Challenges

One of the main issues for the future is to achieve a closer interaction between the compound eye optics and the optoelectronic components. For example, for application to flying/moving vehicles we aim at a symbiosis of artificial apposition compound eyes with smart image sensors that work highly in parallel, with pixel-wise embedded optical flow detection (see Chap. 3 for details). For such applications, the size and weight advantages of compound eye optics would perfectly complement the fast processing abilities and high dynamic range of such sensors. Considering the lateral dimensions, optical flow detection chips need a large pixel size for implementing the processing circuits within the periphery of the photodiodes, which is exactly what an artificial apposition compound eye provides, due to the large lens diameter compared to the size of the single photodiode within its footprint.

Optical sensors that work at very close object distance or at high speed (e.g., optical mouse navigation sensors) might benefit from the integration of solid-state illumination (e.g., LEDs or organic LEDs) on-chip with the image sensor and compound eye optics. Optical channels for distributing the illumination light onto the object might be situated periodically next to the imaging channels in order to create an overall integrated system.

Great potential lies in the realization of compound eyes on a curved base, as this is not only an evolutionary but also a physical optimum for tiny optical systems. A spherical shell offers the largest surface area for the smallest possible volume, so that the highest miniaturization can be achieved with the largest possible number of ommatidia (i.e., the best possible resolution). Besides, each ommatidium images close to perpendicular incidence, which allows very simple spherical lens profiles, and a large field of view results by
default. However, most challenges here are of a technological nature. To date, a few examples of fabricating microlens arrays on non-planar substrates have been demonstrated [32, 38]. However, the major problem of recording the image from a curved image plane remains. Some proposed approaches, such as thinning standard planar image sensors or using a grid of thin photodiodes on a flexible sheet and subsequently bowing the thin sensor array [27], seem to be sporadic attempts to circumvent this problem, but are usually applicable only along one dimension. Organic photodiodes seem to be a promising solution for non-planar detector arrays if they reach a high efficiency with small pixel sizes and a moderate lifetime.

Artificial apposition compound eyes have demonstrated the highest miniaturization known for imaging systems so far. On the other hand, their comparatively low resolution limits their application to machine vision and optical navigation sensors. In the future, the resolution has to be further increased for most applications in the fields of medical imaging, security, and consumer products. Ongoing efforts also deal with exploring the possibilities and technological realization of more sophisticated versions of compound eyes such as the Gabor superlens (Sect. 10.4) or the cluster eye. Preliminary simulation results promise a good image resolution of 350 × 350 pixels in combination with a high sensitivity (F/# = 1.5) for a modified Gabor superlens system of about 2 mm thickness. In parallel, we aim to optimize the image-stitching properties of the cluster eye (described in Sect. 10.4.2) to achieve about 640 × 480 pixels (VGA) resolution. Such a system would be able to compete with small classical single-aperture objectives. Although several critical issues for the realization and application of artificial compound eyes seem to be solved by the different approaches we presented here, there is much more work to be done.
Microoptical imaging sensors are still in their infancy, so it will take another 5 years from now until the first products are on the market. However, the application of artificial compound eyes in flying micro-robots (e.g., for search and rescue, or as a toy) may play a leading role in the ongoing development of these sensors.

Acknowledgments We would like to thank Sylke Kleinle, Andre Matthes, Antje Oelschläger, and Simone Thau from the Fraunhofer Institute for Applied Optics and Precision Engineering (IOF), Jena, for their contributions to the fabrication of the various types of artificial apposition compound eye objectives using microoptics technology. Special thanks are due to Reinhard Völkel from SUSS MicroOptics SA (Neuchâtel, Switzerland) for his inspiring previous work and helpful discussions about bio-inspired imaging. The experience of Martin Eisner (also SUSS MicroOptics) in aligned stacking of microlens array wafers finally led to the realization of the cluster eye. We are furthermore very thankful for the help we received from our colleagues at the Institute of Microtechnology (IMT) of the University of Neuchâtel, Switzerland, especially Toralf Scharf, who took very important steps in the fabrication of the lens and aperture arrays of the cluster eye. The presented work was partly funded by the German Federal Ministry of Education and Research (BMBF) within the project "Extremely compact imaging systems for automotive applications" (FKZ: 13N8796).
References

1. Barlow, H.B.: The size of ommatidia in apposition eyes. Journal of Experimental Biology 29, 667–674 (1952)
2. Borrelli, N.F., Bellman, R.H., Durbin, J.A., Lama, W.: Imaging and radiometric properties of microlens arrays. Applied Optics 30(25), 3633–3642 (1991)
3. Brückner, A., Duparré, J., Dannberg, P., Bräuer, A., Tünnermann, A.: Artificial neural superposition eye. Optics Express 15(19), 11922–11933 (2007)
4. Dannberg, P., Mann, G., Wagner, L., Bräuer, A.: Polymer UV-molding for micro-optical systems and O/E-integration. In: S.H. Lee, E.G. Johnson (eds.) Proc. of Micromachining for Micro-Optics, vol. SPIE 4179, pp. 137–145 (2000)
5. Duparré, J., Dannberg, P., Schreiber, P., Bräuer, A., Tünnermann, A.: Artificial apposition compound eye fabricated by micro-optics technology. Applied Optics 43(22), 4303–4310 (2004)
6. Duparré, J., Dannberg, P., Schreiber, P., Bräuer, A., Tünnermann, A.: Thin compound eye camera. Applied Optics 44(15), 2949–2956 (2005)
7. Duparré, J., Schreiber, P., Matthes, A., Pshenay-Severin, E., Bräuer, A., Tünnermann, A., Völkel, R., Eisner, M., Scharf, T.: Microoptical telescope compound eye. Optics Express 13(3), 889–903 (2005)
8. Duparré, J., Wippermann, F., Dannberg, P., Reimann, A.: Chirped arrays of refractive ellipsoidal microlenses for aberration correction under oblique incidence. Optics Express 13(26), 10539–10551 (2005)
9. Franceschini, N., Pichon, J.M., Blanes, C.: From insect vision to robot vision. Philosophical Transactions of the Royal Society of London, Series B 337, 283–294 (1992)
10. Gabor, D.: Improvements in or relating to optical systems composed of lenticules. Pat. UK 541,753 (1940)
11. Goetz, K.G.: Die optischen Uebertragungseigenschaften der Komplexaugen von Drosophila. Kybernetik 2, 215–221 (1965)
12. Hamanaka, K., Koshi, H.: An artificial compound eye using a microlens array and its application to scale-invariant processing. Optical Review 3(4), 264–268 (1996)
13. Hardie, R.: Functional organization of the fly retina. Progress in Sensory Physiology 5, 1–79 (1985)
14. Hembd-Sölner, C., Stevens, R.F., Hutley, M.C.: Imaging properties of the Gabor superlens. Journal of Optics A: Pure and Applied Optics 1, 94–102 (1999)
15. Horridge, G.A.: The compound eye of insects. Scientific American 237, 108–120 (1977)
16. Horridge, G.A.: The separation of visual axes in apposition compound eyes. Philosophical Transactions of the Royal Society of London, Series B 285, 1–59 (1978)
17. Hoshino, K., Mura, F., Shimoyama, I.: A one-chip scanning retina with an integrated micromechanical scanning actuator. Journal of Microelectromechanical Systems 10(4), 492–497 (2001)
18. Houbertz, R., Domann, G., Cronauer, C., Schmitt, A., Martin, H., Park, J.U., Fröhlich, L., Buestrich, R., Popall, M., Streppel, U., Dannberg, P., Wächter, C., Bräuer, A.: Inorganic-organic hybrid materials for application in optical devices. Thin Solid Films 442, 194–200 (2003)
19. Hugle, W.B., Daendliker, R., Herzig, H.P.: Lens array photolithography. Pat. US 8,114,732 (1993)
20. Hutley, M.C.: Integral photography, superlenses and the moiré magnifier. In: M.C. Hutley (ed.) Digest of Top. Meet. on Microlens Arrays at NPL, Teddington, vol. EOS 2, pp. 72–75 (1993)
21. Jeong, K., Kim, J., Lee, L.P.: Polymeric synthesis of biomimetic artificial compound eyes. Proceedings of the 13th International Conference on Solid-State Sensors, Actuators and Microsystems (Transducers '05), pp. 1110–1114 (2005)
22. Kamal, H., Völkel, R., Alda, J.: Properties of moiré magnifiers. Optical Engineering 37(11), 3007–3014 (1998)
23. Kawazu, M., Ogura, Y.: Application of gradient-index fiber arrays to copying machines. Applied Optics 19(7), 1105–1112 (1980)
24. Kirschfeld, K.: The resolution of lens and compound eyes. In: Neural Principles in Vision, pp. 354–370 (1976)
25. Kirschfeld, K., Franceschini, N.: Optical characteristics of ommatidia in the complex eye of Musca. Kybernetik 5, 47–52 (1968)
26. Kitamura, Y., Shogenji, R., Yamada, K., Miyatake, S., Miyamoto, M., Morimoto, T., Masaki, Y., Kondou, N., Miyazaki, D., Tanida, J., Ichioka, Y.: Reconstruction of a high-resolution image on a compound-eye image-capturing system. Applied Optics 43(8), 1719–1727 (2004)
27. Ko, H.C., Stoykovich, M.P., Song, J., Malyarchuk, V., Choi, W.M., Yu, C.J.: A hemispherical electronic eye camera based on compressible silicon optoelectronics. Nature 454, 748–753 (2008)
28. Land, M.F.: Compound eyes: old and new optical mechanisms. Nature 287, 681–686 (1980)
29. Land, M.F.: Variations in structure and design of compound eyes. In: D. Stavenga, R.C. Hardie (eds.) Facets of Vision, Chap. 5, pp. 90–111. Springer (1989)
30. Land, M.F., Burton, F., Meyer-Rochow, V.: The optical geometry of euphausiid eyes. Journal of Comparative Physiology A 130(1), 49–62 (1979)
31. Land, M.F., Nilsson, D.E.: Animal Eyes. Oxford Animal Biology Series. Oxford University Press, Oxford (2002)
32. Lee, L.P., Szema, R.: Inspirations from biological optics for advanced photonic systems. Science 310(5751), 1148–1150 (2005)
33. Lohmann, A.W.: Scaling laws for lens systems. Applied Optics 28(23), 4996–4998 (1989)
34. McIntyre, P., Caveney, S.: Graded index optics are matched to optical geometry in the superposition eyes of scarab beetles. Philosophical Transactions of the Royal Society of London, Series B 311, 237–269 (1985)
35. Nakayama, K.: Biological image motion processing: a review. Vision Research 25(5), 625–660 (1985)
36. Ogata, S., Ishida, J., Sasano, T.: Optical sensor array in an artificial compound eye. Optical Engineering 33(11), 3649–3655 (1994)
37. Popovich, Z.D., Sprague, R.A., Conell, G.A.N.: Technique for monolithic fabrication of microlens arrays. Applied Optics 27(7), 1281–1284 (1988)
38. Radtke, D., Duparré, J., Zeitner, U., Tünnermann, A.: Laser lithographic fabrication and characterization of a spherical artificial compound eye. Optics Express 15, 3067–3077 (2007)
39. Sanders, J.S. (ed.): Selected Papers on Natural and Artificial Compound Eye Sensors. SPIE Milestone Series, vol. 122. SPIE Optical Engineering Press, Bellingham (1996)
40. Sanders, J.S., Halford, C.E.: Design and analysis of apposition compound eye optical sensors. Optical Engineering 34(1), 222–235 (1995)
41. Smith, W.J.: Modern Optical Engineering: The Design of Optical Systems, 2nd edn. McGraw-Hill, New York (1990)
42. Snyder, A.W.: Physics of vision in compound eyes. In: Handbook of Sensory Physiology, pp. 225–313. Springer (1977)
43. Snyder, A.W., Stavenga, D.G., Laughlin, S.B.: Spatial information capacity of compound eyes. Journal of Comparative Physiology A 116, 183–207 (1977)
44. Stevens, R.F.: Optical inspection of periodic structures using lens arrays and moiré magnification. Imaging Science Journal 47, 173–179 (1999)
45. Thorson, J.: Small-signal analysis of a visual reflex in the locust. Kybernetik 3(2), 41–66 (1966)
46. Völkel, R., Eisner, M., Weible, K.J.: Miniaturized imaging systems. Microelectronic Engineering 67–68, 461–472 (2003)
47. Völkel, R., Wallstab, S.: Flachbauendes Bilderfassungssystem. Pat. DE 199 17 890 A1 (1999)
48. Wippermann, F., Duparré, J., Schreiber, P., Dannberg, P.: Design and fabrication of a chirped array of refractive ellipsoidal micro-lenses for an apposition eye camera objective. In: L. Mazuray, R. Wartmann (eds.) Proceedings of Optical Design and Engineering II, vol. 5962, pp. 59622C-1–59622C-11 (2005)
49. Yamada, K., Tanida, J., Yu, W., Miyatake, S., Ishida, K., Miyazaki, D., Ichioka, Y.: Fabrication of diffractive microlens array for opto-electronic hybrid information system. In: Proceedings of Diffractive Optics '99, pp. 52–53. EOS (1999)
Chapter 11
Flexible Wings and Fluid–Structure Interactions for Micro-Air Vehicles W. Shyy, Y. Lian, S.K. Chimakurthi, J. Tang, C.E.S. Cesnik, B. Stanford, and P.G. Ifju
Abstract Aerodynamics, structural dynamics, and flight dynamics of natural flyers intersect with some of the richest problems in micro-air vehicles (MAVs), including massively unsteady three-dimensional separation, transition in boundary and shear layers, vortical flows, unsteady flight environments, aeroelasticity, and adaptive control being just a few examples. A challenge is that the scaling of both fluid dynamics and structural dynamics between smaller natural flyers and practical flying hardware/lab experiments (of larger dimensions) is fundamentally difficult. The interplay between flexible structures and aerodynamics motivated by MAV development is discussed in this chapter. For fixed wings, membrane materials exhibit self-initiated vibration even in a steady free stream, which lowers the effective angle of attack of the membrane structure compared to that of a rigid wing. For flapping wings, structural flexibility can enhance leading-edge suction by increasing the effective angle of attack, resulting in higher thrust generation.
11.1 Introduction Micro-air vehicles (MAVs) have the potential to revolutionize our capabilities of gathering information in environmental monitoring, homeland security, and other time-sensitive areas. To fulfill this potential,
W. Shyy, Department of Aerospace Engineering, University of Michigan, Ann Arbor, Michigan, USA. e-mail: [email protected]
MAVs must have the ability to fly in urban settings, tunnels, and caves; maintain forward and hovering flight; maneuver in constrained environments; and "perch" until needed. Due to the MAVs' small size, flight regime, and modes of operation, significant scientific advancement will be needed to create this revolutionary capability.

From a biology-inspired viewpoint, aerodynamics, structural dynamics, and flight dynamics of birds, bats, and insects intersect with some of the richest problems in aerospace engineering: massively unsteady three-dimensional separation, transition in boundary and shear layers, vortical flows, unsteady flight environment, aeroelasticity, and adaptive control being just a few examples. Natural flyers have several outstanding features which may pose several challenges in the design of MAVs. For example, (i) there is substantial anisotropy in the wing structural characteristics between the chordwise and spanwise directions, (ii) they employ shape control to accommodate spatial and temporal flow structures, (iii) they accommodate wind gusts and accomplish station keeping with varying kinematics patterns, (iv) they utilize multiple unsteady aerodynamic mechanisms for lift and thrust enhancement, and (v) they combine sensing, control, and wing maneuvering to maintain not only lift but also flight stability.

In principle, one might like to first understand these biological systems, abstract certain desirable features, and then apply them to MAV design. A challenge is that the scaling of both fluid dynamics and structural dynamics between smaller natural flyers and practical flying hardware/lab experiments (larger dimension) is fundamentally difficult. Regardless, in order to develop a satisfactory flyer, one needs to meet the following objectives:
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_11, © Springer-Verlag Berlin Heidelberg 2009
• generate necessary lift, which scales with the vehicle/wing length scale as l³ (under geometric similitude); however, oftentimes a flyer needs to increase or reduce lift to maneuver toward/avoid an object, resulting in the need for substantially more complicated considerations;
• minimize the power consumption. An optimal design based on a single design point, under a given steady free-stream value, is insufficient; instead, we need to develop a knowledge base guiding the future design of MAVs across a range of wind gusts, flight speeds, and time scales so that they can be optimal flyers within the entire flight envelope.

When wind gust adjustment, object avoidance, or station keeping become major factors, highly deformed wing shapes and coordinated wing–tail movement are often observed. Figure 11.1a illustrates such behavior for a hummingbird maneuvering around a potential threat and a chickadee adjusting its flight path to accommodate a target. Understanding of the aerodynamic, structural, and control implications of these modes is essential for the development of high-performance and robust micro-air vehicles capable of performing desirable missions.

The large flexibility of animal wings leads to complex fluid–structure interactions, while the kinematics of flapping and the spectacular maneuvers performed by natural flyers result in highly coupled nonlinearities in fluid mechanics, aeroelasticity, flight dynamics, and control systems. Furthermore, as mentioned earlier, insect wing properties are anisotropic because of the membrane–batten structures, with the spanwise bending stiffness about one to two orders of magnitude larger than the chordwise bending stiffness in the majority of insect species. In general, spanwise flexural stiffness scales with the third power of the wing chord, while the chordwise stiffness scales with the second power of the wing chord [5]. Figure 11.1b shows the wing of a dragonfly. It has a reinforced leading edge and local variations in structural composition, in terms of corrugation, for example. It has been shown in the literature that wing corrugation increases both warping rigidity and flexibility. Furthermore, specific characteristic features have been observed in the wing structure of the dragonfly which even help prevent fatigue fracture [27].
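The chord scalings quoted from [5] imply that the stiffness anisotropy grows linearly with the wing chord; a one-line sketch (the prefactors are hypothetical and the units arbitrary):

```python
def stiffness_ratio(chord, k_span=1.0, k_chord=1.0):
    """Spanwise/chordwise flexural stiffness ratio: (k_span*c**3)/(k_chord*c**2) = (k_span/k_chord)*c."""
    return (k_span * chord**3) / (k_chord * chord**2)

for c in (1.0, 5.0, 10.0):
    print(c, stiffness_ratio(c))  # the anisotropy ratio grows linearly with chord
```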
Fig. 11.1 (a) Asymmetric flapping kinematics, involving wing–tail coordination, displayed by a hummingbird avoiding a potential threat and a nuthatch making adjustments while flying toward a target. (b) Wing structure of a dragonfly with reinforced leading edge, anisotropic mechanical property distribution, and corrugated geometry.
Moreover, the thin nature of the insect wing skin structure makes it unsuitable for taking compressive loads, which may result in skin wrinkling and/or buckling (i.e., large local deformations that interact with the flow). Overall, insect wings have deformable aerofoils whose instantaneous shape through the stroke cycle is determined, largely automatically, by the interaction of their structural elasticity with the inertial and aerodynamic forces they are experiencing [46].

On the aerodynamics side, in a fixed-wing setup, wind tunnel measurements show that corrugated wings are aerodynamically insensitive to Reynolds number variations, which is quite different from a typical low Reynolds number airfoil. Can large flexible deformations provide a better interaction with the aerodynamics than deformations limited to the linear regime? If the torsional stiffness along the wing span can be tailored, how does that affect the wing kinematics for optimum thrust generation? How do these geometrically nonlinear effects and the anisotropy of the structure impact the aerodynamic characteristics of the flapping wing? All of these issues require detailed investigation, and understanding them is critical for the success of future MAV designs.

For the foregoing reasons and many others, fluid–structure interaction studies are critical to MAV design. Much of the effort in this area has thus far focused on fixed-wing membrane-based vehicles [19, 29, 34, 36, 38]. Shyy et al. [28] discussed flexible wings utilizing membrane materials and inferred from computations that, compared to a rigid wing, a membrane wing can adapt to stall better and has the potential to achieve enhanced agility and storage condition by morphing its shape. They also emphasized the importance of fluid–structure analyses for understanding membrane wing performance.
Lian and Shyy [19] have studied the three-dimensional interaction between a membrane wing and its surrounding fluid flow via an aeroelastic coupling of a nonlinear membrane structural solver and a Navier–Stokes solver. Stanford et al. [36] made a direct comparison of wing displacements, strains, and aerodynamic loads obtained via a novel experimental setup with those obtained numerically. In their work, they considered both pre- and post-stall angles of attack, and the computed flow structures revealed several key aeroelastic effects: decreased tip vortex strength, pressure spikes and flow deceleration at the tangent discontinuity of the inflated membrane
boundary, and an adaptive shift of pressure distribution in response to aerodynamic loading. The aeroelasticity of flapping wings has only recently been seriously addressed and a full picture of the basic aeroelastic phenomena in flapping flight is still not clear [3, 4, 9, 13, 14, 22, 24, 28, 31–33, 39, 40, 44, 47]. For example, Frampton et al. [9] have investigated a method of wing construction that results in an optimal relationship between flapping wing bending and twisting such that optimal thrust forces are generated. The thrust production of flapping wings was tested in an experimental rig. Results from this study indicated that the phase between bending and torsional motion is critical for the production of thrust. It was noted that a wing with bending and torsional motion in phase creates the largest thrust whereas a wing with the torsional motion lagging the bending motion by 90◦ results in the best efficiency. Hamamoto et al. [13] have conducted finite element analysis based on the arbitrary Lagrangian–Eulerian method to perform fluid–structure interaction analysis on a deformable dragonfly wing in hover and examined the advantages and disadvantages of flexibility. They tested three types of flapping flight: a flexible wing driven by dragonfly flapping motion, a rigid wing (stiffened version of the original flexible dragonfly wing) driven by dragonfly flapping motion, and a rigid wing driven by modified flapping based on tip motion of the flexible wing. They found that the flexible wing with nearly the same average energy consumption generated almost the same amount of lift force as the rigid wing with modified flapping motion. In this case, the motion of the tip of the flexible wing provided equivalent lift as the motion of the root of the rigid wing. However, the rigid wing required 19% more peak torque and 34% more peak power, indicating the usefulness of wing flexibility. 
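The role of phasing between bending (plunge) and torsion (pitch) noted above can be illustrated with a purely kinematic sketch. The amplitudes, frequency, and flow speed below are illustrative assumptions, and the calculation is my own quasi-steady illustration rather than a reproduction of any of the cited studies:

```python
import math

# Purely kinematic illustration: the phase between plunge h(t) and pitch
# theta(t) sets the amplitude of the quasi-steady effective angle of attack
# alpha_eff = theta - atan(hdot/U). All numerical values are assumed.
U, h0, th0, om = 1.0, 0.05, 0.3, 10.0   # flow speed, plunge amp, pitch amp, rad/s

def alpha_eff(t, phase):
    hdot = h0 * om * math.cos(om * t)        # plunge velocity for h = h0*sin(om*t)
    theta = th0 * math.sin(om * t + phase)   # pitch offset from the plunge by 'phase'
    return theta - math.atan(hdot / U)

def amplitude(phase, n=2000):
    T = 2.0 * math.pi / om
    return max(abs(alpha_eff(i * T / n, phase)) for i in range(n))

# With a 90 deg offset the pitch partially cancels the plunge-induced
# incidence, shrinking the effective angle-of-attack excursion.
assert amplitude(math.pi / 2) < amplitude(0.0)
```

Whether a given phase maximizes thrust or efficiency depends on the full aeroelastic problem; the sketch only shows that phasing directly reshapes the effective incidence the wing sees.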
More recently, Singh [32] has discussed a computational framework for the aeroelastic analysis of hover-capable, bio-inspired flapping wings. The chord-based Reynolds number considered for the analyses was in the 10³–10⁵ range. One of the major inferences from this work is that at high flapping frequencies (12 Hz), the lightweight and highly flexible insect-like wings used in the study exhibited significant aeroelastic effects. Zhu [47] has developed a nonlinear fluid–structure interaction approach to study the unsteady oscillation of a flexible wing. He found that when the wing is immersed in air, the chordwise flexibility reduces
both the thrust and the propulsion efficiency, whereas spanwise flexibility (through equivalent plunge and pitch flexibility) increases the thrust without efficiency reduction within a small range of structural parameters. However, when the wing is immersed in water, the chordwise flexibility increases the efficiency and the spanwise flexibility reduces both the thrust and the efficiency. Wills et al. [44] have presented a computational framework to design and analyze flapping MAV flight. A series of solution methodologies at different levels of geometric and physical fidelity was described in that work. Liani et al. [22] have coupled an unsteady panel method with Lagrange's equations of motion for a two degree-of-freedom (2 DOF) spring–mass wing section system to investigate the aeroelastic effect on the aerodynamic forces produced by a flexible flapping wing at different frequencies, especially near resonance. Heathcote et al. [14] have experimentally investigated the effects of stiffness on the thrust generation of airfoils undergoing a plunging motion at various free stream velocities. Direct force measurements showed that the thrust/input-power ratio was greater for flexible airfoils than for the rigid one. They also observed that at high plunging frequencies the less flexible airfoil generates the largest thrust, while at low frequencies the more flexible airfoil generates the most thrust. To study the effect of spanwise stiffness on the thrust, lift, and propulsive efficiency of a plunging wing, a water tunnel study was conducted on a NACA0012 uniform wing of aspect ratio 3. They observed that, for Strouhal numbers greater than 0.2, a degree of spanwise flexibility was beneficial. Tang et al. [40] explored a two-dimensional flexible airfoil by coupling a pressure-based fluid solver with a linear beam solver.
In this work, the fluid flow around plates of different thicknesses with a teardrop-shaped leading edge was computed at a Reynolds number of 9 × 10³. In addition, a flat plate with half-cylinders at the leading and trailing edges was investigated at a Reynolds number of 10² to probe the mechanism of thrust generation. In particular, they pointed out that the effect of the deformation (passive pitching) is similar to that of rigid body motion (rigid pitching), meaning that the detailed shape of the airfoil is secondary to the equivalent angle of attack. In contrast to the aforementioned studies, [6] explored the relative contributions of inertial-elastic and fluid dynamic forces by oscillating Manduca
wings at 25 Hz in both normal air and helium. In that paper, the authors show that the overall wing motions and bending patterns are quite similar in both cases, despite the 85% reduction in fluid density in helium, suggesting that the contribution of aerodynamic forces is relatively small compared to that of inertial-elastic processes. It was then suggested that, for studies of animal flight in air, the somewhat intractable problem of fluid–solid coupling in wing design does not need to be addressed. In this connection, [42] noted that, unfortunately, the inherent challenge of obtaining flowfield measurements in the wake of a living insect makes such postulations difficult to test, and that such conclusions are often based on simple models for the aerodynamic forces that generally neglect much of the important unsteady fluid dynamics. This chapter presents a perspective on the issues, progress, and challenges associated with unsteady low Reynolds number aerodynamics, wing flexibility, and their implications for micro-air vehicles. The parameters considered in the model problems presented here are general and do not target any particular insect.
11.2 Parameter Space and Scaling Laws

From the viewpoint of fluid and structural dynamics, there are several dimensionless parameters of relevance to our study. Consider c, the chord length; ω, the circular frequency of flapping (rad/s); ha, the flapping amplitude; Uref, the reference velocity; ν, the kinematic viscosity; ρf, the fluid density; D, the plate stiffness (directly proportional to the material Young's modulus and the cube of the wing thickness); IB, the (flapping) moment of inertia; and ωB and ωT, the linear natural frequencies of wing bending and torsion, respectively. The relevant dimensionless parameters are listed in Table 11.1. Assuming that geometric similarity is maintained, the scaling laws for forward and hovering flight conditions are summarized in the same table. One can readily conclude that the hovering Reynolds number and the cruising Reynolds number are very close to each other because the characteristic velocity for hovering is Uref = ωha. For hovering, the reduced frequency becomes k = c/(2ha), which is simply related to the normalized stroke amplitude. Furthermore, if
we use the forward flight speed as the velocity scale, the resulting non-dimensional form of the momentum equation explicitly contains the Reynolds number and the Strouhal number. If, on the other hand, we choose the flapping velocity scale, then the momentum equation will explicitly contain the Reynolds number and the reduced frequency [30]. As shown in Table 11.1, the scaling laws make the construction of aeroelastic models and their testing complicated. Moreover, they lead to the use of structural materials with elastic properties that are different from those of the structure of the natural flyer. Table 11.2 shows selected structural and flow properties of three low Re flyers. Table 11.3 shows several dimensionless parameters for the three low Re flyers listed in Table 11.2. For the calculation of the plate stiffness in Π1, Young's modulus information along the span was considered. Specifications for the hummingbird were obtained from [7], whereas those for the bumblebee and hawkmoth were obtained from Fig. 11.4 of [5]. It may be noted that in the calculation of the dimensionless parameters Π3 and Π4, the wing was assumed to be a beam with a rectangular cross section whose length is equal to the mean chord and breadth to the mean thickness of the wing.

Table 11.1 Dimensionless parameters and scaling dependency for flapping wings. The scaling entries give the dependence on the length scale l and the flapping frequency f.

Dimensionless parameter               Hovering (based on       Forward flight (based on
                                      flapping wing speed)     cruising speed)
Reynolds number: Re = Uref c/ν        l², f                    l, independent of f
Strouhal number:¹ St = ωha/(π Uref)   independent of l and f   l, f
Reduced frequency: k = ωc/(2 Uref)    independent of l and f   l, f
Π1² = D/(ρf Uref² c³)                 l⁻², f⁻²                 independent of l and f
Π2³ = IB/(ρf c⁵)                      l⁻¹, independent of f    l⁻¹, independent of f
Π3⁴ = ωB/ω                            l⁻¹, f⁻¹                 l⁻¹, f⁻¹
Π4⁵ = ωT/ω                            l⁻¹, f⁻¹                 l⁻¹, f⁻¹

¹ It is noted that the advance ratio J = Uref/(ωha) is related to St, specifically J = 1/(π St).
² Ratio of elastic and aerodynamic forces. Π1 gives a relative measure of elastic deformation under a given aerodynamic loading and is important as a measure of the structural nonlinear regime. Π1 is also related to the Kussner factor for flutter estimation. The applicability of the latter to the flapping wing stability boundary is uncertain, however, and should be investigated.
³ Ratio of inertial and aerodynamic generalized forces. Π2 is related to the Lock number and contains the mass ratio (representing the relative density of the wing and the fluid surrounding it).
⁴ Ratio of the first linear bending natural frequency [23] to the frequency of excitation.
⁵ Ratio of the first linear torsion natural frequency [23] to the frequency of excitation.

Table 11.2 Selected structural and flow properties of three low Re flyers

Insect        Mean chord      Wing mass   Forward velocity   Mean wing        Wing semi-span
              length c (cm)   (mg)        U (m/s)            thickness (cm)   (cm)
Bumblebee     0.4             0.45        4.5                7 × 10⁻⁴         1.3
Hawkmoth      1.8             47          5                  3.4 × 10⁻³       4.9
Hummingbird   2.0             294         8                  0.1              8.5

Table 11.3 Dimensionless parameters for the low Re flyers listed in Table 11.2

Parameter   Bumblebee           Hawkmoth              Hummingbird
AR          6.6                 5.3                   8.2
Re          1.2 × 10³–3 × 10³   4.2 × 10³–5.3 × 10³   1.1 × 10⁴
St          0.48                0.35                  0.93
k           0.23                0.3                   0.15
Π1          510                 61                    1.56 × 10³
Π2          20                  12.7                  170
Π3          6.5                 4.6                   7.9
Π4          1.1 × 10²           62                    61
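The definitions collected in Table 11.1 can be scripted directly; the chord, frequency, amplitude, viscosity, and cruise speed used below are illustrative assumptions and do not correspond to any particular insect:

```python
import math

def dimensionless_numbers(c, omega, h_a, U_ref, nu):
    """Re, St, k, J per the definitions of Table 11.1 (SI units assumed)."""
    Re = U_ref * c / nu                    # Reynolds number
    St = omega * h_a / (math.pi * U_ref)   # Strouhal number
    k = omega * c / (2.0 * U_ref)          # reduced frequency
    J = U_ref / (omega * h_a)              # advance ratio, J = 1/(pi*St)
    return Re, St, k, J

# Illustrative values (not insect-specific): 2 cm chord, 26 Hz flapping,
# 2.5 cm flapping amplitude, air kinematic viscosity.
c, f, h_a, nu = 0.02, 26.0, 0.025, 1.5e-5
omega = 2.0 * math.pi * f

# Hovering: the characteristic velocity is the flapping speed Uref = omega*h_a,
# so k collapses to c/(2*h_a) and St to the constant 1/pi.
Re_h, St_h, k_h, J_h = dimensionless_numbers(c, omega, h_a, omega * h_a, nu)
assert abs(k_h - c / (2.0 * h_a)) < 1e-12

# Forward flight at an assumed cruising speed of 5 m/s.
Re_f, St_f, k_f, J_f = dimensionless_numbers(c, omega, h_a, 5.0, nu)
assert abs(J_f - 1.0 / (math.pi * St_f)) < 1e-9
```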
11.3 Fixed Membrane Wing MAVs

Nature's design of flexible membrane wings can be put into practice for MAVs. The membrane concept has been successfully incorporated in MAVs designed by Ifju et al. [16]. In their design, illustrated in Fig. 11.2, unidirectional carbon fiber and cloth prepreg materials have been used for the skeleton (leading-edge spar and chordwise battens) and latex rubber sheet material for the membrane. In such a construction, the stiffness of the whole structure can be controlled through the number of battens and the membrane material. Passive shape adaptation can be built into such a membrane wing via its interaction with aerodynamic loading, through either geometric or aerodynamic twist [30, 35]. Its successful implementation is particularly important for micro-air vehicles beset with several flight issues: poor wing efficiency due to separation of the low Reynolds number flow, rolling instabilities, bilateral asymmetries due to destabilization of the low aspect ratio wing's tip vortices, and wind gusts of the same order of magnitude as the flight speed itself. The dynamic membrane model computations by Lian [18] and Lian et al. [21] have shown that even in a steady free stream, the membrane wing experienced high-frequency vibrations. Navier–Stokes simulations and experimental studies report that, for membrane-type flexible structures under typical MAV Reynolds
Fig. 11.2 Six-inch flexible, fixed-wing MAV [16]
numbers, the structural response is around O(100) Hz [31]. These issues, along with stability, weight management, and flight control problems that intensify with decreasing vehicle size, can potentially be attenuated with proper load redistribution over the wing. In an attempt to understand the aerodynamic/aeroelastic behavior of membrane wings, a rigid wing and two flexible, fixed-wing MAV structures were considered in [30, 35]. The first of the two flexible structures had membrane wings with several chordwise batten structures and a free trailing edge for geometric twist (batten-reinforced wings, BR). The second had membrane wings whose interior is unconstrained and is sealed along the perimeter to a stiff laminate for aerodynamic twist (perimeter-reinforced wings, PR). Typical flow structures for all three wings are shown in Fig. 11.3, for 15° angle of attack (α) and 15 m/s free stream velocity (U∞). The two predominant hallmarks of MAV aerodynamics can be seen in the flow over the rigid wing: the low Reynolds number (10⁵) causes the laminar boundary layer to separate against the adverse pressure gradient at the wing root, and the low aspect ratio (1.2) forces a strong wing tip vortex swirling system and leaves a low-pressure region at the wing tip. Flow over the flexible BR wing is characterized by pressure undulations over the surface [15], where the membrane inflation between each batten re-directs the flow. The shape adaptation decreases the strength of the adverse pressure gradient, and thus the size of the separation bubble. A large pressure spike develops over the PR wing at the leading edge of the membrane skin. The pressure recovery over the wing is shifted aftward, and flow separates as it travels down the inflated shape, where it is then entrained into the low-pressure core of the tip vortex.
This interaction between the tip vortices and the longitudinal flow separation is known to lead to unsteady vortex destabilization at high angles of attack [41]; no such relationship is obvious for the BR and rigid wings. The low-pressure regions at the wing tips of the two membrane wings are weaker than those observed on the rigid wing, presumably due to energy considerations: strain energy in the membrane may remove energy from the lateral swirling system. Furthermore, the inflated membrane shape may act as a barrier to the tip vortex formation. The lift, drag, and pitching moment coefficients for the three different wings discussed above through
Fig. 11.3 Streamlines and pressure distributions (Pa) over the top wing surface: α = 15°, U∞ = 15 m/s [16]
an α-sweep are shown in Fig. 11.4. The CL–α relationships are mildly nonlinear (20–25% increase in CLα between 0° and 15°, where CLα is the lift-curve slope) due to growth of the low-pressure cells at the wing tip. Further evidence of the low aspect ratio is the high stall angle, computed as 21° for the rigid case. The aerodynamic twist of the PR wing increases CLα (by as much as 8%), making the MAV more susceptible to gusty conditions. CLmax is slightly higher as well, subsequently lowering the stall angle to 18°. The adaptive washout of the BR wing decreases CLα (by as much as 15% relative to the rigid wing), though the change is negligible at lower angles of attack. This is thought to be a result of two offsetting factors: the adaptive washout at the trailing edge decreases the lift, while the inflation of the membrane toward the leading edge increases the effective camber, and hence the lift.
Also, comparing the drag polars of Fig. 11.4, it can be seen that both flexible wings incur a drag penalty at small lift values, indicative of the aerodynamically non-optimal shapes assumed by the flexible wings (though the BR wing has less drag at a given angle of attack [2]). The drag difference between the rigid and BR wings is very small, while the PR wing displays a larger penalty. This is presumably due to two factors: a greater percentage of the wing experiences flow separation, and a large portion of the pressure spike at the leading edge is pointed in the axial direction. Pitching moments (measured about the leading edge) have a negative slope with both CL and α, as necessitated by stability requirements. Nonlinear trends due to low aspect ratio effects are again evident. Both the BR and the PR wings have a lower ∂Cm/∂CL than the rigid wing, though only the PR wing shows a drastic change (by as much as 15%). This is a result of the membrane
Fig. 11.4 Computed aerodynamic performance: α = 15°, U∞ = 15 m/s
inflation, which shifts the pressure recovery toward the trailing edge, adaptively increasing the strength of the restoring pitching moment with increases in lift or α [36]. Steeper Cm slopes indicate larger static margins: stability concerns are a primary target of design improvement from one generation of micro-air vehicles to the next. The range of flyable CG locations is generally only a few millimeters long; meeting this requirement represents a strenuous weight management challenge. Furthermore, the PR wing displays a greater range of linear Cm behavior, possibly due to the fact that the adaptive membrane inflation quells the strength of the low-pressure cells, as discussed above. No major differences appear between the L/D characteristics of the three wings for low angles of attack. At moderate angles, the large drag penalty of the PR wing decreases the efficiency, while the BR wing slightly outperforms the rigid wing. At higher angles, both the lift and drag characteristics of the PR wing are superior to the other two, resulting in the best L/D ratios. Aeroelastic tailoring conventionally utilizes unbalanced laminates for bend/twist coupling, but the pre-tension within the membrane skin has an enormous impact on the aerodynamics: for the two-dimensional case, higher pre-tension generally pushes flexible wing performance toward that of a rigid wing. For a three-dimensional wing, the response can be considerably more complex, depending on the nature of the membrane reinforcement. Effects of increasing the membrane pre-tension may include a decrease in drag, a decrease in CLα, linearized lift behavior, an increase in the zero-lift angle of attack, and more abrupt stalling patterns. Furthermore, aeroelastic instabilities pertaining to shape hysteresis at low angles of attack can be avoided with specific ratios of spanwise-to-chordwise pre-tension [25].
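The stiffening role of membrane pre-tension can be seen in a textbook one-dimensional model, my own illustration rather than the structural models of [25] or [37]; the pressure load, chord, and tension values are assumed:

```python
# 1D membrane strip under a uniform pressure difference dp and pre-tension T
# per unit span: T * w''(x) = -dp with w(0) = w(c) = 0, which gives
# w(x) = dp*x*(c - x)/(2*T) and a peak aeroelastic camber of dp*c^2/(8*T).
def peak_camber(dp, c, T):
    return dp * c ** 2 / (8.0 * T)

dp, c = 50.0, 0.10                    # assumed load (Pa) and chord (m)
w1 = peak_camber(dp, c, T=10.0)       # baseline pre-tension (N/m)
w2 = peak_camber(dp, c, T=20.0)       # doubled pre-tension
# Doubling the pre-tension halves the load-induced camber, i.e., the membrane
# behaves more like a rigid surface as pre-tension grows.
assert abs(w1 / w2 - 2.0) < 1e-12
```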
Increasing the pre-stress within the membrane skin of a BR wing (see [37]) generally increases CLα, decreases Cmα (the slope of the moment curve), and decreases L/D. The system is very sensitive to changes in the pre-stress normal to the battens, and less so to the stress parallel to the battens, due to the zero pre-stress condition at the free edge. Minimizing CLα (for optimal gust rejection) is achieved with no pre-stress in the span direction and a mild amount in the chord direction. The unconstrained trailing edge eliminates the stiffness in this area (allowing for adaptive washout), but retains the stiffness toward the leading
edge, removing the inflation seen here (and the corresponding increase in lift). Such a tactic reduces the conflicting sources of aeroelastic lift seen in a BR wing. Maximizing CLα (for effective pull-up maneuvers, for example) is obtained by maximizing Ny and setting Nx to zero. Conversely, maximizing CLα with a constraint on L/D might be obtained by maximizing Nx and setting Ny to zero. Opposite trends are seen for a PR wing. Increasing the pre-stress within the membrane skin generally decreases CLα, increases Cmα, and increases L/D. The chordwise pre-stress has a negligible effect upon the stability derivatives, though both directions contribute equally to an improvement in L/D. As such, optimization of either derivative with a constraint on L/D could easily be provided by a design with maximum chordwise pre-tension and a slack membrane in the span direction. The overall sensitivity of the aerodynamics to the pre-tension in the membrane skin of a BR or a PR wing can be large for the derivatives (up to a 20% change in the Cmα of a BR wing), though less so for the wing efficiency: variations in L/D are never more than 5%. Laminar-turbulent transition can affect the aerodynamic performance of MAVs flying at Reynolds numbers of 10⁴–10⁵. In their study, Lian and Shyy [20] performed numerical investigations to examine the impact of flexible surfaces on the transition process. For this, a Navier–Stokes solver, the e^N transition model, and a dynamic membrane model were coupled to study the fluid–structure interaction. In the test case studied, a portion of the upper surface of the SD7003 airfoil was covered with a latex membrane extending from 33 to 52% of the chord. No pre-tension was applied to the membrane. The membrane has a uniform thickness of 0.2 mm and a density of 1200 kg/m³. The reference scales of their computations are based on a free stream velocity of 0.3 m/s, a density of 1000 kg/m³, and an airfoil chord length of 0.2 m.
With these parameters, the time step for the CFD solver was set to 2 × 10⁻³ s and the time step of the structural solver to 1 × 10⁻⁵ s. In their work, subiterations between the CFD and the structural solver were used within each time step to synchronize the flow and the structure. By doing so, the errors introduced by a lagged fluid/structure coupling approach were controlled. A computational test was performed at α = 4° and Re = 6 × 10⁴. It was observed that when flow passes
Fig. 11.5 Membrane airfoil shapes in a steady free stream at several time instants. The vibration changes the effective wing camber. τ is the non-dimensional time defined as tU/c [20]
over the flexible surface, the latter experiences self-excited oscillation and the airfoil displays a varied shape over time (Fig. 11.5). Analysis showed that the transverse velocity magnitude could reach as much as 10% of the free stream speed. During the vibration, energy was transferred from the wall to the flow and the separated flow was energized. Compared to the corresponding rigid airfoil simulation, the surface vibration caused both the separation and transition positions to exhibit a standard deviation of 6% of their mean. The time history of the lift coefficient for both the rigid and the membrane wings, along with the time-averaged component for the latter, is presented in [20]. Even though the time-averaged lift coefficient (0.60) of the flexible wing is comparable to that of the corresponding rigid wing, the lift coefficient displays a time-dependent variation with a maximum magnitude as much as 15% of its mean. The drag coefficient shows a similar pattern, but the time-averaged value closely matches that of the rigid wing. The flexible wing, on the other hand, can delay the stall margin
substantially [11, 43]. The FFT (fast Fourier transform) of the flexible wing response showed a dominant frequency at 167 Hz. Given the airfoil chord (0.2 m) and free stream speed (0.3 m/s), this high vibration frequency is not likely to affect the vehicle stability. The time response of the lift coefficient indicates that a low-frequency component is present along with the primary one. This component has a frequency of about 14 Hz and seems to be associated with vortex shedding. In a different simulation with three-dimensional laminar flow over a 6 in. membrane wing (i.e., the entire wing surface is flexible), Lian and Shyy observed a self-excited structural vibration with a frequency around 120 Hz [19]; experimental measurements of similar wings recorded a primary frequency around 140 Hz [43]. In a different study, Lian et al. [21] have reviewed the aerodynamics of membrane and corresponding rigid wings under MAV flight conditions. Their numerical findings show that both membrane and rigid wings exhibit comparable aerodynamic performance before the stall limit, which has also been observed experimentally by Waszak et al. [43]. Figure 11.6 shows the time-averaged vertical displacement of the trailing edge of the membrane wing at a free stream velocity of 10 m/s, with a root chord Reynolds number of 9 × 10⁴. The x-axis shows the normalized distance of a point on the membrane (from the leading edge) with respect to the length of the membrane. The displacement is normalized with respect to the maximum camber of the wing and is shown on the y-axis in Fig. 11.6a. Due to the membrane deformation, the effective angle of attack (defined as the angle between the free stream velocity vector and the line joining the leading and trailing edge points at a station along the span) of the membrane wing is less than that of the rigid wing. The reduced effective angle of attack causes the decrease in lift force in the case of the membrane wing.

Fig. 11.6 Averaged displacement of the membrane wing trailing edge. (a) α = 6°; (b) α = 15° [21]

11.4 Aeroelasticity of Flapping (Plunging) Wings

Flapping wing aircraft configurations received substantial attention up to the early decades of the last century, though the aerodynamic and mechanical complexity of designing a flapping wing aircraft soon discouraged inventors. Furthermore, flapping wing vehicles are suitable only as flyer sizes become sufficiently small. The recent rise of interest in a power-efficient MAV platform that is highly maneuverable and capable of low-speed flight with stable hover and vertical takeoff has re-ignited research and development efforts in flapping wing vehicles. High-speed cine and still photography and stroboscopy indicate that most biological flyers undergo orderly deformation in flight [45]. Birds, bats, and insects exploit the coupling between flexible wings and aerodynamic forces such that the wing deformations improve aerodynamic performance [9]. The interaction between unsteady aerodynamics and structural flexibility is, therefore, of considerable importance for MAV development. Wing propulsion, a key issue in the study of flapping MAVs, has been widely investigated in the past [1, 8, 10, 12, 17, 26], with a focus on rigid structures. However, the impact of flexibility on flapping wing propulsion of three-dimensional wing structures in a low Reynolds number environment (10⁴–10⁵) is still not well understood and needs more examination. As part of the validation effort for a computational aeroelasticity framework suitable for flapping wing MAVs, [4] presented a fluid–structure coupling procedure between a Navier–Stokes solver and a quasi-three-dimensional finite element solver. Further, results were presented on a model example problem corresponding to a NACA0012 rectangular wing of aspect ratio 3 in pure heave motion at a Reynolds number of 3 × 10⁴ for comparison to the experimental data of [14].

In their study, the wing structure was modeled as a one-dimensional beam with six elastic degrees of freedom, corresponding to extension, twist, and shear and bending in two directions. This choice was justified since the chordwise deformation was reported as negligible in the experiment [14]. The wing cross section is built up from both PDMS (polydimethylsiloxane, a silicon-based organic polymer) and stainless steel. The mass and stiffness properties of the PDMS were not considered here; therefore, only the stainless steel stiffener (a rectangular thin strip) was used for the evaluation of cross-sectional properties. The flapping axis was chosen at the leading edge and the cross-sectional properties were evaluated with respect to the leading-edge point. Furthermore, the properties are uniform throughout the semi-span. A sinusoidal plunge profile as shown in Fig. 11.7 was prescribed to the root of the wing at the leading edge. The three-dimensional structural solution is obtained by using 75 recovery nodes on each cross section, resulting in a structured grid of 3000 interface points which define the solid side of the aeroelastic interface. For the CFD analysis, a structured multi-block O-type grid around a NACA0012 wing of aspect ratio 3 was used. The number of grid points is 120, 56, and 60 in the tangential, radial, and spanwise directions, respectively. More details of the CFD model, along with a detailed summary of the wing geometric and mechanical properties, flow properties, and relevant dimensionless numbers, are furnished in [4]. Figure 11.8 shows that the computational response of the thrust coefficient correlates well with that of
Fig. 11.7 Prescribed plunge motion for the rectangular wing (normalized w.r.t. amplitude). Points A, B, C, and D are representative time instants corresponding to 0, T/4, T/2, and 3T/4, respectively
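Partitioned coupling procedures of the kind used in this chapter (a fluid solver and a structural solver exchanging loads and displacements, with subiterations to suppress the lag error and, as in the membrane study of [20], a finer structural time step subcycled inside each fluid step) can be sketched as follows. The "fluid" and "structure" here are toy stand-ins (a displacement-dependent load and a one-DOF damped spring), and every numerical value is an assumption for illustration:

```python
# Toy partitioned fluid-structure coupling with structural subcycling and
# subiterations. The models are deliberately trivial stand-ins.

class ToyFluid:
    def load(self, w):
        # Assumed aerodynamic load that relaxes as the surface deflects.
        return 1.0 - 0.5 * w

class ToySpring:
    """One-DOF damped spring standing in for the structural solver."""
    def __init__(self, k=4.0, c=2.0):
        self.k, self.c = k, c
        self.w, self.v = 0.0, 0.0           # displacement, velocity

    def step(self, dt, f):
        # Semi-implicit Euler substep under a frozen fluid load f.
        self.v += dt * (f - self.k * self.w - self.c * self.v)
        self.w += dt * self.v

def coupled_step(fluid, spring, dt_f, dt_s, max_subiter=20, tol=1e-8):
    """Advance one fluid step dt_f, subcycling the structure with step dt_s.

    Each subiteration rewinds the structure, recomputes the fluid load from
    the latest interface shape, and re-advances the structure, so the two
    fields stay synchronized instead of lagged by one step."""
    n_sub = int(round(dt_f / dt_s))
    w0, v0 = spring.w, spring.v             # state at the start of the step
    w_prev = w0
    for _ in range(max_subiter):
        spring.w, spring.v = w0, v0         # rewind for this subiteration
        f = fluid.load(w_prev)
        for _ in range(n_sub):              # structural subcycling
            spring.step(dt_s, f)
        if abs(spring.w - w_prev) < tol:
            break                           # interface displacement converged
        w_prev = spring.w
    return spring.w

fluid, spring = ToyFluid(), ToySpring()
for _ in range(5000):                       # march the coupled toy system
    w = coupled_step(fluid, spring, dt_f=2e-3, dt_s=1e-5)
# The toy system settles at the static balance 1 - 0.5*w = 4*w, i.e., w = 2/9.
```

In a real aeroelastic code the "load" is a full Navier–Stokes solve and the "spring" a finite element model, but the control flow (rewind, exchange, subcycle, test convergence) is the same.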
Fig. 11.8 Thrust coefficient as a function of time for rigid and flexible wings compared to experiment. (a) Rigid; (b) flexible [4]
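A toy quasi-steady argument, my own illustration rather than the computation of [4], shows why the thrust history repeats at twice the plunge frequency: assuming the plunge of Fig. 11.7 starts at the top of the stroke, h(t) = h_a cos(ωt), any thrust measure that scales with the square of the plunge velocity peaks at the two neutral crossings per period (points B and D). The amplitude and period below are assumed values:

```python
import math

# Toy illustration: with h(t) = h_a*cos(omega*t), a thrust proportional to
# hdot^2 has period T/2, peaking at the neutral crossings t = T/4 and 3T/4.
h_a, T = 0.0175, 1.0 / 1.74                 # assumed amplitude (m), period (s)
omega = 2.0 * math.pi / T

def hdot(t):
    return -h_a * omega * math.sin(omega * t)

thrust = [hdot(i * T / 400.0) ** 2 for i in range(401)]   # one plunge period
peaks = [i for i in range(1, 400)
         if thrust[i] > thrust[i - 1] and thrust[i] > thrust[i + 1]]
assert [round(i / 400.0, 2) for i in peaks] == [0.25, 0.75]
```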
Fig. 11.9 Thrust coefficient as a function of reduced frequency. (a) Rigid; (b) flexible [4]
the experiment in both the rigid and the flexible wing cases. As seen from the figure, the frequency of the response is twice the plunging frequency, as the maximum thrust occurs twice in a period, when the wing passes through the neutral (zero) position (points B and D of Fig. 11.7). Also, the thrust coefficient of the flexible wing is greater than that of the rigid wing. This indicates that spanwise flexibility has a favorable impact on the thrust response in this case. It is worth noticing, however, that this result is not universal. The phase lag associated with structural flexibility alters the effective angles of attack, which means that the specific level and nature of flexibility can affect the outcome of thrust enhancement. To assess the dependence of thrust production on the reduced frequency of oscillation, a parametric study was conducted on both the rigid and flexible wings. Figure 11.9 shows the computational results and their comparison with the experiment, showing good correlation between them. The thrust coefficient increases gradually at low reduced frequencies and more rapidly at higher reduced frequencies. Figure 11.10 shows the variation of the amplitude of tip displacement as a function of reduced frequency. Like the thrust coefficient, the displacements also increase with an increase in reduced frequency. To better understand the implications of wing flexibility on the aerodynamics, detailed flow structures and pressure distributions need to be investigated. Results are shown for selected wing span locations and representative time instants on both the rigid and the flexible wings (see [4]).
1.8 Computation Experiment
1.6 1.4 1.2 1 0.4
0.6
0.8
1
1.2
1.4
1.6
1.8
k
Fig. 11.10 Amplitude of tip displacement as a function of reduced frequency
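The frequency doubling of the thrust response described above (two thrust peaks per plunge cycle, one at each neutral-position crossing) can be illustrated with a minimal quasi-steady sketch. This is not the aeroelastic computation of [4]; it only assumes that instantaneous thrust scales roughly with the square of the plunge velocity, and all parameter values here are hypothetical:

```python
import numpy as np

f = 1.0                                            # plunge frequency [Hz] (hypothetical)
t = np.linspace(0.0, 4.0, 4096, endpoint=False)    # four plunge periods
h = 0.1 * np.cos(2.0 * np.pi * f * t)              # prescribed plunge motion
hdot = np.gradient(h, t)                           # plunge velocity

# Quasi-steady surrogate: thrust ~ (effective angle of attack)^2 ~ hdot^2
thrust_like = hdot**2

# Locate the dominant frequency of the thrust surrogate
spec = np.abs(np.fft.rfft(thrust_like - thrust_like.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
dominant = freqs[np.argmax(spec)]                  # ~ 2*f: thrust oscillates at twice the plunge frequency
```

Since hdot² contains only a mean part and a component at twice the plunge frequency, the spectrum peaks at 2f, mirroring the doubled frequency seen in the thrust histories of Fig. 11.8.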
154
W. Shyy et al.
Streamlines (as viewed from the reference frame moving with the prescribed motion) and pressure contours around the airfoil at the 50% semi-span location are plotted for both rigid and flexible wings at four time instants (t = 0 (A), T/4 (B), T/2 (C), and 3T/4 (D) of Fig. 11.7) within a stroke period T. The streamlines in the case of the flexible wing appear to hit the wing surface because the reference frame in which they are plotted does not account for the surface speed due to deformation. The following features can be observed:

• At time t = 0, i.e., at the beginning of the downstroke, a vortex is seen on the bottom surface of the rigid wing close to the leading edge, and a weaker one on the top surface close to the trailing edge. Conversely, in the case of the flexible wing, no vortex is seen on the top surface, and the one on the bottom surface is stronger than its counterpart on the rigid wing.

• At time t = T/4, i.e., at the middle of the downstroke, the vortex on the bottom surface becomes weaker and moves downstream for both the rigid and flexible wings. Furthermore (only for the rigid wing), the smaller vortex on the top surface grows in size and also moves downstream toward the trailing edge. This is the point at which maximum thrust is generated in both the rigid and the flexible cases.

• At time t = T/2, i.e., at the beginning of the upstroke, in the case of the rigid wing a large vortical structure is now seen on the top surface closer to the leading edge and a smaller vortex on the bottom surface closer to the trailing edge. For the flexible wing, a much stronger vortex is seen on the top surface.

• At time t = 3T/4, in the case of the rigid wing, both vortices seen at time T/2 become weaker and move toward the trailing edge; the vortex on the top surface moves downstream much less than the one on the bottom. In the flexible wing case, the vortex also weakens, but it does not convect downstream as much as its counterpart on the rigid wing.

Figure 11.11 shows the time response of the instantaneous angle of attack for both rigid and flexible wings. The instantaneous effective angle of attack is defined as

αinst = tan⁻¹[−(1/U) dh(t)/dt]

where dh(t)/dt is the wing velocity component normal to the uniform flow U; in the case of the rigid wing this is the prescribed plunge velocity, while in the case of the flexible wing the velocity due to elastic deformation is included as well. For the flexible wing case, two different stations along the semi-span (50 and 97%) are considered, since each station sees a different effective angle of attack due to wing bending and the spanwise variation of deformation-induced velocities. As seen in the figure, the amplitude of the effective angle of attack in the case of the flexible wing (at the 97% semi-span station) is at least 35% higher than that of the rigid wing. The angle of attack due to plunging of airfoils with leading-edge curvature promotes leading-edge suction. As seen
Fig. 11.11 Time response of the instantaneous angle of attack (deg) over t/T for the rigid wing and for the flexible wing at the 50% and 97% semi-span stations [4]
11 Flexible Wings and Fluid–Structure Interactions for Micro-Air Vehicles

Fig. 11.12 Pressure distribution (−Cp versus x/c) around the 50% semi-span station for two different time instants B and C of Fig. 11.7 (dashed — rigid, solid — flexible). (a) Time instant B; (b) time instant C [4]
from Fig. 11.11, structural flexibility has resulted in higher instantaneous effective angles of attack, which, in turn, promote larger streamline curvatures around the wing. From the momentum equations, streamline curvatures induce corresponding pressure gradients. To explore this impact further, Fig. 11.12 shows the pressure distributions at the 50% semi-span station for two different time instants (points B and C in Fig. 11.7). It is seen in the figure that the effect of leading-edge suction is enhanced in the flexible wing case (higher suction peak near the leading edge), which helps explain the increase in thrust with increasing flexibility. This reinforces the result shown in Fig. 11.8 that there is a thrust enhancement due to wing flexibility. It is also important to note that flexible wings can yield favorable performance at quite high instantaneous angles of attack (50°) and large streamline curvatures without stalling.
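The definition of the instantaneous effective angle of attack above can be evaluated directly for a sinusoidal plunge motion. The sketch below uses hypothetical values of h0, f, and U (not those of [4]) and covers only the rigid-wing case; for a flexible wing the local elastic deformation velocity would be added to dh/dt:

```python
import numpy as np

# Hypothetical plunge parameters: amplitude [m], frequency [Hz], free-stream speed [m/s]
h0, f, U = 0.05, 2.0, 1.0

t = np.linspace(0.0, 1.0 / f, 201)                            # one plunge period
h = h0 * np.cos(2.0 * np.pi * f * t)                          # plunge displacement
hdot = -2.0 * np.pi * f * h0 * np.sin(2.0 * np.pi * f * t)    # analytic dh/dt

# Instantaneous effective angle of attack, alpha = arctan(-(1/U) dh/dt), in degrees
alpha_inst = np.degrees(np.arctan(-hdot / U))
```

With these numbers the peak effective angle of attack is about 32°, i.e., within the range where the text reports flexible wings still operating without stall.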
11.5 Summary and Concluding Remarks

A fundamental understanding of the interactions between structural flexibility and fluid flow is critical for the success of future MAV designs. Flexible wings may be beneficial for man-made flyers; insects, bats, and birds stand as illuminating examples of exploiting flexibility along with unsteady aerodynamics, and they guide research efforts in MAVs. Typical insect wings are characterized by a membrane-like skin and a network of anisotropic veins that support the structure. Membrane wings exhibit self-initiated vibration even in a steady free stream, which lowers the effective angle of attack of the membrane structure compared to that of a rigid wing. To accurately simulate the mutual interaction between the flexible membrane structure and its surrounding viscous flow, coupled fluid and structure simulations are needed. Comparison of rigid, batten-reinforced, and perimeter-reinforced fixed-wing MAV designs shows that, in the case of the rigid wing, the low Reynolds number (10^5) causes the laminar boundary layer to separate against the adverse pressure gradient at the wing root, while the low aspect ratio forces a strong wing-tip vortex swirling system and leaves a low-pressure region at the wing tip. In the case of the batten-reinforced wing, shape adaptation decreases the strength of the adverse pressure gradient and thus the size of the separation bubble. In the case of the perimeter-reinforced wing, the interaction between the tip vortices and the longitudinal flow separation leads to unsteady vortex destabilization at high angles of attack; this was not seen in the rigid and batten-reinforced cases.

Both experimental and computational investigations [4, 14] consistently indicated that, within the range of non-dimensional parameters considered, spanwise flexibility can have a favorable impact on thrust generation. Regarding the fluid physics, leading-edge suction is important for thrust generation in flapping wings with leading-edge curvature. Structural flexibility results in higher instantaneous effective angles of attack. As long as the wing does not stall and the shape deformation does not cause the wing to move out of phase from root to tip, higher angles of attack promote larger streamline curvatures around the wing. From the momentum equations, streamline curvatures induce pressure gradients, resulting in enhanced leading-edge suction. Within the range of reduced frequencies considered (0.4–1.82), increasing the reduced frequency enhances the thrust generated by both rigid and flexible wings. Notwithstanding the fact that a flexible wing with prescribed root plunge motion is
a very simplified representation of an actual insect wing structure envisioned for an MAV (the wing structures could be anisotropic, prescribed with three-dimensional flapping kinematics, of an elliptic planform, etc.), it highlights the importance of spanwise flexibility for thrust generation (one of the primary functions of flapping wings), the effective angle of attack, and the leading-edge suction at Strouhal numbers within the range found in nature (0.2–0.4). Further, fundamental studies on canonical (simplified) flexible wing configurations are critical to understanding full-fledged flapping MAV aeroelasticity. Judging from the fast growth of papers and designs available in the open domain, it is clear that strong efforts are being made in the research and development community to lay a foundation for the advancement of MAVs. While much progress has been made, further advances are needed before robust and agile MAV technologies can be developed.

Acknowledgments This work was supported by the Air Force Office of Scientific Research's Multidisciplinary University Research Initiative (MURI) grant and by the Michigan/AFRL (Air Force Research Laboratory)/Boeing Collaborative Center in Aeronautical Sciences.
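The non-dimensional parameter ranges quoted in this summary (reduced frequencies of 0.4–1.82 and natural Strouhal numbers of 0.2–0.4) are easy to check for a candidate design. The sketch below uses hypothetical wing parameters and one common Strouhal definition based on the peak-to-peak plunge excursion; neither the numbers nor the exact definitions are taken from [4]:

```python
import math

c, U = 0.05, 2.0       # chord [m] and flight speed [m/s] (assumed)
f, h0 = 10.0, 0.025    # flapping frequency [Hz] and plunge amplitude [m] (assumed)

k = 2.0 * math.pi * f * c / U   # reduced frequency
St = 2.0 * f * h0 / U           # Strouhal number, based on the peak-to-peak excursion 2*h0

in_study_range = 0.4 <= k <= 1.82      # reduced-frequency range considered in the chapter
in_natural_range = 0.2 <= St <= 0.4    # Strouhal range reported for natural flyers
```

For these illustrative values, k ≈ 1.57 and St = 0.25, so the candidate motion sits inside both ranges.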
References

1. Archer, R., Sapuppo, J., Betteridge, D.: Propulsion characteristics of flapping wings. Aeronautical Journal 83(825), 355–371 (1979)
2. Argentina, M., Mahadevan, L.: Fluid-flow-induced flutter of a flag. Proceedings of the National Academy of Sciences 102(6), 1829–1834 (2005)
3. van Breugel, F., Teoh, E.Z., Lipson, H.: A passively stable flapping hovering micro air vehicle. In: C. Ellington (ed.) Flying Insects and Robots. Springer-Verlag, Switzerland (2008)
4. Chimakurthi, S.K., Tang, J., Palacios, R., Cesnik, C., Shyy, W.: Computational aeroelasticity framework for analyzing flapping wing micro air vehicles. 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, AIAA Paper 2008-1814, Schaumburg, IL (2008)
5. Combes, S., Daniel, T.: Flexural stiffness in insect wings I: Scaling and the influence of wing venation. Journal of Experimental Biology 206, 2979–2987 (2003)
6. Combes, S., Daniel, T.: Into thin air: Contributions of aerodynamic and inertial-elastic forces to wing bending in the hawkmoth Manduca sexta. Journal of Experimental Biology 206, 2999–3006 (2003)
7. Cubo, J., Casinos, A.: Mechanical properties and chemical composition of avian long bones. European Journal of Morphology 38(2), 112–121 (2000)
8. DeLaurier, J., Harris, J.: Experimental study of oscillating-wing propulsion. Journal of Aircraft 19(5), 368–373 (1982)
9. Frampton, K., Goldfarb, M., Monopoli, D., Cveticanin, D.: Passive aeroelastic tailoring for optimal flapping wings. In: T.J. Mueller (ed.) Fixed and Flapping Wing Aerodynamics for Micro Air Vehicle Applications, vol. 195, pp. 473–482. Progress in Astronautics and Aeronautics, AIAA, New York (2001)
10. Freymuth, P.: Thrust generation by an airfoil in hover modes. Experiments in Fluids 9(1–2), 17–24 (1990)
11. Galvao, R., Israeli, E., Song, A., Tian, X., Bishop, K., Swartz, S., Breuer, K.: The aerodynamics of compliant membrane wings modeled on mammalian flight mechanics. AIAA Paper 2006-2866 (2006)
12. Guglielmini, L., Blondeaux, P.: Propulsive efficiency of oscillating airfoils. European Journal of Mechanics B/Fluids 23(2), 255–278 (2004)
13. Hamamoto, M., Ohta, Y., Hara, K., Hisada, T.: Application of fluid-structure interaction analysis to flapping flight of insects with deformable wings. Advanced Robotics 21(1–2), 1–21 (2007)
14. Heathcote, S., Wang, Z., Gursul, I.: Effect of spanwise flexibility on flapping wing propulsion. Journal of Fluids and Structures 24(2), 183–199 (2008)
15. Hepperle, M.: Aerodynamics of spar and rib structures. MH AeroTools Online Database, available at http://www.mhaerotools.de/airfoils/ribs.htm, March 2007
16. Ifju, P., Jenkins, A., Ettingers, S., Lian, Y., Shyy, W.: Flexible-wing based micro air vehicles. IEEE Transactions on Robotics 22, 137–146 (2002)
17. Jones, K.D., Platzer, M.F.: Flow control using flapping wings for an efficient low-speed micro air vehicle. In: C. Ellington (ed.) Flying Insects and Robots. Springer-Verlag, Switzerland (2008)
18. Lian, Y.: Membrane and adaptively-shaped wings for micro air vehicles. Ph.D. thesis, Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, Florida (2003)
19. Lian, Y., Shyy, W.: Numerical simulations of membrane wing aerodynamics for micro air vehicle applications. Journal of Aircraft 42(4), 865–873 (2005)
20. Lian, Y., Shyy, W.: Laminar-turbulent transition of a low Reynolds number rigid or flexible airfoil. AIAA Journal 45(7), 1501–1513 (2007)
21. Lian, Y., Shyy, W., Viieru, D., Zhang, B.: Membrane wing aerodynamics for micro air vehicles. Progress in Aerospace Sciences 39, 425–465 (2003)
22. Liani, E., Guo, S., Allegri, G.: Aeroelastic effect on flapping wing performance. 48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, AIAA Paper 2007-2412, Honolulu, Hawaii (2007)
23. Meirovitch, L.: Fundamentals of Vibrations. McGraw-Hill, New York (2001)
24. Muniappan, A., Baskar, V., Duriyanandhan, V.: Lift and thrust characteristics of flapping wing micro air vehicle (MAV). 43rd AIAA Aerospace Sciences Meeting and Exhibit, AIAA Paper 2005-1055, Reno, Nevada (2005)
25. Ormiston, R.: Theoretical and experimental aerodynamics of the sail wing. Journal of Aircraft 8(2), 77–84 (1971)
26. Sarkar, S., Venkatraman, K.: Numerical simulation of thrust generating flow past a pitching airfoil. Computers and Fluids 35(1), 16–42 (2006)
27. Shimanuki, J., Machida, K.: Structure analysis of the wing of a dragonfly. Proceedings of the SPIE 5852, 671–676 (2005)
28. Shyy, W., Berg, M., Ljungqvist, D.: Flapping and flexible wings for biological and micro air vehicles. Progress in Aerospace Sciences 35(5), 455–505 (1999)
29. Shyy, W., Ifju, P., Viieru, D.: Membrane wing-based micro air vehicles. Applied Mechanics Reviews 58(1–6), 283–301 (2005)
30. Shyy, W., Lian, Y., Tang, J., Liu, H., Trizila, B., Stanford, B., Bernal, L., Cesnik, C., Friedmann, P., Ifju, P.: Computational aerodynamics of low Reynolds number plunging, pitching and flexible wings for MAV applications. 46th AIAA Aerospace Sciences Meeting and Exhibit, AIAA Paper 2008-523, Reno, Nevada (2008)
31. Shyy, W., Lian, Y., Tang, J., Viieru, D., Liu, H.: Aerodynamics of Low Reynolds Number Flyers. Cambridge University Press (2008)
32. Singh, B.: Dynamics and aeroelasticity of hover capable flapping wings: Experiments and analysis. Ph.D. thesis, Department of Aerospace Engineering, University of Maryland, College Park, Maryland (2006)
33. Smith, M.J.C.: The effects of flexibility on the aerodynamics of moth wings: Towards the development of flapping-wing technology. 33rd AIAA Aerospace Sciences Meeting and Exhibit, AIAA Paper 1995-0743, Reno, Nevada (1995)
34. Song, A., Tian, X., Israeli, E., Galvao, R., Bishop, K., Swartz, S., Breuer, K.: The aero-mechanics of low aspect ratio compliant membrane wings, with applications to animal flight. 46th AIAA Aerospace Sciences Meeting and Exhibit, AIAA Paper 2008-517, Reno, Nevada (2008)
35. Stanford, B., Ifju, P.: Aeroelastic tailoring of fixed membrane wings for micro air vehicles. 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, AIAA Paper 2008-1790, Schaumburg, IL (2008)
36. Stanford, B., Sytsma, M., Albertani, R., Viieru, D., Shyy, W., Ifju, P.: Static aeroelastic model validation of membrane micro air vehicle wings. AIAA Journal 45(12), 2828–2837 (2007)
37. Stanford, B., Ifju, P., Albertani, R., Shyy, W.: Fixed membrane wings for micro air vehicles: Experimental characterization, numerical modeling, and tailoring. Progress in Aerospace Sciences 44, 258–294 (2008)
38. Stults, J., Maple, R., Cobb, R., Parker, G.: Computational aeroelastic analysis of a micro air vehicle with experimentally determined modes. 23rd Applied Aerodynamics Conference, AIAA Paper 2005-4614, Toronto, Ontario, Canada (2005)
39. Tang, J., Chimakurthi, S.K., Palacios, R., Cesnik, C., Shyy, W.: Fluid-structure interactions of a deformable flapping wing for micro air vehicle applications. 46th AIAA Aerospace Sciences Meeting and Exhibit, AIAA Paper 2008-615, Reno, Nevada (2008)
40. Tang, J., Viieru, D., Shyy, W.: A study of aerodynamics of low Reynolds number flexible airfoils. 37th AIAA Fluid Dynamics Conference and Exhibit, AIAA Paper 2007-4212, Miami, Florida (2007)
41. Tang, J., Zhu, K.: Numerical and experimental study of flow structure of low-aspect-ratio wing. Journal of Aircraft 41(5), 1196–1201 (2004)
42. Toomey, J., Eldredge, J.: Numerical and experimental investigation of the role of flexibility in flapping wing flight. 36th AIAA Fluid Dynamics Conference and Exhibit, AIAA Paper 2006-3211, San Francisco, California (2006)
43. Waszak, R., Jenkins, N., Ifju, P.: Stability and control properties of an aeroelastic fixed wing micro aerial vehicle. AIAA Paper 2001-4005
44. Willis, D., Israeli, E., Persson, P., Drela, M., Peraire, J., Swartz, S.M., Breuer, K.S.: A computational framework for fluid structure interaction in biologically inspired flapping flight. 25th AIAA Applied Aerodynamics Conference, AIAA Paper 2007-3803, Miami, Florida (2007)
45. Wootton, R.J.: Support and deformability in insect wings. Journal of Zoology 193, 447–468 (1981)
46. Wootton, R.: Springy shells, pliant plates, and minimal motors: Abstracting the insect thorax to drive a micro air vehicle. In: C. Ellington (ed.) Flying Insects and Robots. Springer-Verlag, Switzerland (2008)
47. Zhu, Q.: Numerical simulation of a flapping foil with chordwise or spanwise flexibility. AIAA Journal 45(10), 2448–2457 (2007)
Chapter 12
Flow Control Using Flapping Wings for an Efficient Low-Speed Micro-Air Vehicle

Kevin D. Jones and Max F. Platzer
Abstract A review is given of the research studies that led to the development of a flapping-wing-propelled micro air vehicle which uses two oscillating wings in a biplane arrangement for propulsion and a fixed wing for lift generation. Computational and experimental studies are described that were conducted to obtain quantitative information about the thrust and propulsive efficiency offered by this choice. They included inviscid incompressible panel-code as well as viscous Navier–Stokes computations, flow visualizations and flow measurements in water and wind tunnels, and direct thrust measurements. It is shown that placing the fixed wing upstream of, but closely coupled to, the two oscillating wings delays flow separation and therefore offers special advantages for flight operations at the low Reynolds numbers encountered by micro air vehicles.
12.1 Introduction

The mastery of flight exhibited by birds and insects has fascinated man for many centuries and has induced his longing for wings of his own. However, various attempts throughout history to emulate bird flight, as documented, for example, by Dalton [6], remained unsuccessful. The German flight pioneer Otto Lilienthal [15] summarized the results of his studies of storks and other birds in a book titled "Bird flight as the
K.D. Jones, Naval Postgraduate School, Monterey, CA, USA. e-mail: [email protected], [email protected]
Basis for Human Flight," published in 1889. He thus contributed greatly to the understanding of flapping-wing aerodynamics. In particular, he correctly identified the drag reduction (or thrust generation) caused by wing flapping. Another flight pioneer, Chanute, noted in his book [5], published in 1894, that the potential of flapping-wing aircraft was held back by the limited understanding of the underlying aerodynamics. However, the success of the Wright brothers in 1903 with the fixed-wing aircraft concept soon convinced the aeronautical engineering community to regard the flapping-wing aircraft concept as unpromising for further development. Part of the reason for the diminishing interest in flapping-wing aerodynamics can certainly be found in the complexity of the unsteady flow physics of flapping wings. While the correct flow physics of lift generation by an airfoil at a steady incidence angle was already formulated by Kutta [14] and Joukowski [10] in 1902 and 1906, the phenomenon of thrust generation by a flapping airfoil was first explained only in 1909 and 1912 by Knoller [11] and Betz [1]. The first quantitative prediction of this thrust generation was achieved in 1922 by Prandtl's PhD student Birnbaum [3]. Further interest in the aerodynamics of oscillating airfoils was sparked by the need to predict the phenomenon of airfoil flutter, which led to the development of inviscid incompressible flow solutions for oscillating airfoils by Theodorsen [23] and Küssner [13] in the 1930s. The availability of these solutions, in turn, motivated Theodorsen's associate Garrick [7] in 1936 to apply them to the prediction of the thrust generated by airfoil flapping. The advances in computing power achieved in the 1960s then made it possible for us to replace the Theodorsen/Küssner linearized solutions by nonlinear solutions using the panel concept. Finally, toward the
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_12, © Springer-Verlag Berlin Heidelberg 2009
Fig. 12.1 The unusual configuration uses a trailing biplane pair of flapping wings for thrust and a large fixed wing for lift. The biplane pair takes advantage of "ground effect" without having to be near the ground, and the flow they entrain suppresses flow separation over the main wing. This third-generation radio-controlled model has a modular construction, two-channel control, a 25 cm span, 17 cm length, and 13 g mass, and flies for 15 min on a single rechargeable battery

end of the century the continuing advances in computing power made it possible for us to obtain numerical solutions of the viscous flow equations. It is fair to state that throughout the twentieth century the aeronautical engineering community showed little interest in the aerodynamics of bird and insect wings. It was only toward the end of the century that a new type of air vehicle of greatly diminished size, the micro-air vehicle (MAV), triggered a re-examination of the flapping-wing aircraft concept. In the subsequent sections we describe our own "bio-inspired" approach that led to the development of the flapping-wing micro-air vehicle shown in Fig. 12.1. We explain the considerations which led us to a "biomorphic" flight vehicle (which does not exist in nature) instead of a "biomimetic" one (which imitates nature as closely as possible). In Sect. 12.2 we discuss the basic principles of flapping-wing aerodynamics and the experiments that pointed us toward the biplane flapping-wing arrangement. As we progressed with our research we started to realize the importance of viscous effects and the need to control flow separation. This led us to the concept of energizing a boundary layer by placing a flapping wing in the boundary layer or just downstream of a fixed wing. These analyses and experiments are described in Sects. 12.3, 12.4, and 12.5. The confluence of this work, which led to the design of the MAV shown in Fig. 12.1, is described in Sect. 12.6.

12.2 Flapping Airfoil Aerodynamics

The flapping bird or insect wing presents the aerodynamicist with a complex unsteady three-dimensional flow problem because its flapping amplitude varies with the spanwise distance and its aspect ratio is often relatively small. This fact made it quite difficult for us to select a vehicle configuration which would meet our design objectives in the absence of experimental or computational aerodynamic data for flapping wings of arbitrary planform. For this reason we decided to adopt the approach historically used in the design of fixed-wing aircraft, where the wing is assumed to be of sufficiently high aspect ratio that two-dimensional airfoil data provide a reasonable approximation of the actual three-dimensional wing aerodynamics. We felt that this assumption would yield meaningful data for birds with high aspect ratio wings, such as the albatross. This decision was also influenced by the fact that the major analysis tools available to us were the classical linearized inviscid flow solutions for incompressible flow past oscillating airfoils developed by Theodorsen [23] and Garrick [7], and in-house panel codes for single oscillating airfoils of arbitrary profile shape written by Teng [22] and for airfoil combinations written by Pang [16]. These tools made it possible for us to analyze the effect of plunge or pitch amplitude and frequency on the thrust and propulsive efficiency of single airfoils and airfoil combinations, as documented by Platzer et al. [18]. They also allowed us to study the effect of airfoil geometry and of the interference between two airfoils in close proximity to each other. Of special interest were the stationary trailing airfoil (tandem) and the opposed-plunge (biplane) arrangements shown in Fig. 12.2.
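The linearized and panel-code tools mentioned above characterize the motion by the reduced frequency k = 2πfc/U∞ and the non-dimensional plunge amplitude h; their product kh is a non-dimensional plunge velocity. A minimal sketch with purely hypothetical values:

```python
import math

f = 5.0        # flapping frequency [Hz] (assumed)
c = 0.1        # chord length [m] (assumed)
U_inf = 3.0    # free-stream speed [m/s] (assumed)
h = 0.2        # plunge amplitude, non-dimensionalized by the chord (assumed)

k = 2.0 * math.pi * f * c / U_inf   # reduced frequency, as defined in this section
kh = k * h                          # non-dimensional plunge velocity
```

In the flow visualizations described later in the chapter, kh is the parameter that governs whether the wake takes the form of a normal or a reverse Kármán vortex street.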
Results for the tandem arrangement confirmed an earlier linearized-theory result by Bosch [4] that a stationary airfoil downstream of an oscillating airfoil generates a significant amount of thrust, because the stationary airfoil converts part of the vortical energy shed by the oscillating airfoil into thrust. The results for the biplane arrangement showed that each airfoil experiences a significant increase in thrust and propulsive efficiency if the airfoils are in close proximity to each other. Since this arrangement is equivalent to the flight of a single oscillating airfoil near the ground, it indicated a favorable ground effect. These findings
Fig. 12.2 Three configurations investigated numerically and in wind/water tunnel experiments: (a) single wing; (b) stationary trailing airfoil (tandem) configuration, where the aft wing produces some thrust from the wake of the forewing; (c) opposed-plunge (biplane) configuration, which emulates a single wing near a ground plane
are shown in Figs. 12.3, 12.4, 12.5, and 12.6. Note that the thrust (or, more precisely, the non-dimensional thrust coefficient) and the propulsive efficiency are plotted as functions of the non-dimensional or reduced frequency, k, and the non-dimensional amplitude of oscillation, h. Here k is defined as 2πfc/U∞, where f is the flapping frequency in Hz and c is the chord length. In Figs. 12.3 and 12.4 we show our computations of thrust and propulsive efficiency for the single airfoil. Note the good agreement in the prediction of
Fig. 12.3 Predicted thrust coefficient for configuration (a) as a function of plunge amplitude, h, and reduced frequency, k. The deforming and planar wake notations indicate panel solutions: in the former the wake evolves as it convects, while in the latter the vorticity is confined to the chord line

Fig. 12.4 Predicted propulsive efficiency for configuration (a) as a function of h and k
thrust between linear theory and the panel method. The Navier–Stokes predictions agree well for both h values until the product hk exceeds a critical value, where leading-edge separation takes over. Also note that for the panel method we have included results for both the normal wake model, where vortex elements are allowed to roll up as they convect downstream, and a wake model that forces all vorticity to remain in the plane of the chord line, as in linear theory. As can be seen, the deforming wake model is largely responsible for the difference between Garrick's approach and the panel code. In Figs. 12.5 and 12.6 we present a comparison between the single flapping airfoil and the tandem and biplane configurations.
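Garrick's linear-theory curves in Figs. 12.3 and 12.4 derive from Theodorsen's function C(k) = F(k) + iG(k). A minimal sketch of his propulsive-efficiency result for a purely plunging airfoil, η = (F² + G²)/F, using SciPy's Hankel functions; the function names are ours, not the chapter's:

```python
from scipy.special import hankel2

def theodorsen(k):
    """Theodorsen's function C(k) = H1(2)(k) / (H1(2)(k) + i*H0(2)(k))."""
    h0, h1 = hankel2(0, k), hankel2(1, k)
    return h1 / (h1 + 1j * h0)

def plunge_efficiency(k):
    """Garrick's inviscid propulsive efficiency for pure plunge: (F^2 + G^2) / F."""
    C = theodorsen(k)
    F, G = C.real, C.imag
    return (F * F + G * G) / F
```

The result reproduces the trend cited later in the chapter: the efficiency falls from near 100% at very small reduced frequencies toward 50% at high reduced frequencies — a prediction that, as the flow visualizations show, viscosity invalidates at low frequencies and amplitudes.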
Fig. 12.5 Predicted thrust coefficient for the three configurations of Fig. 12.2. The average value for the wing pair is shown for configurations (b) and (c). For (b) the forewing produces roughly the same thrust as the single wing and the aft wing provides some additional thrust

Fig. 12.6 Predicted propulsive efficiency for the three configurations in Fig. 12.2. The average value for the wing pair is shown for configurations (b) and (c)

An important unresolved question, however, was the effect of viscosity on the validity of these inviscid flow predictions. The usability of inviscid flow theory for the prediction of lift on an airfoil in a high Reynolds number flow at a small steady incidence angle is well understood, because the effect of the viscous boundary layer thickness is quite small. A similar consideration also holds for the prediction of the oscillatory lift forces on an oscillating airfoil (as needed for flutter calculations). However, it is not obvious why inviscid flow theory should yield meaningful results for the force on an oscillating airfoil parallel to the flow direction (i.e., thrust), since the drag computation on a steady airfoil requires a viscous flow computation. A further important question concerns the sensitivity to Reynolds number, especially for the small Reynolds numbers encountered by micro-air vehicles.
12.3 Effect of Viscosity on Flapping Airfoil Aerodynamics

To answer these questions we conducted a series of experiments in a water channel. The panel code computations for the flow past a harmonically plunging airfoil showed the shedding of a so-called reverse Kármán vortex street from the airfoil trailing edge. As shown in Fig. 12.7, a reverse Kármán vortex street consists of counterclockwise-rotating vortices in the upper row and clockwise-rotating vortices in the lower row. This arrangement is the exact opposite of the classical Kármán vortex street shed, for example, from a stationary cylinder in a low-speed flow, where the upper-row vortices turn clockwise and the lower-row vortices turn counterclockwise. The change in vortex shedding from the trailing edge of a stationary airfoil to that from a harmonically plunging airfoil, as the amplitude of oscillation is increased for a given frequency, is shown in Fig. 12.8. The classical Kármán vortex street shed from the stationary airfoil is shown in (a). As the airfoil starts to oscillate in plunge at a relatively low reduced frequency and amplitude, the Kármán vortex street changes into the mushroom-like vortex pattern shown in (b). As the amplitude is increased, this vortex pattern changes into the pattern shown in (c). A further increase in amplitude finally produces the reverse Kármán vortex street shown in (d). A comparison of the panel-code-computed vortex street with the observed street of (d) shows excellent agreement. However, the panel code could not reproduce the vortex patterns shown in (a), (b), and (c). Hence these flow visualization experiments indicate the inadequacy of inviscid flow computations for low values of frequency, amplitude, and Reynolds number. On the other hand, they also show that there is a range of flow speed,
Fig. 12.7 Vortex arrangement in a thrust-indicative reverse Kármán vortex street: top — panel code; middle — schematic; bottom — experiment

Fig. 12.8 Transition from the normal to the reverse Kármán vortex street with increasing kh: (a) kh = 0.0; (b) kh = 0.1; (c) kh = 0.2; (d) kh = 0.4. Flow is from left to right, with a heaving NACA 0012 airfoil
frequencies, and amplitudes where inviscid computations produce good agreement. These flow visualizations were quite important. They showed that previous conclusions drawn from inviscid flow computations can be quite wrong. For example, Garrick’s [7] inviscid linearized flow analysis showed that the propulsive efficiency of a harmonically plunging airfoil decreases from values close to 100% at very small reduced frequencies to only 50% for high frequencies. This led to the conclusion that very large flapping wings are required to produce meaningful thrust and efficiency values. It is now well recognized
that viscous flow effects are dominant at low amplitude and frequency values and, therefore, inviscid flow analyses are highly misleading. It is also instructive to take a closer look at the physical mechanism that causes the generation of thrust on a harmonically plunging airfoil. Consider again the reverse Kármán vortex street shown in Figs. 12.7 and 12.8d. The counterclockwise rotating vortices of the upper row and the clockwise rotating vortices of the lower row entrain flow from the outside so that a higher velocity flow is generated between the two vortex rows. Hence one would expect a jet-like flow to be
K.D. Jones and M.F. Platzer
created if one measures the time-averaged flow. This is indeed the case. In Fig. 12.9 we show one of our time-averaged flow measurements in a plane near (but downstream of) the airfoil trailing edge. It is seen that the oscillating airfoil indeed generates a distinct jet. It is also seen that the panel code-computed jet profile is in good agreement with the measurement (again demonstrating that inviscid flow calculations can be quite adequate for certain parameter combinations). The oscillating airfoil therefore imparts momentum to the fluid in the streamwise direction which, in turn, generates a reaction force (thrust) in the opposite direction on the airfoil. The flapping bird wing can therefore be considered a "jet engine" which came into use millions of years before the aircraft jet engine was invented and applied. But how is the reverse Kármán vortex street generated? To answer this question, it is useful to recall the basic principle of lift generation on an airfoil. To this end it is important to remember that a so-called "starting vortex" is shed from the airfoil trailing edge whenever the airfoil's incidence angle is abruptly changed. A sudden increase in the angle of attack produces the shedding of a counterclockwise starting vortex; a decrease produces a clockwise vortex. The trailing edge has to be reasonably sharp for this shedding to occur with sufficient strength; a rounded trailing edge greatly diminishes it. Consider now the airfoil as it plunges from above through the mean position. The airfoil "sees" a maximum positive angle of attack at this moment, which diminishes as the airfoil slows down toward the bottom of the stroke. The angle of attack then changes from positive to negative as the motion reverses, and it reaches its maximum negative value as the airfoil approaches the mean position from below. During this part of the oscillation the airfoil therefore sheds clockwise vorticity, which quickly accumulates into the clockwise vortices of the lower row of the reverse Kármán vortex street. Similarly, while the airfoil moves above the mean position it sheds counterclockwise vorticity.
Fig. 12.9 The time-averaged jet velocity profile behind a flapping NACA 0012 airfoil, measured using LDV in our water tunnel, agrees well with the predictions of our inviscid panel code (computed with 60 and 120 steps per cycle) for moderate values of kh
12.4 The Bird Wing — A Fully Integrated Lift/Propulsion/Control System
It has long been recognized that an aircraft’s propulsive efficiency is improved when the air from the wake of the aircraft is used as part of the propulsive stream. Betz explains this in his book [2] and points out that with wake ingestion the power expended can actually be less than the product of the forward speed and craft drag. At the dawn of the jet age Smith [20] proposed to use the boundary layer air for propulsion. Similarly, Reynst [19] advocated boundary layer propulsion by means of pulsating combustors installed near the wing trailing edges. Unfortunately, the implementation of boundary layer propulsion runs into the practical difficulty of distributing the air along the wing span and ejecting it at or near the wing trailing edge. Over the years, various jet flap concepts have been developed. However, the flow losses, weight and volume penalties caused by the ducting from the jet engines to the trailing edge have always outweighed the potential benefits. Nevertheless, Leroy Smith of the General Electric Company [21] has quantified the potential benefits of wake ingestion as recently as 1993. In contrast, birds have solved this problem in a most elegant and efficient way by means of wing flapping which, as noted above, generates a jet flow along the wing span, thus achieving a fully integrated lift/propulsion system. Furthermore, birds have mastered the use of variable camber and variable geometry concepts to adjust their wings to the required flight mode and flight control requirement.
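Betz's observation can be illustrated with a one-dimensional momentum argument (a hedged sketch; the stream velocities, mass flow, and thrust below are illustrative values chosen by us, not figures from the chapter): producing a fixed thrust T = ṁ(V_out − V_in) costs ideal power P = ½ṁ(V_out² − V_in²) = T(V_out + V_in)/2, so re-energizing slow wake air costs less than accelerating freestream air and can even fall below the product T·V0.

```python
def power_for_thrust(m_dot, v_in, thrust):
    """Ideal power to produce `thrust` by accelerating a stream of
    mass flow `m_dot` (kg/s) entering at speed `v_in` (m/s),
    using 1-D momentum theory."""
    v_out = v_in + thrust / m_dot              # momentum: T = m_dot*(v_out - v_in)
    return 0.5 * m_dot * (v_out**2 - v_in**2)  # kinetic-energy rate = T*(v_out+v_in)/2

V0 = 10.0    # flight speed, m/s     (illustrative)
T = 2.0      # required thrust, N    (illustrative)
m_dot = 1.0  # captured mass flow    (illustrative)

p_freestream = power_for_thrust(m_dot, V0, T)   # propulsor working in clean flow
p_wake = power_for_thrust(m_dot, 0.7 * V0, T)   # propulsor ingesting slower wake air
```

With these numbers the wake-ingesting case needs 16 W against 22 W for the freestream case, and 16 W is indeed less than T·V0 = 20 W, which is exactly Betz's point.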
12.5 The Oscillating Airfoil — A "Two-Dimensional" Propeller
As is clear from the preceding discussion, the oscillating airfoil can be regarded as a propeller that entrains a certain amount of flow along its span and gives it a certain amount of additional momentum. This fact was first fully recognized and analyzed by Birnbaum [3], who introduced the term "two-dimensional propeller" to draw attention to the fact that the oscillating airfoil works similarly to the conventional propeller. It also suggests that oscillating airfoils have potential uses for flow entrainment and flow energization. To explore these potential uses we performed several experiments. First we considered the effect of a small oscillating airfoil mounted in the laminar boundary layer of a flat plate. The arrangement is shown in Fig. 12.10 and a typical result is given in Fig. 12.11. It is seen that, compared to the airfoil oscillating in the outside free stream, the airfoil oscillating in the boundary layer generates a significantly increased jet flow. This suggests its use for boundary layer control by means of "blowing." In another experiment we explored the use of oscillating airfoils for flow separation control. In Fig. 12.12 flow over an airfoil with a cusped trailing edge is shown together with a small oscillating airfoil mounted close to the trailing edge. The large flow separation which occurs without flow control, seen in the left image, can be fully suppressed if the small airfoil is oscillated with a sufficiently large plunge amplitude and frequency, as seen in the right image.
12.6 Conceptual Design Considerations for a Flapping-Wing MAV
Having gathered the computational and experimental information on oscillating airfoils described in the preceding paragraphs, we asked ourselves whether this information would suffice to design a flapping-wing micro-air vehicle. The computational and experimental results for two-dimensional flow over harmonically plunging airfoils indicated considerable potential as thrust generators (propellers). It also became clear that it might be advantageous to use high aspect ratio wings with constant spanwise oscillation amplitude. Although this choice meant a distinct deviation from the flapping wings used by birds, it is clear that the diminishing amplitude of flapping bird wings near their roots contributes little to thrust production. Birds, of course, have no choice, whereas in man-made systems it is possible to take advantage of additional conceptual degrees of freedom. Furthermore, this choice reduced the need to develop computational solutions for three-dimensional flapping-wing flows and to acquire three-dimensional test results. It remained to select the most suitable oscillation mode. The two most practical oscillation modes are plunge and pitch, together with the phase angle between them if both are used. Rapid insight into the effect of simultaneous pitch/plunge oscillation could again be obtained with inviscid panel code computations, which showed that the best propulsive efficiency is obtained if the phase angle between the pitch and plunge oscillations is 90°. Another important conceptual design consideration concerns the number of flapping wings to be used.
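The benefit of the 90° phasing can be made concrete with a small kinematic sketch (the amplitudes, frequency, and sign conventions below are our own illustrative choices, not values from the chapter): the effective angle of attack seen by the airfoil is the geometric pitch minus the plunge-induced flow angle arctan(ḣ/U), and pitching in quadrature with the plunge cancels part of that induced angle.

```python
import math

def max_effective_aoa(u_inf, h0, alpha0, omega, phase, n=1000):
    """Peak |effective angle of attack| over one cycle for a plunge
    h(t) = h0*sin(w*t) combined with a pitch alpha(t) = alpha0*sin(w*t + phase).
    Sign conventions are illustrative."""
    peak = 0.0
    for i in range(n):
        t = 2 * math.pi * i / (n * omega)
        h_dot = h0 * omega * math.cos(omega * t)     # plunge velocity
        alpha = alpha0 * math.sin(omega * t + phase)  # geometric pitch angle
        a_eff = alpha - math.atan2(h_dot, u_inf)      # pitch minus induced angle
        peak = max(peak, abs(a_eff))
    return peak

# illustrative parameters: U = 5 m/s, h0 = 5 cm, alpha0 = 0.2 rad, ~4.8 Hz
u, h0, a0, w = 5.0, 0.05, 0.2, 30.0
in_phase = max_effective_aoa(u, h0, a0, w, 0.0)
quadrature = max_effective_aoa(u, h0, a0, w, math.pi / 2)
```

With these numbers the 90° phasing roughly quarters the peak effective incidence relative to in-phase motion, which is qualitatively why it yields the best propulsive efficiency in the panel code results quoted above.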
Fig. 12.10 A typical boundary layer profile is shown in the top image. With the addition of a small flapping wing inside the boundary layer (lower image), the boundary layer is energized, reducing drag
Fig. 12.11 The normalized velocity profiles measured with LDV behind a flapping wing, placed either in the free stream or in the boundary layer of a flat plate, show that more thrust is produced in the presence of the flat plate
Fig. 12.12 In a water tunnel experiment, a very small flapping wing was placed in the wake of a large stationary airfoil which had a cylindrical leading edge and a cusped trailing edge. Dye was emitted from the upper surface just ahead of the cusped trailing edge. Without flapping (left: static airfoil with separated flow), the flow immediately separates, indicating a large, unsteady separated flow region. With flapping (right: flapping airfoil with attached flow), the recirculation is greatly reduced and the flow reattaches at the trailing edge
Birds get along with just one left and one right wing. Insects, on the other hand, often have two pairs of wings, the dragonfly being a well-known example. The test results for the harmonically plunging airfoil in the flat-plate boundary layer showed significant benefits from operation near the wall. These results suggested simulating "ground effect" by arranging two airfoils in biplane formation and oscillating them in counterphase. While birds can only achieve this configuration by flying near the ocean's surface in true ground effect, some prehistoric insects had extensive overlap of their forewing and hindwing, according to Wootton and Kukalova-Peck [24]. The panel code computations for this case, as shown in Figs. 12.5 and 12.6, remained to be verified by direct thrust measurements. To this end, the model shown in Fig. 12.13 was built. The two wings, arranged in biplane formation, could be oscillated in either pitch or plunge. The thrust could be measured quite easily by hanging the model from the wind tunnel ceiling, as shown in Fig. 12.14, and measuring the small streamwise movement of the model with a laser range finder to estimate drag and thrust. Typical results are shown in Fig. 12.15. These thrust measurements were quite encouraging. They provided the motivation to build a much smaller model (of potential micro-air vehicle size) and to repeat the thrust measurements. This model is shown in Fig. 12.16 and its thrust measurements are plotted in Fig. 12.17.
Fig. 12.13 The large biplane wind tunnel test model, with 65 mm chord, 1.2 m span wings and a NACA 0014 airfoil section, was built to investigate the performance of flapping wings in ground effect
Fig. 12.14 The model was hung from the ceiling of the test section and drag/thrust was calculated by measuring the displacement of the model in the streamwise direction
Fig. 12.15 Predicted and measured thrust (N) versus flow velocity (m/s) for the large biplane model shown in Fig. 12.13, at flapping frequencies of 2, 4, 6, and 8 Hz. Reynolds numbers ranged from 0 to 50,000 and the motion was a pure plunge with amplitude 0.4c
Fig. 12.16 This miniaturized version of the large biplane model shown in Fig. 12.13 was built to investigate the effects of low Reynolds numbers and the use of unconventional, "insect-like" wings. The MAV-sized model had a 15 cm span, and the wings had a stiff balsa leading edge spar and a flexible membrane surface with carbon fiber battens
Fig. 12.17 Typical results from the 15 cm biplane test model shown in Fig. 12.16: thrust (mN) versus flapping frequency (Hz) at flow speeds from 0 (static) to 5 m/s. Reynolds numbers ranged from 0 to 12,000. The drop-off at about 20 Hz is due to mechanical limitations on the elastic pitching. To achieve better performance at higher speeds, stiffer wing-mount springs were needed to limit the wing pitching
During this phase of the micro-air vehicle development another advantage offered by the biplane
configuration became evident. Oscillation of two airfoils in counterphase does not shift the joint center of gravity; hence the biplane configuration yields a remarkably vibration-free, dynamically balanced platform. Another important advance was the excitation system, which consisted of two swing arms driven by a crankshaft and Scotch yoke combination. This mechanism oscillated the two airfoils in plunge. Yet, as discussed above, it is desirable to have a combined pitch/plunge oscillation with a 90° phasing between the two motions. This requirement was met by mounting the airfoils elastically at the ends of the swing arms, providing the airfoils with the desired pitch degree of freedom through proper adjustment of the stiffness of the connection to the swing arms. Mounting this model on a rotating arm provided the final proof that adequate thrust levels could be generated by oscillating the airfoils in the 20–30 Hz frequency range. Having thus solved the thrust-generation problem, it remained to select an aircraft configuration which would provide adequate lift and flight stability. Here again, we intentionally abstained from copying bird flight. Instead, we followed the conventional aircraft design process by separating lift and thrust generation. Selection of sufficient wing area, reflex camber for longitudinal static stability, and wing dihedral
Fig. 12.18 First generation radio-controlled micro-air vehicle with throttle-only control, 30 cm span, 18 cm length, and 14.4 g mass
for lateral stability are well-known aeronautical design techniques. However, it remained to select the most suitable location of the flapping-wing propulsion system. This decision was greatly influenced by the flow separation control experiments of Fig. 12.12. Micro-air vehicle flight occurs in a Reynolds number range where severe flow separation problems can be expected. Arranging the flapping wings (in biplane formation) downstream of, but very close to, the trailing edge of the stationary wing was therefore likely to minimize flow separation. Tests verified that this is indeed the case. These considerations and test results then led to the configuration of Fig. 12.18, which first flew successfully in December 2002. A short time later, refinements led to the model shown in Fig. 12.1, with modular construction, lighter materials, and much better performance.
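The role of reflex camber can be made concrete with the standard trim condition (a textbook sketch with illustrative coefficients, not values measured for this vehicle): static longitudinal stability requires dCm/dα < 0, and trimming at a positive angle of attack additionally requires a positive zero-alpha pitching moment, which is what reflex camber provides on a tailless wing.

```python
def trim_alpha(cm0, cm_alpha):
    """Trim angle of attack (rad) where Cm = cm0 + cm_alpha*alpha = 0.
    A statically stable trim needs cm_alpha < 0 and, for alpha > 0, cm0 > 0."""
    if cm_alpha >= 0:
        raise ValueError("not statically stable: dCm/dalpha must be negative")
    return -cm0 / cm_alpha

# reflex-cambered wing with illustrative coefficients
alpha_trim = trim_alpha(cm0=0.02, cm_alpha=-0.5)  # 0.04 rad, about 2.3 degrees
```

A cambered wing without reflex typically has cm0 < 0, which with cm_alpha < 0 gives a negative trim angle; the reflexed trailing edge flips the sign of cm0 and makes a stable, lifting trim possible without a tail.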
12.7 Summary and Outlook
Since that time many versions of this micro-air vehicle have been flown countless times. The smallest version weighs approximately 6 g, with a span of 15 cm and a length of 15 cm. Using the latest Li-poly battery technology, typical flight times are about 15 min. The flight speeds range between 2 and 5 m/s. The maximum flight speed is limited by the achievable oscillation frequency (currently about 40 Hz). It is reasonable to expect that the flight speed can be doubled with improvements in various subsystems. The minimum speed is determined
by the stall limit. As noted before, the close-coupled wing propulsion system gives the micro-air vehicle the ability to fly at rather high angles of attack (typically about 15° at 2 m/s) and therefore to reduce speed. As a result, the vehicle is remarkably gust insensitive and, after a stall is encountered, control can be regained quickly. Readers interested in more quantitative development and flight performance details can find additional information in [8, 9, 17]. With the exception of the hummingbird, birds have only a limited ability to take off vertically or to hover in the air. In fact, many large birds run against the wind to gain aerodynamic lift, and it is sometimes amusing to observe their struggle. Our micro-air vehicle, having been inspired by birds with high aspect ratio wings, is no exception. A remaining future challenge for us therefore is the development of a variant with hovering and vertical take-off and landing (VTOL) capability. Much research and development activity is presently being devoted to this challenge, with some success, such as the models described in Chaps. 13 and 14, as well as the Mentor developed by SRI and the University of Toronto [12]. While our present MAV was designed with efficient cruise flight in mind, it has a distinct disadvantage for hovering flight, as its center of mass is ahead of the flapping wings. In contrast, all of the successful hovering models mentioned above have the center of mass behind the flapping wings and should therefore obtain passive stability.
Acknowledgments We are grateful for the support received from Spiro Lekoudis of the Office of Naval Research, with project monitors Peter Majumdar and Edwin Rood, and from Richard Foch, head of the Vehicle Research Section of the Naval Research Laboratory, with project monitors Kevin Ailinger, Jill Dahlburg, and James Kellogg.
References
1. Betz, A.: Ein Beitrag zur Erklaerung des Segelfluges. Zeitschrift fuer Flugtechnik und Motorluftschiffahrt 3, 269–270 (1912)
2. Betz, A.: Introduction to the Theory of Flow Machines. Pergamon, New York (1966)
3. Birnbaum, W.: Das Ebene Problem des Schlagenden Fluegels. Zeitschrift fuer Angewandte Mathematik und Mechanik 4(4), 277–290 (1924)
4. Bosch, H.: Interfering airfoils in two-dimensional unsteady incompressible flow. Tech. Rep. 7, AGARD-CP-277 (1977)
5. Chanute, O.: Progress in Flying Machines. Lorenz & Herwig, Long Beach, CA (1976)
6. Dalton, S.: The Miracle of Flight. Firefly Books (1999)
7. Garrick, I.E.: Propulsion of a flapping and oscillating airfoil. Tech. Rep. 567, NACA (1936)
8. Jones, K.D., Bradshaw, C.J., Papadopoulos, J., Platzer, M.F.: Bio-inspired design of flapping-wing micro air vehicles. The Aeronautical Journal of the Royal Aeronautical Society 109(1098), 385–392 (2005)
9. Jones, K.D., Lund, T.C., Platzer, M.F.: Fixed and Flapping Wing Aerodynamics for Micro Air Vehicle Applications, Progress in Astronautics and Aeronautics, vol. 195, chap. 16: Experimental and Computational Investigation of Flapping Wing Propulsion for Micro Air Vehicles, pp. 307–339. AIAA (2001)
10. Joukowski, N.: On adjoint vortices. Izviestiia 13, 12–25 (1906)
11. Knoller, R.: Die Gesetze des Luftwiderstandes. Flug- und Motortechnik 3(21), 1–7 (1909)
12. Kornbluh, R.D., Low, T.P., Stanford, S.E., Vinande, E., Bonwit, N., Holeman, D., DeLaurier, J.D., Loewen, D., Zdunich, P., MacMaster, M., Bilyk, D.: Flapping-wing propulsion using electroactive polymer artificial muscle actuators, phase 2: Radio controlled flapping-wing testbed. Tech. Rep. ITAD-3470-FR-03-009, SRI International (2002)
13. Kuessner, H.G.: Zusammenfassender Bericht Ueber den Instationaeren Auftrieb von Fluegeln. Luftfahrtforschung 13(14) (1936)
14. Kutta, M.W.: Auftriebskraefte in Stroemenden Fluessigkeiten. Illustr. Aeronautische Mitteilungen (1902)
15. Lilienthal, O.: Der Vogelflug als Grundlage der Fliegekunst, 3rd edn. Harenberg Kommunikation, Dortmund (1992)
16. Pang, K.C.: A computer code for unsteady incompressible flow past two airfoils. Master's thesis, Department of Aeronautics & Astronautics, Naval Postgraduate School (1988)
17. Platzer, M.F., Jones, K.D., Young, J., Lai, J.C.S.: Flapping wing aerodynamics — progress and challenges. AIAA Journal 46(9), 2136–2149 (2008)
18. Platzer, M.F., Neace, K.S., Pang, K.C.: Aerodynamic analysis of flapping wing propulsion. AIAA-93-0484 (1993)
19. Reynst, F.H.: Pulsating Combustion. Pergamon, New York (1961)
20. Smith, A.M.O., Roberts, H.E.: The jet airplane utilizing boundary layer air for propulsion. Journal of the Aeronautical Sciences (1947)
21. Smith, L.H.: Wake ingestion propulsion benefit. Journal of Propulsion and Power 9(1), 74–82 (1993)
22. Teng, N.H.: The development of a computer code for the numerical solution of unsteady inviscid and incompressible flow over an airfoil. Master's thesis, Department of Aeronautics & Astronautics, Naval Postgraduate School (1987)
23. Theodorsen, T.: General theory of aerodynamic instability and the mechanism of flutter. Tech. Rep. 496, NACA (1935)
24. Wootton, R.J., Kukalova-Peck, J.: Flight adaptations in palaeozoic palaeoptera. Biological Reviews of the Cambridge Philosophical Society 75(1), 129–167 (2000)
Chapter 13
A Passively Stable Hovering Flapping Micro-Air Vehicle Floris van Breugel, Zhi Ern Teoh, and Hod Lipson
Abstract Many insects and some birds can hover in place using flapping wing motion. Although this ability is key to making small-scale aircraft, hovering flapping behavior has been difficult to reproduce artificially due to the challenging stability, power, and aeroelastic phenomena involved. A number of ornithopters have been demonstrated, some even as toys; nearly all of these designs, however, cannot hover in place because lift is maintained through airfoils that require forward motion. Two recent projects, DeLaurier's Mentor Project and the TU Delft DelFly (Chapter 14), have demonstrated flapping-based hovering flight. In an effort to push the field forward even further, we present here the first passively stable 24 g machine capable of hovering flapping flight at a Reynolds number similar to that of insects (Re = 8,000). The machine takes advantage of the clap and fling effect, in addition to passive wing bending, to simplify the design and enhance performance. We hope that this will aid in the future design of smaller machines and shed light on the mechanisms underlying insect flight.
13.1 Introduction
In this chapter we will discuss the design and fabrication of a passively stable hovering flapping machine. This work is motivated by the unmatched aerodynamic ability of insects and hummingbirds to hover in
F. van Breugel () Cornell Computational Synthesis Lab, Cornell University, Ithaca, New York, USA e-mail: [email protected]
place, in addition to performing other acrobatic feats such as flying backward and sideways by exploiting flapping wing motion [5]. Although this remarkable ability is key to making small-scale aircraft, flapping-hovering behavior has been difficult to reproduce artificially due to the challenging stability, power, and aeroelastic phenomena involved. Recent interest in small-scale unmanned air vehicles, especially those capable of hovering like insects and hummingbirds, is driven by many potential applications ranging from surveillance and exploration to flocks of rapidly reconfigurable 3-D airborne machines. A working robotic model of flapping flight is also of interest to biologists and fluid dynamicists for better understanding the dynamics governing flapping flying organisms. A number of flapping machines have been developed [13, 17, 12, 11, 21, 1, 3], but few are capable of untethered hovering flight [16, 6]. A key challenge is to demonstrate stable untethered hovering flapping ability at a weight and power approximating those of the insects and birds in which hovering flapping flight is observed in nature. In this chapter we will discuss the recent development of a passively stable 24 g machine capable of hovering flapping flight at a Reynolds number similar to that of insects (Re = 8,000). This design, particularly the passive stability, may help in the design of insect-sized hovering vehicles, as well as shed light on the aeroelastic dynamic principles underlying insect flight. For the past several decades researchers have been studying how the complex flows that allow insects to perform such incredible aerial feats form, and what effect they have on flight at small scales, with the hope of building a fly-sized hovering flapping machine. Flapping flight, such as that employed by insects, offers several advantages over ornithoptic flapping flight as seen in larger birds (i.e., incapable of
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_13, © Springer-Verlag Berlin Heidelberg 2009
hovering) or fixed- and rotary-wing flight, most notably in scalability to small sizes [9]. At smaller sizes, such as that of a fruit fly, fixed-wing airfoils become less efficient than flapping flight due to the low Reynolds number. Low Reynolds numbers also appear in thin atmospheres such as that found on Mars [18], where conventional airborne exploration would be difficult. While some research suggests that flapping flight is more efficient than both translational and spinning wing flight [20], it is still debatable whether flapping or spinning wings are more efficient at small scales. It is clear from insect flight, however, that flapping wings allow for robust, efficient, and incredibly maneuverable flight, and should thus be considered as a potential approach to MAV design. Current work has focused on thrust mechanisms that produce sufficient lift for hovering flight and that offer possible methods for future control. Recent advances in small motors and batteries made by the cellphone industry have made miniature lightweight robotics much more feasible. Significant work has been done on forward-flying flapping micro-air vehicles, but these cannot hover. Two micro-air vehicles have demonstrated hovering abilities: most notably the Mentor project of SRI/UTIAS and DeLaurier [6], and the DelFly 2 from TU Delft and Wageningen University (Chap. 14) [16]. Both of these machines take advantage of opposing wing pairs flapping together and rely on conventional rudders for control. Fearing and Wood [11] have recently demonstrated takeoff of a fly-sized flapping machine, but used off-board power and stabilization. The design presented here offers a scalable flapping-wing architecture that provides the opportunity for quad-rotor-like active control using flapping wing pairs rather than conventional rudders.
We further demonstrate that the machine is passively stable, a property which may eliminate the need for complex and relatively heavy active stabilization systems in future smaller scale designs.
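The Re = 8,000 figure quoted above can be reproduced, to order of magnitude, from a standard flapping-wing Reynolds number based on the mean wingtip speed U = 2ΦfR (the stroke amplitude, frequency, and wing dimensions below are our own illustrative guesses for a machine of this size, not specifications from the chapter):

```python
def flapping_reynolds(stroke_amp_rad, freq_hz, wing_length_m, chord_m,
                      nu=1.5e-5):
    """Reynolds number based on the mean wingtip speed U = 2*phi*f*R
    and the wing chord; nu is the kinematic viscosity of air (m^2/s)."""
    u_mean = 2 * stroke_amp_rad * freq_hz * wing_length_m  # mean tip speed, m/s
    return u_mean * chord_m / nu

# illustrative guesses: 1 rad stroke, 20 Hz, 8 cm wing, 4 cm chord
re = flapping_reynolds(stroke_amp_rad=1.0, freq_hz=20.0,
                       wing_length_m=0.08, chord_m=0.04)
```

With these guesses the estimate lands in the mid-thousands, the same regime as the Re = 8,000 quoted for the machine and squarely within the intermediate range where insect-flight aerodynamics operates.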
13.2 Aerodynamics of Flapping Flight
The key to building a flapping-hovering machine is understanding the aerodynamic forces involved, as well as the wing motions and aeroelastic dynamics required to generate those forces. It has long
F. van Breugel et al.
been established that there must be fluid–wing interactions beyond traditional aerodynamic theory that account for insects' ability to fly at all [10]. Scientists have been working on understanding the complex unsteady aerodynamic flows that form at the intermediate Reynolds numbers between 10^1 and 10^4 where insect flight generally falls [19]. Several other chapters in this book cover the aerodynamics in great detail, so we will provide only a brief overview of the aerodynamic flows here. The precise mechanism that forms and utilizes the complex flows that make insect flight possible is still an active research area, but the underlying theories to explain the phenomena have come a long way. The first unsteady aerodynamic effect proposed to explain lift enhancement in insects is the Weis-Fogh clap and fling effect, which has been described in detail using both insects and robotic setups [15]. As the wings start to peel apart at the beginning of the fling phase (outstroke), a low-pressure region forms that pulls air into the cleft around the leading edge (Fig. 13.1). This influx of air strengthens the formation of the leading edge vortices, which increases the circulation and thus the lift. A similar mechanism, though less pronounced, takes place on the clap, where the air leaving the gap between the two wings helps increase vortex formation [15]. In nature most insects do not rely on this mechanism for the majority of their flight strokes, as it can be very damaging to the wings. It has, however, been a strong motivation for the design of a number of flapping-wing vehicles, including the Mentor project, the DelFly (Chap. 14), Dr. Jones' flapping machine (Chap. 12), and the machine discussed in this chapter. In addition, three more general mechanisms for lift enhancement beyond traditional aerodynamic theory have been established: delayed stall, rotational circulation, and wake capture [7].
Delayed stall is a translational mechanism by which leading edge vortices are formed, resulting in a higher circulation about the wing and increasing net lift. Rotational circulation is an inviscid mechanism representing the circulation needed to satisfy the Kutta condition at the trailing edge due to the upwash/downwash caused by the wing rotation. Wake capture is a gust response to the wake flow patterns whereby the wings recapture the wake from the previous half stroke on their return, taking advantage of the energy already spent to generate the vortex wake [7].
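The translational (delayed-stall) contribution is often estimated quasi-steadily using empirical force coefficients measured on dynamically scaled robotic fruit-fly wings. The sketch below uses a widely cited fit from that literature; we quote the coefficients from memory, so treat both the numbers and the helper names as illustrative rather than as the chapter's model:

```python
import math

def cl_trans(alpha_deg):
    # empirical translational lift coefficient fit for a model fruit-fly wing
    # (coefficients quoted from memory; illustrative only)
    return 0.225 + 1.58 * math.sin(math.radians(2.13 * alpha_deg - 7.20))

def cd_trans(alpha_deg):
    # companion empirical drag coefficient fit
    return 1.92 - 1.55 * math.cos(math.radians(2.04 * alpha_deg - 9.82))

def lift(rho, u, area, alpha_deg):
    # quasi-steady translational lift: L = 0.5 * rho * U^2 * S * CL(alpha)
    return 0.5 * rho * u**2 * area * cl_trans(alpha_deg)
```

Note the qualitative behavior these fits encode: lift peaks near a 45° angle of attack, far beyond where a conventional airfoil would stall, which is precisely the delayed-stall benefit of the stably attached leading edge vortex.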
Fig. 13.1 Diagram depicting vortex formation and shedding throughout the cycle (adapted from Lehmann, Sane, & Dickinson, 2005). As the wings come together (i), vortices are formed at the trailing and leading edges; these are then shed (ii) and recaptured during the fling phase (iii). This mechanism provides a substantial increase in lift production
13.2.1 Passive Wing Pitching
An important factor in controlling the formation of these aerodynamic processes and utilizing them is precise control of the wing. Using a robotic setup, Dickinson proposed that by carefully controlling the timing of the wing pitch and delayed stall, an insect could theoretically increase its lift by 35% during peaks [7], but this comes at the cost of increased drag and reduced efficiency [19]. Recent work on insect flight dynamics has shown that energy efficiency is improved through the use of wing dynamics to help pitch the wing [2]. Studies on the hawkmoth have shown that wing bending is even largely independent of fluid dynamics, suggesting that complex stroke patterns can be designed purely through wing architecture and careful design of the wing's moment of inertia [4]. Further work on Drosophila wing stroke patterns has shown that the majority of the pitching likely occurs passively [8]. Nearly all successful small-scale flapping-based machines that have been demonstrated have made use of aeroelastic pitching, either through baggy wing skins or an elastic skeleton [13, 17, 12, 11, 21, 1, 3, 16, 6]. Using such a passive mechanism to control the wing motion drastically reduces the complexity of the design by eliminating the need for a second actuator to actively control the wing pitch. A single actuator can be used to flap the wings, while a combination of air drag and the wings' moment of inertia and elasticity bends the wing plane such that it provides a positive lift force on both the in- and outstrokes. The timing of the rotation, the extent to which the wings bend, and thus the lift coefficient are controlled by the material properties of the wings. This aeroelastic bending is shown in Fig. 13.2.
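Passive pitching of this kind can be captured, very roughly, by a single-degree-of-freedom torsional spring–damper driven by the stroke. The sketch below integrates such a toy pitch equation; every parameter value is invented for illustration, and this is not the authors' model:

```python
import math

def simulate_passive_pitch(inertia, k_spring, c_damp, drive_gain,
                           omega, steps=20000, dt=1e-4):
    """Integrate I*a'' = -k*a - c*a' + M(t), with the driving aerodynamic
    moment taken as proportional to stroke velocity:
    M(t) = drive_gain * cos(omega * t).
    Returns the pitch-angle history (semi-implicit Euler)."""
    a, a_dot, hist = 0.0, 0.0, []
    for i in range(steps):
        t = i * dt
        moment = drive_gain * math.cos(omega * t)
        a_ddot = (-k_spring * a - c_damp * a_dot + moment) / inertia
        a_dot += a_ddot * dt   # update velocity first (semi-implicit)
        a += a_dot * dt
        hist.append(a)
    return hist

# illustrative parameters: 20 Hz flapping drive
hist = simulate_passive_pitch(inertia=1e-6, k_spring=1e-3,
                              c_damp=1e-5, drive_gain=5e-3,
                              omega=2 * math.pi * 20)
```

Even in this toy model the wing settles into a bounded pitch oscillation whose amplitude and phase are set entirely by the spring stiffness, damping, and inertia, mirroring the point above that the material properties of the wing determine the timing and extent of the passive rotation.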
13.3 Machine Design With the natural inspirations from insect and hummingbird flight in mind, we set about designing a small hovering flapping machine. The design principle was to combine a number of modules each consisting of a DC motor and a pair of flapping wings, based on insect and hummingbird flight. Note that the wings needed to be configured such that they provided vertical thrust, rather than lateral thrust as in an ornithopter, which requires an airfoil to generate lift from forward flight. The first design, pictured in Fig. 13.3, consisted of three such modules. In later designs we began using four modules for several reasons. While this project did not make use of any active control, the presence of four independent motor–wing pairs allows for the potential of actively controlled flight. Also, adding an additional module increased the net lifting power of the machine, and simplified the angles in the modules’ chassis. The
F. van Breugel et al.
Fig. 13.2 Aeroelastic wing bending. Machine shown under a strobe light while flapping at 20 Hz. The image is a composite of several individual images showing both the instroke and outstroke of each wing pair; the inner pair is the outstroke, the outer pair the instroke. Air drag bends the wings, providing an appropriate angle of attack and allowing positive lift generation throughout the entire cycle. Fluid inertia, combined with the wing slowing and reversing direction, passively pitches the wings. Similar mechanisms of wing control have been observed in insects
final design, capable of untethered hovering flapping flight, is the version discussed here in more detail. The machine weighs 18 g excluding batteries (24 g including batteries) and is capable of hovering flapping flight (see Fig. 13.4). The design consists of
Fig. 13.3 Initial design of the hovering flapping machine. Theoretically this three-winged version should be just as capable of passive stability as our more successful four-winged version. We moved to four wings primarily to simplify the design of the angles in the modules' chassis and to allow for potential future active control
four pairs of wings, each independently powered by a small DC motor and designed to act in a manner similar to an insect wing, taking advantage of passive wing bending, the clap-and-fling effect, and other unsteady aerodynamic effects, with a single active
13 A Passively Stable Hovering Flapping MAV
Fig. 13.4 Design and material properties: (a) SolidWorks CAD model and (b) physical machine. The DC motor is connected to the wing shafts with a crankshaft and connecting rods. As the crankshaft turns, the wings flap in and out nearly symmetrically. Opposite wing pairs are reversed to reduce the effects of the slight asymmetry
degree of freedom. The machine we present here is entirely passively stabilized (Fig. 13.3), yet the presence of four independent pairs allows for future implementation of active control. In this section we discuss a general approach to building small-scale lightweight machines using simple manufacturing techniques. As with any airborne vehicle, weight is of critical importance, and particular attention should be paid to material selection and component design in order to remove unnecessary weight. Balsa wood has a very high strength-to-weight ratio and is easy to work with, making it well suited for structural components. To maximize the strength of balsa it is critical to use the wood grain to advantage, which our modular design made possible: each module was cut so that the grain ran along its longest (most fragile) dimension. Balsa also comes in a variety of grades that differ significantly in strength and density. Complicated structures can easily be cut out of, and etched into, the wood using a laser cutter, such as those available from Epilog (http://www.epiloglaser.com/). For our machine's structural components we used 2.4 mm (3/32 in.) balsa sheets laser-cut into interlocking components to take advantage of the wood grain.
As discussed earlier, flexible wings are key. After much experimentation we found that wings comprising a 13 μm PET (polyethylene terephthalate) film welded (using a soldering iron) to a laser-cut vein pattern made of 250 μm PET worked particularly well. Wing pairs were attached to the flapping mechanism using 21 cm long, 0.9 mm diameter carbon fiber rods. The flexible carbon fiber allows the wings to twist, providing additional feathering as well as some energy storage through wing bending, increasing the flapping amplitude. Wing shafts were connected to the structure with a flexible PET hinge joint to constrain motion to the desired plane. The minimum distance between static wings was designed to be 0.2 of the chord length, to maximize the benefit of the clap-and-fling effect. The flapping motion of each wing pair was generated by a separate 1.2 g geared (25:1) DC pager motor (GM15, distributed through Solarbotics, Calgary, Canada). The motor turned a hand-bent 0.6 mm diameter steel crankshaft, which was connected to the wings with connecting rods and ball joints (steel tube and fishing-line knots on either end). The motors were powered in parallel by two 3.1 g, 3.7 V, 90 mAh Li-Po batteries in series (available through plantraco.com). The batteries were connected to the structure with a 24 cm
Fig. 13.5 Mass distribution of parts
Fig. 13.6 Dimensioned drawing of full model, dimensions in mm
Fig. 13.7 Dimensioned critical components, dimensions in mm
long, 0.4 mm steel rod, reinforced by 0.2 mm fishing line to ensure rigidity. (The steel rod and fishing line were later replaced with a 0.9 mm carbon fiber rod, which provided sufficient rigidity at a lower weight.) Top and bottom sails were constructed from lightweight semi-rigid packing foam, reinforced by balsa struts at the top. The total weight of the machine, including sails and both batteries, was 24.2 g (see Figs. 13.4 and 13.5). In an effort to promote research in the area of small-scale hovering flapping machines, we include precise blueprints of our design to aid others in constructing such robots; see Figs. 13.6 and 13.7 (for more details contact the authors).
13.3.1 Design Improvements

Since building and testing the machine presented in the previous section we have made a number of changes to improve the design; here we discuss some of these changes as well as potential future ones. The jigsaw fittings on the modular design occasionally presented a problem, through either fragility or fatigue due to vibrations. To fix this we tested a variety of material approaches, including composites and other balsa designs. Composites proved difficult to use: they were significantly heavier than their balsa counterparts, and they made it much harder to take the machine apart and replace parts such as motors. The resulting new way of joining the four modules is shown in Fig. 13.8. A module connector secures both the top and bottom of each module and also joins all four modules together. Spanwise connector arms connect the modules to the module connector. Initially, lengthwise connector arms were used, but when built, flapping caused excessive vibrations; the spanwise connector arms reduced the vibrations significantly. Using a module connector enables individual modules to be built and then assembled. In the event that module 1 has a faulty motor, only module 1 is affected, leaving modules 2, 3, and 4 free from damage. The modules are made of 1/16 in. balsa, while the module connector and the connector arms are made of 3/32 in. balsa.
Fig. 13.8 (a) SolidWorks CAD model of the module connector. (b) Physical machine. The top and bottom module connectors provide the rigidity needed when the wings flap
The next improvement was a redesign of the crankshaft. The original crankshaft placed the wing load directly on the DC motor. In the new design a second bearing was introduced between the wing attachment and the DC motor, relieving the motor of the direct load. Some changes also had to be made to the balsa structure, as seen in Fig. 13.9. Lastly, we updated the PET flexible hinge to reduce losses in the system. The original flexible hinge introduced more damping than necessary and was replaced with a pinned hinge allowing free rotation, shown in Fig. 13.10a–d together with its ease of assembly. Currently the pins are 0.026 in. steel music wire; to decrease weight, carbon fiber rods of similar dimensions could be used instead. The hinge was fabricated by combining etching and vector laser cutting of 3/32 in.
Fig. 13.9 New design of the crankshaft. The frame was redesigned to accommodate the secondary crankshaft holder
balsa wood. A groove for the pin slot was etched and the shape of the hinge was vector-cut. The pinned hinge also makes it easier to install and remove the wings for repair or modification.
13.4 Passive Stability

Hovering flapping-based flying machines are notoriously unstable, and generally require either a suite of sensors and processing power to maintain stability or constant manual remote control. Note the distinction between hovering flight and forward flight: there have been countless stable forward-flying ornithopters. Adding the necessary sensors and processors on board is often impossible for smaller vehicles due to weight limitations, making untethered automatic control extremely difficult if not impossible. In this section we discuss an alternative to active control: passive stability through strategically placed dampers. This approach can be valuable for testing propulsion systems and, to a certain extent, can complement active control by reducing the need for precision. From a controls perspective, our machine's architecture is similar to that of a quadrotor and could theoretically be controlled in much the same manner if outfitted with accelerometers, gyros, a microcontroller, and speed controllers. Rather than implementing such an active control system, however, we opted to explore passive stability. The physical control concepts presented here could theoretically be converted to gains for a classical PID controller, using electronics rather than sails for control. The passive mechanism we used is likely not applicable to situations where
Fig. 13.10 New design of the wing hinge. (a–d) The sequence in which a wing can be easily removed for repair or modifications
maneuverability is a high priority. For applications where stationary hovering and slow maneuvering flight are desired, however, such an approach may be of interest. The sails also increase the machine's susceptibility to gusts of wind, so while this approach is not likely ideal in the real world, in the emerging field of untethered hovering flapping micro-air vehicles, where weight is of utmost concern, passive stability provides a simple and lightweight way to tune and test the thrust portion of the design, which is the crux of the field at this point. The stabilization dynamics are governed by two sails, which cause the machine to act like a damped pendulum. To understand the dynamics intuitively, think of the machine as a mass on a rod. If we place a pivot point below the mass, it acts as an inverted pendulum and is thus unstable. Placing the pivot point above the mass, however, causes it to act as a pendulum, which in the presence of damping is stable. The pivot point is determined by the center of the drag forces caused by the sails: if this pivot point is above the center of mass, the system will be stable. The stability of the device has been verified by its ability to recover from arbitrary launch orientations, including upside-down zero-velocity initial conditions that are typically very difficult for most aircraft (including helicopters) to recover from. Using a first-order approximation of the system dynamics we can also show its stability mathematically. The free body diagram in Fig. 13.11 yields the following equations:
ΣForces = Ft j − FDW j − Mg j − FD1 i − FD2 i = Ma

ΣMoments = −cMg sin θ + aFD1 − bFD2 = I θ̈
For the simplified system, I = Mc². The key to understanding the stability lies in the center of rotation, the point about which the drag induced by the two sails during rotation is balanced; this imposes the constraint aFD1 = bFD2 (note that a + b equals the length of the machine, providing two equations to solve for a and b). To examine the system analytically we approximated the drag
Fig. 13.11 Passive stabilization mechanism. (a) CAD diagram and (b) free body diagram of the perturbed system. The hovering machine has thrust exactly equal to its weight. Small perturbations cause an imbalance in forces, and the machine may begin to rotate and drift. The top and bottom sails act as dampers, producing a restoring force in the presence of drift or rotational velocity. These two dampers determine a rotation point for the machine. If the rotation point is above the center of mass, the machine will be stable, similar to a damped pendulum. Damping along the vertical axis is provided by the flapping wings
force as being linearly proportional to the velocity. This approximation is reasonable at the low translational velocities of the hovering case we are interested in. Next we used a small-angle approximation to simplify the analysis. The drag forces can be written as follows:

Drag on the flapping wings: FDW = dw ẏ
Drag on the top sail: FD1 = d1 (ẋ − a θ̇)
Drag on the bottom sail: FD2 = d2 (ẋ + b θ̇)

where d1, d2, and dw are of the form ½ρCdA, ρ is the fluid density, Cd is the drag coefficient, and A is the cross-sectional area. To determine the stability of the system we can put it into state space and examine the eigenvalues, in addition to some test-case scenarios. The response was examined for an initial 10° offset of the tilt angle (Fig. 13.12). It can be concluded that for stability the center of rotation defined by the two sails, as above, needs to be above the center of mass. We simulated the system in Matlab using the following values:

d1 = 0.03142 kg/s (based on ½ρCdA)
d2 = 0.02244 kg/s (based on ½ρCdA)
dw = 0.01 kg/s (arbitrary, but with no effect on stability provided it is > 0)
a = 0.191 m (distance from d1 to the center of rotation, defined by aFD1 = bFD2)
b = 0.268 m (distance from d2 to the center of rotation, defined by aFD1 = bFD2)
M = 0.025 kg (machine mass)
g = 9.8 m/s² (acceleration due to gravity)
Fig. 13.12 System response for (a) a stable system, with the center of mass below the pivot point: the system started at (0, 0) with a tilt angle of 10° at time = 0 (open circle) and proceeded to stabilize, ending at the black circle; and (b) an unstable system, with the center of mass above the pivot point: the system started at (0, 0) with a tilt angle of 10° at time = 0 (open circle) and did not stabilize, ending at the black circle
Ft = 1.05 M g (thrust slightly above the hovering condition)
c = 0.00833 m (distance from the center of rotation to the center of mass)

Analyzing this system yields the eigenvalues {0, −1589.1, −2.2, −0.7, 0, −0.4}. The two zero-valued eigenvalues are due to the rigid-body modes; this can be verified by adding a spring in the x and y dimensions, which makes all the eigenvalues negative.
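The state-space analysis can be sketched numerically. The following is a minimal reconstruction, not the authors' original Matlab script: the overall length a + b = 0.459 m is inferred from the quoted a and b, and the tilted-thrust coupling term (thrust assumed body-fixed) is our assumption.

```python
import numpy as np

# Linearized pendulum-with-dampers model (small angles, linear drag).
# State: [x, xdot, y, ydot, theta, thetadot]. Values as quoted in the text.
d1, d2, dw = 0.03142, 0.02244, 0.01   # sail and wing damping [kg/s]
M, g, c = 0.025, 9.8, 0.00833          # mass [kg], gravity [m/s^2], CoR-to-CoM [m]
L = 0.459                              # a + b, overall length [m] (inferred)
Ft = 1.05 * M * g                      # thrust slightly above hover [N]
I = M * c**2                           # simplified moment of inertia [kg m^2]

# Center of rotation from the balance a*FD1 = b*FD2 (linear drag => a*d1 = b*d2)
a = L * d2 / (d1 + d2)                 # -> ~0.191 m
b = L * d1 / (d1 + d2)                 # -> ~0.268 m

A = np.zeros((6, 6))
A[0, 1] = 1.0                                  # x' = xdot
A[1, 1] = -(d1 + d2) / M                       # sail drag on translation
A[1, 4] = Ft / M                               # tilted thrust (assumed body-fixed)
A[1, 5] = (a * d1 - b * d2) / M                # ~0 by construction of a, b
A[2, 3] = 1.0                                  # y' = ydot
A[3, 3] = -dw / M                              # flapping-wing drag on vertical motion
A[4, 5] = 1.0                                  # theta' = thetadot
A[5, 1] = (a * d1 - b * d2) / I                # ~0 by construction of a, b
A[5, 4] = -c * M * g / I                       # pendulum restoring moment
A[5, 5] = -(a**2 * d1 + b**2 * d2) / I         # rotational damping from the sails

eig = np.linalg.eigvals(A)
```

Running this reproduces the quoted spectrum to within rounding: two zero rigid-body modes plus eigenvalues near −1589.1, −2.15, −0.74, and −0.4.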
13.5 Performance Results

The design can be operated in a variety of configurations, depending on the need for longer flight times or increased payload capacity for tools such as sensors and cameras. Power, lift, and flapping frequency were measured using a digital multimeter, a scale, and a strobe light (Fig. 13.2). Under maximum lift conditions the craft operates at a Reynolds number of roughly 8,000.
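As a sanity check on that figure, Re can be estimated from a mean wingtip speed and chord. Only the 20 Hz flapping frequency comes from the text (Fig. 13.2); the stroke amplitude, wing length, and chord below are illustrative guesses, not measured values.

```python
# Rough Reynolds-number estimate for the flapper (guessed geometry, hedged above)
nu = 1.5e-5            # kinematic viscosity of air [m^2/s]
f = 20.0               # flapping frequency [Hz], from Fig. 13.2
Phi = 0.5              # full stroke amplitude [rad] (assumed)
R = 0.21               # wing length [m], ~ the 21 cm wing shafts (assumed)
chord = 0.03           # mean chord [m] (assumed)

U_tip = 2 * Phi * f * R          # mean wingtip speed over a stroke [m/s]
Re = U_tip * chord / nu          # consistent with "roughly 8,000"
```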
182
F. van Breugel et al.
Fig. 13.13 Operating characteristics: lift and frequency for various power arrangements. The nonlinearity at 7.5 W is likely due to a second oscillatory mode, resembling a standing wave, that appeared at this power. Furthermore, motor efficiency becomes nonlinear as the power increases. We operated the machine just below this point, at 6.9 W
Our current mode of operation uses two 3.7 V, 90 mAh Li-Po batteries in series to provide a nominal output voltage of 7.4 V. At this voltage the flapping transitions into a second oscillatory mode: a standing wave with nearly zero lift production. To minimize this effect we added a 0.8 Ω series resistance to reduce the voltage; this also reduced the stress on the motors and lowered the maximum height achieved. Operating conditions were measured at 6.5 V and 1.07 A for a total lift of 25 g, enabling 33 s of flight. The poor performance is likely due to the motors being driven far beyond their recommended operating characteristics, the lack of a voltage regulator, and an unoptimized wing structure (see Sect. 13.5.1) (Fig. 13.13).
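The quoted operating point implies some useful back-of-envelope numbers (our arithmetic, not figures stated in the chapter):

```python
# Electrical input power and a hover figure of merit at the measured operating point
V, I_amp = 6.5, 1.07          # measured operating voltage [V] and current [A]
P = V * I_amp                 # electrical input power, ~7 W
lift_per_watt = 25.0 / P      # ~3.6 g of lift per watt of electrical power

# Charge drawn over the 33 s flight versus the 90 mAh pack capacity;
# at ~12 C discharge, most of the rated capacity is not usable.
mAh_used = I_amp * 33.0 / 3.6
```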
13.5.1 Future Design Changes

To improve the machine's performance a number of issues need to be addressed. First, several of the components (motors, batteries, and wires) are being operated at much higher power than recommended. With the addition of an efficient voltage regulator, and by minimizing the length and amount of wiring, we can further reduce wasted power. Beyond component optimization, the wing shape and materials need to be examined more carefully. During our design process we systematically tried various wing shapes, sizes, and material properties, yet were unable to find a simple method to help us optimize the wing
shape. Once we found that the presented design worked, we moved on to other areas of the machine. The first step is to design reliable experiments for measuring lift under systematic changes to the wings. This is made difficult by the fact that the machine's flapping characteristics change when it is tethered to a stiff stand. We will need to investigate a number of attributes, including size, geometry, aspect ratio, mass distribution, flexibility, and flexibility distribution. Having a more flexible trailing edge than leading edge may help increase lift by reducing the formation of trailing-edge vortices, which destroy the lift-enhancing leading-edge vortices [14]. Different mechanisms of flapping can also be explored. Currently the motor is attached directly to the wings, a direct method of flapping. Perhaps indirect mechanisms of flapping, as observed in Drosophila, could be used: Drosophila expands and contracts its thorax to produce its flapping motion, using elastic energy storage for increased efficiency. A more biomimetic actuator might further improve our efficiency. We also began to explore altitude control as a first step toward implementing active control of the flapper by controlling each motor individually. We used a four-channel receiver from Plantraco (Micro9-S-4CH 1.1 g Servo Rx w/ESC) and a Plantraco transmitter (HFX900 M2). We converted Blue Arrow 2.5 g servos from positional control to speed control to make use of the four-channel receiver. The difficult design constraint here was that we needed four channels of control, while most lightweight electronics are made for two or three control outputs. On testing, we found that the performance of the modified servos was unstable. There was also an appreciable voltage drop from the batteries to the motors, which resulted in poor lift generation. Our next approach was to connect the four motors in parallel and wire them to the throttle output, which was rated at 2 A.
(At this point we could have changed the design to use a different receiver and transmitter with just a single channel.) With an external power source of 7 V at maximum throttle, the receiver would consistently emit a sharp beep and the motors would begin to reduce their flapping frequency rather than maintaining a frequency suitable for substantial lift generation. This could be due to overloading of the throttle output, which was designed for only one motor, whereas here we loaded it with four. Control could also be achieved by altering the stroke plane rather than the power of the motors. Potentially, the four connections of the module connector to an individual module could be replaced with a material that mimics muscle. Looking at Fig. 13.8a, if the connector arms are replaced with an actuator that contracts like a muscle, the stroke plane can be tilted when the "artificial muscle" contracts. Shape-memory alloys (SMAs) and piezo actuators come to mind; however, SMAs generate substantial amounts of heat, and piezo actuators require a high voltage to achieve appreciable strain. Issues of weight and input power therefore need to be addressed before such an artificial muscle can be used in the flapper.
13.6 Conclusion

While the performance of the machine presented here needs improvement, it demonstrates several design considerations relevant to others interested in stable and controllable hovering flapping MAVs. The quad-flapper design allows for active control using the thrust from the wings rather than additional rudders and actuators. Since our machine operates within the same aerodynamic flow regime as insects, these design principles should scale favorably to insect-sized machines. While active control is critical for real-world applications, in designing and optimizing the thrust portions of small-scale flapping MAVs the use of sails for passive stability may prove much simpler than building an active controller small enough to be carried on board. Ultimately we hope hovering flapping flight will open the door to new applications and provide further insight into the mechanisms underlying insects' and hummingbirds' remarkable flying abilities.

Acknowledgments We would like to thank our funding sources for supporting this project, including Cornell Presidential Research Scholars, the NASA Space Consortium, and the NASA Institute for Advanced Concepts.
References

1. Arrow, B.: Wingbird RC flying bird (2005). http://flyabird.com/wingbird.info.html
2. Berman, G., Wang, J.: Energy-minimizing kinematics in hovering insect flight. Journal of Fluid Mechanics 582, 153–168 (2007)
3. Chronister, N.: The ornithopter zone – fly like a bird – flapping wing flight. http://www.ornithopter.org/
4. Combes, S.A., Daniel, T.L.: Into thin air: contributions of aerodynamic and inertial-elastic forces to wing bending in the hawkmoth Manduca sexta. The Journal of Experimental Biology 206, 2999–3006 (2003). doi:10.1242/jeb.00502
5. Dalton, S.: Borne on the Wind. Reader's Digest Press, New York (1975)
6. DeLaurier, J., SRI/UTIAS: Mentor project (2005). http://www.livescience.com/technology/041210_project_ornithopter.html
7. Dickinson, M.H., Lehmann, F.O., Sane, S.P.: Wing rotation and the aerodynamic basis of insect flight. Science 284, 1954–1960 (1999). doi:10.1126/science.284.5422.1954
8. Dickson, W., Dickinson, M.H.: Inertial and aerodynamic mechanisms for passive wing rotation. Flying Insects and Robotics Symposium, p. 26 (2007)
9. Ellington, C.: The novel aerodynamics of insect flight: applications to micro-air vehicles. The Journal of Experimental Biology 202, 3439–3448 (1999)
10. Ellington, C.P., van den Berg, C., Willmott, A.P., Thomas, A.L.R.: Leading-edge vortices in insect flight. Nature 384, 626–630 (1996). doi:10.1038/384626a0
11. Fearing, R., Wood, R.: MFI project (2007). http://robotics.eecs.berkeley.edu/ronf/MFI/index.html
12. Jones, K., Bradshaw, C., Papadopoulos, J., Platzer, M.: Bio-inspired design of flapping-wing micro air vehicles. Aeronautical Journal 109, 385–393 (2005)
13. Keennon, M., Grasmeyer, J.: Development of two MAVs and vision of the future of MAV design. 2003 AIAA/ICAS International Air and Space Symposium and Exposition: The Next 100 Years (2003)
14. Lehmann, F.O.: When wings touch wakes: understanding locomotor force control by wake–wing interference in insect wings. The Journal of Experimental Biology 211, 224–233 (2008). doi:10.1242/jeb.007575
15. Lehmann, F.O., Sane, S.P., Dickinson, M.: The aerodynamic effects of wing–wing interaction in flapping insect wings. The Journal of Experimental Biology 208, 3075–3092 (2005). doi:10.1242/jeb.01744
16. Lentink, D., DelFly Team: DelFly (2007). http://www.delfly.nl/
17. Michelson, R.: Entomopter project (2003). http://avdil.gtri.gatech.edu/RCM/RCM/Entomopter/EntomopterProject.html
18. Michelson, R., Naqvi, M.: Extraterrestrial flight. Proceedings of the von Karman Institute for Fluid Dynamics RTO/AVT Lecture Series on Low Reynolds Number Aerodynamics. Brussels, Belgium (2003)
19. Wang, J.: Dissecting insect flight. Annual Review of Fluid Mechanics 37, 183–210 (2005). doi:10.1146/annurev.fluid.36.050802.121940
20. Woods, M.I., Henderson, J.F., Lock, G.D.: Energy requirements for the flight of micro air vehicles. Aeronautical Journal 105, 135–149 (2001)
21. WowWee: WowWee FlyTech Dragonfly toy (2007). http://www.radioshack.com/product/index.jsp?productId=2585632
Chapter 14
The Scalable Design of Flapping Micro-Air Vehicles Inspired by Insect Flight David Lentink, Stefan R. Jongerius, and Nancy L. Bradshaw
Abstract Here we explain how flapping micro-air vehicles (MAVs) can be designed at different scales, from bird to insect size. The common belief is that micro fixed-wing airplanes and helicopters outperform flapping MAVs at bird scale but become inferior to flapping MAVs at the scale of insects as small as fruit flies. Here we present our experience with designing and building micro flapping air vehicles that can fly both fast and slow, hover, and take off and land vertically, and we present the scaling laws and structural wing designs needed to miniaturize these designs to insect size. Next we compare flapping, spinning, and translating wing performance to determine which wing motion yields the highest aerodynamic performance at the scale of hummingbirds, house flies, and fruit flies. Based on this comparison of hovering performance, and on our experience with our flapping MAV, we find that flapping MAVs are fundamentally much less energy efficient than helicopters, even at the scale of a fruit fly with a wing span of 5 mm. We find that insect-sized MAVs are most energy effective when propelled by spinning wings.
14.1 Introduction

Recently, micro-air vehicles (MAVs) have gained a lot of interest from both aerospace engineers and biologists studying animal flight. Such small planes are of special
D. Lentink () Experimental Zoology Group, Wageningen University, 6709 PG Wageningen, The Netherlands; Faculty of Aerospace Engineering, Delft University of Technology, 2629 HS Delft, The Netherlands e-mail:
[email protected]
interest because they have many promising civil and military applications, from inspection of buildings and other structures to silent and inconspicuous surveillance. These small sensor platforms, with a wingspan of less than 10 in., can potentially be equipped with various micro-sensors, ranging from multiple microphones and cameras to gas detectors. But how can one best design such small planes? One major problem is aerodynamics. MAVs have a size and flight speed comparable to insects and small birds, which are much smaller and slower than airplanes. The aerodynamic effect of low speed and small size is quantified by the Reynolds number, the ratio of inertial to viscous stress in the flow; it ranges roughly from 10 to 100,000 for insects, birds, and MAVs, whereas it ranges from 300,000 to 100,000,000 for airplanes. The low Reynolds number aerodynamics of MAVs is therefore more similar to that of flying birds and insects than to that of airplanes. Little is known about aerodynamics in the low Reynolds number domain, which is studied mainly by biologists. Many engineers have therefore looked to biology for inspiration in the design of their MAVs. Of special interest are insect-sized MAVs that can actually fly like insects using flapping wings: ornithopters. It is not widely known that several bird-sized, freely flying ornithopters were built and successfully flown even before Otto Lilienthal and the Wright brothers took to the air with their now-conventional airplanes. Ever since, a small number of successful amateur ornithopter enthusiasts have developed many bird-sized ornithopters. The most successful predecessor of DelFly, the flapping MAV we present here, is the AeroVironment Microbat, which could fly
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_14, © Springer-Verlag Berlin Heidelberg 2009
for 42 s [14]. An up-to-date historical overview of flapping MAVs can be found on www.ornithopter.org. Developing a working insect-sized ornithopter that can hover, fly fast, and take off and land vertically like an insect remains, however, an open challenge: not only because of the small scale, but also because it is only in roughly the last 10 years that we have understood how insects generate enough lift with their wings to fly [4, 3, 5, 18]. Biologists have found that a key feature enabling insects to fly so well is the stable leading edge vortex that sucks their wings upward, augmenting both wing lift and drag. Building MAVs at the size of insects is even more challenging, because the critical components for successful flight cannot yet be bought at small enough size, low enough weight, and high enough efficiency. Further, special production processes and design strategies are needed to build micro flapping wings that function well at the length scale of insects. The ultimate dream of several engineers and biologists is to build a fruit fly-sized air vehicle. Here we present an integrated design approach for micro-air vehicles inspired by insect flight that really fly, and that can be scaled from bird size to insect size.
14.2 The Scalable Wing Aerodynamics of Hovering Insects

To quantify whether the aerodynamics of hovering insect wings is scalable from bird size to insect size, we chose to study hovering flight, because it is the most
Fig. 14.1 Flapping fly wings generate more lift than translating fly wings. Stroke-averaged lift–drag coefficient polars of a translating (dark grey triangles), simply flapping (light grey circles), and realistically flapping (star) fruit fly wing at Re = 110. The angle of attack amplitude ranges from 0° to 90° in steps of 4.5° [11]
D. Lentink et al.
power- and lift-demanding flight phase. Under hovering conditions we measured the aerodynamic forces generated by a fruit fly wing model at Reynolds numbers (Re) of 110 (fruit fly scale), 1400 (house fly scale), and 14,000 (hummingbird scale). We performed these experiments with a robotic insect model at Caltech, RoboFly [5], in collaboration with Michael Dickinson (for details and methods, see [11]). The RoboFly setup consists of a tank filled with mineral oil in which we flapped a fruit fly-shaped wing using both measured and simplified fruit fly kinematics. The simplified kinematics consist of a sinusoidal stroke and filtered trapezoidal angle of attack kinematics. The stroke amplitude of 70° (half the full amplitude defined in [2]) is based on the measured kinematics of six slowly hovering fruit flies [6]. The angle of attack amplitude was varied from 0° to 90° in steps of 4.5°, which encloses the full range of angles of attack relevant to the flight of flies (and other insects). The lift and drag measurements at fruit fly scale, Re = 110, show that flapping fruit fly wings generate roughly twice the lift coefficients of translating ones (the lift coefficient is lift divided by the product of the average dynamic pressure and the wing surface area; the drag coefficient is defined similarly). RoboFly force measurements using the actual kinematics of slowly hovering fruit flies reveal that fruit flies indeed generate much more lift by flapping their wings than the same wing generates when simply translated, like an airplane wing (Fig. 14.1). The force coefficients measured for the real fruit fly kinematics overlap with the lift–drag coefficient polar generated with the simplified flapping kinematics (Fig. 14.1). The elevated lift and drag forces generated by a fruit fly wing are due to a stably attached leading edge vortex (LEV) on top of the wing, which sucks the wing upward (Fig. 14.2).
A stable LEV that explains the elevated lift forces generated by hovering insects was first found for hawkmoths [3]. Dickinson et al. [5] measured the actual unsteady forces generated during a flap cycle of a fruit fly wing, which showed that much of the total lift, up to 80%, can indeed be attributed to the 'quasi-steady' lift contribution of the stable leading edge vortex. To test whether the aerodynamics of flapping fly wings is indeed scalable, we flapped the same wing, using the same kinematics, in less viscous oil (house fly scale, Re = 1400) and finally in water (hummingbird scale, Re = 14,000), which is even less viscous. We found that the lift–drag coefficient polars did not change much; the main effect we found is
14 The Scalable Design of Flapping MAVs
Fig. 14.2 (a) Cartoon of the leading edge vortex that is stably attached to the wing of a fruit fly. (b) Flow visualization of this stable LEV on a fruit fly wing at Re=110. Visible are air bubbles
that swirl into the LEV after they were released from the leading edge of a fruit fly wing immersed in a tank with mineral oil [11]
that the lift coefficients generated at fruit fly scale are a bit smaller and the minimum drag coefficient a bit higher, due to viscous damping, Fig. 14.3. This shows that the aerodynamic forces generated by flapping fly wings can be estimated well across this whole Reynolds number range using the coefficients measured at Re = 110, 1400 or 14,000, multiplied by the average dynamic pressure and wing surface area that correspond to the scale of interest (the average dynamic pressure is calculated using a blade element method [2]).
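The coefficient definition used above (force divided by the product of the average dynamic pressure and the wing surface area) can be sketched in code. A minimal Python sketch, assuming a blade-element reference speed taken at the second-moment-of-area radius; the 0.55 radius fraction and all numbers in the example are illustrative assumptions, not the measured RoboFly data:

```python
# Hedged sketch: stroke-averaged force coefficients of a flapping wing,
# following the definition in the text (coefficient = force divided by
# the product of the average dynamic pressure and the wing surface area).
import math

def mean_dynamic_pressure(rho, f, phi0, R, r2_hat=0.55):
    """Average dynamic pressure of a flapping wing, blade-element style.

    rho    : fluid density (kg/m^3)
    f      : flap frequency (Hz)
    phi0   : stroke amplitude in radians (half the total amplitude)
    R      : single wing length (m)
    r2_hat : non-dimensional radius of the second moment of wing area
             (~0.5-0.6 for insect-like wings; an assumption here)
    """
    omega_mean = 4.0 * phi0 * f          # mean angular speed over a stroke (rad/s)
    u_ref = omega_mean * r2_hat * R      # reference blade-element speed (m/s)
    return 0.5 * rho * u_ref ** 2

def force_coefficient(force, q_mean, S):
    """Lift or drag coefficient: force / (q_mean * S)."""
    return force / (q_mean * S)

# Illustrative, made-up numbers (mineral-oil tank, scaled-up model wing):
q = mean_dynamic_pressure(rho=880.0, f=0.2, phi0=math.radians(70), R=0.25)
CL = force_coefficient(force=0.4, q_mean=q, S=0.02)
```

The same two functions apply at any of the three Reynolds numbers; only the fluid density and kinematic inputs change.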
14.3 Design Approach: Scale a Flapping MAV That Works Down to Smaller Sizes
Fig. 14.3 The forces generated by a flapping fly wing depend weakly on Reynolds number. Stroke-averaged lift–drag coefficient polar of a model flapping fruit fly wing (light grey circles) at Re = 110, 1400 and 14,000. The angle of attack (amplitude) ranges from 0◦ to 90◦ in steps of 4.5◦ . The lift–drag coefficient polars only weakly depend on Re, especially for angles of attack close to 45◦ , which correspond approximately with maximum lift. The polars at Re = 1400 and 14,000 are almost identical [11]
Model airplane enthusiasts have demonstrated that small, lightweight airplanes can be built with wingspans that range from 70 down to 10 cm, of which the lightest weigh around 1 g (e.g. www.indoorduration.com). The biggest challenge might be the availability of the high-performance micro-components that make up the flight system: micro-radio controllers (RC), actuators, motors, batteries, etc. (see the extensive list of suppliers in Appendix 1). Keeping in mind that neither aerodynamics nor structure limits the scaling of ornithopters, it is advantageous to develop a relatively large, well-flying ornithopter inspired by existing ornithopter designs and insect flight. This scale should be chosen such that the required electronic and mechanical components are both commercially available and affordable. An artist's impression of this approach is shown in Fig. 14.4. Based on this approach we first designed and built DelFly, a 35 cm span (22 g) flapping 'MAV'. It can both fly fast and perform slow hovering flight for a maximum of 15 min, while streaming video (2005). Next we scaled the DelFly design down to a 28 cm span (16 g), DelFly II (2006), which can take off and land vertically. DelFly II can hover for 8 min and fly fast for 15 min, while streaming video. Recently Nathan Chronister scaled the DelFly down to a 15 cm span (3.3 g); this model can both hover and fly loops. The group of Yoshiyuki Kawamura took the next step in an earlier phase (2006–2007) and scaled the DelFly design down to a 10 cm span (2.3 g), which flies for a couple of minutes [8]. This 10 cm span ornithopter is the first successful insect-sized flapping MAV.
Fig. 14.4 Artist's impression of a micro-aerial vehicle design inspired by insect flight

14.4 DelFly: A Flapping 'MAV' That Works

The aim of the DelFly project was to design a stable, radio-controlled, flapping MAV that could fly for 15 min and function as a sensor platform. The DelFly mission was to detect a person walking with a red suitcase. For this, DelFly is equipped with an onboard colour camera, whose images are streamed live to a base station running situation-awareness software. The team consisted of 11 bachelor students of Delft University of Technology supervised by scientists and engineers from Wageningen University, Delft University of Technology and Ruijsink Dynamic Engineering. To kick-start the project, the students started out by flight testing three existing rubber-band-powered ornithopter designs: a monoplane, a biplane and a tandem design, Fig. 14.5. The flight test procedures are described in Appendix 2. Although these ornithopters cannot be considered MAVs, because they fly uncontrolled and too briefly, they have two big advantages. First, they are inherently stable designs. Second, the in-flight torque of the rubber band can be determined using a torque meter constructed of a thin, calibrated, piano-steel wire, and the windings in the rubber band can be counted easily; the result is illustrated in

Fig. 14.5 Three different rubber-powered ornithopter configurations flight tested for the design of DelFly. (a) Falcon, a monoplane ornithopter available at www.ornithopter.org. A single pair of wings is powered by a rubber band. (b) Luna, a biplane ornithopter available at www.ornithopter.org. Its two wings form a cross; the two lower legs of the cross are actuated. As a result the upper and lower left wings flap towards and away from each other (same for the right wings). (c) Tandem wing ornithopter, custom built, inspired by an existing Swedish design. The two tandem wings flap in anti-phase: the front wing is actuated and the hind wing flaps 180◦ out of phase in reaction to it. The front wing is connected to one side of the rubber band and the hind wing is connected to the fuselage, to which we attached the other side of the rubber band
Fig. 14.6 Rubber band torque vs. number of windings. After winding a rubber band a certain number of turns (e.g. 1000), it starts to unwind at the right side of this graph at peak torque. We first let the rubber unwind until the torque flattens off and then start a test flight. Knowing the start and end torque and the number of windings in the rubber band (counted during (un)winding), the in-flight torque can be estimated well, especially together with the movies we made of every flight, for which we used the recorded flapping sound of the ornithopter to determine its flap frequency. High-quality rubber band is the key to good flight performance and useful measurements: www.indoorduration.com
Fig. 14.6. During the flight test of the three ornithopter configurations we measured the mass, torque (at the start and end of flight), flap frequency (audio data) and average flight velocity (video data). Finally we estimated the rocking amplitude at the front of the ornithopter (video data), where we planned to fit the camera. Using torque and flap frequency we computed the average power consumption. The measured and calculated flight variables can be found in Table 14.1. Based on the measurements we found that it was most difficult to trim the tandem design such that it flew well, and we therefore eliminated this configuration. The two competing configurations were the monoplane and biplane configurations. The
monoplane appeared to be the most efficient flyer, but flew at relatively high speed and rocked significantly. Therefore we chose the slower biplane configuration, which rocked least; this is critical for a camera platform. Having reliable performance estimates is critical for sizing the electronic components, actuators, motor, gearbox and battery. For the sizing we used simple scaling laws to obtain realistic torque, frequency, power-consumption and flight-speed estimates for an arbitrary size, using the measured data in Table 14.1. In our scaling we assume that the flight path is the same for both the original (indicated with '1') and the newly scaled ornithopter (indicated with '2'). To determine the new horizontal velocity U∞ we use the fact that lift equals weight during horizontal flight, as follows:

$$mg \propto C_L \tfrac{1}{2}\rho U_{\infty}^2 S \;\rightarrow\; U_{\infty,2} = U_{\infty,1}\left(\frac{m_2 S_1}{m_1 S_2}\right)^{0.5}, \qquad (14.1)$$

where m is the mass, C_L the lift coefficient, which we assume to be independent of Reynolds number (Fig. 14.3), ρ the air density (constant), U∞ the forward flight velocity of the ornithopter and S the wing surface area. We use a proportionality instead of an equality because the proportionality remains valid for other flight conditions such as climbing and turning. The required power P is proportional to weight times speed (because drag scales with weight), as follows:

$$P \propto m g U_{\infty} \;\rightarrow\; P_2 = P_1\,\frac{U_{\infty,2}}{U_{\infty,1}}\,\frac{m_2}{m_1} = P_1\left(\frac{m_2}{m_1}\right)^{1.5}\left(\frac{S_1}{S_2}\right)^{0.5}, \qquad (14.2)$$

where g is the gravity constant. To determine the flap frequency we need to know how the forward speed U∞ and the flap frequency f are related; for this we assume that the advance ratio J [2, 12] of the ornithopter remains constant:
Table 14.1 Flight test results of the three ornithopter configurations

Configuration   m (g)   S (m²)   b (m)   T (N m)   f (Hz)   V (m/s)   P (W)   Rocking (mm)
Monoplane       6.7     0.043    0.41    0.0075    3.7      1.7       0.18    80
Biplane         8.0     0.074    0.35    0.0064    6.7      1.3       0.28    ±0
Tandem          10.9    0.066    0.35    0.013     7.9      1.5       0.66    –

The tandem configuration was difficult to trim and did not fly well as a result. We averaged over flight tests during which we judged the ornithopter to be trimmed well and to fly stably. The variables are as follows: m, mass; S, total wing area; b, wingspan; T, torque needed to drive the wings; f, flap frequency; V, flight speed; P, required power; rocking, rocking amplitude at the front of the ornithopter.
$$J = \frac{U_{\infty}}{4 f \Phi_0 R} = \text{const} \;\rightarrow\; f_2 = f_1\,\frac{U_{\infty,2}}{U_{\infty,1}}\,\frac{\Phi_{0,1} R_1}{\Phi_{0,2} R_2} = f_1\left(\frac{m_2 S_1}{m_1 S_2}\right)^{0.5}\frac{\Phi_{0,1} R_1}{\Phi_{0,2} R_2}, \qquad (14.3)$$

where Φ0 is the stroke amplitude in radians (half the total amplitude defined in [2]) and R the single wingspan (radius). If we assume that the stroke amplitude is constant, which is true for isometric scaling, this relation becomes straightforward. What if measurements are performed for hovering flight, when U∞ = 0? To scale both hovering and forward flight continuously, we suggest using the wing tip speed V in Eqs. (14.1) and (14.2) instead of U∞, which is a good approximation [10, 12]:

$$V \approx \sqrt{U_{\infty}^2 + (4 f \Phi_0 R)^2}. \qquad (14.4)$$
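The scaling relations (14.1)–(14.4) can be collected into a small calculator. A minimal Python sketch; the baseline numbers in the example are illustrative stand-ins, not the Table 14.1 measurements:

```python
# Hedged sketch of the scaling laws (14.1)-(14.4): given the measured
# performance of ornithopter '1', estimate the speed, power and flap
# frequency of a scaled ornithopter '2'. Variable names mirror the text.

def scale_speed(U1, m1, S1, m2, S2):
    """Eq. (14.1): U2 = U1 * (m2*S1 / (m1*S2))**0.5 (lift = weight)."""
    return U1 * (m2 * S1 / (m1 * S2)) ** 0.5

def scale_power(P1, m1, S1, m2, S2):
    """Eq. (14.2): P2 = P1 * (m2/m1)**1.5 * (S1/S2)**0.5."""
    return P1 * (m2 / m1) ** 1.5 * (S1 / S2) ** 0.5

def scale_frequency(f1, U1, U2, phi0_1, R1, phi0_2, R2):
    """Eq. (14.3): constant advance ratio J = U / (4*f*phi0*R)."""
    return f1 * (U2 / U1) * (phi0_1 * R1) / (phi0_2 * R2)

def wing_tip_speed(U, f, phi0, R):
    """Eq. (14.4): V ~ sqrt(U**2 + (4*f*phi0*R)**2), valid down to hover."""
    return (U ** 2 + (4.0 * f * phi0 * R) ** 2) ** 0.5

# Isometric scale-down to half the span: lengths x0.5, areas x0.25, mass x0.125.
# Baseline values below are illustrative, not measured data.
m1, S1, R1, U1, P1, f1, phi0 = 0.016, 0.056, 0.14, 1.3, 0.28, 6.7, 0.61
k = 0.5
m2, S2, R2 = m1 * k ** 3, S1 * k ** 2, R1 * k
U2 = scale_speed(U1, m1, S1, m2, S2)                   # slower flight
P2 = scale_power(P1, m1, S1, m2, S2)                   # much lower power
f2 = scale_frequency(f1, U1, U2, phi0, R1, phi0, R2)   # higher flap frequency
V2 = wing_tip_speed(U2, f2, phi0, R2)                  # for hover-capable sizing
```

For isometric scaling the amplitude terms cancel in Eq. (14.3), so the frequency rises as the span shrinks, as the text notes.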
Based on the calculated power and the specs of the battery pack we can now calculate the average motor current as follows:

$$I_2 = \frac{P_2}{V_{\text{LiPo}}}, \qquad (14.5)$$
where I2 is the motor current of the newly scaled ornithopter and V_LiPo the voltage of the lithium-polymer battery pack. The total flight time in seconds can now be estimated as follows:

$$t_2 = 3.6\,\frac{C_{\text{LiPo}} V_{\text{LiPo}}}{P_2}, \qquad (14.6)$$
where C_LiPo is the capacity of the lithium-polymer battery pack (mA h). Based on the required power, the voltage of the battery pack and the flap frequency, a motor can now be selected (see www.didel.com for pager motor selection charts). Based on the rpm of the selected motor, RPM_motor,2, the required gearbox ratio red2 is as follows:

$$\text{red}_2 = \frac{\text{RPM}_{\text{motor},2}}{60 f_2}. \qquad (14.7)$$
Using a spreadsheet and Eqs. (14.1)–(14.7), the 'components off the shelf' (COTS) were chosen, which means the components were bought as light and small as currently available in retail. We illustrate the main components chosen to build DelFly in Fig. 14.7 (these are representative, not actual, photos). In 2005 these
were the most lightweight components available (see the extensive list of suppliers in Appendix 1). The weight, power consumption and other performance indices of these components determined the smallest possible dimensions of DelFly at which it could fly for 15 min and stream live video. The main component of DelFly is the battery. In order to choose a suitable battery, two criteria are important: the capacity and the maximum discharge rate. The first parameter, the required capacity, is determined by the required flight duration and the power consumption of the electric systems. The second, the maximum required discharge rate, is determined by the maximum power required by the total electrical system. The latter is a problem with most lightweight batteries. Therefore, we selected a battery with the best power-to-weight ratio and a sufficient maximum discharge rate. The lightest available battery fulfilling these requirements is a 140 mA h lithium-polymer battery, Fig. 14.7f. It can discharge at up to five times its capacity, 700 mA, which is enough. The biggest energy consumer is the motor that powers the flapping wings of DelFly. We chose a brushed pager motor because of its availability. Brushless motors are more efficient, but the available motors cannot handle the periodic loading due to the flapping wings. The motor drives a gearbox (see Fig. 14.7) to reduce the RPM of the motor to match the required flapping frequency of the wings. A dedicated crankshaft, conceptually the same as that of the Falcon, connects the gearbox to the two lower legs of the X-wing and drives both lower wings in phase. The left lower wing is directly connected to the right upper wing and vice versa. Therefore both sides of the X-wing flap synchronously towards and away from each other (buying the actual kits helps to get a good three-dimensional picture of this system).
We re-designed the flapping mechanism itself using the freely available Java software at www.ornithopter.org/software.shtml. For controlling DelFly we used standard model-aircraft RC radio equipment. The remote control sends control signals to DelFly's onboard receiver (see Fig. 14.7). This in turn translates the control signal into a power signal for the coil actuators (see Fig. 14.7), which we connected to the control surfaces. DelFly also has a camera onboard for two reasons. First, a camera is a useful sensor for obtaining images of its surroundings. Second, the camera in combination with dedicated vision software can be part of the control loop. DelFly has a camera onboard that sends video signals via the transmitter to the receiver on the ground. This signal enters a central processing unit, usually a personal computer or laptop, via its video card. Dedicated software installed on the central processing unit analyses the video signal to detect objects like a red suitcase or to compute the optical flow. Based on the image analysis, the base station then sends updated control signals via the RC radio to the receiver on board of DelFly. This continuous control loop for vision-based awareness is illustrated in Fig. 14.8. All the components combined resulted in the design shown in Fig. 14.9a. The combined mass of all (electronic) components is approximately 12.5 g. To carry this load and bear the corresponding aerodynamic and inertial loads we designed a lightweight structure of approximately 4.5 g. The structure consists primarily of carbon fibre rods. The wings' leading edge spars are made of a balsa–carbon (unidirectional) sandwich that is much less stiff in the flight direction than in the flap direction. This stiffness asymmetry is essential to make the wing deform well aero-elastically. The wing is covered with transparent Mylar foil of 7 g/m². We used cyano-acrylate for gluing carbon–carbon, epoxy for gluing carbon–balsa and transparent Pattex hobby glue, diluted 1:1 with acetone, for gluing carbon–Mylar. DelFly turned out to be an easy-to-control and very stable flapping MAV, Fig. 14.9b. Its main drawback is that it can get into a spiral dive when turning too tightly and too fast at too low an angle of attack, because the actuators are slightly underpowered for this flight condition

Fig. 14.7 Illustration of the main components of DelFly (and DelFly II): (a) brushless electric motor (www.bsdmicrorc.com; modified for use on DelFly II); (b) colour camera (www.misumi.com.tw); (c) plastic gear box (www.didel.com); (d) micro-actuators (www.bsdmicrorc.com); (e) receiver (www.plantraco.com); and (f) lithium-polymer battery (www.atomicworkshop.co.uk)

Fig. 14.8 Control loop for vision-based awareness of DelFly

Fig. 14.9 Three-dimensional CAD drawing of the DelFly design (a) and DelFly flying in the European Alps (b)
(finding strong and light enough actuators remains a challenge for MAV design).
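The base-station control loop can also be sketched in code. A deliberately simplified Python sketch: the red-detection threshold, the frame format and the `send_command` interface are stand-in assumptions, not the actual DelFly ground-station software:

```python
# Hedged sketch of the vision-based control loop: the base station analyses
# each downlinked video frame (here, for a red object such as the suitcase)
# and sends an updated command back over the RC radio. All thresholds and
# interfaces below are illustrative assumptions.

def detect_red_fraction(frame):
    """Fraction of pixels that look 'red' in a frame of (R, G, B) tuples."""
    pixels = [px for row in frame for px in row]
    red = sum(1 for (r, g, b) in pixels if r > 150 and g < 80 and b < 80)
    return red / len(pixels)

def control_step(frame, send_command):
    """One loop iteration: analyse a frame, send a steering update to the MAV."""
    if detect_red_fraction(frame) > 0.05:   # target occupies enough of the view
        send_command("loiter")              # hold position over the target
    else:
        send_command("search")              # keep scanning the area

# Toy 2x2 frame with one strongly red pixel out of four:
frame = [[(200, 30, 30), (10, 10, 10)], [(10, 10, 10), (10, 10, 10)]]
sent = []
control_step(frame, sent.append)
```

A real implementation would run this per video frame and replace the threshold test with proper colour segmentation or optical flow, as the text mentions.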
14.5 DelFly II: Improved Design

After designing and building DelFly within a student project, we professionalized the DelFly design and better quantified its aerodynamic performance. For this we were supported by TNO (The Netherlands), which resulted in the DelFly II design, shown in Figs. 14.10 and 14.11. DelFly II weighs about 14 g without payload and 17 g with payload, a camera and video transmitter. The wingspan is reduced from 35 to 28 cm and its length from 40 to 28 cm, such that it fits in a 30 cm diameter sphere. The most important differences between DelFly and DelFly II are its symmetric driving mechanism and the custom-refitted brushless motor. This brushless electric motor has enough power to enable DelFly II to take off and land vertically, as shown in Fig. 14.12. The motor's efficiency of roughly 60% enables it to fly longer than normal; using a pager motor instead will still allow DelFly II to take off and land vertically and to hover, at the cost of some flight duration. The brushless motor we used had to be refitted with different windings, a different magnet configuration and new controller software such that it could cope with the highly varying drive torque of the flapping wings. Note that brushed pager motors do not need special modifications to drive a flapping wing and are therefore a time-efficient and inexpensive solution (see the extensive list of suppliers in Appendix 1).
Fig. 14.10 Detailed photos of DelFly II. (a) Main components of DelFly II. The front part of the carbon fuselage is a sandwich of carbon cloth (65 g/m2 ) with a Rohacell core (lowest density available). The transparent Mylar foil weighs 7 g/m2 . (b) Rudder connection mechanism for the controls. (c) Brushless electric motor (e-motor) and the symmetric driving mechanism of
both lower legs of the X-wing. The rotary motion of the gears is converted into a translating motion through the carbon fibre rods (cr-rods) that are fitted with a small bearing consisting of a flattened brass rod with a hole drilled in it. (d) Wing root with cellotape reinforced Mylar film and rapid prototype wing hinge (www.quickparts.com)
14.6 DelFly II: Aerodynamic Analyses
Insects have limited control over the wing shape, because their muscles stop at the base of their wings. The aero-elastic wings of insects are therefore thought to be passively stabilized. The aero-elastic wings of ornithopters like DelFly are passively stable too, and deform strongly under loading. But what forces mediate DelFly's wing deformation: aerodynamic loading or wing inertia? And how much power is lost in continuously accelerating and decelerating a flapping wing? What flap angles result in the best hover performance, and how high are the lift coefficients generated by a strongly deforming aero-elastic wing? The answers to these questions are likely to be as relevant for optimizing DelFly as they are for getting insight into the aerodynamics of aero-elastic insect wings. Based upon the lift coefficient that we measured for DelFly under hovering conditions, which is around 2, we believe that DelFly employs at least two of the high-lift mechanisms found in insect flight. First, we think that DelFly creates a stable leading edge vortex (Fig. 14.2), like the AeroVironment Microbat [14]. Second, we think DelFly benefits from the clap-and-fling mechanism of aero-elastic wings, which is utilized by small insects and butterflies [17]. The wings of DelFly clap and fling when the upper and lower wings come together as the X-wings close. The peeling motion of the aero-elastic DelFly wings resembles the wings of butterflies during take-off, Fig. 14.13. First the wings 'clap' together at the 'start' of the flap cycle, at 0%, after which they 'peel' apart at 12.5% through
Fig. 14.11 Dimensions and building material of DelFly II. All the above are drawn to scale and the angles are realistic: top view (a), wing dimensions (b) and built-in dihedral (c). Note that cr stands for carbon fibre rod; whenever available we used hollow carbon rods. All carbon fibre components are available at www.dpp-pultrusion.com. (d) DelFly II in hovering flight; note that this is a slightly different model than shown in Fig. 14.10. Photo credit: Jaap Oldenkamp. Not shown is the X-shaped landing gear of DelFly II, built out of the leading edge carbon rods of the horizontal and vertical rudder. The landing X-rods are pulled together with two thin wires (e.g. nylon or Kevlar) and as a result they bend. The base of the landing gear has smaller dimensions than the wingspan (the dimensions are not very critical)
37.5% of the flap cycle. The clap and fling is essentially a combination of two independent aerodynamic mechanisms that should be treated separately. First, during the clap the leading edges of the wings touch each other before the trailing edges do, progressively closing the gap between them. Second, during the fling the wings continue to pronate by leaving the trailing edge stationary as the leading edges fling apart. A low-pressure region is supposedly generated between the wings, and the surrounding fluid rushes in to occupy this region. This initializes the build-up of circulation [17]. Experiments of Kawamura et al. [8] with a 10 cm span DelFly biplane model and a similar monoplane model showed that the clap and fling indeed increases thrust by up to roughly 50%. The thrust-to-power ratio, a measure of efficiency, is also roughly 50% higher for the biplane configuration. Another explanation as to why insects might clap and peel their wings is that they simply try to maximize their stroke amplitude to maximize wing lift (for the same flap frequency). The lift force is proportional to velocity squared and therefore to amplitude squared; maximizing the stroke amplitude will therefore significantly enhance the total flight forces [17].
Fig. 14.12 Compilation of video images that illustrate the vertical take-off (a) and vertical landing (b) capabilities of DelFly II. The clearly visible light square is the reflective battery pack (LiPo)
14.6.1 DelFly Models Used for Aerodynamic Measurements

We studied the aerodynamic performance and aero-elastic deformation of DelFly II wings to maximize its lift and minimize its power consumption in hovering flight, the most power-consuming flight mode. These studies were performed at Wageningen University in collaboration with Delft University of Technology [1]. For these studies two DelFly II models were used:
– DelFly IIa, powered by a strong brushed motor (simplified aluminium construction). This model was used for high-speed camera imaging.
– DelFly IIb, powered by a 3.5 V brushless motor of which the frequency is controlled by varying the current (realistic carbon fibre and plastic construction). This model was used for all except one performance
Fig. 14.13 High-speed video image sequence of the clap and fling of DelFly II wings in air at 14 Hz and 30◦ flap angle. The images are snapshots starting at the beginning of upstroke up to the start of the downstroke: 0%, 12.5%, 25%, 37.5% and 50% of the flap cycle
measurement. We used a brushed motor for the performance analysis shown in Fig. 14.15, because it allowed us to test for higher flap frequencies. Figure 14.14a shows the DelFly IIb model mounted on a six-component force transducer. This transducer is capable of accurately measuring forces and torques (i.e. moments) in three directions with a resolution of
Fig. 14.14 (a) DelFly IIb mounted on the six-component transducer, φ indicates the flap angle. The oblique line indicates the wing position at the start of a flap cycle (upstroke) when the leading edge of the wing is at maximum deflection, i.e. 50% of the
flap cycle, where the downstroke starts. (b) Wing hinge with different drive-rod positions that result in the different flap angles for which we measured the hover performance of DelFly
Table 14.2 Flap angle as a function of flap angle position. The final design of DelFly II has an even larger flap angle, for which special wing hinges were designed

Flap angle position   3       4       5       6     7     8     9
Flap angle φ          17.5◦   19.5◦   21.5◦   24◦   27◦   30◦   36◦
approximately 0.5 g. The flap angle is indicated by φ. Different flap angles are obtained by connecting the drive rod to the different connection points at the wing’s hinge, shown in Fig. 14.14b, the actual values are given in Table 14.2.
14.6.2 Lift as a Function of Flap Frequency at a Constant Flap Angle of 36◦
Fig. 14.15 Lift vs. wing beat frequency at a constant flap angle of 36◦ . The line indicates the linear trend, whereas a quadratic trend is expected (lift is proportional to frequency squared). We think the linear trend with increasing frequency results from the increasingly higher lift forces that deform the wing more and more, and therefore reduce the angle of attack of the wing, which lowers lift force (because lift is proportional to the angle of attack)
We found that lift increases linearly with wing beat frequency between 14 and 20 Hz at a constant flap angle of 36◦, Fig. 14.15. This flap angle was chosen because it most closely resembles the flap angle of DelFly in flight. The frequency range was determined by the maximum power output of the brushed
motor fitted on DelFly IIb. Figure 14.15 shows that this model needs to flap at a frequency of 20 Hz to lift the payload (total mass 17 g) during hovering and at a frequency of 17.2 Hz without payload (14 g). The final DelFly II design flaps at lower frequencies, because it has a larger flap angle than we could test with the
hinge used here. Note that the net wing speed is proportional to the flap angle times the frequency and that this product is roughly the same for our DelFly IIb and the final design, because both operate at similar lift coefficients.
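The trade-off noted here (wing speed ∝ flap angle × frequency, lift ∝ wing speed squared) can be made concrete. A minimal Python sketch; the 44° alternative hinge is a hypothetical example, and only the 20 Hz / 36° operating point comes from the text:

```python
# Hedged sketch of the frequency-amplitude trade-off: net wing speed scales
# with (flap angle x frequency), and lift with that product squared at equal
# lift coefficient, so a larger flap angle permits a lower flap frequency.

def equivalent_frequency(f_ref, phi_ref_deg, phi_new_deg):
    """Frequency giving the same wing speed (and ~same lift) at a new angle."""
    return f_ref * phi_ref_deg / phi_new_deg

def lift_scale(f, phi_deg, f_ref, phi_ref_deg):
    """Lift ratio, assuming lift ~ (phi * f)**2 at equal lift coefficient."""
    return ((phi_deg * f) / (phi_ref_deg * f_ref)) ** 2

# The tested hinge needed 20 Hz at a 36 deg flap angle to lift 17 g; a
# hypothetical 44 deg hinge would need a lower frequency for the same lift:
f_new = equivalent_frequency(20.0, 36.0, 44.0)
same_lift = lift_scale(f_new, 44.0, 20.0, 36.0)   # should recover a ratio of 1
```

This is the same reasoning that lets the final DelFly II design, with its larger flap angle, flap more slowly than the test model.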
14.6.3 Lift and Power as a Function of Flap Angle at a Flap Frequency of 14 Hz

Both lift and power increase with flap angle, but the more important lift-to-power ratio reaches a plateau, Fig. 14.16. The lift-to-power ratio is a measure of how effectively DelFly generates lift. The required power measurements include the aerodynamic power as well as the power needed to drive the motor and the drive train and to overcome the inertia of the complete mechanism (accelerating and decelerating it). Both the lift and the required power increase with flap angle, because wing speed increases with flap angle (at constant frequency). Theoretically, lift is proportional to the flap angle squared, and aerodynamic power to the flap angle cubed. Because the experiment is carried out at constant flap frequency, the flap velocity, and therefore the wing lift, increases significantly with flap angle. The increasing lift deforms the aero-elastic wing more and more. These significant deformations might explain why the lift-to-power ratio is not proportional to the inverse of the flap angle, as predicted by theory. The measurements suggest that the most efficient flap angle is higher than 30◦ for DelFly II flapping at 14 Hz. The Reynolds number in these experiments varied from 3700 to 7600. Based on Fig. 14.3 we do not expect that this difference in Reynolds numbers affects the wings' lift coefficients much. The dimensionless lift and power coefficients of DelFly II flapping at 14 Hz are shown in Fig. 14.17. The lift coefficients have high values, CL = 1.8–2.5, compared to the lift coefficients of translating wings at similar Reynolds numbers, of which the maximum lift coefficient is approximately 1. The power coefficients are calculated by dividing power by the product of dynamic pressure, wing speed and wing surface area using a blade element method [2]. It is striking that these power coefficients are an order of magnitude greater than the force coefficients. Based on scale arguments we expected both coefficients to be of order 1, O(1). An explanation for this inconsistency might lie in the fact that we used total power instead of aerodynamic power. In order to determine the aerodynamic power we had to separate it from the power needed to drive the motor and the drive train and to overcome inertial forces. We determined the aerodynamic power by measuring the required power under vacuum conditions and subtracting that power from the power needed to flap at the same frequency in air. For these experiments we designed and built a custom vacuum chamber with a minimum pressure of 10 Pa.

Fig. 14.16 Lift, power and efficiency of DelFly: (a) lift vs. flap angle at a constant wing beat frequency of 14 Hz; (b) power vs. flap angle at a constant wing beat frequency of 14 Hz; and (c) the ratio of lift to power vs. flap angle at a constant wing beat frequency of 14 Hz. The lift–power ratio is a measure of 'efficiency'. The dots give the results of the individual measurements per sequence and the crosses denote the mean value of the four runs
Fig. 14.17 Lift (a) and power (b) coefficients vs. flap angle at a constant wing beat frequency of 14 Hz. Power coefficients vary much more with flap angle than lift coefficients
14.6.4 Power Requirement and Wing Deformation in Air Versus Vacuum

The power losses of a flapping wing range from roughly 80% at a flap angle of 24◦ to 50% at a flap angle of 36◦. To isolate the aerodynamic power, the power measured in near vacuum was subtracted from the power measured in air: Paero = Pair − Pvac. We plotted the aerodynamic power as a percentage of total power as a function of flap frequency in Fig. 14.18. The results suggest that the power losses of DelFly depend strongly on the flap angle, but not so much on the frequency. The larger flap angle results in the highest percentage of aerodynamic power, which partly explains why a large flap angle results in high flap performance, Fig. 14.16c.

Fig. 14.18 Percentage of aerodynamic power compared to total power increases with flap angle and depends weakly on wing beat frequency: (a) flap angle of 24◦ and (b) flap angle of 36◦

If we correct the power coefficient for a power loss of 80% at a flap angle of 24◦ and 50% at 36◦, we obtain power coefficients of approximately 5 and 7.5, which are much closer to the values of 2–4 found for flapping fruit fly wings at similar angles of attack in Fig. 14.3. The remaining differences can still be explained by mechanical losses, because we were unable to correct for the effect of variable motor efficiency. The much lower torque in vacuum can drastically alter the efficiency of the brushless motor; we hope to quantify this better in a future study. Finally, we wanted to know how inertial versus aerodynamic forces deform the aero-elastic wing of DelFly. Using the vacuum chamber we made high-speed video images of the flapping foil in vacuum and in air, Fig. 14.19.

Fig. 14.19 Comparison of wing deformation under aerodynamic plus inertial wing loading in air (left) and inertial wing loading in vacuum (right) at 14 Hz and 30◦ flap angle. The images are snapshots at the start of the upstroke (0%), at 12.5% and at the end (50%) of the flap cycle, where the downstroke starts. In vacuum the upper and lower foils stick together (12.5%), most likely due to electrostatic stickiness

The wing's deformation is significantly higher in air than in vacuum; hence we conclude that aero-elastic deformation largely determines DelFly II's wing shape. It confirms that there is a direct coupling between wing load and the wing's aerodynamic angle of attack, because the wing deforms towards lower angles of attack under loading (compare air and vacuum at 0% and 50% of the flap cycle). We think that the apparent wing peeling at 12.5% of the flap cycle is due to the electrostatic stickiness of
the upper and lower foil. This could be tested in future studies by using non-electrostatic foils. Based on our aerodynamic analysis we conclude that inertial and friction losses in DelFly-like designs are high and need attention. One solution could be to use elastic energy storage in a spring, tuned to the average flapping frequency of the wing. Ideally the spring has a variable stiffness such that its natural frequency continuously matches the flapping frequency.
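The loss correction described above can be sketched in a few lines. All wattages and the uncorrected power coefficient below are illustrative placeholders, not the measured DelFly values:

```python
# Sketch of the loss correction: the aerodynamic share of the total power
# follows the text's definition Paero = Pair - Pvac. Numbers are hypothetical.

def aero_fraction(p_air, p_vac):
    """Aerodynamic share of the total power: (P_air - P_vac) / P_air."""
    return (p_air - p_vac) / p_air

# Hypothetical measurements at one flap frequency (W):
p_air_24, p_vac_24 = 1.0, 0.8   # 24 deg flap angle -> 80% mechanical loss
p_air_36, p_vac_36 = 1.0, 0.5   # 36 deg flap angle -> 50% mechanical loss

for label, p_air, p_vac in (("24 deg", p_air_24, p_vac_24),
                            ("36 deg", p_air_36, p_vac_36)):
    print(f"{label}: {100 * aero_fraction(p_air, p_vac):.0f}% of power is aerodynamic")

# Correcting an uncorrected power coefficient for the mechanical losses:
cp_total = 25.0                                  # hypothetical uncorrected value
cp_aero = cp_total * aero_fraction(p_air_24, p_vac_24)
print(f"corrected power coefficient: {cp_aero:.1f}")  # 25.0 * 0.2 = 5.0
```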
14.7 Bio-Inspired Design of Insect-Sized Flapping Wings

Our aim is to work towards fly-sized micro-air vehicles that manoeuvre well. DelFly is still big, slow and sluggish. DelFly could potentially be controlled more directly using the flapping wings instead of the airplane tail for control. Because we focus on flapping MAVs that can actually fly, the size of every new design is limited by the smallest and lightest available components, which limits miniaturization. A more direct approach is to start at a smaller scale and figure out how such small fly-sized flapping mechanisms, wings and actuators can be designed and constructed. This approach, pioneered in the lab of Ronald Fearing, has significantly increased our understanding of designing and building micro-flapping structures. Recently, the first wire-supported micro-mechanical insect successfully lifted off from the ground in the lab of Robert Wood [21]. Building upon the existing work in the field, we wondered how we could improve the flat wing design of micro-mechanical insects and DelFly wings at the scale of insects.

The current DelFly wing is made of a D-shaped carbon fibre leading edge spar of constant thickness. A flexible Mylar foil forms the wing's surface and is stiffened by two carbon rods. The foil and carbon rods form a flat airfoil that is cambered by aerodynamic forces and, to a lesser extent, inertial forces during flapping. Aero-elastic tailoring of DelFly's wing has been done by trial and error using a strobe. At some point we even applied variable wing tension (left versus right) for thrust vectoring, but all these measures are primitive compared to the aero-elastically tailored wings of insects. Insect-sized flapping MAVs could benefit from stiffer wings for the same weight, because their shape can be controlled and tailored more directly. Wing venation, like that found in insect wings, could potentially minimize or even stop wing tear, which is a problem with DelFly wings.
Images of DelFly in flight often show wrinkling of the trailing edges that affects its aerodynamic performance, which could be prevented by making the trailing edge stiffer. This keeps the wing foil in shape during flapping and prevents the wing from tearing. We used dragonfly wings as an inspiration to develop design principles for such stiffer micro-wings with venation-like tear-stoppers.
D. Lentink et al.
If we take a close look at insect wings we find that they are not flat but corrugated, and that wing thickness varies both span- and chord-wise. The wing structure of an insect therefore has a much richer architecture than the non-corrugated, constant-thickness DelFly II wings. Compared to flat wings, corrugation improves the strength and stiffness of insect wings, because it increases the moments of area of the wing sections [16, 22, 23]. Corrugation also mediates the aero-elastic properties and vibration modes (natural frequencies) of the wing; finally, corrugated wings are lighter for the same stiffness [16], while performing well aerodynamically [9, 13, 15, 16, 19].

We found inspiration for improving the wings of DelFly in the front wing of a dragonfly, Sympetrum vulgatum of the order Odonata [7]. Odonata is a primitive order of insects; their four wings possess relatively complex venation patterns. The fossil record shows that these venation patterns exist even in large dragonfly wings of up to 70 cm span, whereas current dragonflies have wing spans smaller than 10 cm, which suggests that dragonfly wings and their aerodynamic function are scalable. Although they are primitive insects, dragonflies have evolved into accomplished fliers: fast manoeuvres, silent hovering and even in-flight hunting and mating are commonly shown by these aerobatic artists. S. vulgatum flaps its wings at a frequency of approximately 35 Hz.

We first digitized a front wing of S. vulgatum, Fig. 14.20a, using a micro-CT scanner, Fig. 14.20b. The nearly 4000 cross-sectional images per wing yield an accurate three-dimensional digital reconstruction of the wing. This reconstruction allowed us to quantify both the vein and shell thickness of the dragonfly wing, Fig. 14.20c1,c2. In our next step we simplified the scans by converting the wing geometry into beam and shell elements with the same geometric properties as the scan.
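The stiffening role of corrugation can be illustrated with a back-of-the-envelope second-moment-of-area estimate. The dimensions below are illustrative, not taken from the S. vulgatum scan:

```python
# Sketch: why corrugation stiffens a thin wing at little mass cost. Bending
# stiffness is proportional to the second moment of area I of the section.
# Dimensions are illustrative, not taken from the dragonfly wing scan.

t = 20e-6    # membrane thickness [m]
c = 8e-3     # section width (chord) [m]
h = 0.4e-3   # corrugation half-height [m]

# Flat plate of width c and thickness t:
I_flat = c * t**3 / 12.0

# Thin-wall corrugated plate, material spread uniformly over heights -h..+h
# (a crude zig-zag idealisation): I ~ c * t * h^2 / 3
I_corr = c * t * h**2 / 3.0

print(f"I_flat = {I_flat:.3e} m^4, I_corr = {I_corr:.3e} m^4")
print(f"stiffness gain at equal mass: about {I_corr / I_flat:.0f}x")
```

The ratio works out to 4h²/t², so even a corrugation height of a few membrane thicknesses raises the section stiffness by orders of magnitude without adding material.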
The result is an accurate and efficient finite element model of both wings. For this, dedicated processing software was written in Matlab 7.0 (MathWorks). All the elements, load data and boundaries were automatically written to an input file for Abaqus, a finite element solver for structural analysis. Using a blade element model [2], we calculated the aerodynamic and inertial loads on the wing during hovering flight, Fig. 14.20d1,d2. Using both the simplified finite element model and the calculated wing loading, we determined the wing's deformation, internal stresses
Fig. 14.20 Design wheel of an insect-sized flapping wing based on the forewing of a dragonfly (Sympetrum vulgatum): (a) original dragonfly forewing; (b) micro-CT scan of the dragonfly wing; (c1) thickness distribution of the wing veins; (c2) thickness distribution of the wing membranes; (d1) maximum aerodynamic load on the wing during hovering, computed with wing
and flight data using a blade element method; (d2) maximum inertia loads on the wing during hovering; (e) maximum wing deformation during hovering; (f1) maximum internal loads during hovering; (f2) synthesized and simplified load paths in S. vulgatum forewing; and (g) the bio-inspired design of an insect-sized flapping wing
and vibration modes, Fig. 14.20e. We calculated the average load paths over a stroke cycle to estimate which veins and shells contributed most to the stiffness and strength of the wing, Fig. 14.20f1. Based on the average load paths we used engineering judgement to eliminate veins that carried only little load and to connect the remaining veins such that they formed continuous load paths, Fig. 14.20f2. In our final design step we used the simplified load paths to arrive at a conceptual corrugated wing for a flapping MAV, Fig. 14.20g (details can be found in [7]). The main feature of the conceptual corrugated wing design is its corrugation at the leading edge. Both thickness and corrugation height decrease towards the wing tip, where less stiffness is needed. Between the leading edge beams we suggest applying ribs to prevent the beams from buckling. Controlled buckling might be a very interesting failure mode when ultimate forces act on the wing. The wing design consists of several thin ribs that connect the leading and trailing edge and form 'rib-enclosed compartments' that can stop wing tearing. By carefully designing the corrugation profile, the location of supporting ribs and the thickness distribution of the 'veins', the wing can be custom-tailored to perform well aero-elastically.
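The blade-element load estimate used above can be sketched as follows, in the spirit of the blade element method of [2]. Every number (geometry, kinematics, force coefficient) is an illustrative placeholder, not the S. vulgatum data:

```python
import math

# Sketch of a quasi-steady blade-element estimate of the stroke-averaged
# aerodynamic force on a hovering flapping wing: divide the span into
# elements, evaluate the local dynamic pressure from the mean flapping
# speed at each radius, and sum the element forces.

rho = 1.225                # air density [kg/m^3]
R = 0.03                   # wing length [m]
c_mean = 0.008             # mean chord [m]
f = 35.0                   # flap frequency [Hz] (order of S. vulgatum)
Phi = math.radians(60.0)   # peak-to-peak stroke amplitude [rad]
CF = 1.5                   # stroke-averaged force coefficient

N = 100                    # number of blade elements along the span
dr = R / N
force = 0.0
for i in range(N):
    r = (i + 0.5) * dr          # element mid-radius
    U = 2.0 * Phi * f * r       # mean flapping speed at radius r
    force += 0.5 * rho * U**2 * c_mean * CF * dr

print(f"stroke-averaged aerodynamic force ~ {force * 1e3:.2f} mN")
```

Because U grows linearly with r, the summed force converges to the closed form 0.5·ρ·(2Φf)²·c·CF·R³/3, which is a quick sanity check on any blade-element implementation.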
14.8 Production of the Bio-Inspired Wings for an Insect-Sized MAV

Although one can cut very nice two-dimensional wings out of carbon sheet using a laser cutter, this method is not well suited for building complex three-dimensional structures. We envision that a miniature three-dimensional weaving machine in combination with a mould could solve this problem [7]. Such a machine could weave carbon fibres into a three-dimensional vein network that forms the wing structure. An even more promising solution could be to weave thin tungsten wires and deposit boron, a super-stiff and strong metalloid, on them. Boron fibres are already used to stiffen the lightest planes for their size: 65 cm span at a weight of 1 g (see www.indoorduration.com). These fibres are made by vacuum coating boron onto tungsten wires, so the process is already an industry standard for simple two-dimensional structures.

Within our project we focussed on a low-cost demonstration using carbon fibres. We chose cyano-acrylate as the matrix for the carbon fibres, because its low viscosity results in good wetting of the fibres. To cover the wings we chose one-sided (OS) film of 0.5 μm thickness, glued with diluted Pattex (similar to DelFly wings). The wings are produced by stretching dry carbon fibres on a three-dimensional mould, Fig. 14.21. Crossing fibres are checked for sufficient contact surface between each other to ensure correct bonding. Once all the dry fibres are stretched on the mould, an infusion process starts: drops of cyano-acrylate are applied to the fibres with the round head of a pin. Drops of cyano-acrylate are also placed on the intersections of fibres to connect them firmly. We immediately noticed the advantage of the low-viscosity cyano-acrylate: the fibres absorb the glue, and capillary forces spread it through the fibre quickly and easily. Pushing a pin against the fibre reveals which parts are sufficiently infused and which are still dry.
The wing is trimmed to its final shape after infusion and consolidation of all the fibres. Finally, OS film is mounted on the carbon fibre structure using thinned glue. This process is very labour intensive, but it could be automated using a miniature three-dimensional weaving and gluing machine; even better would be to deposit boron on three-dimensionally woven tungsten wires.
Fig. 14.21 Development process of insect-sized flapping wings: (a) design of an insect-sized flapping wing inspired by a dragonfly forewing; (b1) cross section of the re-designed wing; (b2) cross section of the original DelFly wing; (c) cross section of the Sympetrum vulgatum forewing at approximately 30% of wingspan. (d–g) Building method: (d) stretch carbon fibres on a three-dimensional mould; (e) consolidate the structure by tipping drops of glue on the fibres with a pin; (f) trim the consolidated wing to its final shape; (g) end result: insect-inspired wing made of carbon fibre. (h) Mock-up of a future DelFly Micro equipped with wings with an advanced three-dimensional structure, which makes them stiff for their weight. Shown: tandem configuration in which the forewing and hindwing can flap 180° out of phase, like a dragonfly during slow flight
14.9 Less Is More: Spinning Is More Efficient than Flapping an Insect Wing

Having demonstrated the scalable design of flapping MAVs, we conclude that the biggest challenge for fruit fly-sized air vehicles is the development of high-performance micro-components. Another challenge is
energy efficiency, because existing insect-sized flapping MAVs are inefficient. This inefficiency shows in flight durations that range from only 1 to 15 min at relatively low wing loading. The smallest mechanical insects still cannot take off without power cables attached to batteries on the ground. Based on aerodynamic measurements on both insect wings and DelFly II, we found that the aerodynamic performance of flapping insect wings is low. We further found, similar to others [20], that simple spinning insect wings also generate a stable leading edge vortex and correspondingly elevated lift and drag forces for Re = 110–14,000, Fig. 14.22. In fact, we deliberately depicted a stable LEV on a spinning (not flapping) fruit fly wing in Fig. 14.2, because it is surprisingly similar to the one generated when the wing flaps. This observation inspired us to explicitly compare the hover efficacy of flapping and spinning fruit fly wings at Reynolds numbers ranging from fruit flies (Re = 110) to small birds (Re = 14,000); published in Lentink and Dickinson [11], Fig. 14.23. Through this comparison at constant Reynolds number and with equal wing shape within one experiment, we found that spinning insect wings outperform flapping ones by up to a factor of 2. This suggests that helicopter-like MAVs fitted with insect-like wings could potentially be up to a factor of 4 more energy efficient than flapping insect-like MAVs, while generating similarly elevated lift forces through a stable leading edge vortex. The estimated factor 4 efficiency improvement results from the combined effect of a factor 2 difference in aerodynamic power and a factor 2 difference due to inertial power loss. Our study suggests, therefore, that combining the spinning motion of helicopters with the wing shape of insects might give the best of both worlds: the high lift of a stably attached leading edge vortex and the high efficiency of a spinning wing. It therefore predicts that a fruit fly-sized winged air vehicle will be most efficient when fitted with spinning wings.

Fig. 14.22 The forces generated by a spinning fly wing depend weakly on Reynolds number. Stroke-averaged lift–drag coefficient polar of a simple spinning fruit fly wing (triangles) at Re = 110, 1400 and 14,000. The angle of attack (amplitude) ranges from 0° to 90° in steps of 4.5°. The lift–drag coefficient polars depend only weakly on Re, especially for angles of attack up to 45°, which correspond approximately with maximum lift. The polars at Re = 1400 and 14,000 are almost identical [11]

Fig. 14.23 The aerodynamic performance of a spinning fly wing is higher than that of a flapping fly wing. Stroke-averaged power factor vs. glide-number polar of a flapping versus spinning fruit fly wing at Re = 110, 1400 and 14,000. Aerodynamic power is proportional to the inverse of the power factor; the highest power factor represents maximum performance. These performance polars are based on the flapper data in Fig. 14.2 and the spinner data in Fig. 14.22. Note that fruit flies, at Re = 110, flap at approximately the same maximum performance level obtained with the more simple flap kinematics; they flap well (based on [11])
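The flapping-versus-spinning comparison can be sketched numerically. Note our assumptions: we use the conventional aerodynamic definitions of power factor (CL^{3/2}/CD) and glide number (CL/CD), and the force coefficients below are illustrative, not the measured data of [11]:

```python
# Sketch: comparing hover efficiency of flapping vs. spinning wings via the
# power factor. Aerodynamic hover power is proportional to 1 / power factor,
# so a higher power factor means less power for the same weight supported.
# CL/CD values are hypothetical placeholders.

def glide_number(cl, cd):
    return cl / cd

def power_factor(cl, cd):
    return cl**1.5 / cd

flapping = (1.8, 1.9)   # (CL, CD), hypothetical flapping fruit fly wing
spinning = (1.6, 0.9)   # (CL, CD), hypothetical spinning fruit fly wing

pf_flap = power_factor(*flapping)
pf_spin = power_factor(*spinning)
print(f"glide number flapping: {glide_number(*flapping):.2f}, "
      f"spinning: {glide_number(*spinning):.2f}")
print(f"power factor flapping: {pf_flap:.2f}, spinning: {pf_spin:.2f}")
print(f"aerodynamic power ratio (flap/spin): {pf_spin / pf_flap:.2f}")
```

With these placeholder coefficients the spinning wing comes out roughly a factor 2 ahead in aerodynamic power, the same order as the measured advantage quoted in the text.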
Acknowledgments We would like to thank the following persons and everyone else who helped us out with the research and designs presented in this chapter. Insect flight research: Michael Dickinson, William Dickson, Andrew Straw, Douglas Altshuler, Rosalyn Sayaman and Steven Fry at Caltech. DelFly design: Michel van Tooren, Rick Ruijsink, Christophe de Wagter, Bart Remes, René Lagarde, Ayuel Kacgor, Wouter Roos, Kristien de Klercq, Christophe Heynze, Gijs van der Veen, Pieter Moelans, Anand Ashok, Daan van Ginneken, Michiel Straathof, Bob Mulder and Meine Oosten at TU Delft. Johan van Leeuwen at Wageningen University. Eric den Breejen, Frank van den Bogaart, Klamer Schutte, Judith Dijk and Richard den Hollander at TNO Defence, Security and Safety. DelFly research: Eric Karruppannan, Evert Janssen, Jos van den Boogaart, Henk Schipper, Ulrike Müller, Gijs de Rue and Johan van Leeuwen at Wageningen University. Rick Ruijsink at TU Delft. Insect wing research: Elke van der Casteele at SkyScan. Adriaan Beukers at TU Delft. Mees Muller and Johan van Leeuwen at Wageningen University.
Appendix 1 Suggested Web Sites for Ordering Micro-Components and Materials

Components/materials: Web site
Smallest RC system: www.microflierradio.com
Ornithopter kits: www.ornithopter.org
Motors, gears, etc.: www.didel.com
Micro-RC shop: www.bsdmicrorc.com
Micro-camera systems: www.misumi.com.tw
Micro-RC shop: www.plantraco.com
Miniature carbon fibre rods: www.dpp-pultrusion.com
Lightweight indoor airplanes: www.indoorduration.com
Micro-RC shop: www.peck-polymers.com
Micro-RC shop (e.g. Mylar): www.wes-technik.de
Alternative LP batteries: www.atomicworkshop.co.uk
Rapid prototyping: www.quickparts.com
Appendix 2 Tested Parameters

Parameter: Test method

Flight speed: Measured using both a stopwatch and video analysis of a straight flight. The flight is performed along a reference red-white tape to measure distance and climb angle. The number of video frames is also used to measure time.

Flapping frequency: Determined by examining peaks in the audio track of the video recording. The audio file was filtered using GoldWave software.

Power: Derived from the flapping frequency (video data) and the torque of the rubber band (torque meter in combination with counting the number of windings in the rubber band).

Rocking: Measured using video analysis with red-white tape as a reference for rocking amplitude.
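The power estimate in the table can be sketched as torque times unwinding rate. We assume, for illustration only, a 1:1 ratio between rubber-band turns and flap cycles (a real drive train has a gear ratio), and both numbers below are hypothetical:

```python
import math

# Sketch: mechanical power from rubber-band torque and flapping frequency.
# Assumption (ours, for illustration): the rubber band unwinds one turn per
# flap cycle; a geared drive would scale omega by the gear ratio.

f_flap = 14.0      # flapping frequency from the video data [Hz]
tau = 2.0e-3       # rubber-band torque from the torque meter [N*m]

omega = 2.0 * math.pi * f_flap   # unwinding rate [rad/s] under the 1:1 assumption
power = tau * omega              # mechanical power [W]
print(f"estimated power: {power * 1e3:.1f} mW")
```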
References

1. Bradshaw, N.L., Lentink, D.: Aerodynamic and structural dynamic identification of a flapping wing micro air vehicle. AIAA conference, Hawaii (2008)
2. Ellington, C.P.: The aerodynamics of insect flight I–VI. Philosophical Transactions of the Royal Society of London, Series B 305, 1–181 (1984)
3. Ellington, C.P., van den Berg, C., Willmott, A.P., Thomas, A.L.R.: Leading-edge vortices in insect flight. Nature 384, 626–630 (1996)
4. Dickinson, M.H.: The effects of wing rotation on unsteady aerodynamic performance at low Reynolds numbers. The Journal of Experimental Biology 192, 179–206 (1994)
5. Dickinson, M.H., Lehmann, F.O., Sane, S.P.: Wing rotation and the aerodynamic basis of insect flight. Science 284, 1954–1960 (1999)
6. Fry, S.N., Sayaman, R., Dickinson, M.H.: The aerodynamics of free-flight maneuvers in Drosophila. Science 300, 495–498 (2003)
7. Jongerius, S.R., Lentink, D.: Structural analysis of a dragonfly wing. Journal of Experimental Mechanics, special issue on Locomotion (2009)
8. Kawamura, Y., Souda, S., Nishimoto, S., Ellington, C.P.: Clapping-wing micro air vehicle of insect size. In: Kato, N., Kamimura, S. (eds.) Bio-mechanisms of Swimming and Flying. Springer Verlag (2008)
9. Kesel, A.B.: Aerodynamic characteristics of dragonfly wing sections compared with technical aerofoils. The Journal of Experimental Biology 203, 3125–3135 (2000)
10. Lentink, D., Gerritsma, M.I.: Influence of airfoil shape on performance in insect flight. American Institute of Aeronautics and Astronautics 2003–3447 (2003)
11. Lentink, D., Dickinson, M.H.: Rotational accelerations stabilize leading edge vortices on revolving fly wings. The Journal of Experimental Biology, accepted (2009)
12. Lentink, D., Dickinson, M.H.: Biofluid mechanic scaling of flapping, spinning and translating fins and wings. The Journal of Experimental Biology, accepted (2009)
13. Okamoto, M., Yasuda, K., Azuma, A.: Aerodynamic characteristics of the wings and body of a dragonfly. The Journal of Experimental Biology 199, 281–294 (1996)
14. Pornsin-Sirirak, T.N., Tai, Y.C., Ho, C.H., Keennon, M.: Microbat: a palm-sized electrically powered ornithopter. NASA/JPL Workshop on Biomorphic Robotics, Pasadena, USA (2001)
15. Rees, C.J.C.: Form and function in corrugated insect wings. Nature 256, 200–203 (1975a)
16. Rees, C.J.C.: Aerodynamic properties of an insect wing section and a smooth aerofoil compared. Nature 258, 141–142 (1975b)
17. Sane, S.P.: The aerodynamics of insect flight. The Journal of Experimental Biology 206, 4191–4208 (2003)
18. Srygley, R.B., Thomas, A.L.R.: Unconventional lift-generating mechanisms in free-flying butterflies. Nature 420, 660–664 (2002)
19. Tamai, M., Wang, Z., Rajagopalan, G., Hu, H., He, G.: Aerodynamic performance of a corrugated dragonfly airfoil compared with smooth airfoils at low Reynolds numbers. 45th AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada, 1–12 (2007)
20. Usherwood, J.R., Ellington, C.P.: The aerodynamics of revolving wings I–II. The Journal of Experimental Biology 205, 1547–1576 (2002)
21. Wood, R.J.: The first takeoff of a biologically inspired at-scale robotic insect. IEEE Transactions on Robotics 24, 341–347 (2008)
22. Wootton, R.J.: Geometry and mechanics of insect hindwing fans: a modelling approach. Proceedings of the Royal Society of London, Series B 262, 181–187 (1995)
23. Wootton, R.J., Herbert, R.C., Young, P.G., Evans, K.E.: Approaches to the structural modelling of insect wings. Philosophical Transactions of the Royal Society 358, 1577–1587 (2003)
Chapter 15
Springy Shells, Pliant Plates and Minimal Motors: Abstracting the Insect Thorax to Drive a Micro-Air Vehicle Robin J. Wootton
Abstract The skeletons of the wing-bearing segments of advanced insects show unexploited potential in the design of biomimetic flapping MAVs. They consist of thin, springy, composite shells, cyclically deformed by large, enclosed muscles to flap the wings as first-order levers over lateral fulcra. The wings are light, flexible, membrane-covered frameworks, with no internal muscles, whose deformations in flight are encoded in their structure; they are ‘smart’ aerofoils. Both thorax and wings are apparently resonant structures, storing energy elastically, and tuned to deform appropriately at their operating frequencies. The form of the basic wing stroke is determined structurally, but is modulated by a series of controlling muscles, contracting tonically to alter the positions of skeletal components over the course of several stroke cycles. Fuel economy through lightness, low wing inertia and cyclic energy storage are all desirable in a flapping MAV. Furthermore, the insects’ peculiar combination of structural automation with modulation has great potential in achieving versatile kinematics with relatively few actuators. Aspects of the thoracic functioning of an advanced fly can be simulated in a simple card flapping model, combining the properties of a closed four-bar linkage with the elastic lateral buckling of a domed shell. Instructions for building this are included. Addition of further degrees of freedom, along with biomimetic smart wings, would seem to allow other crucial kinematic variables to be introduced and controlled with minimum actuation, and ways are suggested how this might be achieved in a sophisticated mechanism. R.J. Wootton () School of Biosciences, Exeter University, Exeter EX4 4QD, UK e-mail:
[email protected]
15.1 Introduction Insects are superlative micro-air vehicles. This is widely recognised by engineers who have chosen to propel small flying robots by means of flapping wings, and insect flight specialists have been extensively consulted and sometimes actively involved in MAV development. Their contributions, however, have so far mainly been in the areas of flapping flight aerodynamics and of flight control. The mechanical processes and components to drive MAVs – the actuators, transmission and effectors – have tended to follow orthodox rather than biomimetic technology: electric motors or piezoelectric actuators driving novel, often beautifully ingenious mechanisms of stiff components, linked by bearings, and flapping rigid or simple flexible wings ([1, 2, 4, 12, 14–18, 22, 30, 31, 36] and see [4] for a useful classification of types); see also Chaps. 13 and 14. Insect ‘technology’ is very different. Muscle has no real parallel in our own engineering; and the insect flight skeleton, which provides both the transmission and the wings themselves, is a system of thin, springy shells, plates and frameworks. The thoraxes of advanced insects – flies, wasps, moths, true bugs – can be thought of as flexible monocoques which are deformed cyclically by the muscles, flapping the wings as first-order levers over lateral fulcra. The wings themselves are smart, deformable aerofoils, whose instantaneous shape through the stroke cycle is determined, largely automatically, by the interaction of their structural elasticity and the inertial and aerodynamic forces they are experiencing. Again there are no obvious parallels in our own technology – yet.
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_15, © Springer-Verlag Berlin Heidelberg 2009
There are good reasons for this. Deformation of thin shells and plates under loading often involves buckling, which can be highly non-linear, is difficult to model and is often destructive. Furthermore, cyclical deformation can lead quickly to fatigue failure. These disadvantages can to some extent be overcome by using appropriate polymers and composites and by new computer modelling and optimisation techniques; but in most engineering contexts traditional methods are adequate and considerably more straightforward to use. Insects, however, overcome these difficulties with ease, and thereby gain considerable advantages. First, thin composite shells and plates have low mass; an advantage in any flying machine, but especially valuable in a flapping system, where the inertia of the moving parts and particularly of the wings needs to be minimised. Second, they are often springy, made of resilient materials in three-dimensional structures that deform elastically and are capable of elastic energy storage – again invaluable in an oscillating system. Elastic storage is a component of some existing MAV mechanisms (e.g. [16, 30, 31]), but achieved in more conventional ways. Third, they seem fairly insensitive to scaling effects; in some prominent groups similar mechanisms operate over a very wide size range, from the unnervingly large to the near-microscopic, with little apparent difference in morphology. These properties should all commend themselves to MAV designers, and it seems possible that insect solutions may have much to offer in the development of small robotic flapping mechanisms. 
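The elastic energy storage argument above can be made concrete with a one-line resonance calculation. Idealising the wing-thorax system as a torsional spring-inertia pair is our simplification, and the numbers are illustrative:

```python
import math

# Sketch: sizing an elastic element so that the wing-thorax system resonates
# at the wingbeat frequency, so inertial energy is stored and returned
# elastically instead of being dissipated each half-stroke. For a torsional
# spring-inertia pair, f_n = (1 / (2*pi)) * sqrt(k / I).

I_wing = 1.0e-9   # wing moment of inertia about the flap axis [kg m^2], hypothetical
f_beat = 30.0     # wingbeat frequency [Hz], hypothetical

k = I_wing * (2.0 * math.pi * f_beat) ** 2   # stiffness for resonance [N m/rad]
f_check = math.sqrt(k / I_wing) / (2.0 * math.pi)
print(f"required stiffness: {k:.2e} N*m/rad (natural frequency {f_check:.1f} Hz)")
```

The quadratic dependence on f_beat is why tuning matters: an insect (or MAV) that changes its wingbeat frequency substantially needs either variable stiffness or accepts operating off resonance.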
This chapter will therefore explore the possibilities and implications of adopting and adapting aspects of the insect thoracic skeleton in designing the transmissions of small, versatile, manoeuvrable low-speed MAVs with some insect flight characteristics and will also examine how an understanding of insect wing functioning might lead to the development of more effective flapping aerofoils than have so far been employed.
15.2 Some Requirements of a Small, Versatile Flying Machine: How Do Insects Manage?

Flying machines, whether natural or man-made, need a power source, actuators, a control system, a transmission and effectors. The first three are outside the scope of this chapter; we are concerned with design requirements of the last two. These need the following.

15.2.1 Low Mass

Flapping flight is expensive, especially at low speeds. Power is at a premium, and minimising both weight and the inertial cost of flapping is a major design consideration. The cuticular skeleton of insects, which also serves as a skin forming almost the entire interface between the insect and its surroundings, is suitably light. It consists mainly of an extraordinarily versatile array of composite materials, chemically and structurally related but varying markedly from place to place in the orientation of the fibrous component, which consists of microfibrils of the carbohydrate chitin, and in the composition and the degree and nature of cross-linking of the protein molecules that provide the matrix of the composite material. These variations provide a range of mechanical properties – stiffness, toughness, hardness, strength, resilience – apparently optimised everywhere for the forces encountered and for the many local functions that the cuticle serves [20, 23].

In the skeleton of the thorax itself, cross-linked cuticle provides areas of rigid, but often springy, plates and curved shells ('sclerites'), which may locally be reinforced by internal ridges or thinned for greater pliancy. Between and continuous with them are areas of soft, compliant cuticle, which may locally include tensile, tendon-like bands joining muscles to plates or plates to plates, and sometimes bands or pads of rubbery, elastomeric protein. The wings themselves consist almost entirely of cuticle, which provides both the supporting, usually tubular, veins and the membrane, whose thickness varies greatly but can be less than 1 μm. With no internal muscles and little contained fluid the wings are usually extremely light, with low moments of inertia – important, since flapping frequencies of several hundred hertz are common and the potential for inertial energy loss is substantial. Rigidity is enhanced by relief: corrugation and camber, raising the second moment of area of cross sections, with little extra cost in material and mass.

15.2.2 Appropriate Kinematics
Appropriate wing kinematics are essential in flapping flight. These become more complex at low speeds or
more precisely at lower values of the advance ratio, J. This is a measure of the ratio of forward speed to flapping speed and is given by J = V/(2ΦnR), where V is the forward velocity, Φ the stroke amplitude in radians, n the stroke frequency and R the wing length [7]. At high J values the wings meet the air at positive angles of attack on both downstroke and upstroke. In fast forward flight the wings of birds and bats undergo minimal change in shape and attitude, and this is also true of the relatively small range of insects, including some butterflies, that fly fast with slow strokes. For this reason it is relatively easy to build a simple fast-flying model whose wings simply flap up and down, and several are available as toys, showing remarkably lifelike flight. The best known are the inexpensive Tim and Timmy birds, by Schylling Toys, whose simply supported wings are flapped by an ingenious variant on the open four-bar linkage driven by wound synthetic rubber loops, widely used in 'ornithopters' by aeromodellers. However, as the value of J decreases, changes in wing attitude and/or shape between the two half-strokes become increasingly necessary if the upstroke is not to exert adverse forces on the wings, with a net downward component. Slow flight at low J values, and hovering, where J = 0, hence require the wings to twist and sometimes to change shape between the half-strokes in order either to minimise upstroke forces or to direct them favourably. Furthermore, in many insects wing twisting, appropriately timed, is itself responsible for generating bursts of high vorticity and hence high lift around the points of stroke reversal, in a range of unsteady aerodynamic mechanisms that may be essential to support the insects' weight and in the fine control of accelerations and manoeuvres [5, 8, 26]. Insects achieve these kinematic skills by unique methods.
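The advance ratio is straightforward to evaluate; the wing and flight numbers below are illustrative:

```python
import math

# Sketch of the advance ratio J = V / (2 * Phi * n * R): forward speed over
# mean flapping speed. Hovering gives J = 0; fast forward flight gives high J.

def advance_ratio(V, Phi, n, R):
    """V: forward speed [m/s], Phi: stroke amplitude [rad],
    n: stroke frequency [Hz], R: wing length [m]."""
    return V / (2.0 * Phi * n * R)

Phi, n, R = math.radians(120.0), 35.0, 0.04   # hypothetical wing and stroke
print(f"slow flight: J = {advance_ratio(0.5, Phi, n, R):.2f}")
print(f"fast flight: J = {advance_ratio(5.0, Phi, n, R):.2f}")
print(f"hovering:    J = {advance_ratio(0.0, Phi, n, R):.2f}")
```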
In the great majority of species, flapping is achieved by cyclic contraction of opposing sets of ‘indirect’ muscles, inserted on thoracic sclerites remote from the wings, rather than on the wings themselves as in flying vertebrates. These muscles alternately pull the top of the thorax down, levering the wings up, and cause it to bow upwards by compressing it longitudinally, levering the wings down (Fig. 15.1a,b). Other movements of the whole wing – promotion and remotion, some basal twisting, control of the wing stroke path, folding and extension – are
Fig. 15.1 Diagrammatic cross section of a generalised insect thorax showing the action of the flight muscles: (a) upstroke and (b) downstroke
achieved by a series of 'direct' muscles, inserted on, or connected by tensile cuticle bands to, sclerites at the extreme wing base. In the great majority of species, the indirect muscles powering the wing stroke are of a physiologically and histologically distinct type, known as 'asynchronous' or 'fibrillar' muscle and unique to insects, whose contraction frequency is not limited by the frequency of the incoming nerve impulses. This allows these insects to operate, if necessary, at stroke frequencies far higher than those attainable with orthodox muscle, which are constrained to values below c. 100 Hz. The direct muscles, on the other hand, appear always to be of the orthodox kind; and in insects with asynchronous power musculature, changes in the wing stroke variables that the direct muscles control are achieved by slower, tonic contractions that modify the positions of the basal sclerites over the course of several stroke cycles. There is some evidence that the direct muscles can influence the shape of the wings at their extreme base, but wing shape is otherwise passively determined from instant to instant during the stroke cycle by the interaction of the wings' elasticity with the inertial and aerodynamic forces that they are receiving. These alterations in shape – twisting, flexion, change of cross section – are an integral part of the flight process, and they are effectively programmed into the wings' structure: in the relief, the form and arrangement of the supporting veins, and the distribution of rigid and flexible components. The wings are in effect 'smart' aerofoils, combining remote and automatic shape control in ways which seem to occur nowhere else in nature or in technology [32, 33, 35].
15.3 Biomimetic Possibilities

Biological systems have limitations as well as advantages – see Vogel [24] for a thoughtful analysis. They
have no access to bulk metals or (above the level of bacterial flagella) to rotational joints, so that movement involving rigid components is restricted to oscillation and reciprocation. Their actuators can only pull, not push. Most importantly, their designs are constrained by their ancestry; they have always to develop from those of their immediate predecessors. They cannot, as engineers can, learn from, adopt, combine and adapt the designs of widely separate groups.

Engineers contemplating biomimetic design solutions therefore face a sequence of decisions. Is the naturally occurring solution to the problem in question the best available? If so, is it feasible and sensible to adopt it? If so, to what extent?

Copying the principles of insect thoracic skeletons appears to have real potential in MAV design. It is at present beyond us to emulate the extraordinary versatility of insect cuticle, but combining appropriate tough, fatigue-resistant polymers and composites in an optimally designed thin shell would seem to be a feasible, and ultimately simple and reproducible, way of building an effective low-mass transmission. MAV wings, too, could usefully copy the smart properties of insect wings; the greater the degree of structure-based automation, the less information needs to be processed from instant to instant, and the fewer actuators are required.

Minimising the number of actuators is a major consideration in MAV development, and insects here give us no cause for optimism. In dragonflies, insects which instinctively appeal to biomimetic engineers because of their appropriate size and superlative flight skills, no fewer than 50 muscles are involved in flight. Even flies (Diptera), which operate with a single pair of aerofoils on one thoracic segment, use 38 muscles [28]. Here at least, close biomimicry seems impracticable. The implications of so many actuators in terms of weight and control in an MAV are obvious, and different solutions are needed.
15.3.1 Kinematic Requirements

A fully manoeuvrable flapping MAV may need to be capable of varying and controlling most or all of the following:

A. Flapping amplitude: Most insects whose flight has been studied appear to use amplitude change as a means of varying the strength of the net aerodynamic force, though the relationship between amplitude and flight velocity is far from simple [6].

B. Stroke path: The principal reason to vary the trajectory of the wing stroke is to control the direction and centring of the mean aerodynamic force vector of each stroke – or, in low-frequency flappers, each half-stroke – relative to the centre of mass of the insect, hence inducing and controlling movement around the three rotational axes of space: pitch, roll and yaw. Control of pitch is particularly important in influencing the angle of the stroke plane to the horizontal: the stroke plane angle – see below. Most insects studied are capable of modifying their stroke path to a considerable extent [3, 7, 10, 25, 27]. The ‘figure of eight’ shape that appears in much of the earlier literature is only one of a range of such paths, and the emphasis on achieving this that has guided some recently published flapping mechanisms is probably quite unnecessary.

C. Stroke plane angle: In many insects the flapping movement of the wing approximates to a plane. The angle of this plane – calculated as the slope of the linear regression of the vertical component of the wing tip path on the horizontal axis [7] – is a major determinant of the direction of the mean force vector, and hence of flight velocity.

D. The degree and timing of wing twisting during the stroke: Twisting, as we have seen, is essential at low values of J, and its timing can be important in unsteady lift mechanisms.

E. The capacity for lateral stroke asymmetry – in amplitude, stroke path, the degree or timing of twist, or in any combination of these – to facilitate manoeuvres.
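The regression definition of the stroke plane angle in item C lends itself to a few lines of code. The sketch below (Python, using a purely hypothetical digitised wing tip path) computes the angle as the slope of the least-squares fit of the vertical tip coordinate on the horizontal one:

```python
import math

def stroke_plane_angle(x, z):
    """Stroke plane angle (degrees), following the regression definition:
    the slope of the least-squares fit of the vertical wing tip coordinate z
    on the horizontal coordinate x, converted to an angle."""
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    slope = (sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
             / sum((xi - mx) ** 2 for xi in x))
    return math.degrees(math.atan(slope))

# Hypothetical digitised wing tip path: a straight stroke inclined at 30 degrees
x = [math.cos(0.1 * i) for i in range(50)]           # horizontal excursions
z = [math.tan(math.radians(30.0)) * xi for xi in x]  # vertical excursions
print(round(stroke_plane_angle(x, z), 1))            # recovers 30.0
```

With real film data the fit would of course be noisy, but the slope-to-angle conversion is the same.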
Control of another possible kinematic variable – stroke frequency – is probably less important. Frequency appears to vary little in individual insects and is probably kept close to the resonant frequency of oscillation, where energy expenditure is minimised [6, 13]. For economy, any flapping mechanism in an MAV should be designed as a resonant system, and its frequency would then be relatively invariant.
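As a back-of-envelope sketch of the resonance argument, a flapping drive can be treated to first order as a spring-mass oscillator, so the stiffness of the springy thorax sets the wingbeat frequency. The masses and frequency below are illustrative assumptions, not measured values:

```python
import math

def natural_frequency_hz(k, m):
    """Undamped natural frequency f = (1/(2*pi)) * sqrt(k/m) of a
    spring-mass oscillator, the simplest model of a resonant wing drive."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def stiffness_for_frequency(f, m):
    """Spring stiffness needed to place the resonance at frequency f (Hz)."""
    return m * (2.0 * math.pi * f) ** 2

# Hypothetical numbers: a 1 mg effective wing/transmission mass tuned to 200 Hz
k = stiffness_for_frequency(200.0, 1e-6)        # about 1.58 N/m
print(round(natural_frequency_hz(k, 1e-6), 1))  # recovers 200.0 Hz
```

Operating away from this frequency would force the actuator to fight the structure's own dynamics, which is the energetic point made above.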
15.4 An Appropriate Thorax Design for Abstraction – Higher Flies

Insect thoraxes are extremely diverse, and the mechanics of very few have been studied in detail. While there is nothing to preclude the abstraction and combining of principles from several different groups, I shall base the following on the blowfly Calliphora, whose flapping mechanics have been investigated in particular depth [9, 19, 28, 29].

Fig. 15.2 The indirect flight muscles of the blowfly Calliphora: (a) the wing elevators and (b) the wing depressors

Figure 15.2 shows semidiagrammatically the positions of the flapping muscles in a sagittal section of the blowfly mesothorax – the flapping body segment. Wing elevation is achieved by contraction of an array of large, paired dorsoventral muscles (Fig. 15.2a), inserted dorsally on the notum – the thoracic roof – and ventrally low down on the pleura – the sides of the thorax. Wing depression is brought about by contraction of huge dorsal longitudinal muscles (Fig. 15.2b), inserted broadly on the anterior part of the domed notum, and posteriorly on the vertical back of the notum and an internally extending plate, the phragma. These, together with a pair of small lateral muscles, not illustrated, are ‘indirect’; remote from the wing itself. Three further pairs of small muscles, not shown, serve to tension the thoracic box, and no fewer than 13 further pairs, inserted directly on the
complex of small sclerites around the wing hinge, are concerned with wing folding, with controlling the positioning and attitude of the wing base, and with modifying the form of the stroke.

It would be absurd to attempt to copy such a complex actuation system in an MAV. The correct approach would seem to be to develop a simplified transmission based on the thoracic skeleton, and to consider from first principles what actuation would be necessary to achieve the necessary kinematics.

The following account of the deformation of the mesothorax largely follows that in [19, 29]. Figure 15.3a shows the mesothorax from the side. The four round dots represent transverse axes, about which the thorax distorts cyclically under the action of the indirect, flapping muscles. The positions of the anterior and ventral axes are approximate; their presence is betrayed by the cyclic widening and contracting of the adjacent pointed clefts, filled with soft cuticle. Posteriorly two more clefts also widen and narrow by the cyclic elevation and lowering of the processes X and Y.
Fig. 15.3 (a) Side view of the mesothorax of Calliphora, showing the three principal processes, X, Y and Z, to which the wing attaches, and the four rotational axes about which the thorax deforms in flight. (b, c) The mesothorax treated as a four-bar linkage, with three out-of-plane coupler bars: (b) side view and (c) from above
These are flexibly linked to the base of the wing; and the latter rests on the process Z, which acts as the fulcrum. In flapping, the up and down movements of X impart some twist to the wing base, so that the latter tends to pronate as the wing is depressed and to supinate as it rises.

The mesothoracic flapping mechanism can be simplified as a three-dimensional four-bar linkage, with fixed coupler bars projecting inwards from three of the four, their ends representing the three processes, X, Y and Z, to which the wing is attached (Fig. 15.3b,c). Z, the fulcrum, is more laterally situated than X and Y; and X is positioned slightly more laterally than Y. The proportions of the bars and the positions of X, Y and Z are critical. If correct, low-amplitude compression at the points indicated by arrows raises X and Y relative to Z and would depress an attached wing. X moves higher than Y, which would tend to twist the wing base and pronate the wing; and Y also moves slightly posteriorly, and closer to X, tending to promote the wing. Hence, as a direct consequence of the structure of the mechanism, a single movement depresses, pronates and promotes the wing; and conversely tension at the indicated points would raise, retract and supinate the wing – all in keeping with actual wing kinematics. Since a four-bar linkage has only one degree of freedom, the system should theoretically be operable with a single actuator, replacing the four sets of indirect muscles in the insect itself.

In the fly the four bars are actually thin shells, and in no case do the ends of the flexible clefts coincide with the axes of rotation, so that some lateral buckling is inevitable. The notum in particular is domed, and Ennos [9] found that contraction of the dorsal longitudinal muscles caused the sides to buckle outwards at the process Y, assisting in wing depression. This is an elastic process, and a potential site of cyclic energy storage.
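The one-degree-of-freedom claim can be made concrete: for a planar four-bar, a single input (crank) angle determines the position of every other joint. The sketch below solves the rocker joint position by circle intersection; the link proportions are entirely hypothetical and stand in only loosely for the fly's thoracic 'bars':

```python
import math

def fourbar_output(g, a, b, c, theta):
    """Position of the rocker joint B of a planar four-bar linkage, given the
    crank angle theta (radians). Ground pivots at (0, 0) and (g, 0); link
    lengths: a crank, b coupler, c rocker. Because the linkage has a single
    degree of freedom, this one input fixes the whole configuration."""
    ax, ay = a * math.cos(theta), a * math.sin(theta)  # crank tip A
    dx, dy = g - ax, -ay                               # vector from A to the far pivot
    d = math.hypot(dx, dy)
    # B lies on the intersection of a circle of radius b about A
    # and a circle of radius c about (g, 0):
    u = (d * d + b * b - c * c) / (2.0 * d)
    h = math.sqrt(b * b - u * u)                       # real only if the linkage assembles
    bx = ax + (u * dx - h * dy) / d                    # one of the two assembly branches
    by = ay + (u * dy + h * dx) / d
    return bx, by

# Entirely hypothetical Grashof crank-rocker proportions:
print(fourbar_output(g=4.0, a=1.0, b=3.5, c=3.0, theta=math.radians(60.0)))
```

Driving theta through a cycle traces the full motion, which is the sense in which one actuator could replace the four sets of indirect muscles.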
Figure 15.4 shows a cardboard flapping model that combines these properties and can be worked by simple pressure of thumb and forefinger. Figure 15.5 is a flat design for the model, with instructions for building it. If properly constructed, the mechanism produces an automated flapping cycle with a degree of appropriately timed promotion, remotion and pronatory and supinatory twisting.

For a manoeuvrable MAV, however, a mechanism with a single degree of freedom is inadequate. The only kinematic variables that could be altered in flight
Fig. 15.4 The thorax modelled as a four-plate linkage with lateral elastic buckling
are stroke amplitude and, within the limits imposed by resonance, frequency. To be able to change the stroke path and the stroke plane angle, additional freedom and actuation are required. Here another insect trick could if necessary be adopted: the tonic contraction of the direct, controlling muscles, acting slowly by altering the positions of thoracic components over several flapping cycles. Two approaches, available to engineers but not to insects, appear to be worth exploring.

1. The stroke path could be made adjustable by introducing active movement to the lateral fulcra.

2. An extra three-dimensional, shell-like bar could be added to the linkage, increasing its mobility to 2 and theoretically necessitating one additional actuator. Interaction of the two actuators, operating at the same frequency, should allow precise instantaneous control of the relative motions of Y and Z, and hence of the wing tip path; using one actuator ‘tonically’ to position the mechanism over several cycles is also an option.

One feature of the four-bar system is lost. Experiment shows that active twisting of the wing base can no longer be appropriately coordinated with the flapping cycle, so that the bar corresponding to the process X, which causes twisting in the four-bar model, needs to be abandoned. Promotion and remotion, the other functions of X, are components of the stroke path and hence controllable by the newly acquired mobility of the system. Active wing twisting would now need separate actuation. However, this may not be necessary. Most of the
Fig. 15.5 Cut-out design for the thorax model: (A–D) cut-out proformas and (E) diagrammatic detail from Fig. 15.4, showing the positions of the cords at the articulation. Assembly instructions: Use thin card, about the thickness of a standard index card. A glue stick is best for adhesive. Paperclips and small staples are useful for the temporary support of glued surfaces, and fine forceps or pliers help in making the wing articulations. Carefully score along the broken lines. Those with longer dashes are actual fold lines; the smaller dashes facilitate the operation of the mechanism. (A) The dorsal component of the model, corresponding to the insect’s notum. Bend the triangular tabs at each end until they overlap as shown and glue them together, forming two triangular, boat-shaped ends. Fold in the tabs marked ‘a’ and ‘b’, but do not glue them at this stage. (B) The ventral component. Glue together the triangular/square tabs at each end, as indicated, creating diamond-shaped, boat-shaped ends, with a transverse crease across each diamond. Fold back the tabs marked ‘c’, but do not glue them at this stage. Flex the model along the line ‘d–d’ through approximately 90◦ and glue the tabs marked ‘e’ to the shaded squares. (C) The leading edge spar of the wing; two should be made. Crease along all the broken lines, fold along the middle one and mould the result into a v-shaped cross section, with the small, cut-out rectangle on
the concave side of the V, near the pointed end. Glue all but the shaded area. Join the dorsal and ventral components of the body (A) and (B) at the ends by gluing the protruding, flexible, triangular part of the diamond-shaped ends of (B) inside the triangular ends of (A), making sure that both components are similarly orientated. Allow to dry. Cut four lengths of thin cord, or strong thread, each ca. 4 cm long. On each side, glue ca. 1.5 cm along the fold line of tab a, with the free length protruding posteriorly at point y, corresponding to point Y on the fly thorax in Fig. 15.3. Glue tab a in place, anchoring the cord. Bring the pointed end of the wing spar (C) up to point y and glue the next section of the cord along the concave face, behind the innermost layer, bringing it out through the cut-out rectangular slot. Complete the gluing of the spar base, which should now be capable of free rotation about y. Glue the remaining end of the cord down the fold line of tab c in the ventral component of the model. The upper end of tab c, point z, forms the fulcrum for the wing, corresponding to point Z in Fig. 15.3. Both y and z should be in flexible contact with the wing spar, with very little cord showing. Glue tab c in place. Glue ca. 1.5 cm of the next piece of cord along the fold line of tab b, with the free end protruding anteriorly at point x. Glue the tab in place. Cut two of the shapes marked (D) from paper and glue the straight sides
to the wing spars. The model is now virtually complete. The wing spars should be level, projecting transversely and slightly above the horizontal. Glue the free ends of the cord extending from point x to the underside of the posterior part of the wing, leaving enough cord between x and the wing to allow free movement. Trim off any surplus cord. To operate the model, hold the anterior part of the ventral component between the thumb and forefinger of one hand. Support the most ventral part of the model (at d–d) with the thumb of the other hand and gently squeeze repeatedly with the forefinger at the posterior extremity of the ventral component.
torsion in insect wings takes place within the span, and appears often to be driven largely by inertial forces on the wings, perhaps with an aerodynamic component. It may well be that in a suitably constructed and tuned wing no active twisting is required.

There remains the need to introduce lateral asymmetry to the wing stroke, in the form of differential amplitude or torsion, in order to manoeuvre. For either of these, additional actuation is essential. Four approaches deserve consideration.
A. In the movable fulcrum option described above, asymmetry would be provided by separate actuation of the two sides.

B. It may be possible to achieve the necessary effects by actively stiffening or warping the shell on one side through several stroke cycles. Pressure applied to one side of the model in Fig. 15.4 can alter the form of the stroke on that side, indicating that it may be possible to optimise this effect by careful design.
C. Another solution available to engineers, though not to insects, may be to allow active movement of the centre of mass of the body relative to the centre of aerodynamic force.

D. The effects may be achievable by remote control of the instantaneous shape of the wings themselves.

We will explore the last solution.
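The mobility bookkeeping used in this section – one degree of freedom for the four-bar transmission, two once an extra bar is added – follows from the Grübler criterion for planar linkages, easily checked in code:

```python
def planar_mobility(n_links, n_joints):
    """Gruebler/Kutzbach criterion for a planar linkage whose joints are all
    single-DOF (revolute) pairs: M = 3*(n - 1) - 2*j, where the fixed frame
    counts as one of the n links."""
    return 3 * (n_links - 1) - 2 * n_joints

print(planar_mobility(4, 4))  # four-bar: 1 DOF, so one actuator suffices
print(planar_mobility(5, 5))  # one extra bar: 2 DOF, needing a second actuator
```

The real thoracic shell is three-dimensional and compliant, so this is only the idealised rigid-link count, but it is the basis of the actuator-minimisation argument.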
15.5 Wing Biomimicry

MAV designers have so far paid little attention to the biomimetic possibilities that insect wings offer. Approaches to modelling their unique properties, which combine structural automation of their kinematics with remote control, have been explored by Wootton et al. [35], with reference to much earlier work; see also Chap. 11.

Insects make extensive use of relief to stiffen their wings. This not only minimises mass but also provides differential flexibility in different planes. For example, a wing with longitudinal pleats is flexible along axes parallel to the pleats, but rigid to bending across them. Transverse bending is only possible if the pleats can be flattened or if the pleats on the inside of the curve can bow upwards into the plane of those on the outside [21]. A wing with a cambered section is more rigid than one with a flat section, and asymmetric in its response to bending forces. Force applied
Fig. 15.6 A cardboard (a) and paper (b) wing, demonstrating how three flexion lines can allow and control supinatory twisting in the upstroke. Explanation in the text
from the convex side flattens the section and allows non-destructive bending with a fairly large radius of curvature, but applied from the concave side tends to increase the camber and hence the rigidity, and eventually leads to local buckling and destructive failure [32]. If the force is centred behind the wing’s torsional axis, this asymmetric response to bending leads to asymmetry in resistance to twisting, which combines bending and torsion; and this simple property appears to be extensively used by insects to facilitate passive twisting in the upstroke while resisting it in the downstroke [11, 34].

In many insects with broadly supported wing bases, or whose fore and hind wings are coupled together, upstroke twisting of the distal part of a cambered wing is achieved by ventral bending along an oblique line of flexibility. The amount of bending and angle of twist are related to the height of the wing camber proximally to the line of flexion, and this seems to be actively controllable by muscles at the wing base.

Bending and torsion of a cambered or pleated wing involve elastic deformation, and it seems that wings too are resonant structures. Like the thoracic box they need to deform correctly at their working frequency, and one can identify morphological features that seem adapted to tune them to do so.

These principles can readily be modelled physically and could certainly be used in designing wings for an MAV. Figure 15.6 shows one such model. Support is provided by the stippled area, which is made of thin card. The leading edge section is curved ventrally and
is crossed by two oblique lines of flexibility, a–a and b–b, made by cutting part way through the card with a sharp blade. The broad basal stippled area is also cambered and is crossed by a longitudinal flexion line c–c that crosses b–b. The rest of the wing is made of paper.

The wing has interesting and unexpected properties, best understood by making the model. The curved section of the leading edge makes it resistant to bending when force is exerted on the ventral, concave side, as it would be in a downstroke. If the force is applied to the convex side, behind the wing’s torsional axis, as it would be in an upstroke, the leading edge bends slightly about a–a, and the wing twists readily towards the tip and could easily assume a positive angle of attack and generate useful upward force. Bending about b–b greatly enhances the twisting, but this is controllable by varying the camber of the basal area around c–c. When the base is nearly flat, bending at both a–a and b–b allows the distal part of the wing to twist dramatically (Fig. 15.6b). Steeper basal camber limits bending to a–a and the wing twists far less.

A wing so designed would twist automatically to some extent in the upstroke, but the extent could be controlled over a wide range by simple basal actuation.
15.6 Conclusion

It seems possible that a complete system comprising a transmission with the properties of a five-bar linkage in a resonant, springy shell, together with a pair of smart wings with actively variable basal camber, could drive an MAV having mechanical control over all the kinematic variables that we have identified as essential for versatile flight, with a rather small number of appropriately designed and located actuators. The mechanism in Figs. 15.4 and 15.5 and the wing in Fig. 15.6 are naïve examples; a sophisticated design would be a deformable monocoque, optimised using modern modelling software, with similar optimised wings. Such a system would have the added advantages of low weight and inertia and relative economy through cyclic elastic energy storage and release. It should moreover be fairly easy to build and replicate, and be capable in time of progressive miniaturisation, as smaller motors, power stores and control circuits become available.
R.J. Wootton
References

1. Avadhanula, S., Wood, R.J., Steltz, E., Yan, J., Fearing, R.S.: Lift force improvements for the Micromechanical Flying Insect. IEEE International Conference on Intelligent Robots and Systems, 1350–1356 (October 2003)
2. Banala, S., Agrawal, S.K.: Design and optimization of a mechanism for out-of-plane insect wing-like motion with twist. Transactions ASME, Journal of Mechanical Design 127, 817–824 (2005)
3. Betts, C.R.: The kinematics of Heteroptera in free flight. Journal of Zoology B 1, 303–315 (1986)
4. Conn, A.T., Burgess, S.C., Ling, S.C.: Design of a parallel crank-rocker flapping mechanism for insect-inspired micro air vehicles. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 221(10), 1211–1222 (2007)
5. Dickinson, M.H., Lehmann, F.O., Sane, S.P.: Wing rotation and the aerodynamic basis of insect flight. Science 284, 1954–1960 (1999)
6. Dudley, R.: The Biomechanics of Insect Flight. Princeton University Press, Princeton, N.J. (2000)
7. Ellington, C.P.: The aerodynamics of hovering insect flight. III. Kinematics. Philosophical Transactions of the Royal Society London B 305, 41–78 (1984)
8. Ellington, C.P.: The aerodynamics of hovering insect flight. IV. Aerodynamic mechanisms. Philosophical Transactions of the Royal Society London B 305, 79–113 (1984)
9. Ennos, A.R.: A comparative study of the flight mechanism of Diptera. Journal of Experimental Biology 127, 355–372 (1987)
10. Ennos, A.R.: The kinematics and aerodynamics of the free flight of some Diptera. Journal of Experimental Biology 142, 49–85 (1989)
11. Ennos, A.R.: Mechanical behaviour in torsion of insect wings, blades of grass, and other cambered structures. Proceedings of the Royal Society London B 259, 15–18 (1995)
12. Galinski, C., Zbikowski, R.: Insect-like flapping wing mechanism based on a double spherical Scotch yoke. Journal of the Royal Society Interface 2(3), 223–235 (2005)
13. Greenewalt, C.H.: The wings of insects and birds as mechanical oscillators. Proceedings of the American Philosophical Society 104, 605–611 (1960)
14. Khan, Z.A., Agrawal, S.K.: Design of flapping mechanisms based on transverse bending mechanisms in insects. Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, Florida, 2323–2328 (2006)
15. Madangopal, R., Khan, Z.A., Agrawal, S.K.: Biologically inspired design of small flapping wing air vehicles using four-bar mechanisms and quasi-steady aerodynamics. Journal of Mechanical Design 127(4), 809–816 (2005)
16. Madangopal, R., Khan, Z.A., Agrawal, S.K.: Energetics-based design of small flapping-wing micro air vehicles. IEEE/ASME Transactions on Mechatronics 11(4), 433–438 (2006)
17. McIntosh, S.H., Agrawal, S.K., Khan, Z.A.: Design of a mechanism for biaxial rotation of a wing for a hovering vehicle. IEEE/ASME Transactions on Mechatronics 11(2), 145–153 (2006)
18. Mukerjee, S., Sanghi, S.: Design of a six-link mechanism for a micro air vehicle. Defence Science Journal 54, 271–276 (2004)
19. Nachtigall, W.: Mechanics and aerodynamics of flight. In: G.J. Goldsworthy, C.H. Wheeler (eds.) Insect Flight, pp. 1–28. CRC Press Inc., Boca Raton (1989)
20. Neville, A.C.: Biology of Fibrous Composites: Development Beyond the Cell Membrane. Cambridge University Press, Cambridge, U.K. (1993)
21. Newman, D.J.S., Wootton, R.J.: An approach to the mechanics of pleating in dragonfly wings. Journal of Experimental Biology 125, 361–372 (1986)
22. Steltz, E., Wood, R.J., Avadhanula, S., Fearing, R.S.: Characterization of the Micromechanical Flying Insect by optical position sensing. Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona 1–4, 1252–1257 (2005)
23. Vincent, J.F.V.: Insect cuticle: a paradigm for natural composites. In: J.F.V. Vincent, J.D. Currey (eds.) The Mechanical Properties of Biological Materials, pp. 183–210. Symposia of the Society for Experimental Biology 34. Cambridge University Press, Cambridge, U.K. (1980)
24. Vogel, S.: Cats’ Paws and Catapults. 382 pp. W.W. Norton and Company, New York (1998)
25. Wakeling, J.M., Ellington, C.P.: Dragonfly flight. II. Velocities, accelerations and kinematics of flapping flight. Journal of Experimental Biology 200, 557–582 (1997)
26. Weis-Fogh, T.: Quick estimates of flight fitness in hovering animals, including novel mechanisms for lift production. Journal of Experimental Biology 59, 169–230 (1973)
27. Willmott, A.P., Ellington, C.P.: The mechanics of flight in the hawkmoth Manduca sexta. I. Kinematics of hovering and forward flight. Journal of Experimental Biology 200, 2705–2722 (1997)
28. Wisser, A., Nachtigall, W.: Functional-morphological investigations on the flight muscles and their insertion points in the blowfly Calliphora erythrocephala (Insecta, Diptera). Zoomorphology 104, 188–195 (1984)
29. Wisser, A., Nachtigall, W.: Mechanism of wing rotating regulation in Calliphora (Insecta, Diptera). Zoomorphology 111, 111 (1987)
30. Wood, R.J.: Design, fabrication and analysis of a 3 DOF, 3 cm flapping-wing MAV. IEEE/RSJ IROS, San Diego, CA (October 2007)
31. Wood, R.J.: The first take off of a biologically-inspired at-scale robotic insect. IEEE Transactions on Robotics 24(2), 341–347 (2008)
32. Wootton, R.J.: Support and deformability in insect wings. Journal of Zoology London 193, 447–468 (1981)
33. Wootton, R.J.: Functional morphology of insect wings. Annual Review of Entomology 37, 113–140 (1992)
34. Wootton, R.J.: Leading edge section and asymmetric twisting in the wings of flying butterflies. Journal of Experimental Biology 180, 105–117 (1993)
35. Wootton, R.J., Herbert, R.C., Young, P.G., Evans, K.E.: Approaches to the structural modelling of insect wings. Philosophical Transactions of the Royal Society London B 358, 1577–1587 (2003)
36. Zbikowski, R., Galinski, C., Pedersen, C.B.: Four-bar linkage mechanism for insectlike flapping wings in hover: Concept and an outline of its realization. Journal of Mechanical Design 127, 817–824 (2005)
Chapter 16
Challenges for 100 Milligram Flapping Flight Ronald S. Fearing and Robert J. Wood
Abstract Creating insect-scale flapping flight at the 0.1 gram size has presented significant engineering challenges. A particular focus has been on creating miniature machines which generate wing stroke kinematics similar to those of flies or bees. Key challenges have been thorax mechanics, thorax dynamics, and obtaining high power-to-weight-ratio actuators. Careful attention to mechanical design of the thorax and wing structures, using ultra-high-modulus carbon fiber components, has resulted in high-lift thorax structures with wing drive frequencies at 110 and 270 Hz. Dynamometer characterization of piezoelectric actuators under resonant load conditions has been used to measure real power delivery capability. With currently available materials, adequate power delivery remains a key challenge, but at high wingbeat frequencies we estimate that greater than 400 W/kg is available from PZT bimorph actuators. Neglecting electrical drive losses, a typical 35% actuator mass fraction with 90% mechanical transmission efficiency would yield greater than 100 W/kg wing shaft power. Initially, the micromechanical flying insect (MFI) project aimed for independent control of wing flapping and rotation using two actuators per wing. At a resonance of 270 Hz, active control of a 2-degrees-of-freedom wing stroke requires precise matching of all components. Using oversized actuators, a benchtop structure has demonstrated lift greater than 1000 μN from a single wing. Alternatively, the thorax structure can be drastically simplified by using passive wing rotation and a
R.S. Fearing () Biomimetic Millisystems Lab, Univ. of California, Berkeley, CA, USA e-mail:
[email protected]
single-drive actuator. Recently, a 60 mg flapping-wing robot using passive wing rotation has taken off for the first time using external power and guide rails.
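The power budget quoted in the abstract reduces to one line of arithmetic: actuator specific power, discounted by the actuator's share of vehicle mass and by transmission efficiency:

```python
def wing_shaft_power_density(actuator_power_density, actuator_mass_fraction, transmission_eff):
    """Wing shaft power per kg of vehicle mass, given actuator specific power
    (W per kg of actuator), the actuator's fraction of vehicle mass, and the
    mechanical transmission efficiency (electrical drive losses neglected)."""
    return actuator_power_density * actuator_mass_fraction * transmission_eff

# Figures quoted in the abstract: >400 W/kg PZT bimorphs, 35% mass fraction, 90% efficiency
print(wing_shaft_power_density(400.0, 0.35, 0.90))  # 126.0 W/kg, i.e. greater than 100 W/kg
```

This makes explicit why the abstract's "greater than 400 W/kg" actuator figure supports its "greater than 100 W/kg" wing shaft claim.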
16.1 Motivation and Background

Flies (order Diptera) are arguably the most agile objects on earth, including all things man-made and biological. They can fly in any direction, make 90◦ turns in tens of milliseconds, land on walls and ceilings, and navigate very complex environments. It is natural, then, to use flies as inspiration for a small autonomous flying robot. However, this bio-inspiration must be done with care. There are certain aspects of insect morphology and physiology which it would not make sense to replicate (reproduction, for example). So our bio-inspiration paradigm hopes to observe natural systems and extract the underlying principles. Then we apply our most advanced engineering techniques in concert with these principles to achieve a desired goal.

Insects control flight with a three-degrees-of-freedom (DOF) wing motion and either one or two pairs of wings. This discussion focuses on two-wing insects for two reasons: first, the agility of Dipteran insects is arguably rivaled only by a few species of Odonata; second, the mechanical complexity of four wings is simply greater than that of two. The three-DOF wing trajectory consists of flapping, rotation, and stroke plane deviation. Flapping (upstroke and downstroke) defines the stroke plane. Rotation consists of pronation and supination about an axis parallel to the spanwise direction. The final DOF is stroke plane deviation; however, this will not be considered, because hovering Dipteran wing motions can be approximately characterized with only two rotational axes.
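A minimal parametrisation of this two-axis hovering trajectory, together with the mean-wingtip-speed Reynolds number it implies, can be sketched as follows. The frequency, amplitudes, and wing dimensions are illustrative fruit-fly-scale assumptions, not measurements:

```python
import math

def wing_angles(t, freq=200.0, flap_amp=math.radians(60.0), rot_amp=math.radians(45.0)):
    """Two-axis hovering wing kinematics: sinusoidal flapping, with
    pronation/supination phased a quarter-cycle later (assumed waveforms)."""
    phase = 2.0 * math.pi * freq * t
    return flap_amp * math.cos(phase), rot_amp * math.sin(phase)

def mean_tip_reynolds(freq, stroke_amp, span, chord, nu=1.5e-5):
    """Reynolds number from the mean wingtip speed U = 2 * stroke_amp * freq * span,
    where stroke_amp is the full peak-to-peak stroke angle in radians and nu is
    the kinematic viscosity of air at roughly 20 C."""
    u = 2.0 * stroke_amp * freq * span
    return u * chord / nu

# Fruit-fly-like (assumed) numbers: 200 Hz, 120 degree stroke, 2.5 mm span, 0.8 mm chord
print(round(mean_tip_reynolds(200.0, math.radians(120.0), 2.5e-3, 0.8e-3)))  # -> 112
```

The result lands in the separated-flow regime of Re ≈ 100–1000 that the chapter associates with hovering Diptera.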
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_16, © Springer-Verlag Berlin Heidelberg 2009
Fig. 16.1 Prototype flapping-wing MAVs, with integrated air frame, thorax, and piezoelectric actuators, but offboard power: (a) UC Berkeley micromechanical flying insect (130 mg); (b) Harvard microrobotic fly (60 mg)

This periodic wing trajectory exists at a Reynolds number of approximately 100–1000, and thus the flow around the wings is mostly separated. Biologists have identified key features of the flow patterns of hovering Diptera and collectively called these ‘unsteady aerodynamics’ [8, 17]. No closed-form analytical description of the unsteady aerodynamics exists, owing to the challenges of capturing all fluid interactions with non-trivial airfoil deformations. Moreover, the vast diversity in wing morphology (e.g., shapes, textures, anisotropic compliance) offers a further impediment to a simple description of flapping-wing flight [5, 6]. Similarly, numerical simulations (solving the Navier–Stokes equations) have proven difficult for broad studies of multiple simultaneous unsteady aerodynamic phenomena. However, simplified wing models and kinematics have been used to explain some aspects of insect flight [18]. Finally, dynamically scaled robotic insect wings have resulted in approximate quasi-steady empirical models using lift and drag coefficients to hide the unsteady terms [8]. These empirically derived models are used throughout the design of a robotic fly due to their relative simplicity; they provide the engineer with a first-order approximation to the forces and moments expected from a pair of flapping wings. This chapter describes the design and fabrication of two classes of robotic flies, shown in Fig. 16.1, using characteristics and models derived from insect flight.

16.2 Design of High-Frequency Flapping Mechanisms
Diptera have two sets of flight muscles: direct and indirect [10] as shown in Fig. 16.2. The indirect flight muscles control flapping and provide the vast majority of power for flight [9]. The direct flight muscles insert directly on the pleural wing process via basalar sclerites [13]. It is thought, therefore, that the direct flight muscles are involved with control of pronation and supination of the wing. Details of insect wing drive systems are provided in Chap. 15.
Fig. 16.2 Simplified drawing of a Dipteran thorax. The indirect flight muscles (dorsoventral and dorsolongitudinal) create the upstroke and downstroke, respectively. The direct flight muscles insert on the base of the wing hinge at the pleural wing process (adapted from [10])
16 Challenges for 100 Milligram Flapping Flight
Fig. 16.3 (a) Half of original four actuator thorax with two independent DOF per wing [14]. (b) Simplified thorax with single-drive actuator and passive wing rotation [19]
The first design for a flapping-wing MAV (Fig. 16.3a) combines power and flight control actuators to provide direct control of pronation and supination. However, it is speculated that dynamic forces acting on the wing during flight also contribute to wing rotation [12]. The second design for a flapping-wing MAV (Fig. 16.3b) uses this latter assumption and relies on passive wing rotation.
16.2.1 Four Actuator Thorax

A wing drive mechanism was designed to provide simultaneous control of wing flapping and rotation angles using a two-input, two-output transmission system shown in Fig. 16.3a. To minimize the reactive power required to drive the wing inertia, the thorax is designed to operate near mechanical resonance, as described by Avadhanula et al. [2]. Each wing is driven
by two piezoelectric bimorph bending actuators [22], which provide an unloaded displacement of ± 250 μm and a blocked force of ± 60 mN [22]. The transmission is designed [14, 15, 20, 3] to convert this high-force, small-displacement motion into an ideal wing stroke of ± 60◦, with an equivalent transmission ratio of approximately 3000 rad m−1. The MFI structure in Fig. 16.3a uses two stages of mechanical amplification followed by a differential element to couple the individual actuator motions into wing flapping and rotation. The first-stage slider-crank converts the actuator's linear displacement into a ± 10◦ input to the planar four-bar. The four-bar has a nominal amplification of 6:1, providing an ideal ± 60◦ output motion. Finally, the two planar four-bars are coupled into a spherical five-bar differential element [2, 3], an approximation to the insect wing hinge. The differential element converts the angle difference between the four-bars into wing rotation, such that a 22◦ angle difference gives rise to a 45◦ wing rotation. The original goal of this design was to achieve independent control of flapping and rotation, providing much greater control moments than even real insects can obtain. As discussed in Sect. 16.6.1, wing inertial and aerodynamic coupling effects dominate the available actuator control effort, making independent control difficult to achieve. The lessons learned from the four actuator MFI motivated the design of a structure with greatly reduced complexity, described next.
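The quoted stage figures can be chained together to sanity-check the amplification. The ideal (lossless) end-to-end ratio comes out somewhat above the quoted equivalent ratio of approximately 3000 rad m−1, which presumably reflects serial compliance and losses in the real linkage:

```python
import math

# Ideal amplification chain of the two-stage transmission (stage figures
# taken from the text; the lossless result is an upper bound).
actuator_disp = 250e-6                 # m, peak piezo tip displacement
crank_angle = math.radians(10.0)       # rad, slider-crank output (four-bar input)
fourbar_gain = 6.0                     # nominal 6:1 four-bar amplification
wing_stroke = crank_angle * fourbar_gain    # ideal +/-60 deg wing stroke
ratio = wing_stroke / actuator_disp         # equivalent ratio [rad/m]

# Spherical five-bar differential: a 22 deg four-bar angle difference
# produces a 45 deg wing rotation, i.e., roughly a 2:1 gain
diff_gain = 45.0 / 22.0
```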
16.2.2 Single Actuator Thorax with Passive Rotation

The design of a flapping-wing MAV based on passive rotation is shown in Fig. 16.3b. Here a central power actuator is responsible for controlling flapping while pronation and supination are passive. The power actuator thus acts to deliver a maximal amount of power to the wing stroke, in an analogous fashion to the indirect flight muscles of the Dipteran thorax. Passive rotation is achieved with a flexure hinge at the base of the wing, at the interface between the wing and the transmission. A custom fabrication method (described in Sect. 16.3) enables the designer to create flexures with arbitrary geometries. Incorporated into the wing hinge flexure are joint stops which limit the rotational motion. Therefore, if adequate inertial and aerodynamic loads are experienced by the wing during flapping, the wing
will rotate to a pre-determined angle of attack for each half-stroke. The statics and dynamics of passive rotation are equally important. Using a pseudo-rigid-body model of the wing hinge flexure, it is simple to estimate the effective torsional stiffness of the wing hinge. Thus, for an expected loading, we can estimate the maximum rotation angle during flapping. Furthermore, the geometry of the flexure defines the limits of rotation imposed by the joint stops. In order to achieve quasi-static rotation, it is important to also consider the dynamics of the rotational DOF. We design the first rotational resonance to be significantly higher than the flapping resonance by tuning the materials and geometry of the wing and flexure hinge. In this way, the baseline trajectory is mechanically hard-coded into the structure, and flapping and rotation can be accomplished simultaneously with a single actuator. Deviations from this baseline trajectory – to control body moments – will be accomplished with smaller actuators which subtly alter the transmission of the thorax, in a similar manner as Diptera [13].
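The pseudo-rigid-body sizing procedure can be sketched as follows. All dimensions, the wing inertia, and the applied torque below are hypothetical values chosen only to illustrate the two design checks (quasi-static rotation angle, and rotational resonance well above the flapping resonance); they are not the actual hinge dimensions.

```python
import math

# Pseudo-rigid-body estimate of the passive wing hinge (all values hypothetical).
E = 2.5e9        # Pa, modulus of the polymer flexure layer (e.g., polyimide)
w_f = 1.5e-3     # m, flexure width
t_f = 7.5e-6     # m, flexure thickness
L_f = 70e-6      # m, flexure length

# Torsional stiffness of a thin rectangular flexure bent about its length
k = E * w_f * t_f ** 3 / (12 * L_f)   # N*m/rad

# Quasi-static rotation under an assumed peak aerodynamic/inertial torque;
# the joint stops would cap this angle in practice
tau = 1.5e-6                          # N*m (hypothetical loading)
theta = tau / k                       # rad

# Rotational resonance check: wing modeled as a flat plate rotating about
# its leading edge, J = m * c^2 / 3
m_wing, chord = 0.4e-6, 1.5e-3        # kg, m (hypothetical)
J = m_wing * chord ** 2 / 3.0
f_rot = math.sqrt(k / J) / (2 * math.pi)   # Hz, first rotational resonance
```

With these numbers the hinge rotates to roughly the desired half-stroke angle of attack, and the rotational resonance sits a few times above a ~110 Hz flapping resonance, satisfying the quasi-static design rule stated above.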
16.3 Fabrication Using Smart Composite Manufacturing

Because of the scale of the components, we require a 'meso'-scale manufacturing method. 'Meso', in this use, refers to scales in between two heavily investigated regimes: the 'macro' scale (traditional machining) and MEMS. More traditional large-scale machining processes are inappropriate for a robotic insect for two fundamental reasons. First, the required resolution, on the order of 1 μm, would be difficult to achieve with standard machining tools. Second, as the components become smaller, the ratio of surface area to volume increases, and thus surface forces such as friction begin to dominate the dynamics of motion. This latter point implies that more traditional mechanisms for coupling rotations (e.g., sleeve or ball bearings) would exhibit increased inefficiency at the scale of insect joints. Alternatively, researchers have created articulated robotic structures using surface [23] and bulk micromachining [11] MEMS processes. However, MEMS devices are limited in terms of material choice, geometry, and actuation. Furthermore, MEMS process steps typically
involve cost-prohibitive infrastructure and significant time delays. For all of these reasons, we require a novel way to construct the articulated and actuated mechanical, electromechanical, and aeromechanical components of a robotic insect. The process must be fast, inexpensive, and repeatable, and must result in structures that can undergo dramatic deformations (> ± 60◦), survive long fatigue life (>10 M cycles), and deliver high power density. The solution is a multi-step micromachining and lamination process called smart composite microstructures (SCM) [21]. In this process, select materials (metals, ceramics, polymers, or fiber-reinforced composites) are first laser micromachined into arbitrary 2D geometries, as shown in Fig. 16.4. This typically involves thin sheets of material and a UV (frequency-tripled Nd:YVO4, 355 nm) or green (frequency-doubled Nd:YAG, 532 nm) computer-controlled laser. Once the materials are cut, they are properly aligned and cured to form the laminate. Alignment can use a number of techniques including folding, fluid surface tension, and mechanical aligners using vision and registration marks (similar to mask aligners). A common constituent lamina material is carbon fiber prepreg. This is a composite material of ultra-high-modulus fibers with a catalyzed but uncured polymeric matrix. During curing (at elevated temperatures using a modified vacuum bagging process), the matrix flows and bonds the various layers of the laminate. Using this process, we can create laminates with a well-defined, spatially distributed compliance (e.g., flexures) which can be folded into any 3D shape with any number of degrees of freedom. Moreover, by including electroactive materials in the laminate – PZT, for example – we can create actuators and actuated structures. This process is the basis for all of the mechanical and aeromechanical components of our robotic flies.
The approach enables the demanding application of a robotic fly and is potentially valuable for a number of other meso-scale robotics applications.
16.4 Actuation and Power

Providing adequate power for lift and thrust is critical for hovering devices. For Dipteran insects, power of 70–100 W kg−1 of body mass is estimated [16], with power plant power density of approximately 200 W kg−1.

Fig. 16.4 (a) Smart composite manufacturing process using laser micromachining and lamination. Gaps are cut in carbon fiber which define flexure joint locations, then an intermediate layer of polyimide is used as the flexure layer, and finally a second layer of carbon fiber is laminated to form the complete structure. (b) Example parts for UCB MFI thorax

Traditional electromagnetic motors, ubiquitous in larger robotic systems, are inappropriate for actuation of a robotic insect. This is due to the scaling arguments made in Sect. 16.3. Additionally, there are practical limitations on the current density in smaller electromagnetic windings which exacerbate the poor scaling of such motors. (The limits of currently available actuators are discussed in Chaps. 14 and 21.) Furthermore, only a simple periodic (or even harmonic) motion is required to drive the wings, so any rotary motor would require a kinematic linkage to convert rotations into flapping motions. Clamped-free piezoelectric bending bimorph actuators were chosen for the MFI based on the desired metrics of high power density, high bandwidth, high efficiency, and ease of construction [22]. These actuators are constructed using the same method as the articulated mechanisms; only here some of the constituent layers are piezoelectric. Figure 16.5 shows a cross section of the actuator. Each of the layers is laser
micromachined, aligned, and cured in a similar manner as the transmission.

Fig. 16.5 Composite piezoelectric bimorph actuator cross section and example actuators at multiple scales

Fig. 16.6 Dynamometer for testing piezoelectric power output at resonance [16]

Although initial lift results using piezoelectric actuators were promising [3], verifying actual power output from the actuators is critical for identifying possible transmission losses or aerodynamic inefficiencies. Extrapolation of actuator performance from DC measurements [22] predicted higher power than was actually observed. Hence, a miniature dynamometer system was developed [16] to measure real actuator output power for simulated damping loads at resonance. Figure 16.6a, b shows the setup for the dynamometer, which uses precision optical sensors to measure the displacement of the drive actuator and
the device-under-test (DUT). Force is measured from the extension of the connecting spring, and equivalent damping is set by adjusting the driver phase. As seen in Figure 16.7, with a 10.1 mg actuator, energy density of 1.89 J kg−1 was obtained. With operating frequency for the MFI of 275 Hz, power density of 470 W kg−1 is obtained, with internal mechanical losses of approximately 10%.
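The arithmetic behind these figures can be checked directly; the quoted ~470 W kg−1 is consistent with the per-cycle energy density scaled by the operating frequency and reduced by the ~10% internal losses:

```python
# Converting the dynamometer measurement into delivered power
# (figures taken from the text).
energy_density = 1.89        # J/kg per cycle, measured for the 10.1 mg actuator
f = 275.0                    # Hz, MFI operating frequency
losses = 0.10                # internal mechanical losses

power_density = energy_density * f * (1 - losses)   # W/kg, ~470
actuator_mass = 10.1e-6                             # kg
output_power = power_density * actuator_mass        # W, ~4.7 mW delivered
```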
Fig. 16.7 Direct measurements of actuator output energy for various simulated loads at resonance, comparing standard and improved actuators

16.5 Airfoils
Another crucial component of a robotic fly is the airfoils. Insect wings exhibit a huge diversity in shape, size, venation pattern, and compliance. (Wing and aerodynamic issues are considered further in Chaps. 11, 12, and 14.) It is currently unknown how these morphological features affect flight: are some of the features of insect wings due to bio-material limitations, or are they instead an indicator of beneficial performance? Due to the complexity of this question, the current airfoils are designed to match key features of appropriately sized Diptera (aspect ratio, second moment of area, length, etc.) while remaining as rigid and lightweight as possible. To achieve this, carbon fiber 'veins' are laser micromachined and aligned to a
Fig. 16.8 Passive wing hinge with joint stop [19]
thin film polymer membrane (1.5 μm thick polyester) and cured. The outline of the wing shape is then cut in a final micromachining step, which results in the wings shown in Fig. 16.8. These airfoils weigh approximately 600–700 μg.
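The shape-matching metrics mentioned above can be computed from a wing planform by discretized blade elements. The elliptic chord profile below is purely hypothetical; a real design would use a digitized Dipteran planform:

```python
# Nondimensional wing-shape metrics used to match robotic airfoils to
# appropriately sized Diptera (chord distribution is illustrative).
R = 15e-3                                   # m, wing length
n = 1000
dr = R / n
# Elliptic-like chord distribution with a 5 mm maximum chord
chords = [5e-3 * max(0.0, 1 - (2 * (i + 0.5) / n - 1) ** 2) ** 0.5
          for i in range(n)]

S = sum(c * dr for c in chords)             # wing area [m^2]
aspect_ratio = R ** 2 / S                   # single-wing aspect ratio
# Nondimensional radius of the second moment of wing area
r2_hat = (sum(c * ((i + 0.5) * dr) ** 2 * dr
              for i, c in enumerate(chords)) / (S * R ** 2)) ** 0.5
```

For this symmetric elliptic profile the second-moment radius lands near 0.56, in the range typically quoted for insect wings, which is why such a profile is a reasonable stand-in for a sketch.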
16.6 Results

The SCM process and piezoelectric actuators have enabled lightweight thorax designs with both active and passive wing rotation. Active control of wing rotation is possible, but it is very sensitive to near-exact matching of the two halves of the thorax structure. Passive wing rotation, while still requiring precise tuning of the wing hinge stiffness and rotational inertia, is more tolerant of manufacturing process variation.
16.6.1 Dynamic Challenges for Active Control of Flap and Rotation

One of the motivations for active control of wing rotation is the potential to achieve enhanced rotational lift effects at the end of wing strokes [8]. Figure 16.9a shows several candidate wing rotation profiles, including a simple sinusoidal profile and profiles with higher harmonics in rotation, which generate a faster wing rotation at the end of each half-stroke. Using a dynamic model of the thorax and wing [14], the required actuator forces can be predicted, as shown in Fig. 16.9b. Interestingly, all the trajectories generate approximately the same lift (within 10%), but the trajectories with faster rotation require five times greater actuation forces, which exceed the capabilities of available actuators. This result indicates that a passive rotation, which approximates a sinusoidal rotation, may provide adequate lift forces with minimal power.
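The force penalty for quick rotations can be illustrated with the inertial torque alone. For a rotation profile with an added third harmonic (the harmonic content below is an illustrative assumption, not the profile used in [14]), the peak angular acceleration, and hence the required actuator force, grows by roughly the factor of five quoted above:

```python
import math

# Peak inertial torque comparison for candidate wing-rotation profiles.
F = 275.0                      # Hz, wing-beat frequency
W = 2 * math.pi * F
A = math.radians(45.0)         # fundamental rotation amplitude

def peak_accel(a3_frac):
    """Max |theta''(t)| over one period for
    theta(t) = A sin(Wt) + a3 sin(3Wt), sampled numerically."""
    a3 = a3_frac * A
    return max(abs(-A * W ** 2 * math.sin(W * t)
                   - 9 * a3 * W ** 2 * math.sin(3 * W * t))
               for t in (i / (1000.0 * F) for i in range(1000)))

slow = peak_accel(0.0)         # pure sinusoidal rotation
quick = peak_accel(0.45)       # sharpened stroke-reversal "flip"
ratio = quick / slow           # inertial-torque (actuator-force) penalty
```

The third harmonic contributes nine times its amplitude to the acceleration, which is why even a modest sharpening of the reversal is so expensive and why passive, near-sinusoidal rotation is attractive.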
16.6.2 MFI Benchtop Lift Test

A benchtop, one-wing version of the MFI was tested using over-sized actuators [14]. Through careful tuning of the amplitude, phase, and frequency of the two actuators (four parameters), an operating point with good wing rotation was found, as shown in Fig. 16.10. Tuning is quite critical, and because the structure is driven at its resonant frequency, controllability is reduced. At 275 Hz, with a flap angle of ± 35◦ and rotation of ± 45◦, a net lift force of 1400 μN was measured using a precision scale. It is interesting to note that the small wing stroke, large wing rotation, and high wing-beat frequency used are more bee-like than fly-like [1]. In addition, the high frequency allows better power density from the actuators, and the short wing stroke reduces strain on the four-bar joints.
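Comparing the measured lift against the target vehicle weight puts this result in context; note that the 1400 μN was measured for a single wing of the 130 mg robot:

```python
# Benchtop lift vs. robot weight for the 130 mg MFI (figures from the text).
g = 9.81
lift_one_wing = 1400e-6                       # N, net lift at 275 Hz
weight = 130e-6 * g                           # N, ~1275 uN for 130 mg
single_wing_margin = lift_one_wing / weight   # ~1.1 from one wing alone
```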
16.6.3 Flapping-Wing MAV with Passive Rotation

An alternative design simplifies the thoracic mechanics and uses passive rotation to achieve the required flapping and rotation trajectories. This entails similar components as the active rotation version, but uses only a single actuator and eliminates the differential mechanism. Once the four mechanical and aeromechanical components of the fly (actuator, transmission, airfoils, and airframe) are complete, they are integrated to form the structure in Fig. 16.1b. The first metric of interest is the trajectory that active flapping with
Fig. 16.9 Comparison between desired wing rotations and required actuator voltage. Quick rotations, such as Case 1, require unachievable actuator power, suggesting only slow rotations may be achievable. Case 2 is a pure sinusoid trajectory and Case 3 is a sinusoidal drive voltage. Slow rotations could be achieved by passive aerodynamic and compliant mechanisms [14]

Fig. 16.10 (a) Benchtop testing of UC Berkeley MFI [14] with wing beat at 275 Hz. (b) Measured wing trajectory in stroke plane

passive rotation can create. This is evaluated by simply driving the wings open loop at the flapping resonant frequency (approximately 110 Hz) and observing the wing motion with a high-speed camera. It was observed that the trajectory is nearly identical to
Diptera in hover (see Fig. 16.11a). The second metric of interest is the thrust produced. This was evaluated by fixing the structure to a custom single-axis force transducer and yielded an average thrust-to-weight ratio of approximately 2:1.
Fig. 16.11 Takeoff of Harvard microrobotic fly
16.6.4 Benchtop Takeoff with Passive Rotation

The final metric for this initial fly is a demonstration of takeoff. The fly was fixed to guide wires that restricted its motion to purely vertical translation; the other five body degrees of freedom were constrained. The wings were again driven open loop and the fly ascended the guide wire as shown in Fig. 16.11b. This demonstrates the ability to produce insect-like wing motion with an integrated insect-size robot, and that these wing motions produce lift forces of similar magnitude to those of a similarly sized fly. However, this does not demonstrate onboard power, integrated sensors, or automatic control, and therefore numerous open research questions need to be addressed to meet the goal of an autonomous robotic insect.
16.7 Conclusion

The two main challenges remaining before free-flying robotic flies can be created are flight control and compact power sources. For control, flight stabilization has been shown in simulation [7], and MEMS sensors (body attitude and rate) of the appropriate mass and power are nearly available off the shelf. While small devices are inherently highly maneuverable due to high angular accelerations (and hence potentially unstable), recent work described in Chap. 17 points to high damping during turns, which may simplify some control issues. Conventional computer vision systems are still too
computationally intensive and slow to use on an insect-size flying robot; however, bio-inspired navigation techniques such as optical flow sensing, as described in Chaps. 3, 5, and 6, are low mass and can provide crucial flight control information, such as obstacle avoidance. Power sources are currently the biggest obstacle to 100 mg free flight. The required power source at the 50 mg size is still about an order of magnitude smaller than what is commercially available. The power required for free flight is estimated in Fig. 16.12. For a 100 mg flyer, 10 mW of wing power would provide 100 W kg−1 of body mass. Considering thorax losses, and assuming efficient charge recovery [4] from the piezoelectric actuator(s), 27 mW of battery power should be sufficient, which corresponds to a reasonable battery power density of about 600 W kg−1, obtainable with current LiPoly battery technology (albeit in a 1 g battery rather than the 50 mg battery desired here). Several key challenges for flapping flight at the 0.1 gram size scale have been met. In particular, thorax kinematics have been designed which can drive wings at high frequency. A new fabrication process, smart composite microstructures (SCM), has enabled lightweight, high-strength dynamic mechanisms with dozens of joints which can operate at hundreds of Hz, yet weigh only tens of milligrams. These structures have low losses, less than 10%. A low-inertia, high-stiffness wing has been shown to generate high lift forces. The SCM process has also enabled high-power-density piezoelectric actuators, which have demonstrated sufficient power density for lift-off of a tethered robotic fly.
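The power-budget figures above chain together as follows; in particular, the 27 mW battery requirement at 600 W kg−1 implies a battery mass of about 45 mg, just inside the 50 mg goal:

```python
# Free-flight power budget for a 100 mg flyer (figures from the text).
body_mass = 100e-6               # kg
wing_power = 10e-3               # W of aerodynamic power at the wings
specific_power = wing_power / body_mass          # 100 W/kg of body mass

battery_power = 27e-3            # W, after thorax losses + charge recovery
battery_density = 600.0          # W/kg, current LiPoly capability
battery_mass = battery_power / battery_density   # kg; ~45 mg vs 50 mg goal
```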
Fig. 16.12 Estimated power budget for free flight of microrobotic fly
We expect that free flight of fly-sized robots should be realizable in the next few years.

Acknowledgments The authors acknowledge the key work of collaborators S. Avadhanula and E. Steltz on thorax and actuator design and characterization. Portions of this work were supported by NSF IIS-0412541. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).
References

1. Altshuler, D., Dickson, W., Vance, J., Roberts, S., Dickinson, M.: Short-amplitude high-frequency wing strokes determine the aerodynamics of honeybee flight. Proceedings of the National Academy of Sciences (USA) 102, 18213–18218 (2005)
2. Avadhanula, S., Wood, R., Campolo, D., Fearing, R.: Dynamically tuned design of the MFI thorax. IEEE International Conference on Robotics and Automation. Washington, DC (2002)
3. Avadhanula, S., Wood, R.J., Steltz, E., Yan, J., Fearing, R.S.: Lift force improvements for the micromechanical flying insect. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2003)
4. Campolo, D., Sitti, M., Fearing, R.: Efficient charge recovery method for driving piezoelectric actuators in low power applications. IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control 50, 237–244 (Mar. 2003)
5. Combes, S., Daniel, T.: Flexural stiffness in insect wings I. Scaling and the influence of wing venation. Journal of Experimental Biology 206(17), 2979–2987 (2003)
6. Combes, S., Daniel, T.: Flexural stiffness in insect wings II. Spatial distribution and dynamic wing bending. Journal of Experimental Biology 206(17), 2989–2997 (2003)
7. Deng, X., Schenato, L., Sastry, S.: Model identification and attitude control for a micromechanical flying insect including thorax and sensor models. IEEE Int. Conf. on Robotics and Automation. Taipei, Taiwan (2003)
8. Dickinson, M., Lehmann, F.O., Sane, S.: Wing rotation and the aerodynamic basis of insect flight. Science 284, 1954–1960 (1999)
9. Dickinson, M., Tu, M.: The function of dipteran flight muscle. Comparative Biochemistry and Physiology 116A, 223–238 (1997)
10. Dudley, R.: The Biomechanics of Insect Flight: Form, Function and Evolution. Princeton University Press (1999)
11. Ebefors, T., Mattsson, J., Kälvesten, E., Stemme, G.: A walking silicon micro-robot. The 10th Int. Conf. on Solid-State Sensors and Actuators (Transducers '99), pp. 1202–1205. Sendai, Japan (1999)
12. Ennos, A.: The inertial cause of wing rotation in Diptera. Journal of Experimental Biology 140, 161–169 (1988)
13. Miyan, J., Ewing, A.: How Diptera move their wings: A re-examination of the wing base articulation and muscle systems concerned with flight. Philosophical Transactions of the Royal Society of London B311, 271–302 (1985)
14. Steltz, E., Avadhanula, S., Fearing, R.: High lift force with 275 Hz wing beat in MFI. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 3987–3992 (2007)
15. Steltz, E., Avadhanula, S., Wood, R., Fearing, R.: Characterization of the micromechanical flying insect by optical position sensing. IEEE International Conference on Robotics and Automation. Barcelona, Spain (2005)
16. Steltz, E., Fearing, R.: Dynamometer power output measurements of piezoelectric actuators. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 3980–3986 (2007)
17. Sunada, S., Ellington, C.: A new method for explaining the generation of aerodynamic forces in flapping flight. Mathematical Methods in the Applied Sciences 24, 1377–1386 (2001)
18. Wang, Z., Birch, J., Dickinson, M.: Unsteady forces and flows in low Reynolds number hovering flight: two-dimensional computations vs robotic wing experiments. Journal of Experimental Biology 207, 449–460 (2004)
19.
Wood, R.: Design, fabrication, and analysis of a 3 DOF, 3 cm flapping-wing MAV. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 1576–1581 (2007)
20. Wood, R., Avadhanula, S., Menon, M., Fearing, R.: Microrobotics using composite materials: The micromechanical flying insect thorax. IEEE Int. Conf. on Robotics and Automation. Taipei, Taiwan (2003)
21. Wood, R., Avadhanula, S., Sahai, R., Steltz, E., Fearing, R.: Microrobot design using fiber reinforced composites. Journal of Mechanical Design 130(5) (2008)
22. Wood, R., Steltz, E., Fearing, R.: Optimal energy density piezoelectric bending actuators. Sensors and Actuators A: Physical 119(2), 476–488 (2005)
23. Yeh, R., Kruglick, E., Pister, K.: Surface-micromachined components for articulated microrobots. Journal of Microelectromechanical Systems 5(1), 10–17 (1996)
Chapter 17
The Limits of Turning Control in Flying Insects Fritz-Olaf Lehmann
Abstract This chapter provides insights into the turning flight of insects, considering this specific behavior from experimental and numerical perspectives. The presented analyses emphasize the need for a comparative approach to flight control that links an insect’s maneuverability with the physical properties of its body, the properties and response delays of the sensory organs, and the precision with which the muscular system controls the movements of the wings. In particular, the chapter focuses on the trade-off between lift production and the requirement to produce lateral forces during turning flight. Such information will be useful not only for a better understanding of the evolution and mechanics of insect flight but also for engineers who aim to improve the performance of the future generation of biomimetic micro-air vehicles.
17.1 Introduction

Insects display an impressive diversity of flight techniques such as effective gliding, powerful ascending flight, low-speed maneuvering, hovering, and sudden flight turns [1–9]. Flies, in particular, are capable of extraordinary aerial behaviors aided by an array of unique sensory specializations including neural superposition eyes and gyroscopic halteres [10–12]. Using such elaborate sensory input, flies steer and maneuver
F.-O. Lehmann () Institute of Neurobiology, University of Ulm, Albert-Einstein-Allee 11, 89081 Ulm e-mail:
[email protected]
by changing many aspects of wing kinematics including angle of attack, the amplitude and frequency of wing stroke, and the timing and speed of wing rotation [13–18]. The limits of these kinematic alterations, and thus the constraints on the aerial maneuverability of a fly, depend on several key factors including the maximum power output of the flight muscles, mechanical constraints of the thoracic exoskeleton, and the ability of the underlying neuromuscular system to precisely control wing movements [19–22]. What we experience as flight behavior of a flying insect reflects the output of a complex feedback cascade that consists of receptors to collect sensory information, the central nervous system and thoracic ganglion to process this information and to produce locomotor commands, and the mechano-muscular system to drive the wings (see also Chaps. 1– 7). Changes in flight behavior result from changes in any of these components such as alterations in the sensory input or in the bilateral symmetry of flight force production caused by wing damage. Due to the predominant role of the compound eyes for navigation, orientation, and flight stability, the vast majority of investigations on flight control in the past have been done on the question of how changes in the visual input change aerial behavior and thus on the question of how the nervous system processes visual information [23–31]. Flight control and maneuverability of flies have been studied by a variety of methods under both free and tethered flight conditions. Although tethered flight reflects only a small fraction of an insect’s total behavioral repertoire in free flight, this technique has proven useful in elucidating the organization of the flight control system in flies such as optomotor behaviors in response to rotating and expanding visual flow fields and object orientation behaviors [32–39]. A major
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_17, © Springer-Verlag Berlin Heidelberg 2009
Fig. 17.1 Free flight body posture and flight muscles in the 1.2 mg fruit fly Drosophila. (A) Rotational axes and forces; (B) subset of the 17 pairs of flight control muscles involved in amplitude control for yaw turning. Neural activation of the basalare b1–b3 causes an increase in stroke amplitude. Muscle spikes in the first (I1–I2) and third (III1–III4) pterale typically result in a decrease in amplitude. R, wing length; CoP, aerodynamic center of pressure on wing; CoG, the fly's center of gravity
disadvantage of tethered studies, however, is the lack of adequate feedback from sensory organs such as the halteres and the antennae. Moreover, tethered flight studies in flight simulators require elaborate computational algorithms for feedback simulation, in order to model the physical behavior of the insect body similar to what would occur in free flight. Thus, despite the difficulty in reconstructing body and wing motion in freely flying animals and in assessing sensory stimuli, free flight measurements are crucial because they capture the behavior of an animal in a more natural context and under natural closed-loop feedback conditions between the fly's sensory and motor system. Besides the control for adjusting translational forces such as thrust and body lift, yaw turning during maneuvering flight has attracted considerable interest, because it determines flight heading and is thus of augmented ecological relevance for foraging behavior and search strategies in an insect. This chapter attempts to summarize some of the most important factors for yaw turning control in an insect, such as the time course of yaw torque production and thus the temporal changes in wing motion, the constraints on sensory feedback, and the physics of turning. The chapter especially highlights the significance of the ratio between the body mass moment of inertia and the frictional damping between the body structures and the surrounding air. Experimental results and numerical predictions will further demonstrate some of the most important trade-offs in flight control and also show how the maximum locomotor capacity constrains stability and maneuverability at elevated muscle mechanical power output during flight of the small fruit fly Drosophila melanogaster (Fig. 17.1).
17.2 Free Flight Behavior and Yaw Turning

In many insects, including the fruit fly, straight flight in a stationary visual environment is interspersed with sudden flight turns termed flight saccades. Flight saccades are maneuvers in which the fruit fly quickly changes flight heading by 90◦–120◦ within 15–25 wing strokes (75–125 ms, Figs. 17.2 and 17.3) [3, 5, 7]. Due to their high dynamics, these maneuvers differ from other forms of turning flight such as continuous, smooth turning behavior with angular velocities well below 1000◦ s−1. There is an ongoing debate on the exact angular velocity profile during saccadic turning, because this profile critically depends on at least three factors: the time course of yaw torque production, the moments of body inertia, and the frictional damping on body and wings [40]. Although the maximum angular velocity of approximately 1600◦ s−1 within a saccade is independent of the forward speed in fruit flies, flight saccades are supposedly not uniform maneuvers in terms of a fixed motor action pattern. Instead, measurements have shown that the total turning angle within a flight saccade varies between 120◦ and 150◦, potentially depending on the visual input the animals have experienced prior to the turn [40].
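The interplay of the three factors above can be sketched with a first-order yaw model, J dω/dt = τ(t) − Cω, driven by a rectangular torque pulse. All parameter values below are hypothetical, order-of-magnitude choices meant only to illustrate how the inertia-to-damping ratio J/C shapes the saccade velocity profile, not to reproduce measured fruit fly data:

```python
import math

# First-order yaw dynamics during a saccade (all parameters hypothetical).
J = 5e-13        # kg*m^2, yaw moment of inertia
C = 1e-11        # N*m*s/rad, frictional (aerodynamic) damping coefficient
TAU = 2e-10      # N*m, yaw torque applied during the pulse
T_PULSE = 0.05   # s, duration of the torque pulse (~saccade time scale)

dt = 1e-4
w = 0.0          # rad/s, yaw rate
heading = 0.0    # rad, accumulated turn angle
for i in range(int(0.15 / dt)):              # simulate 150 ms
    torque = TAU if i * dt < T_PULSE else 0.0
    w += dt * (torque - C * w) / J           # explicit Euler step
    heading += dt * w

# tau/C sets the terminal yaw rate; J/C (here 50 ms) sets rise/decay time
w_terminal = TAU / C                         # rad/s
```

With a large J/C the yaw rate coasts after the torque ends (inertia-dominated turning); with a small J/C it tracks the torque and decays quickly (friction-dominated turning), which is exactly the distinction the chapter develops.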
17 The Limits of Turning Control in Flying Insects
Fig. 17.2 Flight behavior and forces in freely flying fruit flies. (A) Saccades (white dots) of a 2.4 s flight path within a cylindrical, 170 mm high, free flight arena surrounded by a random-dot visual panorama. Sample time = 8 ms. (B) Forces during flight, (C) time traces of forces acting on the animal (upper traces) and total flight force production (lower trace). Gray dots indicate the times at which saccades occur in (A). (D) Total force and ratio between force components within flight recordings that fell within the top 10% maximum of total force. Fv , vertical force (lift); Fl , lateral force (centripetal); Fh , horizontal force (thrust); Ft , total force; Fg , gravitational force. D, body drag; rp , flight path radius; CoR, center of radius
A potential threat to an animal during fast yaw turning at elevated forward velocity is the occurrence of centrifugal forces that cause side-slip motion [41]. Drifting during turning is a serious problem when the insect must quickly change its flight course, for example, in response to an approaching obstacle. Besides the centering response, obstacle avoidance
behavior is of great ecological significance because it allows an insect to safely cruise through dense vegetation and also to escape from predators [42, 43]. Thus, to avoid sideslipping and to stay on track within the saccade, insects must compensate for centrifugal forces by producing centripetal forces. Many insects achieve this behavior by performing bank (roll) turns,
Fig. 17.3 Free flight saccade visualized by high-speed video. (A) Images (top view) show body orientation and wing position at ventral (0–60 ms) and dorsal (80–120 ms) stroke reversals. (B) Mean stroke amplitude (ΦS) supports body weight. Changes in stroke amplitude on each body side result from passive components (ΦP) due to body rotation and active components (ΦA) due to changes in the activity of flight control muscles. Cycling period of a stroke cycle is ~5 ms (200 Hz)
in which the mean flight force vector tilts sideways, toward the inner side of the flight curve [44]. In fruit flies, side-slipping movements are rare, and even at their maximum forward velocity of 1.22 ms−1 the animals are apparently able to completely compensate for centrifugal forces while turning [5]. In terms of total force balance, lateral force production is of particular significance, because in the fruit fly the production of centripetal forces consumes up to 70% of the locomotor reserves [5, 19, 22].
F.-O. Lehmann

17.3 Forces and Moments During Turning Flight

The physical parameters predominantly determining changes in forward, upward, and side-slip velocities of an insect are the frictional damping coefficient on body and wings and body inertia [3, 40]. Turning rate, by contrast, is determined by the frictional damping coefficient and the mass moment of inertia of the body. The former measure determines air friction during body motion, where a higher frictional damping coefficient results in a lower peak turning velocity at constant torque production. Mass moment of inertia, in turn, determines how quickly the animal may alter its angular velocity around the three body axes: yaw, pitch, and roll (Fig. 17.1A). Elevated inertia, moment of inertia, and frictional damping potentially favor stable flight because these factors reduce angular and translational accelerations (inertia) and also maximum angular and translational velocities (friction) in an insect. A major benefit of passive frictional damping is that it reduces the computational load needed to process sensory feedback signals by the nervous system and decreases the required precision for wing control (see Sect. 17.4.2.3). The benefit of frictional damping for flight stabilization has also been shown in birds [45]. For example, wing-based damping during turns driven by wing amplitude asymmetry is an important mechanism for roll dynamics during aerodynamic reorientation, because the roll damping coefficient in cockatiels is two to six times greater than the coefficient typical of airplane flight dynamics [45]. However, high moment of inertia and frictional damping also limit flight agility [1–9]. Flight behavior is thus a compromise between the need to stabilize the animal body and the need to allow quick maneuvers, for example, in order to escape from predation or to avoid collisions with nearby objects in the environment.
17.3.1 Modeling Friction and Moment of Inertia

In general, an insect's instantaneous angular velocity ω during yaw turning at a given time t results from three major components: the torque T produced around the vertical body axis, the moment of inertia I given by the body mass distribution of the animal, and the frictional damping coefficient C of body and wings [3, 40]. This relationship thus becomes

T(t) = I ω̇(t) + C ω(t).  (17.1)
A rough estimate of the moment of inertia may be derived when assuming that the shape of the insect body can be approximated by a long thin cylinder with a given length l that rotates around its vertical axis at 50% length (Fig. 17.1A). Inserting body length and body mass mb , the moment of inertia of an insect may be easily derived from the simple equation
I = mb l² / 12.  (17.2)
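Equation (17.2) can be checked with a few lines of Python. The body mass and body length below are illustrative round values for Drosophila (the ~1.2 mg body weight quoted later in this chapter, and an assumed ~2.5 mm body length), not measurements from this study; the result lands in the same sub-picoNewton-meter-second-squared range as the estimate given in the text.

```python
def moment_of_inertia(body_mass_kg, body_length_m):
    """Eq. (17.2): yaw moment of inertia of a thin rod rotating about its midpoint."""
    return body_mass_kg * body_length_m**2 / 12.0

m_b = 1.2e-6   # body mass, kg (~1.2 mg, from the text; illustrative)
l   = 2.5e-3   # body length, m (~2.5 mm, assumed)

I = moment_of_inertia(m_b, l)
print(f"I = {I / 1e-12:.2f} pN m s^2")
```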
Both the mass moment of inertia and the frictional damping coefficient CB of such a cylinder are relatively small; in the fruit fly the two measures yield only 0.52 pNms² and 0.52 pNms, respectively. By contrast, the wings' contribution to friction during yaw rotation is more complicated because this measure depends on the complex aerodynamics of the flapping wings. In general terms, frictional damping due to flapping wings depends on the differences in profile drag between the up- and downstroke and between the left and right wing. Drag, in turn, depends on stroke frequency, amplitude, the ratio between up- and downstroke, wing length, mean drag coefficient, and also on the location of the center of pressure on each wing (Fig. 17.1A). The combined damping coefficient for an insect is the sum of wing-based damping and body damping CB, and we can write this sum as
C = D̄Dif r̂CP (R + 0.5 W) / ω̄ + CB,  (17.3)
in which the numerator is equal to the frictional moment given by the product between the mean difference in profile drag between the two wings, D̄Dif, and the length of the moment arm, where R is wing length, W is thorax width, and r̂CP is the relative location of the wing's center of pressure [40]. The denominator is the mean angular velocity during turning. Deriving D̄Dif is challenging, because this measure depends on wing velocity in each half stroke. Mean wing velocity is proportional to the product between stroke frequency and stroke amplitude. The latter measure, however, changes during turning due to two processes: first, passively in the global coordinate system, as the result of body rotation, and second, actively as a result of the bilateral difference in wing beat amplitude used for yaw torque production (Fig. 17.3). Experiments in both tethered animals flying in a virtual reality flight simulator and in free flight show that during counterclockwise (clockwise) turning flies increase (decrease) their wing beat amplitude on the right body side and decrease (increase) the amplitude on the left side (Fig. 17.4A). The first step in deriving the damping coefficient for yaw turning is thus to derive wing velocity in each half stroke. According to Ellington, the mean wing velocity of a wing segment in each half stroke ū(r) at normalized distance r (0–1) from the wing base is proportional to the product between the dimensionless wing velocity profile during up- and downstroke, |dφ̂/dt̂|, stroke frequency, mean stroke amplitude ΦS, and wing length R [46]. Assuming that both the dimensionless velocity profile and stroke frequency remain constant during turning and
Fig. 17.4 Stroke amplitude control by tethered fruit flies flying inside a virtual reality flight simulator. (A) The flies try to compensate for the motion of the visual panorama displayed inside the arena by actively modulating their stroke amplitude. (B) In vivo working range (kinematic envelope) of a tethered fruit fly responding to visual stimulation while actively controlling the azimuth velocity of the visual panorama using the bilateral difference between left and right stroke amplitude. At maximum flight force production (arrow), the control of moments is compromised because the animal is restricted to a unique combination between stroke amplitude and frequency [50]. Hyperbolic lines represent mechanical power isolines of the indirect flight muscle [22]. (C) Mean stroke amplitude (closed circle) and temporal variance (open circle) of left and right stroke amplitudes plotted against relative flight force production. Gray areas indicate standard deviations
only stroke amplitude changes due to passive rotation of the animal body (ΦP) and active steering (ΦA), the velocity at the wing's center of pressure for a half stroke can be written as

ū = ½ |dφ̂/dt̂| (n/j) (ΦS + ΦP + ΦA) r̂CP R.  (17.4)
Due to the bilateral symmetry between left and right wing and the ratio between up- and downstroke, passive and active amplitudes have different signs in each half stroke (Fig. 17.3). If we consider a counterclockwise turn of the animal, ΦP is positive for the left (right) wing during the up (down)stroke and negative during the down (up)stroke (Fig. 17.3B). Since a counterclockwise turn requires an increase in stroke amplitude on the right body side and a decrease on the left side, the active component ΦA is negative (positive) in the left (right) wing in both half strokes. Another modification in the above equation is the effective frequency in each half stroke that depends on the time periods of up- and downstroke. Effective frequency is the ratio between stroke frequency and relative time spent during the up- and downstroke (t̂ = fraction of downstroke in a complete stroke cycle {0–1}). Thus, the parameter j in the above equation is equal to t̂ during the downstroke and amounts to 1 − t̂ during the upstroke. In a second step, we employ a numerical model that converts velocity estimates into mean wing profile drag D̄. The simplest approach to derive D̄ for each wing is to use Ellington's 2D quasi-steady aerodynamic model based on wing velocity squared and to lump unsteady aerodynamic effects, such as the development of a leading edge vortex, together into a mean drag coefficient [5, 46]. Despite ignoring 3D flow conditions, modified versions of this approach have successfully been used in insect flight research (for Drosophila see [18]). By combining the various velocity estimates with the quasi-steady model, we may derive the desired drag residual during turning from the differences in drag between left and right wing within the entire stroke cycle, whereby drag is positive during the downstroke and negative during the upstroke. We finally get the following expression:
D̄Dif = ½ ρ C̄D,Pro S [t̂ ū²L,D + (1 − t̂) ū²R,U − (1 − t̂) ū²L,U − t̂ ū²R,D],  (17.5)
Table 17.1 Description of modeling parameters for the fruit fly [40]

Symbol | Description | Value
ΦS | Wing stroke amplitude for weight support | 140° (2.44 rad)
n | Wing stroke frequency | 218 Hz
ω̄ | Saccadic turning rate | 1600° s⁻¹ (27.9 rad s⁻¹)
t̂ | Relative duration of downstroke | 0.538
ΦP,U | Passive wing stroke difference, upstroke | 4.0° (0.069 rad)
ΦP,D | Passive wing stroke difference, downstroke | 3.1° (0.054 rad)
ΦA | Mean active wing stroke difference | 5° (0.087 rad)
r̂CP | Normalized distance to center of pressure | 0.7
R | Wing length | 2.47 × 10⁻³ m
W | Thorax width | 1.0 × 10⁻³ m
ρ | Density of air | 1.2 kg m⁻³
C̄D,Pro | Mean profile drag coefficient of wing | 1.46
S | Surface area of one wing | 2.0 × 10⁻⁶ m²
|dφ̂/dt̂| | Dimensionless wing velocity | 4.4
where ūL,D (ūR,D) and ūL,U (ūR,U) are wing velocities for the down- and upstroke of the left (right) wing, respectively (see Table 17.1 for abbreviations).
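The structure of Eqs. (17.4) and (17.5) can be sketched in Python with the values from Table 17.1. The sign assignment of ΦP and ΦA follows the counterclockwise-turn convention described above; absolute magnitudes should be treated with caution, since they are sensitive to these sign conventions, so the sketch is used here only to illustrate that symmetric wing motion yields zero drag difference while asymmetric motion does not.

```python
def half_stroke_velocity(phi_rad, j, n=218.0, dphi_dt=4.4, r_cp=0.7, R=2.47e-3):
    """Eq. (17.4): mean velocity at the wing's center of pressure for one
    half stroke. j is t_hat for the downstroke, (1 - t_hat) for the upstroke."""
    return 0.5 * dphi_dt * (n / j) * phi_rad * r_cp * R

def drag_difference(phi_s, phi_pu, phi_pd, phi_a, t_hat=0.538,
                    rho=1.2, cd_pro=1.46, S=2.0e-6):
    """Eq. (17.5): mean profile-drag difference between left and right wing
    for a counterclockwise turn (left amplitude reduced, right increased)."""
    u_ld = half_stroke_velocity(phi_s - phi_pd - phi_a, t_hat)        # left, down
    u_lu = half_stroke_velocity(phi_s + phi_pu - phi_a, 1.0 - t_hat)  # left, up
    u_rd = half_stroke_velocity(phi_s + phi_pd + phi_a, t_hat)        # right, down
    u_ru = half_stroke_velocity(phi_s - phi_pu + phi_a, 1.0 - t_hat)  # right, up
    return 0.5 * rho * cd_pro * S * (t_hat * u_ld**2 + (1 - t_hat) * u_ru**2
                                     - (1 - t_hat) * u_lu**2 - t_hat * u_rd**2)

# Table 17.1 values (rad): amplitude 2.44, passive 0.069/0.054, active 0.087
d_turn = drag_difference(2.44, 0.069, 0.054, 0.087)
d_straight = drag_difference(2.44, 0.0, 0.0, 0.0)
```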
17.3.2 The Consequences of High Frictional Damping

The ratio between moment of inertia and aerodynamic damping (I/C) is a critical measure for the control of body dynamics by the neuromuscular system of an insect. At high ratios, when flight is dominated by the forces due to the distribution of body mass, turning acceleration is low but turning rate steadily increases while the animal produces rotational moments. At low ratios, by contrast, angular acceleration is high but angular velocity saturates with increasing torque production [40]. These relationships have two consequences: Insects with an elevated moment of body inertia compared to frictional damping potentially benefit from an increase in flight-heading stability, but may lose control at high turning rates at which the transfer function
of the sensory structures becomes highly non-linear. Moreover, once the animal has initiated yaw moments, turning rate decreases only slowly when torque production vanishes. In this case, an insect must actively brake in order to terminate its turning by producing counter torque as shown in Fig. 17.5B. By contrast, small insects such as fruit flies that rely on relatively low I/C ratios are comparatively unstable around their rotational axes because their angular acceleration is high at low torque production [40]. These insects need a precise flight apparatus, allowing either a higher spatial resolution in the control of wing amplitude and other kinematic parameters, or a higher temporal resolution by decreasing the latency of sensory feedback. High damping, however, allows an animal to passively terminate its turn without applying any kind of counter torque. In insects flying at relatively high damping coefficients,
Fig. 17.5 Numerical modeling of yaw turning at two ratios between mass moment of inertia and frictional damping (I/C) on body and wings [40]. (A) Changes in stroke amplitude at small I/C ratio, corresponding to what is predicted for the fruit fly. (B) The fly must actively terminate yaw turning (counter torque) at high I/C ratio when frictional damping is considered on the body alone [3]. (C) Turning rates at low (black) and high (gray) frictional damping (cf. A and B) in response to a single 30 ms, 0.8 nNm yaw torque pulse in the fruit fly. (D) Counter torque at the end of the saccade (lower graph) produces negative turning rates when the model includes frictional damping on wings (gray, upper graph). Without frictional damping (black, upper graph), the production of counter torque reduces turning rate but does not totally terminate saccadic rotation. Values for the fruit fly are I = 0.52 pNms²; C = 0.52 and 54 pNms for body damping alone (in B) and combined frictional damping on body and wings (in A), respectively
the production of high counter torque might even partly annihilate directional changes initiated at the beginning of the flight turn (Fig. 17.5D). When inserting the values of Table 17.1 for the fruit fly into the numerical model (Eq. 17.5), we obtain a total frictional damping coefficient C of 54 pNms due to the flapping wings that is approximately 100 times the value estimated from the body alone (0.52 pNms). Combining all estimates of moment of inertia, torque production, and damping in the fruit fly, the damping term in torque equation (17.1) is roughly 4–16 times larger than the inertia term. This finding is also confirmed by an elaborate 3D computational fluid dynamic (CFD) study on Drosophila yaw turning [47], suggesting a CFD torque profile similar to that shown in Fig. 17.6B (no active braking), rather than to a biphasic torque profile that includes active braking [3]. Consequently, in fruit flies friction plays a larger role for yaw turning behavior than moment of inertia.
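The damping-dominated dynamics described above can be illustrated by integrating Eq. (17.1) with a forward-Euler scheme, using the fruit-fly estimates I = 0.52 pNms² and C = 54 pNms from the text. The torque pulse amplitude is an assumption, chosen here so that the steady-state rate T/C roughly matches the ~1600° s⁻¹ saccade peak quoted earlier; the resulting passive decay time constant I/C of about 10 ms (roughly two stroke cycles) shows why no active braking is needed.

```python
import math

I = 0.52e-12   # yaw moment of inertia, N m s^2 (0.52 pN m s^2, from the text)
C = 54e-12     # combined damping coefficient, N m s (54 pN m s, body plus wings)
T0 = 1.5e-9    # torque pulse, N m -- assumed so that T0/C is near the
               # ~1600 deg/s (27.9 rad/s) saccade peak quoted in the text

dt = 1e-4          # time step, s
t_pulse = 0.030    # pulse duration, s (30 ms)
t_end = 0.120      # simulated time, s

omega = 0.0
trace = []
t = 0.0
while t < t_end:
    T = T0 if t < t_pulse else 0.0
    omega += dt * (T - C * omega) / I   # Euler step of Eq. (17.1)
    trace.append(omega)
    t += dt

peak = max(trace)
print(f"peak turning rate: {math.degrees(peak):.0f} deg/s")
print(f"passive decay time constant I/C: {I / C * 1e3:.1f} ms")
```

With body damping alone (C = 0.52 pNms), the same loop gives a time constant of about one second, which is why active counter torque would then be required to stop the turn.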
17.4 Balancing Aerodynamic Forces During Maneuvering Flight

Based on the assumption that flight in fruit flies is dominated by frictional forces rather than by inertia, the forces acting on the fly body during maneuvering flight may be derived from a simple numerical model for force balance. The development of such a model is beneficial for several reasons: First, it gives insight into how total aerodynamic forces are distributed among horizontal, vertical, and lateral components; second, it allows predictions of the relationship between flight path curvature and forward velocity; and third, it allows estimations of the maximum locomotor capacity in a freely flying animal. The following sections describe the various force components required for flight and show how force and velocity components may be determined in a freely cruising animal.
17.4.1 Forces and Velocities
Fig. 17.6 Modeling of yaw torque during a saccadic turn of a freely flying fruit fly. (A) Torque development is calculated using a simplified velocity profile of turning rate (black) and turning angle (gray), measured in freely flying flies (cf. Fig. 17.2A) [5]. (B) At its natural frictional damping coefficient of 54 pNms on body and wings, the fruit fly does not depend on active braking (i.e., the production of counter torque) to terminate the saccade (gray line). Assuming damping on the fly body alone (1% of wing damping coefficient), however, active braking is required to terminate the saccade (black)
Flight velocity and thus flight direction of an insect depends on the ratio between vertical force (body lift), horizontal force (thrust), and lateral force (sideslip) multiplied by normalized friction and on the moments around these vectors: yaw (vertical axis), roll (horizontal axis), and pitch (lateral axis, Fig. 17.1A). While forces for translation are considered to play a major role in the force balance of the fruit fly, the forces needed to generate torque are negligible. This assumption is supported by torque measurements obtained during optomotor behavior in tethered flies flying inside a virtual reality flight arena. Under these conditions, fruit flies typically vary yaw torque by not more than ±1.0 nNm (Fig. 17.5C,D) [48]. At a moment arm of 65% wing length from the wing base for the wing's center of pressure (equal to the center of force, Fig. 17.1A) [49], this moment requires forces of not more than 0.50 μN, or approximately 3% of the maximum flight force in this animal. Thus, the production of moments around the three body axes should require only minor modifications in instantaneous force production by the flapping wings. For the above reason, we may model flight assuming that the total flight force Ft produced by both wings is equal to the vector sum between vertical-, horizontal-,
and lateral forces (Fv, Fh, and Fl, respectively), written as

FT = √(Fh² + Fv² + Fl²).  (17.6)

To transform these forces into velocity estimates, we use a simplified approach assuming frictional damping at low Reynolds number. Reynolds number for body motion depends on forward velocity and thus varies between values close to zero at slow forward flight and, in the fruit fly, approximately 64 at maximum cruising speed (mean body width = 0.8 mm, kinematic viscosity of air = 15 × 10⁻⁶ m²s⁻¹). Although these values suggest a conventional 'force–velocity-squared' relationship, measurements on tethered flies show that flight force production is also linearly correlated with wing velocity (non-squared) given by the product between stroke amplitude and frequency (Reynolds number = 120–170) [18]. We may thus derive a reasonable approximation of forces acting on the fly body by using Stokes' law and normalized friction [40]. Consequently, to estimate maximum flight velocities at a given flight path curvature of the insect, we replace thrust in Eq. (17.6) by body drag, given as the product between normalized friction on body and wings and forward speed, and lift by the sum of gravitational force and drag on the fly body when moving in the vertical. Lateral force, needed to keep the animal on track during yaw turning, is equal to centrifugal force because sideslip is negligibly small in fruit flies [40]. The latter force equals the product between the path radius rp, horizontal velocity, and body mass mb. The three forces are then

Fl = mb uh² rp⁻¹,  Fh = k uh,  and  Fv = k uv + mb g,  (17.7–17.9)

respectively, where g is the gravitational constant. If we now replace the force terms in Eq. (17.6) by the expressions in Eqs. (17.7–17.9), maximum horizontal and vertical flight velocities (uh and uv, respectively) at various flight conditions can be derived from the following two equations:

uh = (1/mb) √{ ½ [ √( rp⁴ k⁴ − 4 rp² mb² (Fv² − FT²) ) − rp² k² ] }  (17.10)

and

uv = [ √( FT² − k² uh² − mb² uh⁴ rp⁻² ) − mb g ] / k.  (17.11)

The fly's turning velocity ω is given by the product between horizontal velocity and path curvature,

ω = uh rp⁻¹.  (17.12)

The most critical measure in these equations is normalized friction on body and wings during forward flight, because this parameter determines maximum horizontal and vertical flight velocities. In contrast to yaw turning, estimations of normalized friction for body translation are susceptible to major errors for two reasons: First, friction on the body depends on body posture, which changes with flight speed as shown by David [50], and second, the orientation of the wings continuously changes within the stroke cycle with respect to the oncoming air. It is thus advantageous to calculate normalized friction from Eq. (17.7) using an estimate of maximum forward velocity of the insect derived from behavioral experiments and an estimate for maximum thrust derived from maximum locomotor capacity. In the case of the fruit fly, reconstructions of the flight path in freely flying individuals revealed a maximum horizontal velocity of approximately 1.22 m s⁻¹ at level flight. By contrast, estimations of maximum locomotor capacity of the fruit fly are more challenging. There are at least two ways to derive this measure: First, from load lifting experiments in which freely flying animals are scored on their ability to lift up small weights as shown by Marden [51], and second, from direct force measurements in tethered flies [18, 22]. In load lifting experiments, locomotor reserve and thus maximum thrust is equal to the load that the animal is able to lift up, while in tethered flight experiments maximum locomotor capacity is equal to maximum force production when the animal is stimulated under visual open-loop optomotor conditions in a flight simulator with a vertically oscillating stripe grating. For the fruit fly, both approaches yield similar estimates for maximum thrust of approximately 4.9 μN, and normalized friction in Drosophila thus amounts to approximately 4.0 μNm⁻¹s (Eq. 17.7).
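Equations (17.10) and (17.12) can be sketched for level flight (uv = 0), using parameter values quoted in this chapter (mb ≈ 1.2 mg, k ≈ 4.0 μN m⁻¹ s, FT = 16 μN); the 50 mm path radius is an illustrative choice. The test below closes the loop by substituting the computed velocity back into Eqs. (17.7–17.9) and checking that the force balance of Eq. (17.6) is recovered.

```python
from math import sqrt

def flight_velocities(F_t, r_p, m_b=1.2e-6, k=4.0e-6, g=9.81):
    """Eq. (17.10) at level flight (u_v = 0, so F_v = m_b * g) and
    Eq. (17.12): returns (u_h, omega)."""
    F_v = m_b * g  # vertical force balances body weight
    u_h = (1.0 / m_b) * sqrt(0.5 * (sqrt(r_p**4 * k**4
                                         - 4 * r_p**2 * m_b**2 * (F_v**2 - F_t**2))
                                    - r_p**2 * k**2))
    return u_h, u_h / r_p  # turning rate from Eq. (17.12)

u_h, omega = flight_velocities(F_t=16e-6, r_p=0.05)
print(f"u_h = {u_h:.2f} m/s, omega = {omega:.1f} rad/s")
```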
17.4.2 Trade-Offs Between Locomotor Capacity and Control

High aerial maneuverability of a flying insect may be useful in a large variety of behavioral contexts
including predator avoidance, prey catching, mating success, and male–male competition. A well-known example of predator avoidance is the evasive flight reaction of noctuid moths when they detect the ultrasound of predating bats [52]. Stability and maneuverability are two sides of the same coin, and the system that allows high stability in an insect also controls and constrains maneuverability. The ability of an insect to provide aerodynamic forces in excess of its body weight thereby appears to be a key factor for high maneuverability. Studies, for example, on butterfly take-off behavior show that a critical measure for high aerodynamic performance is the ratio between flight muscle mass and body mass [53]. In other insect species, such as dragonflies, this ratio also depends on age. Young dragonflies are typically poor flyers but gain muscle mechanical power output during adult growth. Maximum power reserves for both flight force production and steering performance are exhibited at maturity. At this stage, dragonflies defend a territory, and aerial competition determines their mating success [54]. In the fruit fly, these trade-offs between power output and flight control can be predicted by a force balance model and verified by experiments under free and tethered flight conditions. In the following sections we thus focus on the relationships between muscle performance, mechanical constraints of the thorax, and flight control in the fruit fly, i.e., (1) the trade-off between lift, thrust, and lateral force production during turning flight and its consequences for flight velocity; (2) the collapse of the steering envelope at maximum locomotor performance; and (3) the significance of the precision with which the neuromuscular system is able to control bilateral stroke amplitudes during yaw turning. Altogether, these issues highlight the problems and limits of maneuvering flight in the small fruit fly and also show how our numerical model (Sect.
17.4.1) may explain the various behaviors we observe in flying flies.
17.4.2.1 The Trade-Off Between Lift, Thrust, and Lateral Forces

In the previous section we learned about the relationships between forward, upward, and turning velocity and how these parameters depend on total force production. Free flight experiments in fruit flies show that the lateral force needed to keep the animal on track
during a flight saccade clearly exceeds thrust and lift and almost reaches 28 μN (Fig. 17.2C). Thus, short-time total flight force production even reaches 32 μN, which is approximately three times the body weight of the animal (~1.2 mg) and above the value typically measured under tethered flight conditions. By contrast, thrust contributes only moderately to total force (<14%), and vertical force even decreases with increasing locomotor output (Fig. 17.2D) [5]. Consequently, an animal that attempts to avoid sideslipping during turning flight faces a trade-off between path curvature, flight altitude, and horizontal velocity. With respect to this trade-off, the maximum locomotor capacity (in most insects approximately twice their body weight) is an important measure that limits not only the insect's load lifting capacity but also its flight style during fast yaw turns [51]. Using the numerical framework in Eqs. (17.6)–(17.12), the above relationships may be quantified at various maximum locomotor capacities of the animal (Fig. 17.7A). Especially when performing small-radius flight turns, horizontal and vertical velocities should decrease when lateral force production increases during turning. This effect becomes most visible at elevated forward speed, when a significant part of the locomotor reserves is used to produce thrust. During yaw turning, the numerical model thus predicts a break point in flight path radius at which the fly can no longer produce constant lift and thrust. In other words, there is a minimum flight path radius that the fly may achieve at constant horizontal velocity and level flight. Any decrease in radius below this threshold results in a decrease in forward flight speed, a loss in flight altitude, or both. Figure 17.7B shows that at a moderate, constant cruising speed of 0.6 ms⁻¹, fruit flies can only keep their flight altitude (zero vertical velocity) at flight path radii above approximately 50 mm.
Smaller values result in a sudden and considerable loss in flight altitude. The trade-off between the various velocities predicted from numerical models can be verified by experimental data. Similar to what has been mentioned above, trade-offs are most visible when an animal tries to maximize flight force production. In free flight, this can be achieved by optomotor stimulation using a random-dot visual panorama that rotates at high angular velocity around the animal (>900◦ s−1 , Fig. 17.2A). In response to this visual stimulation, fruit flies typically minimize the retinal slip on their compound eyes
Fig. 17.7 Numerical modeling of force balance in freely flying fruit flies. (A) Minimum flight path radius at a given forward velocity and level flight shown for four estimates of total flight force. Maximum flight forces are 16.3, 21.0, and 32.4 μN in tethered flight, load lifting free flight, and free flight under optomotor stimulation, respectively. (B) Alteration in maximum vertical climbing velocity (uv) at a mean forward cruising speed (uh) of 0.6 ms⁻¹ and assuming 16 μN maximum flight force. Gray area indicates path radii at which the fly loses flight altitude while turning. Lateral speed (ul) is equal to zero because of side-slip compensation
similar to what has been observed in tethered flies [16, 31, 32, 34, 35]. Consequently, in the attempt to match forward velocity to translational velocity and turning rate to angular velocity of the rotating visual environment, the animals continuously move in concentric circles around the arena center [5]. Under these conditions, the flies’ vertical speed approaches zero (constant altitude) while forward velocity and turning rate typically vary out-of-phase (Fig. 17.8). The numerical model in Eq. (17.6) predicts this behavior because at maximum locomotor capacity and constant flight altitude, any increase in lateral force production should lead to a corresponding decrease in thrust.
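The break-point radius discussed above can be estimated directly from the force balance of Eqs. (17.6–17.9): at constant forward speed and level flight, whatever locomotor reserve remains after weight support and thrust sets a ceiling on lateral force, and hence a floor on path radius. With the round parameter values quoted in this chapter (mb ≈ 1.2 mg, k ≈ 4.0 μN m⁻¹ s, FT = 16 μN), this sketch lands in the same few-centimeter range as the ~50 mm break point in Fig. 17.7B.

```python
from math import sqrt

def min_radius(u_h, F_t=16e-6, m_b=1.2e-6, k=4.0e-6, g=9.81):
    """Smallest level-flight path radius at constant forward speed u_h.
    The locomotor reserve left after weight support (F_v) and thrust (F_h)
    bounds the centripetal (lateral) force, Eqs. (17.6)-(17.9)."""
    F_v = m_b * g                               # weight support
    F_h = k * u_h                               # thrust against body drag
    F_l_max = sqrt(F_t**2 - F_v**2 - F_h**2)    # lateral force ceiling
    return m_b * u_h**2 / F_l_max               # invert F_l = m_b u_h^2 / r_p

r_min = min_radius(0.6)
print(f"minimum level-flight radius at 0.6 m/s: {r_min * 1e3:.0f} mm")
```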
Moreover, since an increase in total flight force requires an increase in mechanical power output of the asynchronous flight musculature, transgenic flies with reduced muscle mechanical power output exhibit larger flight path radii during turning than wild-type animals flying at similar forward speed. We noticed this behavior in a myosin light chain mutant (MLC2) and in several fly lines in which the phosphorylation capacity of the muscle protein flightin (fln) had been modified by point mutation (F.-O. Lehmann, unpublished data) [55]. Flightin is a multiply phosphorylated myosin-binding protein found specifically in the indirect flight muscles (IFM) of Drosophila. When flown in a flight simulator and scored on maximum locomotor capacity, both strains show reductions in stroke frequency and partly also in stroke amplitude, whereas muscle and aerodynamic efficiency are similar between the two transgenic strains and wild-type flies.
Fig. 17.8 Trade-off between horizontal velocity (gray, left scale) and turning velocity (black, right scale) in a freely flying fruit fly at elevated flight force production. Horizontal velocity and turning rate co-vary during saccadic turning (gray areas), presumably due to the production of elevated centripetal forces

17.4.2.2 Collapse of Steering Envelope at Maximum Locomotor Performance
Since propulsion and control reside in the same locomotor system, flight control in insects is constrained by the mechanical limits of the thoracic exoskeleton that generates wing motion. The relationship between propulsion and control is of fundamental consequence because it predicts a complete loss in control at maximum locomotor force production. In general,
locomotor reserves of an insect function as power reserves to boost horizontal or vertical flight velocities, but also allow the insect to modulate wing kinematics. Since many insects control lift and yaw moments by changing stroke amplitude (dragonflies being an exception, as they apparently more often use changes in angle of attack and wing phasing for steering) [1, 8, 56–60], flying with amplitudes near the mechanical limits should impair stability and maneuverability [18, 61, 62]. In the fruit fly, the collapse of the kinematic envelope may be quantified under tethered flight conditions in a closed-loop virtual reality flight simulator, in which the animal's ability to modulate stroke frequency and stroke amplitude at different flight forces is scored. In these experiments, the flies actively stabilize the azimuth velocity of a visual object (black bar) displayed in the panorama using the bilateral difference in stroke amplitude between both wings. While steering toward the visual target, the flies modulate mean stroke amplitude on both body sides in response to the up- and down motion of a superimposed, open-loop background pattern [22]. At maximum flight force production, the temporal deviation of stroke amplitude and frequency approaches zero, indicating that the animal is restricted to a unique combination of mean wing velocity (the product of amplitude and frequency) and mean lift coefficient (Fig. 17.4B,C) [62]. Ignoring the potential contribution of other kinematic parameters, this collapse of the kinematic envelope during peak force production should greatly attenuate the maneuverability and stability of animals in free flight. A possible mechanism that helps small insects remain stable around their roll and yaw axes at elevated force production is the wings’ high profile drag. As outlined in the previous sections, even without any active control, frictional damping on the flapping wings is high enough to terminate yaw turning within 3–5 stroke cycles.
Consequently, high frictional damping in the fruit fly helps to ensure stable flight conditions in cases in which force control by the neuromuscular system fails due to the mechanical limits of the thoracic box for wing motion.
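The 3–5 stroke figure can be checked with first-order dynamics: with active torque switched off, the yaw equation reduces to I dω/dt = −Cω, so angular velocity decays exponentially with time constant I/C. In this hedged sketch the moment of inertia is an assumed order-of-magnitude value; the 54 pNms damping coefficient and the ~5 ms stroke period are taken from this chapter.

```python
import math

I = 0.5e-12    # kg m^2, assumed yaw moment of inertia (order of magnitude)
C = 54e-12     # N m s, natural frictional damping coefficient of the fruit fly
STROKE = 5e-3  # s, approximate wing stroke period

tau = I / C                   # time constant of passive yaw decay
t_90 = -tau * math.log(0.10)  # time to shed 90% of the angular velocity

print(f"tau = {tau * 1e3:.1f} ms; 90% decay in {t_90 * 1e3:.1f} ms, "
      f"i.e. ~{t_90 / STROKE:.1f} wing strokes")
```

With these assumed numbers the turn decays within roughly four strokes, consistent with the 3–5 stroke range stated above.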
17.4.2.3 Significance of Muscle Precision and Response Time of Sensory Feedback

Another parameter that limits stability and maneuverability is the muscular precision of the flight apparatus, including the temporal delay of sensory feedback. In Sect. 17.3.1 we discussed the relationship between moments of inertia, frictional damping on body and wings, turning velocity, and yaw torque production in a freely maneuvering insect. In terms of free flight stability and turning behavior, however, it is of interest to derive the angular velocity for turning from Eq. (17.1). A time-variant form for yaw turning velocity at time t is

ω(t) = [T(t) + (I/dt) ω(t − 1)] / (C + I/dt).   (17.13)

Since torque production is proportional to the product of the bilateral difference in stroke amplitude between both wings and an experimentally derived conversion factor (fruit fly: 2.9 × 10−10 Nm deg−1) [61], we may link the ability of an insect to keep track during turning and to precisely steer toward a visual object to its temporal changes in stroke amplitude. In free flight, changes in stroke amplitude are due to the interplay between the mechano-sensory system (halteres, antennae, and campaniform sensilla), the visual system (compound eyes and ocelli), and the muscular system. In contrast to insects with synchronous power muscles, such as dragonflies, in flies power and control reside in two different muscle systems: the asynchronous indirect flight muscles (IFM) and the synchronous direct flight control muscles. The IFM provide the power to overcome inertia and drag during wing flapping and fill up most of the thorax. By contrast, reconfiguration of the wing hinge for flight control lies in the function and interplay of 17 tiny flight control muscles. Flight control muscles typically produce positive work but also function as active springs that absorb mechanical muscle power produced by the IFM [63]. There is ample electrophysiological evidence that three groups of control muscles play a key role in stroke amplitude control of flies: the basalare muscles b1–b3 and the muscles of the first (I1, I2) and third axillare (III1–4, Fig. 17.1B) [13–17, 64].
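A time-discretized yaw update of this kind is the implicit-Euler form of the first-order dynamics I dω/dt + Cω = T(t). A minimal sketch of the update rule follows; the inertia value, time step, and torque pulse are illustrative assumptions, with only the 54 pNms natural damping coefficient taken from this chapter.

```python
def yaw_step(T, omega_prev, I, C, dt):
    """One discrete yaw step: omega(t) = (T(t) + (I/dt)*omega(t-1)) / (C + I/dt)."""
    return (T + (I / dt) * omega_prev) / (C + I / dt)

I, C, dt = 0.5e-12, 54e-12, 1e-3  # assumed inertia [kg m^2]; damping [N m s]; step [s]
omega, trace = 0.0, []
for k in range(60):
    T = 1e-9 if k < 20 else 0.0   # hypothetical 1 nNm torque pulse lasting 20 ms
    omega = yaw_step(T, omega, I, C, dt)
    trace.append(omega)

# While torque is applied, omega approaches the steady state T/C; afterwards
# it decays passively through frictional damping alone.
print(f"omega at pulse end: {trace[19]:.1f} rad/s (T/C = {1e-9 / C:.1f} rad/s); "
      f"final: {trace[-1]:.2f} rad/s")
```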
The flight control muscles in flies receive input from two major sensory organs: the gyroscopic halteres and the compound eyes. The halteres are condensed hindwings, driven by their own power and control muscles [12]. Halteres beat in anti-phase with the wings and sense changes in Coriolis forces when the animal body rotates [10–12, 65]. They project onto control muscle motoneurons via fast electrical synapses [66]. In fruit flies, halteres encode angular velocities
of up to at least 700◦ s−1 [67] during turning and provide fast feedback within approximately a single wing stroke (5 ms) [68]. By contrast, the visual system in insects suffers from a considerably longer delay of around 30 ms. The latter value was estimated from behavioral studies on male–female chases in houseflies, Musca [69]. Most of this delay appears to be due to the time-to-peak response of the photo-transduction process, i.e., 12 and 41 ms for the dark- and light-adapted states of the housefly’s compound eye, respectively [70]. The values reported for fruit flies are similar to those of houseflies, with bump latencies ranging from 20 to 50 ms [71]. To incorporate the properties of the sensory feedback into our analytical framework, we may modify Eq. (17.1) by inserting the response time and the upper threshold for detecting rotational body movements by the sensory organs [5, 40]. Replacing the various terms, we obtain a time-invariant equation:

TMax = I ωV/tV + C ωV,   (17.14)

in which ωV is the limit of angular speed allowing the fly to determine its angular rotation, and the ratio ωV/tV is the maximum angular acceleration between the upper limit of the angular speed and the sensorimotor reaction time tV of the fly. Consequently, the term TMax indicates the maximum torque allowed for heading stabilization during active flight control within the limits of the sensory apparatus. Considering Eq. (17.14) from an evolutionary perspective, we may predict the following: insects exhibiting small delays in sensory information processing and large damping coefficients on wings and body have a reduced need for a muscular system that controls wing motion, and thus yaw torque, with high accuracy. By contrast, a flight system with large response delays and small frictional damping requires a very precise muscular apparatus to avoid instabilities during flight. Understanding these trade-offs is of great relevance for the design of biomimetic micro-air vehicles, which must gain stability through both their electronic control circuitry and the actuators that drive wing flapping. Surprisingly, in fruit flies flying at their natural frictional damping coefficient, vision-mediated flight requires very high precision of wing amplitude control for flight-heading stability, within a range from 0.25◦ to 0.85◦ for each wing (Eq. (17.14), Fig. 17.9). Behavioral tests on tethered flies flying under vision-mediated closed-loop conditions in a flight simulator, however, show that the neuromuscular system is not able to control stroke amplitude below a threshold of 1–2◦ [40]. Consequently, the tethered animals fail to safely control yaw moments based on vision alone, suggesting additional feedback from the halteres in freely flying animals. However, a fruit fly with a 5 ms visual response time (the delay of a single stroke cycle) might keep the instantaneous angular velocity of the body below the threshold of the visual system even at its natural damping coefficient, because the required changes in stroke amplitude for flight stabilization would range between 0.6◦ and 1.9◦ and would thus be within the scope of visually mediated yaw control. Nevertheless, despite recent progress in understanding the feedback control cascade in flies, the exact contribution of each sensory system to force control in a freely cruising animal still needs to be determined.

Fig. 17.9 Precision of steering control in tethered flying fruit flies. Data show the absolute difference between left- and right-wing stroke amplitude (i.e., proportional to yaw torque), produced in order to actively stabilize yaw heading toward a black stripe displayed inside a flight simulator. The shaded area indicates the upper limits of the visual system that allow visual control of the stripe according to Eq. (17.14) (50 and 100% response thresholds of the insect’s elementary motion detector, EMD). The dotted line indicates the crossing of the regression line with the x-axis. The natural damping coefficient of the fruit fly amounts to 54 pNms. Means ± S.E., N = 47 flies. See also [40]
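A simple way to appreciate why the sensory delays quoted above matter is to ask how far the body rotates during one latency period; the longer the delay, the larger the heading error that must subsequently be corrected. The comparison below is an illustration using values cited in this section, not a calculation from the chapter.

```python
OMEGA = 700.0  # deg/s, upper angular velocity encoded by the halteres [67]
DELAYS = {
    "haltere feedback (~1 wing stroke)": 0.005,  # s [68]
    "vision, typical behavioural delay": 0.030,  # s [69]
    "vision, slow photoreceptor state":  0.041,  # s [70]
}

for name, t in DELAYS.items():
    print(f"{name:34s}: {OMEGA * t:5.1f} deg of rotation before a response")
```

At the haltere latency the fly turns only a few degrees before feedback arrives; at visual latencies it may turn by 20◦ or more, which is why vision alone struggles to stabilize fast yaw turns.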
17.5 Synopsis

The construction of biomimetic micro-air vehicles based on flapping-wing designs is challenging (see other chapters in this book), and a detailed, integrative view of flight control in insects is thus of great interest. In this chapter, we focused on the interplay between the physical forces and the neuromuscular control cascade during yaw turning in a small fly. Our analysis shows that aerial behavior results from the combined properties of the flight feedback cascade, including the physiological limits of the nervous structures, the constraints on flight muscle function, and the physics of body dynamics during flight. Consequently, if we experimentally break the connection between the various functional hierarchies for flight control in an insect, we risk changing the system's locomotor state, so that our analyses are restricted to descriptions of functionally isolated sub-components of the flight apparatus. For example, if we ignore frictional damping during turning flight in Drosophila, we are tempted to conclude that flight stability resides in a fast and very precise system for sensory information processing and wing control. By contrast, once passive stabilization of the insect body by high frictional damping is included, the computational load on the central nervous system may decrease, as may the required speed of sensory information processing and the precision of wing control. The ultimate challenge in understanding flight control and the limits of maneuverability, however, lies in the large variety of insect species and thus of locomotor designs. A major goal for understanding both the biology of flight and the design of biomimetic micro-air vehicles must thus be to derive a comprehensive view of flapping flight that includes the various forms of sensorimotor control.
Ultimately, this approach should prove beneficial not only for evaluating the various forms of flight in insects but also for understanding the physics and neuromuscular function of wing control in other flying animals, such as birds and bats.
References

1. Alexander, D.E.: Wind tunnel studies of turns by flying dragonflies. The Journal of Experimental Biology 122, 81–98 (1986)
2. Ennos, A.R.: The kinematics and aerodynamics of the free flight of some Diptera. The Journal of Experimental Biology 142, 49–85 (1989) 3. Fry, S.N., Sayaman, R., Dickinson, M.H.: The aerodynamics of free-flight maneuvers in Drosophila. Science 300, 495–498 (2003) 4. Marden, J.H., Wolf, M.R., Weber, K.E.: Aerial performance of Drosophila melanogaster from populations selected for upwind flight ability. The Journal of Experimental Biology 200, 2747–2755 (1997) 5. Mronz, M., Lehmann, F.-O.: The free flight response of Drosophila to motion of the visual environment. The Journal of Experimental Biology 211, 2026–2045 (2008) 6. Rüppell, G.: Kinematic analysis of symmetrical flight manoeuvres of Odonata. The Journal of Experimental Biology 144, 13–42 (1989) 7. Wagner, H.: Flight performance and visual control of flight of the free-flying housefly (Musca domestica L.) II. Pursuit of targets. Philosophical Transactions of the Royal Society of London. Series B 312, 553–579 (1986) 8. Wang, H., Zeng, L., Liu, H., Chunyong, Y.: Measuring wing kinematics, flight trajectory and body attitude during forward flight and turning maneuvers in dragonflies. The Journal of Experimental Biology 206, 745–757 (2003) 9. Zbikowski, R.: Red admiral agility. Nature 420, 615–618 (2002) 10. Nalbach, G.: The halteres of the blowfly Calliphora I. Kinematics and dynamics. Journal Comparative Physiology A 173, 293–300 (1993) 11. Nalbach, G.: Extremely non-orthogonal axes in a sense organ for rotation: Behavioral analysis of the dipteran haltere system. Neuroscience 61, 149–163 (1994) 12. Pringle, J.W.S.: The gyroscopic mechanism of the halteres of Diptera. Philosophical Transactions of the Royal Society of London. Series B 233, 347–384 (1948) 13. Balint, C.N., Dickinson, M.H.: Neuromuscular control of aerodynamic forces and moments in the blowfly, Calliphora vicina. The Journal of Experimental Biology 207, 3813–3838 (2004) 14.
Dickinson, M.H., Lehmann, F.-O., Götz, K.G.: The active control of wing rotation by Drosophila. The Journal of Experimental Biology 182, 173–189 (1993) 15. Dickinson, M.H., Lehmann, F.-O., Sane, S.: Wing rotation and the aerodynamic basis of insect flight. Science 284, 1954–1960 (1999) 16. Götz, K.G., Hengstenberg, B., Biesinger, R.: Optomotor control of wing beat and body posture in Drosophila. Biol Cybernetics 35, 101–112 (1979) 17. Heide, G.: Flugsteuerung durch nicht-fibrilläre Flugmuskeln bei der Schmeißfliege Calliphora. Z Vergl Physiologie 59, 456–460 (1968) 18. Lehmann, F.-O., Dickinson, M.H.: The control of wing kinematics and flight forces in fruit flies (Drosophila spp). The Journal of Experimental Biology 201, 385–401 (1998) 19. Casey, T.M., Ellington, C.P.: Energetics of insect flight. In: W. Wieser, E. Gnaiger (eds.) In Energy Transformations in Cells and Organisms, pp. 200–210. Stuttgart, Thieme (1989) 20. Harrison, J.F., Roberts, S.P.: Flight respiration and energetics. Annual Review of Physiology 62, 179–205 (2000)
21. Lehmann, F.-O.: The constraints of body size on aerodynamics and energetics in flying fruit flies: an integrative view. Zoology 105, 287–295 (2002) 22. Lehmann, F.-O., Dickinson, M.H.: The changes in power requirements and muscle efficiency during elevated force production in the fruit fly, Drosophila melanogaster. The Journal of Experimental Biology 200, 1133–1143 (1997) 23. Borst, A., Egelhaaf, M.: Principles of visual motion detection. Trends in Neurosciences 12, 297–306 (1989) 24. Dill, M., Wolf, R., Heisenberg, M.: Visual pattern recognition in Drosophila involves retinotopic matching. Nature 365, 751–753 (1993) 25. Egelhaaf, M., Borst, A.: Motion computation and visual orientation in flies. Comparative Biochemistry and Physiology 104A, 659–673 (1993) 26. Franceschini, N., Riehle, A., Nestour, A.: Directionally selective motion detection by insect neurons. In: Stavenga, Hardie (eds.) Facets of Vision, pp. 361–390. Berlin Heidelberg, Springer (1989) 27. Kirschfeld, K.: Automatic gain control in movement detection of the fly. Naturwissenschaften 76, 378–380 (1989) 28. Krapp, H.G., Hengstenberg, B., Hengstenberg, R.: Dendritic structure and receptive-field organization of optic flow processing interneurons in the fly. Journal of Neurophysiology 79, 1902–1917 (1998) 29. O’Carroll, D.: Feature-detecting neurons in dragonflies. Nature 362, 541–543 (1993) 30. Reichardt, W.: Evaluation of optical motion information by movement detectors. Journal of Comparative Physiology A 161, 533–547 (1987) 31. Tammero, L.F., Dickinson, M.H.: Spatial organization of visuomotor reflexes in Drosophila. The Journal of Experimental Biology 207, 113–122 (2004) 32. Blondeau, J., Heisenberg, M.: The three dimensional optomotor torque system of Drosophila melanogaster. Journal of Comparative Physiology A 145, 321–329 (1982) 33.
Borst, A., Bahde, S.: Comparison between the movement detection systems underlying the optomotor and the landing response in the housefly. Biological Cybernetics 56, 217– 224 (1987) 34. Duistermars, B.J., Chow, D.M., Condro, M., Frye, M.A.: The spatial, temporal and contrast properties of expansion and rotation flight optomotor responses in Drosophila. The Journal of Experimental Biology 210, 3218–3227 (2007) 35. Egelhaaf, M.: Visual afferences to flight steering muscles controlling optomotor responses of the fly. Journal of Comparative Physiology A 165, 719–730 (1989) 36. Götz, K.G., Wandel, U.: Optomotor control of the force of flight in Drosophila and Musca II Covariance of lift and thrust in still air. Biological Cybernetics 51, 135–139 (1984) 37. Heide, G., Götz, K.G.: Optomotor control of course and altitude in Drosophila is achieved by at least three pairs of flight steering muscles. The Journal of Experimental Biology 199, 1711–1726 (1996) 38. Heisenberg, M., Wolf, R.: Reafferent control of optomotor yaw torque in Drosophila melanogaster. Journal of Comparative Physiology A 163, 373–388 (1988) 39. Kaiser, W., Liske, E.: Die optomotorischen Reaktionen von fixiert fliegenden Bienen bei Reizung mit Spektrallichtern. Journal of Comparative Physiology 80, 391–408 (1974)
40. Hesselberg, T., Lehmann, F.-O.: Turning behaviour depends on frictional damping in the fruit fly Drosophila. The Journal of Experimental Biology 210, 4319–4334 (2007) 41. Schilstra, C., van Hateren, J.H.: Blowfly flight and optic flow I. Thorax kinematics and flight dynamics. The Journal of Experimental Biology 202, 1481–1490 (1999) 42. Egelhaaf, M., Borst, A.: Is there a separate control system mediating a “centering response” in honeybees. Naturwissenschaften 79, 221–223 (1992) 43. Srinivasan, M.V., Lehrer, M., Kirchner, W.H., Zhang, S.W.: Range perception through apparent image speed in freely flying honey bees. Visual Neuroscience 6, 519–535 (1991) 44. Ennos, A.R.: The kinematics and aerodynamics of the free flight of some Diptera. The Journal of Experimental Biology 142, 49–85 (1989) 45. Hedrick, T.L., Usherwood, J.R., Biewener, A.A.: Low speed maneuvering flight of the rose-breasted cockatoo (Eolophus roseicapillus) II. Inertial and aerodynamic reorientation. The Journal of Experimental Biology 210, 1912–1924 (2007) 46. Ellington, C.P.: The aerodynamics of insect flight VI. Lift and power requirements. Philosophical Transactions of the Royal Society of London. Series B 305, 145–181 (1984) 47. Ramamurti, R., Sandberg, W.C.: A computational investigation of the three-dimensional unsteady aerodynamics of Drosophila hovering and maneuvering. The Journal of Experimental Biology 210, 881–896 (2007) 48. Heisenberg, M., Wolf, R.: Vision in Drosophila. Springer-Verlag, Berlin (1984) 49. Ramamurti, R., Sandberg, W.C.: Computational study of 3-D flapping foil flows. 39th Aerospace Sciences Meeting and Exhibit, 605 (2001) 50. David, C.T.: The relationship between body angle and flight speed in free flying Drosophila. Physiological Entomology 3, 191–195 (1978) 51. Marden, J.H.: Maximum lift production during take-off in flying animals. The Journal of Experimental Biology 130, 235–258 (1987) 52. Roeder, K.D., Treat, A.E.: The detection and evasion of bats by moths.
Am Sci 49, 135–148 (1961) 53. Almbro, M., Kullberg, C.: Impaired escape flight ability in butterflies due to low flight muscle ratio prior to hibernation. The Journal of Experimental Biology 211, 24–28 (2008) 54. Marden, J.H., Fitzhugh, G.H., Wolf, M.R.: From molecules to mating success: Integrative biology of muscle maturation in a dragonfly. American Scientist 38, 528–544 (1998) 55. Barton, B., Ayer, G., Heymann, N., Maughan, D.W., Lehmann, F.-O., Vigoreaux, J.O.: Flight muscle properties and aerodynamic performance of Drosophila expressing a flightin gene. The Journal of Experimental Biology 208, 549–560 (2005) 56. Norberg, R.A.: Hovering flight of the dragonfly Aeshna juncea L. In: T.Y.-T. Wu, C.J. Brokaw, C. Brennen (eds.) Kinematics and Aerodynamics, vol. 2, pp. 763–781. NY, Plenum Press (1975) 57. Reavis, M.A., Luttges, M.W.: Aerodynamic forces produced by a dragonfly. AIAA Journal 88:0330, 1–13 (1988)
58. Wakeling, J.M., Ellington, C.P.: Dragonfly Flight II. Velocities, accelerations, and kinematics of flapping flight. The Journal of Experimental Biology 200, 557–582 (1997) 59. Usherwood, J.R., Lehmann, F.-O.: Phasing of dragonfly wings can improve aerodynamic efficiency by removing swirl. Journal of the Royal Society, Interface 5, 1303–1307 (2008) 60. Thomas, A.L.R., Taylor, G.K., Srygley, R.B., Nudds, R.L., Bomphrey, R.J.: Dragonfly flight: Free-flight and tethered flow visualizations reveal a diverse array of unsteady lift-generating mechanisms, controlled primarily via angle of attack. The Journal of Experimental Biology 207, 4299–4323 (2004) 61. Götz, K.G.: Bewegungssehen und Flugsteuerung bei der Fliege Drosophila. In: W. Nachtigall (ed.) BIONA-report 2. Fischer, Stuttgart (1983) 62. Lehmann, F.-O., Dickinson, M.H.: The production of elevated flight force compromises flight stability in the fruit fly Drosophila. The Journal of Experimental Biology 204, 627–635 (2001) 63. Tu, M.S., Dickinson, M.H.: Modulation of negative work output from a steering muscle of the blowfly Calliphora vicina. The Journal of Experimental Biology 192, 207–224 (1994) 64. Lehmann, F.-O., Götz, K.G.: Activation phase ensures kinematic efficacy in flight-steering muscles of Drosophila
melanogaster. Journal Comparative Physiology 179, 311–322 (1996) 65. Nalbach, G., Hengstenberg, R.: The halteres of the blowfly Calliphora II. Three-dimensional organization of compensatory reactions to real and simulated rotations. Journal Comparative Physiology A 174, 695–708 (1994) 66. Fayyazuddin, A., Dickinson, M.H.: Haltere afferents provide direct, electrotonic input to a steering motor neuron of the blowfly, Calliphora. Journal of Neuroscience 16, 5225–5232 (1996) 67. Sherman, A., Dickinson, M.H.: A comparison of visual and haltere-mediated equilibrium reflexes in the fruit fly Drosophila melanogaster. The Journal of Experimental Biology 206, 295–302 (2003) 68. Hengstenberg, R., Sandeman, D.C.: Compensatory head roll in the blowfly Calliphora during flight. Proceedings of the Royal Society of London. Series B 227, 455–482 (1986) 69. Land, M.F., Collett, T.S.: Chasing behaviour of houseflies (Fannia canicularis). Journal of Comparative Physiology A 89, 331–357 (1974) 70. Howard, J., Dubs, A., Payne, R.: The dynamics of phototransduction in insects: A comparative study. Journal of Comparative Physiology A 154, 707–718 (1984) 71. Hardie, C.R., Raghu, P.: Visual transduction in Drosophila. Nature 413, 186–193 (2001)
Chapter 18
A Miniature Vehicle with Extended Aerial and Terrestrial Mobility Richard J. Bachmann, Ravi Vaidyanathan, Frank J. Boria, James Pluta, Josh Kiihne, Brian K. Taylor, Robert H. Bledsoe, Peter G. Ifju, and Roger D. Quinn
Abstract This chapter describes the design, fabrication, and field testing of a small robot (30.5 cm wingspan and 30.5 cm length) capable of motion in both aerial and terrestrial media. The micro air–land vehicle (MALV) implements abstracted biological inspiration in both flying and walking mechanisms, for locomotion and for the transition between modes of operation. The propeller-driven robot employs an undercambered, chord-wise compliant wing to achieve improved aerial stability over rigid-wing micro-air vehicles (MAVs) of similar size. Flight maneuverability is provided through elevator and rudder control. MALV lands and walks on the ground using an animal-inspired, passively compliant wheel-leg running gear that enables the robot to crawl and climb, including surmounting obstacles larger than its own height. Turning is accomplished through differential activation of the wheel-legs. The vehicle successfully performs the transition from flight to walking and is able to transition from terrestrial to aerial locomotion by propeller thrust on a smooth horizontal surface or by walking off a vertical surface higher than 6 m. Fabricated of lightweight carbon fiber, the ~100 g vehicle is capable of flying, landing, and crawling with a payload exceeding 20% of its own mass. To our knowledge, MALV is the first successful vehicle at this scale capable of both aerial and terrestrial locomotion in real-world terrains and of smooth transitions between the two.
R.D. Quinn () Department of Mechanical and Aerospace Engineering at Case Western Reserve University, Cleveland, USA e-mail:
[email protected] Video of the robot during field testing may be observed at: http://faculty.nps.edu/ravi/BioRobotics/Projects.htm.
18.1 Introduction

Advances in fabrication, sensors, electronics, and power storage have made possible the development of a wide range of small robotic vehicles capable of either aerial or terrestrial locomotion. Furthermore, insights into animal locomotion principles and mechanisms have significantly improved the mobility and stability of these vehicles. For example, the utility and importance of bat-inspired passively compliant wings for fixed-wing micro-air vehicles (MAVs) have been demonstrated for aircraft with wingspans as small as 10 cm [1]. Likewise, highly mobile ground vehicles using animal-inspired compliant legs have been constructed with body lengths as short as 9 cm that can run rapidly over obstacles in excess of their own height [2]. This chapter describes the design, fabrication, and testing of a novel small vehicle (dubbed the micro air–land vehicle, MALV) that is capable of both aerial and terrestrial locomotion. The robot's morphology is inspired by the neuromechanics of animal locomotion, integrating passive compliance in its wings, joints, and legs, such that it may fly, land, walk on the ground, climb over obstacles, and (in some circumstances) take to the air again, all while transmitting sensor feedback. Experimental testing has demonstrated that the robot can be made rugged enough for field deployment and operation. To our knowledge, MALV is the only existing small vehicle capable of powered flight and crawling, of climbing obstacles comparable to its height, and of transitioning between locomotion modes. In the longer term, the design architecture and locomotion mechanisms are expected to lead to a family of vehicles with multiple modes of locomotion that can be scaled to a range
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_18, © Springer-Verlag Berlin Heidelberg 2009
of functions. Applications targeted include surveillance, reconnaissance, exploration, search/rescue, and remote inspection.
18.1.1 Overview and Design Approach

In a biological organism, execution of a desired motion (e.g., locomotion) arises from the interaction of descending commands from the brain with the intrinsic properties of the lower levels of the sensorimotor system, including the mechanical properties of the body. Animal “neuromechanical” systems successfully reject a range of disturbances which could otherwise induce instability or deformation of planned trajectories [3]. The first response to minimize such effects, in particular for higher frequency disturbances such as maintaining posture over varying terrestrial substrates [4] and unexpected gusts in flight [1, 5], is provided by the mechanical properties (e.g., structures, muscles, and tendons) of the organism. In legged locomotion, for example, a fundamental role is played by compliance (i.e., springs and dampers) in joints and structures that store and release energy, reduce impact loads, and stabilize the body [4] in an intrinsic fashion, thus greatly simplifying higher level control [6, 7]. Reproduction of the dynamic properties of muscle and of the intrinsic response of the entire mechanical system [8] has been an impediment to the successful realization of animal-like robot mobility over a variety of substrates and through different media (e.g., air and land). It is these intrinsic properties of the musculoskeletal system that augment neural stabilization of the body of an organism. Although biological inspiration offers a wealth of promise for robot mobility, many constituent technologies are not at a state of maturity where they may be effectively implemented for small autonomous robots. Existing power, actuation, materials, and other robotic technologies have not developed to the point where animal-like neuromechanics may be directly integrated into robotic systems. Given this challenge, the majority of biologically inspired legged and flying robots have been confined to laboratory or limited field demonstrations.
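The disturbance-rejection role of passive compliance described above can be sketched as a mass-spring-damper: an impulsive perturbation (e.g., a foot striking an obstacle) is absorbed and dissipated with no controller in the loop. All parameter values here are illustrative assumptions, not measurements from MALV or any animal.

```python
def passive_leg_response(m=0.1, k=400.0, c=4.0, v0=1.0, dt=1e-4, t_end=0.5):
    """Displacement history of a mass-spring-damper 'leg' after an impact.

    m [kg], k [N/m], c [N s/m] are illustrative; v0 [m/s] is the velocity
    imparted by the disturbance. Semi-implicit Euler integration; note the
    complete absence of sensing or actuation in the loop.
    """
    x, v, xs = 0.0, v0, []
    for _ in range(int(t_end / dt)):
        a = (-k * x - c * v) / m  # purely passive restoring and damping forces
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

xs = passive_leg_response()
print(f"peak deflection {max(xs) * 1000:.1f} mm, residual {abs(xs[-1]) * 1000:.4f} mm")
```

The perturbation is rejected mechanically, before any control law would even have sampled a sensor, which is the intrinsic stabilization the text attributes to compliant structures, muscles, and tendons.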
A method to surmount this, known as abstracted biological inspiration [9], focuses principally on the delivery of critical performance characteristics to the engineering system. Abstracted biological inspiration
attempts to extract salient biological principles and to implement them using available technologies. This approach formed the basis of the design methodology aimed at delivering flight locomotion, crawling locomotion, and the ability to transition between the two to MALV.
18.1.1.1 Organization of Chapter

The remainder of this section describes past work in flying and crawling robots individually, along with some of the small body of research on robots with multimodal mobility. Section 18.2 delineates the biologically inspired structures for flight and walking utilized in MALV. Section 18.3 details the specific design process and fabrication of the vehicle, while Sect. 18.4 provides details on the performance characteristics of the robot based on exhaustive system testing in facsimile field operations. Section 18.5 enumerates the conclusions of the research and envisioned future work.
18.1.2 Micro-ground Vehicles

Two major factors remain significant challenges to the deployment and field utility of terrestrial micro-robots. First, the relative size of real-world obstacles (e.g., stairs, gravel, terrain fluctuations) makes movement a daunting task for small robots. For example, RHex [10], at approximately 50 cm in length, is the shortest existing ground robot to our knowledge that can climb standard stairs without jumping [11–13], flying, or using gripping mechanisms on its feet [14, 15]. Second, power source miniaturization has not kept pace with other critical technologies, such as actuation, sensing, and computation. A wide array of vehicles has been constructed that attests to the difficulty of designing field-deployable terrestrial mobile micro-robots. Khepera robots have a 5 cm wheelbase, onboard power, and an array of sensors [16]. Although they are widely used by group behavior researchers, their 1.4 cm diameter wheels restrict them to operation on very smooth, flat surfaces. Millibots [17] use tracks, but it is not clear that they offer significant advantages, since it is difficult
18 A Miniature Vehicle with Extended Aerial and Terrestrial Mobility
to implement a modern track suspension at this small scale. A small hexapod has been developed by Fukui et al. [18] that runs in a tripod gait using piezoelectric actuators. However, small joint excursions also limit the vehicle to relatively flat surfaces. Birch et al. [19] developed a 7.5 cm long hexapod inspired by the cricket and actuated by McKibben artificial muscles. Though capable of walking using 2 bars of air pressure, it has not yet carried its own power supply. Sprawlita [20] is a 16 cm long hexapod that uses a combination of servomotors and air cylinders. Although Sprawlita attains a top speed of 4.5 body lengths per second, which is fast compared to existing robots of similar size, a necessary operating air pressure of 6 bars makes it unlikely that the robot will become autonomous in its current form. Abstracted biological inspiration has spawned a group of highly mobile robots, called WhegsTM [21] and Mini-WhegsTM [22]. To our knowledge, Mini-WhegsTM is the fastest small terrestrial vehicle that is also capable of surmounting large obstacles relative to its size. Using a single drive motor, the 9 cm long robot attains a speed of 10 body lengths per second and can easily run over 3.5 cm tall obstacles – higher than the top of its body. The more recently developed iSprawl [23] also uses single-motor propulsion and benefits from abstracted biological principles. It has run even faster, at 15 body lengths per second, although its obstacle climbing ability is more restricted because of the small excursions of its feet.
18.1.3 Micro-air Vehicles (MAVs) The majority of research to develop practical (nonrotary) winged MAVs can be categorized into three fundamental approaches. The first and most widely used is to configure the airframe as a lifting body or flying wing using conventional propeller-driven thrust in a manner similar to larger aircraft. In this approach, the emphasis is on increasing the relative area of the lifting surface while decreasing drag, directly addressing the decrease in aerodynamic efficiency at this scale while placing less emphasis on issues of stability and control. Research groups have used optimized rigid wings and accepted the need for stability augmentation systems or superior pilot skill to deal with intrinsically unsteady behavior. Among the most successful examples of rigid-wing MAVs designed with this approach is AeroVironment's "Black Widow" [24], an electric 15 cm flying wing. Virtually every component on the aircraft is custom built, including a sophisticated gyro-assisted control system. Other successful examples of rigid-wing designs include the "Trochoid" [25] and the "Microstar MAV" [26]. Both of these also have gyro-assisted stabilization systems, without which these lifting bodies would be difficult to control. This approach differs significantly from natural flight: birds and bats have well-defined wings and a fuselage, and we find no examples of lifting bodies or flying wings in animals that produce thrust and fly for extended periods rather than simply gliding. The second approach being explored for MAV design draws on direct biological inspiration through mimicry of insect- or bird-like beating wings [27–30] (Chaps. 11, 12, 13, 14, 15, 16). Flapping wings can produce both lift and thrust. Researchers have demonstrated flapping-wing MAVs that can fly and even hover [31] (Chap. 14) using the "clap and fling" mechanism described by Ellington [27]. However, these MAVs are susceptible to failure in even light winds and their payload capacity is very small. This approach remains attractive for future work, in particular for low-speed, low-wind applications such as flight inside buildings. In a third approach [32–37], the lifting surface is allowed to move and deform passively like animal wings, which leads to more favorable aerodynamic performance in a fluctuating low Reynolds number environment. These findings helped lead to a flexible wing concept, which has been applied by Ifju to successful MAVs over the past 8 years [1, 38–41].
Based upon this abstracted biologically inspired mechanism, flight vehicles have been developed that utilize conventional propeller-driven thrust in combination with an adaptive-shape, compliant wing that responds to flight conditions and also develops a stable limit-cycle oscillation during flight.
18.1.4 Multi-mode Mobility While the aerial and terrestrial vehicles described above represent significant enhancements in their respective fields, their utility is limited by their
250
dependence on a single locomotion modality. At present, very few robots have been developed that are capable of multiple modes of locomotion, with the majority of work focusing on swimming/crawling robots. One example is Boxybot [42], which uses a vertically oriented tail and two horizontally oriented fins for aquatic propulsion. By reversing the orientation of one or both of the fins, turning moments or reverse thrust can be generated. Continuous rotation of the fins produces a sort of "pronking" terrestrial movement. A watertight version of RHex [43] has also been equipped with fin-like legs that allow it to swim under water. The neuromechanical design of a more recent amphibious robot is based upon salamanders, and it can run on land and swim using the same central pattern generator [44]. To the knowledge of the authors, there are few published works with the stated goal of both aerial and terrestrial locomotion. The Entomopter [45] uses a reciprocating chemical muscle [46] to produce flapping motion of its four wings. We are not aware of data on the vehicle's terrestrial capabilities or performance results for either locomotion mode. The recently developed Microglider (Chap. 19) also locomotes both on the ground and in the air and implements a biologically inspired wing-folding mechanism. However, it hops into the air and then glides rather than being able to fly for extended periods, as is the intended purpose of MALV and the Entomopter. Nature has repeatedly demonstrated the need for multiple modes of locomotion, especially for small animals such as insects. Pure terrestrial locomotion may be impractical at this scale simply because of the distances that must be traveled to search for food, mates, etc. However, mono-modal aerial locomotion is also undesirable because it is impossible to stay airborne indefinitely, a variety of conditions (winds, etc.)
make it difficult to land at exactly the desired location, and walking is far more energy efficient than flying for traveling short distances. Small robots often face exactly the same problem domain as small animals; multiple modes of locomotion would represent a generational leap in their capability. Flight capacity could allow a vehicle to travel long distances and approach a general target area, while crawling locomotion would allow a range of additional possibilities (close inspection, surveillance, performance of tasks, etc.) unachievable by any vehicle existing today.
R.J. Bachmann et al.
18.2 Biologically Inspired Structures for Flying and Walking As stated earlier, abstracted biological inspiration focused on achieving MALV functionality with presently available technology. Its mechanical design incorporated neuromechanical flight (deformable wing) and walking (compliant wheel-legs and axle joints) mechanisms, which were key to MALV's locomotion capacity. The challenge was designing mechanisms for functionality in both modes while preserving as much mobility as possible in each individually.
18.2.1 Terrestrial Locomotion The implementation of biological locomotion principles holds considerable promise for terrestrial locomotion. Legged animals exist and thrive at a wide range of sizes and are capable of overcoming obstacles that are on the order of their own size. Animal legs behave as if they have passive spring-like compliant elements when they are perturbed [6]. Alexander describes three uses for springs in legged locomotion, including energy absorption [4]. Jindrich and Full demonstrated this in an experiment wherein a cockroach was suddenly perturbed, too quickly for its nervous system to react [7]. It was shown that the passive compliance in its legs stabilized its body. Similar stability benefits are achieved through compliant elements in the legs of the RHex robot [10], which preceded WhegsTM. Locomotion studies on cockroaches have elucidated several critical behaviors that endow the insect with its remarkable mobility [47]. During normal walking, the animal uses a tripod gait, where adjacent legs are 180° out of phase. The cockroach typically raises its front legs high in front of its body, allowing it to take smaller obstacles in stride, but when climbing larger obstacles, the animal moves adjacent legs into phase, thus increasing stability. These benefits have been captured in design through the WhegsTM concept (Fig. 18.1), which served as the basis for the terrestrial locomotion of MALV. WhegsTM has spawned a line of robots that implement abstracted biological inspiration (based on the cockroach) for advanced mobility. Compliance is implemented into WhegsTM legs in two ways: radially for
Fig. 18.1 The wheel-leg provides a compromise between the efficiency and ease of propulsion of a wheel and the terrain mobility of legs
shock absorption (one of the three uses of springs in legged locomotion cited by Alexander [4]) and torsionally for gait adaptation, leading to improved traction and stability [9]. Torsional compliance allows a single motor to drive six three-spoke wheel-leg appendages in such a manner as to accomplish all of the locomotion principles discussed above. WhegsTM robots are also scalable, with successful robots developed at body lengths ranging from 89 cm down to 9 cm. This concept has been extended to Mini-WhegsTM (Fig. 18.2), which offer a combination of speed, mobility, durability, autonomy, and payload for terrestrial micro-robots. Mini-WhegsTM robots are extremely fast (10 body lengths per second) in comparison to most other legged robots and can climb obstacles taller than the top of their bodies [22]. The wheel-leg appendage
Fig. 18.2 Photograph depicting the relative sizes of a MiniWhegsTM and a Blaberus giganteus cockroach. Scale is in centimeters (figure courtesy of Andrew Horchler)
results in a natural “high stepping” behavior, allowing the robot to surmount relatively large obstacles. These vehicles have tumbled down concrete stairs and been dropped from heights of over 10 body lengths, without damage. Mini-WhegsTM have also carried over twice their body weight in payload [22].
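The gait-adaptation role of torsional compliance described above can be modeled as a torsional spring between the drive axle and each wheel-leg: under load, a leg lags the axle by τ/k, which is how adjacent legs drift from the nominal 180° tripod phasing toward in-phase contact when climbing. The sketch below illustrates this idea; the spring constant and torque values are purely illustrative assumptions, not measured WhegsTM parameters:

```python
def wound_up_angle(axle_deg, offset_deg, torque_nmm, k_nmm_per_deg):
    """Wheel-leg angle: axle angle plus built-in offset minus torsional wind-up."""
    return axle_deg + offset_deg - torque_nmm / k_nmm_per_deg

def separation(a_deg, b_deg):
    """Smallest angular separation between two leg angles, in degrees."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)

K = 2.0  # assumed torsional stiffness, N*mm per degree of wind-up

# Level ground: equal light loads keep the nominal 180 deg tripod phasing
sep_walk = separation(wound_up_angle(90, 0, 4, K), wound_up_angle(90, 180, 4, K))

# Climbing: the leading leg carries a large torque and winds up its spring,
# pulling the pair away from 180 deg toward in-phase contact for stability
sep_climb = separation(wound_up_angle(90, 0, 200, K), wound_up_angle(90, 180, 4, K))

print(sep_walk, sep_climb)  # 180.0, then well under 180 (closer to in-phase)
```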
18.2.2 Compliant Wings for Aerial Locomotion A rigid leading edge, chord-wise compliant wing design is the basis for MALV's aerial locomotion. The compliant wing inspired by flying animals has several advantages over similarly sized rigid-wing vehicles [1]. Delayed stall allows the vehicles to operate at slower speeds. Improved aerodynamic efficiency reduces the payload that must be dedicated to energy storage. Passive gust rejection significantly improves stability. It has been well established that the aerodynamic efficiency of conventional (smooth, rigid) airfoils is significantly compromised in the Reynolds number (Re) range between 10⁴ and 10⁶. This Re range corresponds to the class of craft referred to as micro-air vehicles [48]. In fact, the ratio of coefficient of lift (CL) to coefficient of drag (CD) drops by nearly 2 orders of magnitude through this range. With smooth, rigid wings in this Re range, the laminar flow that prevails is easily separated, creating large separation bubbles, especially at higher angles of attack [49]. Flow separation leads to sudden increases in drag and loss of efficiency. The effects of the relationship discussed above are very clear in nature. Consider, for example, the behaviors of birds of various sizes. Birds with large
wingspan, with a fixed-wing Re > 10⁶, tend to soar for prolonged periods of time. Medium-sized birds utilize a combination of flapping and gliding, while the smallest birds, with a fixed-wing Re < 10⁴, flap continuously and rapidly to stay aloft. Other major obstacles exist for flight at this scale [1]. Earth's atmosphere naturally exhibits turbulence with velocities on the same scale as the flight speed of MAVs. This can result in significant variations in airspeed from one wing to the other, which in turn leads to unwanted rolling and erratic flight. The small mass moments of inertia of these aircraft also adversely affect their stability and control characteristics; even minor rolling or pitching moments can result in rapid movements that are difficult to counteract. A rigid leading edge, chord-wise compliant wing addresses these issues for MALV's aerial locomotion capabilities. Through the mechanism of passive-adaptive washout, a chord-wise compliant wing (first implemented at the University of Florida (UF) [1]) overcomes many of the difficulties associated with flight on the micro-air vehicle scale. Adaptive washout is a behavior of the wing in which the wing's shape passively changes to adapt to variations in airflow. For example, an airborne vehicle may encounter a turbulent headwind, such that the airspeed over only the right wing is suddenly increased. The compliant wing structure responds to the instantaneous lift generated by the gust by deforming in a manner similar to Fig. 18.3. This is referred to as passive-adaptive washout and results in a reduction in the apparent angle of attack and a subsequent decrease in lifting efficiency, as compared to the non-deforming wing. However, because the air velocity over the deformed wing is higher, it continues to develop a nearly equivalent
Fig. 18.3 The chord-wise compliance of a flexible wing allows for passive-adaptive washout, increasing stability of the aircraft
lifting force to that of the flat wing. Similarly, as the airflow over the wing stabilizes, the wing returns to its original shape. This behavior results in a vehicle that exhibits exceptionally smooth flight, even in gusty conditions; our own MALV flight tests have been conducted in the presence of winds that precluded the flight of larger (2+ m wingspan) rigid-wing aircraft. While quantification of this effect is difficult, feedback from highly skilled pilots has confirmed the efficacy of flexible wings in gusty environments.
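The washout effect can be illustrated with a back-of-the-envelope calculation. Using the thin-airfoil approximation CL ≈ 2πα and L = ½ρV²S·CL, a gust that raises the airspeed over one wing is fully compensated if the compliant structure reduces the local angle of attack by the factor (V₁/V₂)². The numbers below (airspeeds, wing area, nominal angle of attack) are illustrative assumptions, not measurements from MALV:

```python
import math

RHO = 1.225  # sea-level air density, kg/m^3

def lift(v, alpha_rad, area):
    """Lift via thin-airfoil theory: L = 0.5*rho*V^2*S*(2*pi*alpha)."""
    return 0.5 * RHO * v**2 * area * 2 * math.pi * alpha_rad

S = 0.028                  # assumed wing area, m^2 (~30 cm span MAV)
alpha1 = math.radians(5)   # assumed nominal angle of attack
v1, v2 = 10.0, 12.0        # cruise airspeed vs. gust-increased airspeed, m/s

L_nominal = lift(v1, alpha1, S)
L_gust_rigid = lift(v2, alpha1, S)      # rigid wing: lift jumps by (v2/v1)^2
alpha2 = alpha1 * (v1 / v2)**2          # washout: reduced apparent alpha
L_gust_washout = lift(v2, alpha2, S)    # compliant wing: lift restored

print(L_gust_rigid / L_nominal)    # 1.44 -> large rolling moment on a rigid wing
print(L_gust_washout / L_nominal)  # 1.00 -> near-equal lift, smooth flight
```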
18.3 MALV Design and Development 18.3.1 Methodology 18.3.1.1 Locomotion Mechanisms The function of MALV is to carry a sensor payload, fly a long distance, and then land and move on the ground for a short distance, all while relaying sensor (e.g., visual) information to a remote location. For flight, a flexible wing was first selected over rigid or flapping wings to provide the best combination of controllability, payload capacity, speed, and efficiency (for long-distance missions) in the critical size range. Next, a range of terrestrial locomotion mechanisms were assessed for integration onto a MAV. One possibility was to attach free-spinning wheels to the fuselage of a MAV and use its propeller to drive the vehicle on the ground and in the air. Our experiments demonstrated that such a vehicle can land on and take off from smooth, firm terrain. However, this device had extremely poor ground mobility on rugged terrain. The propeller had a strong tendency to collide with obstacles, thus severely restricting ground mobility. Ground locomotion also suffered because the forward thrust of the propeller was, out of necessity, above the wheel axle, creating a torque that pitched the vehicle forward. Therefore, when the wheels contacted an obstacle, the vehicle would often pitch forward nose first rather than actually moving forward. A possible alternative was to directly power wheels attached to the fuselage, yet the vehicle's mobility would still be limited as it would not be able to climb obstacles even a small fraction of its own height. Legged locomotion mechanisms were judged to be too complicated, delicate,
and heavy at this time for use on a vehicle capable of both flight and crawling. We therefore chose to integrate flexible wing and wheel-leg (WhegsTM ) locomotion mechanisms to design MALV.
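The pitch-over tendency noted above can be captured by a simple static torque balance: with the propeller thrust line a height h above the blocked wheel axle, the vehicle tips nose-first whenever T·h exceeds the restoring moment m·g·d of the weight acting through the CG a distance d behind the wheel contact. The values below are illustrative assumptions, not measured MALV parameters:

```python
G = 9.81  # gravitational acceleration, m/s^2

def pitches_forward(thrust_n, thrust_height_m, mass_kg, cg_setback_m):
    """True if the thrust moment about the blocked wheel axle exceeds
    the restoring gravity moment, so the nose pitches down."""
    return thrust_n * thrust_height_m > mass_kg * G * cg_setback_m

# Assumed numbers for a ~120 g propeller-driven crawler:
m = 0.12   # vehicle mass, kg
T = 0.8    # propeller thrust, N
h = 0.04   # thrust-line height above the axle, m
d = 0.02   # horizontal CG distance behind the wheel contact, m

print(pitches_forward(T, h, m, d))  # True: 0.032 N*m thrust moment vs ~0.024 N*m restoring
```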
18.3.1.2 Multi-modal Mobility Trade-Offs A design analysis was performed to determine how best to integrate flight and ground mobility mechanisms. If a crawling robot were simply attached to the bottom of a MAV, the resulting vehicle would be too heavy to fly unless its wingspan were greatly increased. Therefore, a trade-off analysis was done to determine the most important parameters for ground locomotion and MAV locomotion. The successful MALV design preserves those parameters as much as possible. Less important design parameters were compromised to improve overall vehicle performance. In the case of a conflict, where parameters important for flight were severely deleterious to ground mobility, a morphing mechanism was employed to resolve the problem. In the trade-off analysis, flight was assumed to be the limiting condition because of energy demands and the larger payload enabled by crawling structures. In flight, legs increase drag and reduce controllability. Furthermore, their associated mass reduces payload and can alter flight stability. On the ground, wings, propellers, and tails limit payload and impede mobility in confined spaces. The fuselage of an aerial vehicle tends to be long to increase its stability, but on the ground a long chassis causes the vehicle to high-center on obstacles. On the ground, more legs can increase a vehicle's stability and mobility, but in the air they add drag and mass. These design inconsistencies broadly fell into two categories: mass and geometry. Wheel-legs were judged vital to the ground mobility of MALV. However, their implementation was reconsidered to improve the overall performance of the vehicle. Past small wheel-leg robots typically had four wheel-legs driven by one propulsion motor, with the front wheel-legs steered. The front wheel-legs are most important because they reach in front of the vehicle and on top of obstacles in the vehicle's path to lift and pull the vehicle forward.
WhegsTM are designed this way to model the front feet of the cockroach, which lift high and in front of the animal to overcome obstacles [47]. To reduce mass and complexity, testing of the ground mobility of a MALV was executed with two
wheel-legs instead of four. The wheel-legs were placed in the front and to the sides of the propeller. The rear of the fuselage dragged on the ground. We found that a MALV with this configuration could move forward over obstacles similar in height to those surmounted by a comparably sized Mini-WhegsTM robot. Additionally, the fuselage provided a tail-like action that prevented the robot from flipping onto its back, which happens when a purely terrestrial vehicle attempts to surmount obstacles very tall relative to its height. The drawback to this design is that MALV's mobility in reverse on rugged terrain was poor because the fuselage impacts irregularities and impedes motion. However, the weight savings justified the two-wheel-leg design. Wheel-leg steering and differential steering were also compared in trade-off design studies. Wheel-leg steering requires the wheel-legs to be placed further outboard on the wings so they do not strike the propeller when they are turned. Either design requires two motors. The differential-steer design was chosen because no steering mechanism is required; the design is simpler and MALV can turn more sharply in this configuration. Efficient hybrid designs can reduce mass by integrating structural, sensor, actuator, and power components. In MALV, the fuselage of the aircraft is also the chassis of the ground vehicle. Its shape has been changed to meet design criteria for both flight and ground systems. MALV uses the same cameras in flight and ground locomotion to transmit video to a remote base. The same motor could be used for both flight and ground mobility, in a manner akin to insects using large muscles to drive their body–coxa leg joints and flap their wings [50, 51]. However, this idea was abandoned for the first-generation robot because the complexity overrode the possible mass reduction. A transmission would be needed because the wheel-legs must turn much more slowly than the propeller.
The propeller shaft and wheel-leg axles are perpendicular, which also increases transmission complexity. Furthermore, a clutch would be needed to switch from propeller to wheel-leg drive. For these reasons we chose to use different motors for flight and ground mobility. Wings present a geometric inconsistency that cannot be compromised. Wings are clearly essential for flight, but they are an impediment to ground locomotion, especially when MALV is moving through narrow spaces. Birds and insects fold their wings when they are on the ground to eliminate impediments to motion.
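The differential-steer design chosen above needs no dedicated steering mechanism: turning comes from commanding different speeds to the two wheel-leg servos. A minimal mixer of this kind can be sketched as follows (the signal names and the normalized −1..1 command range are assumptions for illustration, not MALV's actual control firmware):

```python
def differential_mix(forward, turn):
    """Map forward (-1..1) and turn (-1..1) commands to left/right
    wheel-leg speed commands, clamped to the servo range -1..1."""
    left = max(-1.0, min(1.0, forward + turn))
    right = max(-1.0, min(1.0, forward - turn))
    return left, right

print(differential_mix(1.0, 0.0))   # (1.0, 1.0)  straight ahead
print(differential_mix(0.5, 0.5))   # (1.0, 0.0)  sweeping right turn
print(differential_mix(0.0, 1.0))   # (1.0, -1.0) spin in place
```

Spinning in place (opposite wheel-leg directions) is what allows the sharper turns cited in favor of this configuration.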
Fig. 18.4 User interface of the MAVLab design software
An insect-like wing-folding mechanism was therefore developed for MALV that mimics this action. It can stow its wings on its back when moving on the ground. 18.3.1.3 Design Summary Simply attaching wheel-legs to a MAV was inadequate for designing an efficient MALV. As described above and in more detail in the following, a range of parameters were traded off in the vehicle design space, compromised where necessary, and implemented following a systems approach focused on the coupling between mechanisms in the design of each component. The resulting MALV achieves its goals of air and ground locomotion but, because of necessary design trade-offs, it is not yet as agile on the ground as Mini-WhegsTM and has less payload capacity and controllability than a flexible-wing MAV.
18.3.2 MALV Design Implementation Based upon weight estimates and initial flight testing, the lift capacity of a 30 cm wingspan MAV was determined to be sufficient to carry the additional weight associated with components needed for terrestrial locomotion.1 Analysis of existing technology led to the
1 Initial performance specifications also called for a wingspan <33 cm for device portability.
selection of R/C hobby servos as the power plants to drive the wheel-leg appendages. Modified R/C servos were chosen due to their light weight and ease of implementation. The modification allows for continuous servo rotation, after which the integrated position control electronics act as speed controllers so that the servos can receive commands and power directly from the receiver. This approach avoided the weight of additional speed controllers. Since one R/C servo was used for each wheel-leg, this configuration required a five-channel receiver (flight motor, elevator, rudder, left wheel-leg motor, and right wheel-leg motor), subsequently adding weight. The extra mass associated with the implementation of terrestrial walking was approximately 25 g, resulting in a 120 g projected mass. Beginning with the estimated total vehicle mass of 120 g and the established maximum dimension of 30 cm, a new compliant wing was designed to provide the necessary flight characteristics for the vehicle. A software package (designed at the University of Florida) was used to identify an efficient wing shape, within the defined parameters, capable of producing the necessary lift. Efficiency (lift/drag) and coefficient of moment were also important constraints. Using the graphical user interface (GUI) (Fig. 18.4), all the critical wing parameters, including wingspan, root chord, sweep, ellipse ratio, and curvature, were selected for desired flight performance. Real-time output from the GUI allows the designer to alter construction parameters until the desired characteristics are obtained. The program outputs 2-D aerodynamic parameters for the wing (coefficients of lift (CL), drag (CD), and moment (CM), as well as CD/CL), key geometric features (planform area, aspect ratio, and location of aerodynamic center), and three views of the resulting wing.
Fig. 18.5 A CAD solid model of the wing tool
While MALV operates in an Re range where classical aerodynamic calculations break down, prior experience with the MAVLab software confirms its utility for initial airfoil design. Once satisfactory wing design parameters are identified, an output script file is generated. This script file is then imported into a CAD program (Fig. 18.5), where it is converted into CNC tool paths for milling of a wing "tool." The wing tool is a mold upon which the wing will be laid out during composite fabrication. The tool confers upon the wing the desired airfoil shape. The software automatically scales and translates the airfoil shape so that the entire leading edge of the wing lies in a horizontal plane. After the wing tool is milled, it is prepared for the fabrication process [52]. First, a layer of release film is applied to the tool to prevent any resins or adhesives from bonding to and damaging the tool. A schematic of the wing layout (including the leading edge, battens, and canopy) is placed on the tool and a second layer of release film is applied. Using unidirectional and woven resin-impregnated (prepreg) carbon fiber, the wing structure is laid out on the tool. The wing fabric, a polycarbonate-coated polypropylene, was selected due to its compliant properties akin to a natural wing. The fabric is then overlaid onto the structure. Additional layers of carbon fiber were applied to form a skeleton for the wing akin to a flying animal's wing. The carbon fiber skeleton maintained the general shape of the wing while still enabling the polypropylene to warp in flight for neuromechanical stability. The choice of a flexible wing design does introduce the potential for
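The geometric outputs mentioned above (planform area, aspect ratio) follow directly from the chosen planform parameters. As a sketch of that bookkeeping, assuming an idealized elliptical planform (this is an illustration only, not the actual MAVLab code, and the root-chord value is an assumption, not MALV's exact planform):

```python
import math

def elliptical_planform(span_m, root_chord_m):
    """Planform area and aspect ratio of a full elliptical wing:
    S = pi/4 * span * root_chord (ellipse with those axes), AR = span^2 / S."""
    area = math.pi / 4.0 * span_m * root_chord_m
    aspect_ratio = span_m**2 / area
    return area, aspect_ratio

S, AR = elliptical_planform(0.30, 0.12)  # 30 cm span, assumed 12 cm root chord
print(round(S * 1e4, 1), round(AR, 2))   # ~282.7 cm^2 and AR ~ 3.18
```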
Fig. 18.6 Wing tools, tail sections, and a series of fabricated wings
aerodynamic dissimilarities between the wings on the opposite sides of the vehicle. However, experience has demonstrated that the flexible wing design is robust enough that a reasonable level of care during fabrication is sufficient to produce “equal” aerodynamic properties between the wings. “Reasonable care” includes controlling the tension applied to the wing fabric and maximizing the symmetry in the battens. Samples of wing tools with an array of completed wings and tail sections are pictured in Fig. 18.6. It should be noted that the compliant wing and skeletal structure may also enable MALV to fold its wings in an insect-like manner (Fig. 18.7).
Fig. 18.7 Three completed vehicles are shown with a series of fuselage tools and an array of investigated wing fabrics. The bottom right vehicle shows the wing-folding concept
In addition to wing selection, the fuselage had to be designed to integrate terrestrial walking. Three issues were of critical importance in MALV fuselage design: (1) physical incorporation of the wheel-leg drive motors within the fuselage, for durability; (2) maintaining the desired horizontal position of the center of gravity (CG) for flight and land mobility; and (3) locating the wheel-legs forward such that their feet contact obstacles before any other parts of the vehicle for obstacle climbing. The wing analysis described above identified the theoretical location of the wing aerodynamic center (AC). Pitch stability of an aerial vehicle is maintained by locating the CG forward of the AC. If the downforce generated by the tail section is too large, the vehicle pitches up, resulting in a more positive angle of attack for both the wing and the elevator. The changes in lift from the two surfaces counteract the original discrepancy between the desired and actual moment balance on the vehicle. Fortunately, the second and third considerations are complementary because placing the wheel-legs forward on the fuselage also moved the CG forward. After assembling a list of the components and their masses, a fuselage length was determined that would accommodate placement of the CG in the desired horizontal location with the wheel-legs in front of the vehicle. The fuselage was widened for the wheel-leg drive motors. Fuselage tools and an array of wing fabrics and completed MALV vehicles are also depicted in Fig. 18.7. To take full advantage of the strength and weight of the carbon composite material, tools were fabricated to integrate the servomotor output horn into the wheel-leg (Fig. 18.8). The carbon fiber also serves to reinforce the nylon servo horn in this configuration.
Fig. 18.8 Two wheel-leg tools and several tested designs
The tool allows for endowing the wheel-leg spokes with the necessary "splayed" shape. This shape allows the wheel-legs to reach out around (and in front of) the propeller (vehicle in the lower right corner of Fig. 18.7), while minimizing the necessary width of the fuselage. In addition to enabling mobility, compliant structures in biological organisms also help to reduce damage to mechanical elements during impact, one of the three uses of springs in legged locomotion described by Alexander [4]. This is of critical importance for MALV to maintain the functionality of the vehicle during the impact of landing. Multiple wheel-leg designs for MALV were evaluated with this in mind against two criteria: (1) surviving the landing process and (2) facilitating terrestrial locomotion. During landing, a large torsional impact load is placed on the entire system, in particular on terrestrial locomotion components including the wheel-legs and the drive motors. A range of possible solutions to this problem were considered, including developing a small slip clutch mechanism to allow the wheel-leg appendages to "freewheel," designing a compliant mechanism within the wheel-leg joint, and designing a new compliant wheel-leg "foot" that would insulate the drive motor from landing impact. Explicit research was initiated to explore muscle-like joint and actuator compliance akin to structures found in nature. Traditionally, robot design has striven to maximize the impedance between actuator and load and to minimize joint compliance, given that compliance can introduce uncontrollable and underactuated degrees of freedom.
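The CG bookkeeping described earlier — summing component masses and stations to place the CG ahead of the aerodynamic center — is a simple weighted average. A sketch with entirely illustrative component masses and positions (measured from the nose; these are not MALV's actual values):

```python
# (name, mass_g, x_position_cm from the nose) -- illustrative values only
components = [
    ("battery",          25.0,  3.0),
    ("wheel-leg servos", 20.0,  4.0),
    ("receiver/camera",  15.0,  6.0),
    ("flight motor",     18.0,  1.5),
    ("airframe + wing",  30.0,  9.0),
    ("tail",              5.0, 22.0),
]

total_mass = sum(m for _, m, _ in components)
cg_x = sum(m * x for _, m, x in components) / total_mass  # weighted average station

AC_X = 7.0  # assumed aerodynamic-center station, cm from the nose
print(round(cg_x, 2), cg_x < AC_X)  # CG must sit forward of the AC for pitch stability
```

Moving the wheel-legs (here, the servos) forward shifts cg_x toward the nose, which is why the second and third fuselage design considerations were complementary.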
For MALV, the advantages of leg or joint compliance should ideally include (1) lower inertial forces with compliant joints, (2) lower reflected impedance at the drive motors, (3) potential for energy storage and restitution, (4) passive dynamical compensation for destabilizing effects resulting from transmission lag, (5) greater shock tolerance and reduced damage due to the neuromechanical properties of compliant/elastic elements (most important for landing and crawling in sequence), and (6) conforming to the terrain for improved traction. Uses (3) and (5) were cited by Alexander as being used by animals in legged locomotion, and (6) is a further use that MALV has in common with these animals. It was found that compliant wheel-legs and axles lent dynamic mechanical properties enabling these advantages. Testing demonstrated these to be the most durable, most likely to survive landing impact, and best
Fig. 18.9 MALV II has piano wire wheel-legs that are compliant on landing and resist becoming embedded in the substrate. These wheel-legs also enable it to crawl through grassy areas and climb obstacles taller than its leg length
protected the system (drive motors, etc.) by providing neuromechanical rejection of high-frequency, high-impact disturbances. The simplest design implemented was a four-spoke wheel-leg fabricated from spring-tempered stainless steel wire (shown at the bottom of Fig. 18.8). A wire diameter of 1.19 mm (0.047 in.) provided sufficient compliance to absorb the landing impact without buckling undesirably during terrestrial locomotion. However, the lack of proper feet (sharp spokes in this wheel-leg design) caused a tendency to become lodged in both soft and obstacle-rich terrains. An improved design utilizing the compliance of the spring steel, implementing an efficient foot shape (identified in previous crawling research [53]), and incorporating joint compliance in the axle provided much improved performance. Figure 18.9 shows the resulting wheel-leg. The closed-loop foot avoids the difficulties associated with sharp spokes becoming stuck in the substrate, compliance in the steel and leg shape increases mobility, and compliance in the axle and leg insulates the terrestrial crawling system from damage on harsh impact. Figure 18.10 shows the utility of the closed foot and compliance in the wheel-legs of the MALV in a
crawling sequence as it maneuvers over uncertain terrain, including a shifting rubble surface and ground obstacles nearly as tall as the vehicle. A comparably sized wheeled or tracked vehicle would have difficulty climbing or maintaining stability on this shifting terrain.
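The compliance argument above can be made concrete with small-deflection beam theory. Treating one wire spoke as a straight cantilever whose length equals the 4.2 cm leg length (Table 18.1) and whose diameter is the 1.19 mm quoted above, the tip stiffness is k = 3EI/L³ with I = πd⁴/64. The elastic modulus is an assumed textbook value for spring steel, and the straight-cantilever idealization ignores the spoke's actual curvature, so this is an order-of-magnitude sketch only:

```python
import math

def spoke_tip_stiffness(d_m, length_m, e_pa=200e9):
    """Tip stiffness (N/m) of a straight cantilever wire spoke.

    e_pa defaults to a textbook modulus for spring steel (assumption);
    the real curved wheel-leg spoke will be somewhat softer than this.
    """
    i = math.pi * d_m**4 / 64.0          # second moment of area of a round wire
    return 3.0 * e_pa * i / length_m**3  # small-deflection cantilever formula

k = spoke_tip_stiffness(d_m=1.19e-3, length_m=4.2e-2)
# Static deflection if the full 118 g vehicle weight rested on one spoke:
deflection_mm = (0.118 * 9.81) / k * 1000.0
print(f"tip stiffness ~ {k:.0f} N/m, static deflection ~ {deflection_mm:.2f} mm")
```

The millimetre-scale static deflection this predicts is consistent with a wheel-leg that is soft enough to absorb landing impacts yet stiff enough not to collapse while crawling.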
18.3.3 Wing-Folding Mechanisms

18.3.3.1 Introduction

Because some design parameters cannot be compromised in a trade-off analysis, we enhanced MALV's mobility by morphing its physical geometry to suit its active mode of locomotion. For example, a particular wingspan is necessary to create the desired lift for aerial locomotion, but that wingspan is an impediment to ground locomotion when MALV is moving through narrow spaces. Our initial MALV designs utilized a wing that remained deployed during both terrestrial and aerial locomotion. This reduced the vehicle's ground
R.J. Bachmann et al.
Fig. 18.10 MALV II demonstrating climbing and navigation over arduous terrain akin to rubble encountered in disaster relief operations
maneuverability in confined spaces, unnecessarily exposed the wings to damage from the environment, and made the vehicle difficult to pack for transport when not in use. Insects, bats, and birds address this
Fig. 18.11 MALV is unable to pass through a small space with its wings deployed. By folding its wings back, it is able to reduce its physical size, allowing it to pass through the space
issue by retracting their wings against their body. We designed similar mechanisms for MALV, as shown in Fig. 18.11 where the robot folds its wings before walking through a narrow opening.
18.3.3.2 Mechanism Design

To successfully design a wing-folding system, two distinct areas must be addressed: the folding mechanism itself and the design of the folding wing. Each poses unique challenges. The folding mechanism must passively sustain in-flight forces, produce sufficient force to tension the fabric wings, provide the necessary range of motion, and be lightweight. The folding wings must provide repeatable folding behavior with a flexible skin, resist drooping caused by gravity and upward bending due to lift, and fold in a compact manner without impairing the vehicle's ground mobility.

The folding mechanism uses a four-bar linkage actuated by a servo through a transmission link. The linkage employs the retractable landing gear mechanism shown in Fig. 18.12. For MALV, the landing gears are modified with an aluminum arm that allows more torque to be transmitted to the system. The implementation of the folding mechanism is shown in Fig. 18.13. In operation, the servo arm pushes the transmission rod, which deploys or retracts the landing gear. The gear rotates the rigid leading edge of the wing, which causes the entire wing to either deploy or retract. The
landing gears have a 90° range of motion by themselves, so the motion of the wing is limited by its structure and by the angle through which the servo is able to move the landing gear.

The folding wing design uses a carbon fiber enclosure to permit wing rotation while also providing bending stiffness. The enclosure concept is illustrated in Fig. 18.14. The top and bottom enclosures allow the leading edge of the wing to rotate, and the window in the canopy gives the retract mechanism a free path of rotation. These enclosures, in combination with struts and crossbars, stiffen the wing in vertical bending.

Experimental studies were conducted to determine a batten configuration that provides the necessary wing stiffness while still allowing wing folding. Three different batten configurations are shown in Fig. 18.15. The battens emanate from the axis of rotation of the wing to allow folding. Batten designs 1 and 2 were found to hinder wing folding because they added too much stiffness at the junction between the canopy and the leading edge. When the wing is folded back, the fabric deflects the most at this point, so adding stiffness to this area prevents the fabric from deforming. Batten design 3 adds stiffness to the wing without stiffening this particular area. This allows the
Fig. 18.12 GWS pico retractable landing gear, deployed (left) and retracted (center) and compared for size with a U.S. penny (right)
Fig. 18.13 Folding wing mechanism implementation
Fig. 18.14 Illustration of the enclosure concept. The enclosure is made from the center part of the wing that lies over the fuselage (canopy)
Fig. 18.15 Three different batten designs. Batten designs 1 and 2 hinder wing folding while batten design 3 allows rotation
fabric to deform as necessary. The final wing-folding system uses a modified version of the third batten design (the modified version is shown in Fig. 18.16). When folded, the width of the vehicle is reduced by a
factor of 2 (Fig. 18.16b). This wing-folding system is placed entirely on the wing, which allows a set of folding wings to be constructed independently from the rest of the aircraft.
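The servo → transmission link → landing gear → leading edge chain described above is a classic planar four-bar linkage, and its input–output relation can be sketched with the standard position solution. The link lengths below are illustrative assumptions, not measured MALV dimensions:

```python
import math

def fourbar_output_angle(theta_in, crank, coupler, rocker, ground, branch=-1):
    """Output (rocker) angle of a planar four-bar linkage for a given
    input (crank) angle, both measured from the ground link.
    branch selects one of the two assembly configurations."""
    # Crank tip position, with the input pivot at the origin
    ax, ay = crank * math.cos(theta_in), crank * math.sin(theta_in)
    # Diagonal from the output pivot (ground, 0) to the crank tip
    dx, dy = ax - ground, ay
    d = math.hypot(dx, dy)
    # Law of cosines in the triangle (diagonal, rocker, coupler)
    cos_a = (d * d + rocker * rocker - coupler * coupler) / (2.0 * d * rocker)
    if abs(cos_a) > 1.0:
        raise ValueError("linkage cannot assemble at this input angle")
    return math.atan2(dy, dx) + branch * math.acos(cos_a)

# Sweep a hypothetical 90-degree servo travel and report the rocker angle
for deg in (0, 45, 90):
    psi = fourbar_output_angle(math.radians(deg), crank=1.0, coupler=3.0,
                               rocker=1.5, ground=3.0)
    print(f"servo {deg:3d} deg -> wing leading edge {math.degrees(psi):7.1f} deg")
```

Because the servo travel and the linkage geometry together bound the rocker's sweep, this kind of calculation is how one would confirm that the wing reaches both the fully deployed and the fully folded positions.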
Fig. 18.16 (a) Batten design that allows for compact folding. (b) The wing is able to halve its size. (c) Morphing micro-air and land vehicle (MMALV)
18.4 Results and Performance Testing

18.4.1 Vehicle Description

Several iterations of the design and fabrication process led to a range of vehicles capable of both aerial and terrestrial locomotion. The first vehicle had a solid carbon fiber wing and was built principally for testing the integration of wheel-legs with basic flying mechanisms. While significantly limiting the flexibility of the wing surface, this arrangement did allow for the implementation of aileron control, which is more responsive than the rudder/elevator system implemented on the second-generation vehicle (MALV II). However, the increased passive stability gained by incorporating the bio-inspired chord-wise compliant wing led to the adoption of MALV II as the standard vehicle. Table 18.1 lists the physical characteristics of MALV II.

Table 18.1 Physical characteristics of MALV II. The locations of the CG and of the wing leading edge are measured along the longitudinal axis from the tip of the fuselage

Parameter                              Value
Overall length                         30.5 cm
Weight                                 118 g
Fuselage length                        21.6 cm
Fuselage width                         5.1 cm
Location of CG from fuselage tip       9.5 cm
Location of leading edge               6.9 cm
Leg length                             4.2 cm
Track (distance between wheel-legs)    16.7 cm

The design process was predicated on knowledge of the masses of the components to be included in the vehicle. Drawing upon considerable experience in the design and testing of MAVs and terrestrial robots, components were selected (Table 18.2). Weight was the primary selection criterion among components that could provide the necessary thrust, control, sensing, and terrestrial power to perform the desired tasks. Iterations with the MAVLab software generated the wing with the highest efficiency (CL/CD), within the prescribed dimensions, that could generate the required

Table 18.2 List of critical components

Component                     Specifications
Aerial propulsion motor       Feigao brushless DC motor, ø 13 mm, 36-turn windings
Electronic speed controller   Castle Creations Phoenix 10 sensorless ESC
Propeller                     GWS EP3030 (ø = 76 mm)
Control surface servos        Saturn S44 digital servo
Terrestrial drive motors      Maxx Products MX-50HP
Power storage                 Polyquest 11.1 V, 600 mA h lithium-polymer battery
Table 18.3 Main wing data

Parameter           Value
Wingspan (b)        30.5 cm
Wing area (S)       364.4 cm²
Wing loading        31.7 N/m²
Aspect ratio (AR)   2.55
lift at 9 m/s airspeed. This value is derived from prior experience piloting flexible-wing MAVs: beyond this speed, the vehicle becomes more difficult to pilot. Wing parameters are listed in Table 18.3. The wing planform is bounded by semi-ellipses at the leading and trailing edges. The root chord length of the wing is 15 cm, with the maximum width occurring 4.75 cm from the leading edge. The wing was mounted to the fuselage at an incidence angle of 8° with respect to the thrust line of the motor. Control surface parameters are shown in Table 18.4. The horizontal stabilizer was mounted parallel to the motor thrust line. The horizontal stabilizer/elevator unit comprises two half-ellipses with a shared major axis of 14 cm and minor axes of 4.763 and 2.223 cm, respectively. Approximately 2.58 cm² of the vertical stabilizer is occluded by the fuselage. By locating the rudder below the center of gravity (CG) of the craft, rudder deflection produces a sympathetic roll, i.e., a roll motion in the direction of the desired turn.

Table 18.4 Control surface data

Element                 Area
Horizontal stabilizer   52.3 cm²
Elevator                24.4 cm²
Vertical stabilizer     52.9 cm²
Rudder                  5.0 cm²
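The tabulated wing loading and aspect ratio follow directly from the vehicle weight in Table 18.1 and the wing geometry in Table 18.3; a quick consistency check (g = 9.81 m/s² assumed) reproduces both to within rounding:

```python
# Cross-check Table 18.3 from Table 18.1 values (g = 9.81 m/s^2 assumed)
span_m = 0.305          # wingspan b
area_m2 = 364.4e-4      # wing area S (364.4 cm^2)
mass_kg = 0.118         # MALV II weight from Table 18.1

aspect_ratio = span_m**2 / area_m2         # AR = b^2 / S
wing_loading = mass_kg * 9.81 / area_m2    # weight / area, in N/m^2

print(f"AR = {aspect_ratio:.2f}, wing loading = {wing_loading:.1f} N/m^2")
```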
18.4.2 Multi-mode Locomotion

Table 18.5 Nominal performance characteristics of the micro-air–land vehicle (MALV) II

Parameter                          Value
Cruising air speed                 11 m/s
Reynolds number                    ~1 × 10⁵
Maximum flight time                15 min
Aerial range (round trip)          4.9 km
Maximum terrestrial speed          0.33 m/s
Maximum crawling time              100 min
Terrestrial range (round trip)     0.99 km

Table 18.5 summarizes the aerial and terrestrial locomotion characteristics of MALV II. The maximum flight time was determined experimentally by flying the plane until the battery could no longer produce the necessary thrust. The maximum crawling time was estimated by experimentally determining the current draw for locomotion on flat terrain and comparing it to the battery capacity. However, its ground mobility is not restricted
to flat terrain. It can crawl over grassy areas and climb obstacles higher than its leg length (Figs. 18.9 and 18.10). Both tests were performed with all video system components (see Table 18.7) onboard. Round-trip range was calculated from the cruising speed and the maximum flight time. It is apparent from the table that MALV possesses strong aerial range for its size and is capable of considerable terrestrial locomotion on a single battery pack. All control of MALV II is executed using standard R/C equipment. The transmitter's programmability facilitates fluid transition from flight to crawling control. During flight, the right joystick controls the elevator (up/down, channel 1) and the rudder (left/right, channel 2). Once the vehicle has landed, the operator switches the controller into ground mode, which then "mixes" the right joystick commands and transmits the results on channels 4 and 5. "Forward" (joystick up) sends positive signals to both channels, while "right" sends a positive signal to the left wheel-leg and a negative signal to the right wheel-leg for differential steering. "Backward" and "left" act conversely. Channels 1 and 2 are turned off during ground mode.
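The ground-mode "mixing" described above is ordinary differential-drive mixing; a minimal sketch (the normalized signal range is an assumption consistent with the description, not the actual transmitter program):

```python
def ground_mode_mix(forward, turn):
    """Mix joystick commands into wheel-leg drive signals.

    forward: +1 = stick up, -1 = stick down
    turn:    +1 = stick right, -1 = stick left
    Returns (left_wheel_leg, right_wheel_leg) in [-1, 1]:
    'forward' drives both positive, while 'right' drives the left
    wheel-leg positive and the right negative (differential steering).
    """
    clamp = lambda x: max(-1.0, min(1.0, x))
    return clamp(forward + turn), clamp(forward - turn)

print(ground_mode_mix(1.0, 0.0))   # straight ahead: both wheel-legs forward
print(ground_mode_mix(0.0, 1.0))   # spin right in place
```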
18.4.3 Transition Between Flight and Crawling

Beyond the capacity to both fly and crawl, perhaps the most challenging aspect of MALV design was enabling effective transition between the two locomotion modalities. MALV II is presently capable of transitioning from flying to crawling locomotion and, in some circumstances, of returning to flight from the crawling mode.
Fig. 18.17 Snapshots of MALV II transitioning from flight to crawling locomotion
18.4.3.1 Air-to-Land Transition

Figure 18.17 shows a snapshot sequence taken from a video of MALV flying, landing, and subsequently crawling (in this case to search for an object hidden in a road construction barrier). The dynamic mechanical properties of the vehicle absorb the energy of the landing impact and enable effective and immediate crawling locomotion.
18.4.3.2 Land-to-Air Transition

The vehicle's ability to perform sufficiently at high angle of attack and low airspeed resulted in a repeatable, successful takeoff capability from atop building structures two stories or taller. After the vehicle walks off the roof of the structure, it enters a powered dive, pulled down by both gravity and the propeller. As airspeed builds, the necessary lift is generated to arrest the fall and transition to the flight phase. Takeoff from a sloped roof-top produces more consistent results than from a cantilevered plate. The vehicle is able to attain a higher ground speed on the declined runway, thereby maintaining a more favorable vehicle orientation at liftoff and gaining consistent separation from the building. Figure 18.18 shows this sequence. MALV has also been shown to take off from the ground on hard, smooth surfaces such as concrete and asphalt using its propeller thrust. The wheel-legs act like skids as the vehicle accelerates to takeoff speed in 3–5 m.

18.4.4 Sensor Capability and Integration
MALV II originally used a 7.4 V, 600 mA h lithium-polymer (Li-Po) battery. With this power source, its Feigao propeller motor produced a cruising speed of 8 m/s (17.9 mph). The lift generated at this airspeed was nearly consumed by the weight of the terrestrial drive system and one camera/transmitter unit. Installing an 11.1 V, 600 mA h battery increased the cruising speed to 11 m/s. The increased lift supported the increased battery mass (59 g vs. 41 g), a second camera, an electronic switch to control camera signal transmission, and the added mass of a six-channel receiver. The camera transmitter operates in
Fig. 18.18 (a) MALV nears the edge of the structure; (b) MALV walks off the building, entering a "power dive"; (c) lift is produced, and the vehicle pulls out of the dive; and (d) close-up of MALV prepared to walk off a building
the 2.4 GHz range. Figure 18.23 shows an image captured from the transmission during a normal flight. Future work envisages vision-based vehicle navigation using this camera feedback.
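The airspeed gain from the battery upgrade can be read through the standard lift equation L = ½ρv²SC_L: at fixed C_L and wing area, raising the cruising speed from 8 to 11 m/s increases available lift by (11/8)² ≈ 1.89×, which is what absorbed the heavier battery and the extra sensing hardware. The air density and C_L values below are illustrative assumptions, not measured MALV data:

```python
RHO = 1.225          # sea-level air density, kg/m^3 (assumption)
AREA = 364.4e-4      # wing area from Table 18.3, m^2

def lift_newtons(v_ms, cl):
    """Steady-level lift from the standard lift equation L = 0.5*rho*v^2*S*CL."""
    return 0.5 * RHO * v_ms**2 * AREA * cl

# CL chosen (assumed) so that 8 m/s roughly supports the ~90 g airframe
cl = 0.6
ratio = lift_newtons(11.0, cl) / lift_newtons(8.0, cl)
print(f"lift at 11 m/s is {ratio:.2f}x the lift at 8 m/s")
```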
18.4.5 Flight Autonomy and (Video) Telemetry

Micro-aerial vehicles are very sensitive to disturbances such as wind gusts and deflections of the control surfaces, and therefore require considerable practice and talent to operate efficiently. Even if an operator were proficient at flying MALV via radio control, many situations can be envisioned that would prohibit this sort of continual monitoring; in the MALV task domain, for example, an operator may need to relocate urgently, and such interruptions should not jeopardize the mission. Therefore, an autonomous control system and a remote sensor telemetry system are both critical to the eventual field utility of MALV.
An autonomous control system has been designed and implemented on the MALV for aerial waypoint navigation, and a telemetry unit capable of sending video feedback from the robot in both aerial and terrestrial modes of operation has been integrated onboard the plane. This was accomplished through the integration of the Procerus Kestrel [54] autopilot, along with a modem, GPS receiver, telemetry antenna, and pitot tube, into the MALV airframe. Table 18.6 details the

Table 18.6 Dimensions and weights of the MALV autopilot components

Item                 Length (mm)   Width (mm)   Height (mm)   Weight (g)
Procerus autopilot   50.8          34.8         11.94         16.65
Aerocomm modem       41.91         48.26        5.08          21
Telemetry antenna    N/A           N/A          N/A           ≤5
GPS receiver         21            21           9.6           12.6
Pitot tube           152.4         1.59         1.59          7
Total                                                         62.25
Table 18.7 Dimensions and weights of the MALV video components

Item                                         Length (mm)   Width (mm)   Height (mm)   Weight (g)
600 mW video transmitter                     49.53         24.13        8.13          14
Two color video cameras (air/ground views)   8             8            9.5           3 (×2)
Video antenna                                N/A           N/A          N/A           <5
5-V voltage regulator                        40            15           4             6
11.1 V, 1500 mA h battery                    79            40.64        18.04         113
Total                                                                                 144
components of the autopiloting and telemetry system for the MALV, together with their dimensions and weights. To ensure video feedback from the plane to an operator, a video telemetry system was also implemented on the MALV. The video feedback system requires a video transmitter, a video camera, an antenna, and a voltage regulator to direct power from the MALV battery. A summary of the video system and power source components is provided in Table 18.7. Note that two cameras were integrated onto the vehicle: one directed downward for aerial surveillance and one directed upward for terrestrial use. This was judged simpler than a single servo-mounted camera, although future work will explore that capacity. The video system and power source component selection resulted in an estimated addition of nearly 210 g to the aircraft's gross takeoff weight (GTOW); the total weight of MALV II was 90 g without wheel-legs and 116 g with wheel-legs. Although we expect future versions of these components to be significantly smaller and lighter, we initiated a new MALV design to test the capacity of our design with this additional
payload. Design modifications included increasing the wingspan of the MALV as well as the size of the motor that propels the system. The intent was to increase the wingspan only as much as necessary to provide the requisite lift. These efforts resulted in an aircraft 39 cm long with a wingspan of 41 cm. This new version, MALV III, was given a substantially larger fuselage to accommodate the components listed above. When the new airframe was built, and before the additional components were installed, MALV III without the wheel-leg drive system weighed 130 g, only 12 g more than the 30 cm model. Both versions of the MALV are shown in Fig. 18.19 (left), along with an open view (right) of MALV III illustrating component integration.
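The "nearly 210 g" payload figure quoted above is simply the sum of the Table 18.6 and Table 18.7 totals; a sketch of the bookkeeping (antenna weights listed only as bounds are taken at those bounds):

```python
# Per-item weights (g) transcribed from Tables 18.6 and 18.7
autopilot_g = {"Procerus autopilot": 16.65, "Aerocomm modem": 21.0,
               "telemetry antenna": 5.0,    # listed as <=5 g; bound used
               "GPS receiver": 12.6, "pitot tube": 7.0}
video_g = {"video transmitter": 14.0, "two cameras": 2 * 3.0,
           "video antenna": 5.0,            # listed as <5 g; bound used
           "voltage regulator": 6.0, "battery (1500 mA h)": 113.0}

payload_g = sum(autopilot_g.values()) + sum(video_g.values())
print(f"added payload ~ {payload_g:.2f} g")  # consistent with 'nearly 210 g'
```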
18.4.5.1 Autopilot Tuning

The biologically inspired mechanisms in MALV facilitated smooth integration of the vehicle control system. Passive compliance in both the wings and the wheel-legs eliminated the need for high-frequency feedback controllers to reject aerial disturbances and varying terrestrial terrain, thus allowing for simpler linear control. The autopilot control system therefore consisted of PID control loops regulating roll, pitch, yaw, and airspeed, tuned to direct the heading toward GPS waypoints. The control inputs were the propeller motor command and the servo-driven deflections of the elevator and rudder. All information from the plane was transmitted to a ground computer with a digital map of the region of interest. New GPS waypoints were sent to the plane through the modem as
Fig. 18.19 The 30 cm and 41 cm wingspan versions of the MALV (left) and component integration for autopilot and video telemetry systems (right)
selected by a ground operator from the digital map. The gains for the control loops were found experimentally over a series of test flights, following a procedure derived from that outlined in [54]. In ground mode, the operator controls the aircraft directly by giving combined or differential input to the wheel-leg motors, which requires essentially no training or expertise.
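The tuned loops can be sketched as follows. This is not the Kestrel autopilot's actual implementation: the PID gains, the first-order roll dynamics, and the output limit are all illustrative assumptions.

```python
class PID:
    """Textbook PID controller with output clamping (illustrative only)."""
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.out_limit, min(self.out_limit, u))

# Toy roll-axis model (assumed dynamics): rate' = 5*u - 2*rate, roll' = rate
roll_hold = PID(kp=2.0, ki=0.5, kd=0.5, out_limit=1.0)
roll, rate, dt, setpoint = 0.0, 0.0, 0.02, 0.2   # angles in radians
for _ in range(1000):                             # 20 s of simulated flight
    u = roll_hold.update(setpoint - roll, dt)
    rate += (5.0 * u - 2.0 * rate) * dt
    roll += rate * dt
print(f"roll after 20 s: {roll:.3f} rad (commanded {setpoint} rad)")
```

In the real system one such loop ran per regulated quantity (roll, pitch, yaw, airspeed), with the gains found by flight test as described above.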
18.4.5.2 Results

Figure 18.20 shows a typical MALV III experiment with six waypoints and the vehicle path highlighted. The waypoints were established with no less than 350 m between them. Also shown are the takeoff (T), landing (L), and approach (A) points for the vehicle during the flight. Note that live video feedback from the plane was provided throughout the entire flight and during the subsequent crawling locomotion. A waypoint was considered successfully reached if the vehicle passed within 30 m of it during flight. Figure 18.21 shows a series of timed snapshots of an unpiloted takeoff sequence in which the MALV was tossed by an operator and autonomously began waypoint navigation. In this experiment, MALV took approximately 4 s to reach its desired altitude of 100 m, where its first waypoint was specified.
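The 30 m acceptance test can be sketched with a great-circle (haversine) distance check; the coordinates below are arbitrary placeholders, and the Kestrel autopilot's actual acceptance logic may differ.

```python
import math

EARTH_R = 6371000.0  # mean Earth radius, m

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * EARTH_R * math.asin(math.sqrt(a))

def waypoint_reached(vehicle, waypoint, radius_m=30.0):
    """True when the vehicle is within the acceptance radius of the waypoint."""
    return haversine_m(*vehicle, *waypoint) <= radius_m

wp = (36.595, -121.875)                             # placeholder waypoint
print(waypoint_reached((36.5951, -121.875), wp))    # ~11 m away -> True
print(waypoint_reached((36.60, -121.875), wp))      # ~556 m away -> False
```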
As part of evaluating the vehicle control system, the aircraft's ability to hold both altitude and airspeed was also assessed. Table 18.8 enumerates these results over several flights in which MALV III was commanded to reach waypoints while maintaining a commanded altitude and airspeed. The accuracy of the vehicle in holding the desired altitude and airspeed was judged adequate for the surveillance tasks in this experiment and will be used as a basis for future mission planning. Note that the speed of the aircraft is slightly higher than that of MALV II, given the larger motor powering the airframe. A series of experiments was also executed to assess the quality and limits of the video telemetry system. Figure 18.22 shows snapshots of video transmitted by MALV III from an experiment in which it was tasked to scout an area to locate a vehicle from the air, land near that vehicle, and crawl under it to inspect its undercarriage for foreign objects. As mentioned earlier, MALV III was equipped with two cameras, one facing downward for aerial use and one facing upward for terrestrial use. The operator was given the option of switching between the views of the two cameras depending on the locomotion mode of the vehicle. Telemetry experiments demonstrated the capacity of the vehicle to be commanded through the autopilot from more than 1.5 km away and to transmit video over distances greater than 0.5 km.
18.5 Conclusions

This chapter introduces a unique small (~30 cm maximum dimension) micro-air–land vehicle (MALV) drawing on locomotion principles found in legged and winged animals. Much of the success of MALV is due to two biologically inspired mechanisms integrated into its design: a compliant wheel-leg terrestrial running gear and chord-wise compliant wings. Its wheel-legs mimic leg motions while rotating continuously and enable it to climb over terrestrial obstacles that are taller than its legs. MALV survives hard landings on concrete because its flexible wheel-legs

Fig. 18.20 Typical flight of MALV III performing waypoint navigation between waypoints selected on a digital map. The flight consisted of reaching each waypoint in numerical order. The solid line indicates the desired path and the dotted line shows two experimental flight paths. Takeoff (T), landing (L), and approach (A) points are indicated in the figure
Table 18.8 MMALV altitude and airspeed

Commanded airspeed       14 m/s
Range of airspeed held   10.5–16 m/s
Commanded altitude       66 m
Range of altitude held   50–84 m
Fig. 18.21 Snapshots (at 0, 0.5, 0.77, 1.5, and 4 s) from an unpiloted MALV takeoff. The operator gently tosses the vehicle and the autopilot brings it to its desired altitude of 100 m to initiate waypoint navigation
Fig. 18.22 Aerial video feedback from MALV III locating a vehicle from the air (left), MALV III crawling under the vehicle (center), and terrestrial video from MALV III locating a foreign object in the vehicle’s undercarriage (right)
passively comply during impact, reducing the magnitude of the force transmitted to its onboard components. This mimics the function of passive compliance found in the legs of an animal when it is suddenly perturbed. Likewise, MALV's chord-wise compliant wing overcomes many of the stability difficulties associated with flight at the micro-air-vehicle scale through a mechanism observed in animal flight, passive-adaptive
washout, wherein the shape of the wing passively adapts to variations in airflow. The MALV is human-portable, hand-launched, and radio-controlled. It can fly several kilometers, land, and then crawl for many meters around the landing site while surmounting obstacles that are tall relative to its own height. MALV transmits video signals back to the pilot from its position in the air or on the ground. When it lands on a building or other
Fig. 18.23 Roads and cars are clearly visible in this image captured from a video transmitted from MALV II in flight
location that is at least two stories high, it can walk off the structure and return to the air. Furthermore, it can take off from the ground on hard, smooth surfaces. A slightly larger version of the vehicle, MALV III (41 cm maximum dimension), has also been developed and is capable of autonomous operation. The biologically inspired mechanisms in MALV III enable smooth integration between the mechanics and the control system, thus simplifying controller design. This vehicle can navigate aerial waypoints, accept commands from a user while in flight, and transmit video to a user in both aerial and terrestrial modes of operation. The slightly increased size of MALV III will most likely not be necessary for future operations, given the rate of improvement in available micro-controllers, accelerometers, integrated autopiloting systems, and video telemetry. The components integrated on MALV III, for example, have decreased very significantly in size and weight in the last few years alone, and smaller versions have already become available between the time of the flight experiments and the time of this writing. The video transmission system has already been implemented on the smaller MALV II airframe; Fig. 18.23 shows a snapshot transmitted by MALV II in flight. Future designs are envisaged based on its 30 cm wingspan, with smaller versions already under development.
To our knowledge, MALV is the first successful vehicle at this small scale capable of both powered flight and terrestrial locomotion in real-world terrains, with smooth transition between the two. Video of the robot during field testing is currently posted at http://faculty.nps.edu/ravi/BioRobotics/Projects.htm. Fully operational, field-ready versions of the vehicle are also presently under development. Targeted applications include a wide range of search and rescue, safety, and security mission scenarios [52]. Rescue, fire, or police units would clearly benefit from a small robotic vehicle, easily transported and deployed by the unit, that provides situational awareness in specific areas. Another application of a vehicle capable of flight and ground movement is the detection of dangerous or illegal substances. While a mono-modal unmanned aerial vehicle (UAV) might be capable of identifying potential threats from a distance, closer inspection is required to evaluate the validity of the threat. A small vehicle with the ability to land near and walk up to the location or object in question would allow an operator to accurately determine the presence or absence of harmful or dangerous substances. A scalable family of vehicles with multiple modes of locomotion is envisioned in future work based on the paradigms established in this research.
Acknowledgments This work was supported by the Air Force Research Laboratories Munitions Research Directorate (under contracts FA8651-04-C-0234 and FA8651-05-C-0097) and by the Naval Postgraduate School (NPS)/USSOCOM Field Experimentation Cooperative Program. The authors would like to acknowledge the program support directors Dr. David Netzer of the Naval Postgraduate School and Chris Perry and Jeffery Wagner at the US Air Force Research Laboratories for technical and mission planning insights. Dr. Kevin Jones provided assistance in sensor placement and piloting/performance research. Baron Johnson and Daniel Claxton made significant contributions including vehicle design and flight testing. Michael Sytsma, Michael Morton, and the University of Florida MAV group also contributed to the development, testing, and analysis of MALV II.
References

1. Ifju, P.G., Ettinger, S., Jenkins, D.A., Lian, Y., Shyy, W., Waszak, M.R.: Flexible-Wing-Based Micro Air Vehicles. 40th AIAA Aerospace Sciences Meeting, Reno, NV, AIAA 2002-0705 (January 2002)
2. Morrey, J.M., Lambrecht, B., Horchler, A.D., Ritzmann, R.E., Quinn, R.D.: Highly Mobile and Robust Small Quadruped Robots. IEEE International Conference on Intelligent Robots and Systems (IROS'03), Las Vegas, Nevada, Vol. 1, pp. 82–87 (2003)
3. Morasso, P., Bottaro, A., Casadio, M., Sanguineti, V.: Preflexes and internal models in biomimetic robot systems. Cognitive Processing 6, 25–36 (2005)
4. Alexander, R.McN.: Three Uses for Springs in Legged Locomotion. International Journal of Robotics Research 9(2) (1990)
5. Shyy, W., Berg, M., Ljungqvist, D.: Flapping and Flexible Wings for Biological and Micro Vehicles. Progress in Aerospace Sciences 35(5), 455–506 (1999)
6. Loeb, G.E., Brown, I.E., Cheng, E.J.: A hierarchical foundation for models of sensorimotor control. Experimental Brain Research 126, 1–18 (1999)
7. Jindrich, D.L., Full, R.J.: Dynamic stabilization of rapid hexapedal locomotion. Journal of Experimental Biology 205, 2803–2823 (2002)
8. Brown, I.E., Loeb, G.E.: A reductionist approach to creating and using neuromusculoskeletal models. In: Winters, J.M., Crago, P.E. (eds.) Biomechanics and Neural Control of Movement, pp. 148–163. Springer, Berlin Heidelberg New York (1997)
9. Quinn, R.D., Nelson, G.M., Ritzmann, R.E., Bachmann, R.J., Kingsley, D.A., Offi, J.T., Allen, T.J.: Parallel Strategies for Implementing Biological Principles into Mobile Robots. International Journal of Robotics Research 22(3), 169–186 (2003)
10. Saranli, U., Buehler, M., Koditschek, D.: RHex: a simple and highly mobile hexapod robot. International Journal of Robotics Research 20(7), 616–631 (2001)
11. Stoeter, S.A., Rybski, P.E., Gini, M., Papanikolopoulos, N.: Autonomous Stair-Hopping with Scout Robots. Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS'02), pp. 721–726, Lausanne, Switzerland (September/October 2002)
12. Morrey, J.M., Lambrecht, B., Horchler, A.D., Ritzmann, R.E., Quinn, R.D.: Highly Mobile and Robust Small Quadruped Robots. IEEE International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas (2003)
13. Kovac, M., Fuchs, M., Guignard, A., Zufferey, J.-C., Floreano, D.: A miniature 7 g jumping robot. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2008), pp. 373–378 (2008)
14. Daltorio, K.A., Gorb, S., Peressadko, A., Horchler, A.D., Ritzmann, R.E., Quinn, R.D.: A Robot that Climbs Walls Using Micro-structured Polymer Feet. International Conference on Climbing and Walking Robots (CLAWAR), London, UK (September 13–15, 2005)
15. Kim, S., Asbeck, A., Provancher, W., Cutkosky, M.R.: SpinybotII: Climbing Hard Walls with Compliant Microspines. Proceedings of the IEEE International Conference on Autonomous Robots, Seattle, WA (July 18–20, 2005)
16. K-TEAM SA Headquarters Switzerland, Chemin du Vuasset, CP 111, 1028 Préverenges, Switzerland
17. Bererton, C., Navarro-Serment, L.E., Grabowski, R., Paredis, C.J.J., Khosla, P.K.: Millibots: Small Distributed Robots for Surveillance and Mapping. Government Microcircuit Applications Conference, pp. 20–23 (March 2000)
18. Fukui, R., Torii, A., Ueda, A.: Micro robot actuated by rapid deformation of piezoelectric elements. Proceedings of the 2001 International Symposium on Micromechatronics and Human Science (MHS 2001), pp. 117–122 (2001)
19. Birch, M.C., Quinn, R.D., Ritzmann, R.E., Pollack, A.J., Philips, S.M.: Micro-robots inspired by crickets. Proceedings of the Climbing and Walking Robots Conference (CLAWAR'02), Paris, France (2002)
20. Clark, J.E., Cham, J.G., Bailey, S.A., Froehlich, E.M., Nahata, P.K., Full, R.J., Cutkosky, M.R.: Biomimetic design and fabrication of a hexapedal running robot. Proceedings of the 2001 IEEE International Conference on Robotics and Automation 4, 3643–3649 (2001)
21. Allen, T.J., Quinn, R.D., Bachmann, R.J., Ritzmann, R.E.: Abstracted Biological Principles Applied with Reduced Actuation Improve Mobility of Legged Vehicles. Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS'03), Vol. 2, pp. 1370–1375, Las Vegas, USA (2003)
22. Morrey, J.M., Horchler, A.D., Didona, N., Lambrecht, B., Ritzmann, R.E., Quinn, R.D.: Increasing Small Robot Mobility via Abstracted Biological Inspiration. IEEE International Conference on Robotics and Automation (ICRA'03) Video Proceedings, Taiwan (2003)
23. Kim, S., Clark, J.E., Cutkosky, M.R.: iSprawl: Design and Tuning for High-Speed Autonomous Open-Loop Running. International Journal of Robotics Research 25(9), 903–912 (2006)
24. Grasmeyer, J.M., Keennon, M.T.: Development of the Black Widow Micro Air Vehicle. AIAA Paper 2001-0127 (2001)
25. Morris, S., Holden, M.: Design of Micro Air Vehicles and Flight Test Validation. Proceedings of the Conference on Fixed, Flapping and Rotary Wing Vehicles at Very Low Reynolds Numbers, pp. 153–176 (2000)
26. Frontiers of Engineering: Reports on Leading-Edge Engineering, 2001 NAE Symposium on Frontiers of Engineering. National Academy of Engineering, p. 12 (2002)
27. Ellington, C.P.: The Aerodynamics of Hovering Insect Flight. Philosophical Transactions of the Royal Society of London B 305(1122), 1–181 (1984)
28. Frampton, K.D., Goldfarb, M., Monopoli, D., Cveticanin, D.: Passive Aeroelastic Tailoring for Optimal Flapping Wings. Proceedings of the Conference on Fixed, Flapping and Rotary Wing Vehicles at Very Low Reynolds Numbers, pp. 26–33 (2000)
29. Jones, K.D., Duggan, S.J., Platzer, M.F.: Flapping-Wing Propulsion for a Micro Air Vehicle. AIAA Paper 2001-0126 (2001)
30. Jones, K.D., Bradshaw, C.J., Papadopoulos, J., Platzer, M.F.: Improved Performance and Control of Flapping-Wing Propelled Micro Air Vehicles. 42nd AIAA Aerospace Sciences Meeting and Exhibit, AIAA Paper 2004-0399, Reno, Nevada (January 5–8, 2004)
31. Lentink, D., Bradshaw, N., Jongerius, S.R.: Novel micro aircraft inspired by insect flight. Comparative Biochemistry and Physiology, Part A 146, S133–S134 (2007)
32. Waszak, M.R., Jenkins, L.N., Ifju, P.G.: Stability and Control Properties of an Aeroelastic Fixed Wing Micro Aerial Vehicle. AIAA Paper 2001-4005 (2001)
33. Shyy, W.: Computational Modeling for Fluid Flow and Interfacial Transport. Elsevier, Amsterdam, The Netherlands (1994; revised printing 1997)
34. Shyy, W., Thakur, S.S., Ouyang, H., Liu, J., Blosch, E.: Computational Techniques for Complex Transport Phenomena. Cambridge University Press, New York (1997)
35. Shyy, W., Udaykumar, H.S., Rao, M.M., Smith, R.W.: Computational Fluid Dynamics with Moving Boundaries. Taylor & Francis, Washington, DC (1996; revised printings 1997 and 1998)
36. Smith, R.W., Shyy, W.: Computational Model of Flexible Membrane Wings in Steady Laminar Flow. AIAA Journal 33(10), 1769–1777 (1995)
37. Jenkins, D.A., Shyy, W., Sloan, J., Klevebring, F., Nilsson, M.: Airfoil Performance at Low Reynolds Numbers for Micro Air Vehicle Applications. Thirteenth Bristol International RPV/UAV Conference, University of Bristol (1998)
38. Ifju, P.G., Ettinger, S., Jenkins, D.A., Martinez, L.: Composite Materials for Micro Air Vehicles. Proceedings of the SAMPE Annual Conference, Long Beach, CA (May 6–10, 2001)
39. Jenkins, D.A., Ifju, P.G., Abdulrahim, M., Olipra, S.: Assessment of the Controllability of Micro Air Vehicles. Micro Air Vehicle Conference, Bristol, England (April 2001)
40. Ettinger, S.M., Nechyba, M.C., Ifju, P.G., Waszak, M.: Vision-Guided Flight Stability and Control for Micro Air Vehicles. Proceedings of the IEEE International Conference on Intelligent Robots and Systems 3, 2134–2140 (2002)
41. Zufferey, J.-C., Klaptocz, A., Beyeler, A., Nicoud, J.-D., Floreano, D.: A 10-gram Vision-based Flying Robot. Advanced Robotics 21(14), 1671–1684 (2007)
42. Lachat, D., Crespi, A., Ijspeert, A.J.: Boxybot: A Swimming and Crawling Fish Robot Controlled by a Central Pattern Generator. Proceedings of The First IEEE/RASEMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2006) 1, 643–648 43. Georgiadis, C., German, A., Hogue, A., Liu, H., Prahacs, C., Ripsman, A., Sim, R., Torres, L.-A., Zhang, P., Buehler, M., Dudek, G., Jenkin, M., Milios, E.: AQUA: An Aquatic Walking Robot. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2004, Sendai, Japan (September 28–October 2, 2004) 44. Ijspeert, A., Crespi, A., Ryczko, D., Cabelguen, J.-M.: From swimming to walking with a salamander robot driven by a spinal cord model. Science 315(5817), 1416–1420 (2007) 45. Ayers, J., Davis, J.L., Rudolph, A. (eds.): Neurotechnology for Biomimetic Robots. The MIT Press, pp. 481–509 (2002) 46. Michelson, R., Helmick, D., Reece, S., Amareno, C.: A Reciprocating Chemical Muscle (RCM) for Micro Air Vehicle “Entomopter” Flight. AUVSI’97, Proceedings of the Association for Unmanned Vehicle Systems International (July 1997) 47. Watson, J.T., Ritzmann, R.E., Zill, S.N., Pollack, A.J.: Control of obstacle climbing in the cockroach, Blaberus discoidalis: I. Kinematics. Journal of Comparative Physiology 188, 39–53 (2002) 48. Mueller, T.J. (ed.): Proceedings of the Conference on Fixed, Flapping and Rotary Wing Vehicles at Very Low Reynolds Numbers, Notre Dame University, Indiana (June 5–7, 2000) 49. Mueller, T.J.: The Influence of Laminar Separation and Transition on Low Reynold’s Number Airfoil Hysteresis. Journal of Aircraft 22, 763–770 (1985) 50. Pringle, J.W.S.: Insect Flight. Cambridge University Press, Cambridge (1957) 51. Ritzmann, R.E., Fourtner, C.R., Pollack, A.J.: Morphological and physiological identification of motor neurons innervating flight musculature in the cockroach, Periplaneta Americana. Journal of Experimental Zoology 225, 347–356 (1983) 52. 
Boria F.J., Bachmann, R.J., Ifju, P.G., Quinn, R.D., Vaidyanathan, R., Perry, C., Wagener, J.: A Sensor Platform Capable of Aerial and Terrestrial Locomotion. Proceedings of IEEE/RSJ 2005 International Conference on Intelligent Robots and Systems (IROS2005), Edmonton, Alberta, Canada, pp. 2–6 (August 2005) 53. Lambrecht, B.G.A., Horchler, A.D., Quinn, R.D.: A Small Insect Inspired Robot that Runs and Jumps. Proceeding of the IEEE International Conference on Robotics and Automation (ICRA ’05), Barcelona, Spain (2005) 54. Procerus Technologies (http://www.procerusuav.com/), Kestrel Installation and Configuration Guide (May 2006) 55. Bachmann, R.J., Boria, F.J., Ifju, P.G., Quinn, R.D., Kline, J.E., Vaidyanathan, R.: Utility of a Sensor Platform Capable of Aerial and Terrestrial Locomotion. Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM2005), Monterey California (July, 2005)
Chapter 19
Towards a Self-Deploying and Gliding Robot
Mirko Kovač, Jean-Christophe Zufferey, and Dario Floreano
Abstract Strategies for hybrid locomotion, such as jumping and gliding, are used in nature by many different animals for traveling over rough terrain. This combination of locomotion modes also allows small robots to overcome relatively large obstacles at a minimal energetic cost compared to wheeled or flying robots. In this chapter we describe the development of a novel palm-sized 10 g robot that is able to autonomously deploy itself from the ground or from walls, open its wings, recover in mid-air, and subsequently perform goal-directed gliding. In particular, we focus on the subsystems that will be integrated in the future: a 1.5 g microglider that can perform phototaxis; a 4.5 g, bat-inspired wing-folding mechanism that can unfold in only 50 ms; and a locust-inspired 7 g robot that can jump more than 27 times its own height. We also review the relevance of jumping and gliding for living and robotic systems, and we highlight future directions for the realization of a fully integrated robot.
19.1 Introduction
Small robots face big problems when it comes to locomotion in natural, rough terrain. This is usually referred to as the "Size Grain Hypothesis" [26], described as an "increase in environmental rugosity with decreasing body size." In the animal kingdom,
M. Kovač (✉)
Laboratory of Intelligent Systems, EPFL, Lausanne, Switzerland
e-mail: [email protected]
there are many animal species that master locomotion in rough terrain very well by using a combination of different locomotion modes, such as jumping and subsequent gliding flight. This allows them to minimize the energetic cost of transport, overcome large obstacles, escape predators, and reduce the potentially hazardous impact forces on landing. Examples of animals that combine self-deployment with gliding can be found in many species with different evolutionary origins. Gliding lizards [41, 37, 36, 31], locusts [45], flying fish [17, 5], gliding geckoes [25, 59], gliding ants and spiders [57, 58, 50, 16], gliding squid [32, 5], gliding frogs [35, 20], bats [52], gliding mammals [38, 42, 7, 15], gliding snakes [48, 47], and many birds use combinations of jumps and gliding flight. Gliding can also be found among extinct species such as Sharovipteryx and some lizard-like reptiles with wings similar to those of the Draco lizard. It has also been argued [19, 18, 34, 10] that gliding, owing to its simplicity, may have been the precursor to flapping flight in insects and vertebrates. As the focus of this chapter is technological, we refer the reader to [51, 18, 5, 40] for in-depth reviews of self-deploying and gliding animals with detailed descriptions of morphology and behavior. It is important to mention here, however, that these animals barely use steady-state gliding, but change their velocity and angle of attack dynamically during flight. This increases the gliding ratio, which is defined as the horizontal distance traveled per unit height loss, or allows the animal to land precisely on a spot, such as perching on a tree branch. The combination of jumping and gliding is also interesting for miniature robots because it allows them to overcome larger obstacles compared to wheeled and
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_19, © Springer-Verlag Berlin Heidelberg 2009
legged robots at the same scale and because it requires less energy than flying robots of similar size. In this chapter, we describe our project on the development of a palm-sized microglider of around 10 g that is able to autonomously self-deploy from the ground or from walls, open its wings, recover from any position in mid-air, and perform goal-directed gliding. To date, there have been very few attempts to build robots that combine terrestrial and aerial locomotion. Armour et al. [4] recently presented a 700 g jumping robot of octahedral shape with wing-like structures to reduce the impact force on landing. This design is able to clear heights of up to 1.17 m, but the addition of the wings actually reduces the range of the jump instead of extending it. A related project with similar aims is the long-jumping "Grillo" mini robot [46]. The prototypes presented so far range in mass between 8 and 80 g and can jump obstacles of approximately 5 cm in height. Another recent development, also described in Chap. 18 of this book, is a hybrid sensory platform [8] that can crawl using "whegs," a combination of wheels and legs, fold its wings to enter narrow spaces, and perform propelled flight after dropping down from roofs. Its weight is approximately 100 g, with a wingspan of 30.5 cm. It possesses flexible wings as a passive damping mechanism to deal with wind gusts, but no quantitative characterization of their efficiency has been presented so far. A limitation of this flying platform is its relatively high weight per unit wing area, which necessitates a height loss of 6.6 m to recover for the transition to propelled flight after dropping from a roof. To date, this transition has been shown only when the airplane is dropped along the major axis of the fuselage.
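The gliding ratio introduced above (horizontal distance traveled per unit height loss) maps a launch height directly to horizontal range. A minimal sketch in Python; the 2 m launch height is illustrative, while the ratio of 5.6 is the one reported later in this chapter for our microglider:

```python
# Horizontal range implied by a gliding ratio, defined in the text as the
# horizontal distance traveled per unit height loss. The 2 m launch height
# is an assumed example; the ratio 5.6 is the measured value for the glider.

def glide_range(height_loss_m, gliding_ratio):
    """Horizontal distance covered while descending by height_loss_m."""
    return height_loss_m * gliding_ratio

# A launch from 2 m at a gliding ratio of 5.6 covers about 11.2 m of ground.
print(glide_range(2.0, 5.6))  # -> 11.2
```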
In summary, only a few projects address hybrid locomotion as an efficient way of moving in rough terrain, and none has yet successfully integrated jumping and gliding to overcome large obstacles and reduce the energetic cost of transport. In the following sections we present our first steps toward the creation of a self-deploying microglider. We start by outlining the miniaturization and efficiency of gliding and the development of a gliding robot. We then proceed to investigate mechanisms for wing folding and rapid deployment, describe the
M. Kovač et al.
prototype of a jumping microrobot, and conclude with a discussion of how to integrate it with the gliding system.
19.2 Gliding in Robotics
Wood et al. recently presented a remarkable 2.2 g microglider [56] that has been specifically designed for gliding flight. It uses a four-bar piezo actuator for rudder deflection and is intended to avoid obstacles using optical flow. Although this realization is a masterpiece of micromechatronics, no characterization of autonomous flight control has been provided so far. Its relatively high flight velocity of more than 5 m/s largely limits its applicability in tight environments, as it requires a turning radius of 8 m to perform a U-turn [21]. As a first step toward our self-deploying microglider, we developed a 1.5 g gliding robot [29] (Fig. 19.1) that can perform phototaxis, similar to the ground vehicles proposed by Braitenberg [9]. To the best of our knowledge, this microglider is the lightest autonomously flying system to date. In order to achieve this very low weight, we opted for a relatively new kind of steering system: a 0.2 g shape memory alloy (SMA) actuator that is harmoniously integrated into the structure of the microglider and allows for direct control of the rudder. As for navigation, two
Fig. 19.1 The 1.5 g SMA-actuated microglider performing autonomous phototaxis; it has a wingspan of 24 cm and a length of 22 cm and flies at 1.5 m/s
19 Self-Deploying Microglider
tiny photoreceptors and a simple control strategy were used to detect and follow light gradients.
19.2.1 Airframe, Sensing, and Actuation
The goal of the mechanical airframe design is to reduce the weight as far as possible while keeping the structure simple and easy to produce. As in our indoor flying robots [60], also described in Chap. 6 of this book, we chose carbon fiber for the fuselage, the wing frame, and the rudder (Fig. 19.2). The wing surface is made of biaxially oriented polyethylene terephthalate (boPET) film (trade name "Mylar"), chosen for its high tensile strength. This construction principle leads to an airframe weight of only 0.31 g and has the advantage of being slightly flexible, and thus better able to absorb landing impact forces without breaking. A complete overview of the weight budget is given in Table 19.1. As for the actuation system, different designs and materials could potentially be employed for actuating the control surfaces, such as magnetic coils, piezo actuators, or SMAs. Table 19.2 compares the three types of actuators used on airplanes of less than 10 g. Small magnetic coils are easily available on the market, but deliver comparatively small forces and are
Table 19.1 Weight budget of the microglider

  Part                   Mass (g)
  Electronic board       0.33
  Battery 10 mAh         0.55
  Fuselage               0.18
  Front wing             0.1
  Rudder                 0.03
  Light sensors          0.1
  SMA actuator           0.2
  Cables and soldering   0.02
  Total mass             1.51
difficult to control precisely in position. Piezo materials, on the other hand, deliver relatively high forces at very low power consumption, but are limited in displacement and usually require a relatively sophisticated and careful fabrication process and mechanical design. In addition to the actuator itself, the relatively heavy electronics needed to reach the required high voltage (200 V in [56]) diminish the otherwise promising properties of this approach. We therefore decided to use thin SMA wires because of their simplicity, high power density, and comparatively large displacement of 5% of their length. For rudder control, we used commercially available nickel titanium alloy (Nitinol) wire, also known as "Muscle Wire" [1].
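The 5% usable strain quoted above fixes the actuator stroke once a wire length is chosen. A back-of-the-envelope sketch; the 40 mm wire length is illustrative, not the robot's actual dimension:

```python
# Stroke of a thin SMA wire from its usable contraction, roughly 5% of the
# wire length as stated in the text. The 40 mm length is an assumed example.

def sma_stroke_mm(wire_length_mm, strain=0.05):
    """Linear pull available at the wire end for a given usable strain."""
    return wire_length_mm * strain

# A 40 mm Nitinol wire yields about 2 mm of pull at the rudder horn.
print(sma_stroke_mm(40.0))  # -> 2.0
```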
Fig. 19.2 Construction plan of the microglider: (a) main wing, (b) rudder, (c) electronic board and battery, (d) SMA actuator, (e) light sensors
Table 19.2 Actuator comparison

  Actuator type        Mass (g)   Drive electronics (g)   Power (mW)   Commercial availability   Mechanical complexity   Force output
  Magnetic coils [2]   0.15       0.02                    180          +++                       ++                      -
  Piezo [56]           0.05       0.2                     7            +                         - - -                   +++
  SMA [27]             0.12       0.01                    171          +                         +                       ++

  (+++: very favorable; - - -: very unfavorable)
The working principle of the SMA wire is that it exploits the crystallographic structure change from martensite to austenite (thermoelastic martensitic transformation) when heated above the transition temperature. This phase change produces a comparatively high force that can be used for actuation. A well-known drawback of SMAs in general is their relatively high power consumption. However, for thin wires of 25 μm
diameter, the power consumption is only 160 mW, which is comparable to small magnetic actuators. The actuator we developed (Fig. 19.3) consists of a copper–beryllium horn (Fig. 19.3a), a 0.7 mm steel tube (Fig. 19.3e), a frame with an electrical interface (Fig. 19.3g), and two 25 μm SMA wires (Fig. 19.3d) attached to the frame and the horn. The stability of the actuator is provided by the carbon fuselage (Fig. 19.3f). The wires are
Fig. 19.3 SMA actuator: (a) horn, (b) spring, (c) piston, (d) SMA wire, (e) steel tube, (f) carbon fuselage, (g) frame with electrical interface to the electronic board, (h) rudder. A PWM
signal from the microcontroller heats up the SMA wire on one side. Due to the crystalline structure change this wire contracts and bends the rudder to one side
activated with a pulse width modulation (PWM) signal, which leads to a maximal force of 0.069 N (equivalent to a 7 g load) at the attachment point of the horn. This deflects the horn and the rudder, which is glued onto the horn. The rotation point is the attachment point of the other SMA wire, and the counterpart of this movement is the custom-made brass spring (Fig. 19.3b), which returns the rudder to the neutral position at zero PWM duty cycle. This actuator is then integrated with the airframe, a PIC16 microcontroller, two light sensors, and a simple proportional control strategy [29], which enables the microglider to autonomously detect and follow light gradients, as shown in Fig. 19.4. Launched from a catapult device that provides the glider with a take-off velocity of 2 m/s, it displays a gliding ratio of 5.6, which is relatively high compared to many gliding animals at such small scales. In order to characterize its phototaxis capabilities, we carried out three series of launches, each with a different position of the light
bulb, and showed that the glider consistently lands near the light source (Fig. 19.4).
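The proportional light-following law can be sketched as a Braitenberg-style controller: the difference between the two photoreceptor readings sets a signed PWM duty cycle, heating one of the two antagonistic SMA wires. The sensor normalization, gain, and function names below are assumptions for illustration; the actual law runs on the onboard PIC microcontroller and is detailed in [29].

```python
# Hypothetical sketch of the proportional phototaxis law: the difference of
# two photoreceptor readings (normalized 0..1) is mapped to a signed PWM
# duty cycle; positive heats the right SMA wire, negative the left. The
# gain and normalization are assumptions, not the actual firmware values.

def rudder_duty(left_light, right_light, gain=0.8, max_duty=1.0):
    error = right_light - left_light            # > 0: light source to the right
    return max(-max_duty, min(max_duty, gain * error))

# Light brighter on the right: steer right (positive duty).
assert rudder_duty(0.2, 0.7) > 0
# Balanced illumination: rudder stays neutral.
assert rudder_duty(0.5, 0.5) == 0.0
```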
19.2.2 Wing Folding
The technology described above is promising for a small jumping robot, but fixed open wings offer too much air resistance during the deployment phase of the jump. In order to maximize the jump height, the exposed wing surface has to be as small as possible during deployment. We thus aimed at developing a wing-folding mechanism that keeps the wings folded while the robot is in the deployment phase and quickly unfolds them when the robot starts to lose altitude. The requirements for such a mechanism are that it open very quickly, be as lightweight as possible, be stable when open, and be robust enough to withstand occasional crash landings.
Fig. 19.4 Top view of the setup for the phototaxis experiments. Stars indicate the three possible light source locations. For each of the three locations, 12 subsequent launches have been carried out. The triangles mark the landing positions
Nature offers many foldable structures that are a potential source of inspiration for the design of such a mechanism. For example, leaves unfold from a very compact package into the completely unfolded leaf with high structural stability [33, 43]. Other ways of unfolding can be found in soft animals, such as anemones and various worms [55, 53]. Many insects use origami-like mechanisms to fold their wings, such as the hind wings of Dermaptera [23, 24], and most birds and bats fold their wings to protect their often fragile structures. Figure 19.5 shows some examples of folding structures found in nature. Origami-like structures are very interesting and, when carefully designed, would even allow the wing to unfold into a 3D shape and form a cambered wing (similar to the wooden roof structures in [12]; see Fig. 19.5).
A difficulty for this technology, however, is to find an appropriate material that is light enough and can be used at such small scales. After a detailed evaluation of all these solutions and several conceptual prototypes, we decided to adopt an abstracted form of the wing-folding principle used by bats. The main advantages of this solution are its structural stability when open and the possibility to easily build the skeleton out of carbon and cover the wing itself with Mylar foil so as to yield a minimal weight. The realized prototype (Fig. 19.6) weighs only 4.5 g and is able to unfold its wings in 50 ms on command using an SMA-based locking mechanism. The skeleton of the wing-folding mechanism consists of six hinges that are interconnected with 1 mm
Fig. 19.5 A selection of folding structures: (A) hind wings of Dermaptera [23, 24]; (B) model of a wooden roof structure [12]; (C) folding leaves [33]; (D) wing folding in bats [39]
Fig. 19.6 Bat-inspired wing-folding system. A string (a) is attached to the wing tips (b) and is rolled on a spool (c) to fold the wing
carbon rods and covered with aluminum-coated 5 μm Mylar foil (Fig. 19.6). The wings are folded by rolling up a string (a) that is attached to the wing tips (b). Each hinge contains a torsion spring encapsulated in a polyoxymethylene (POM) frame (Fig. 19.8). By rolling up the string, the
Fig. 19.7 A complete unfolding sequence takes only 50 ms
torsion springs store elastic energy and unfold the wing once the string is released. A complete unfolding sequence takes only 50 ms and can be seen in Fig. 19.7. In order to roll up and release the string, we developed an SMA-based unlocking mechanism, which is described next.
Fig. 19.8 The wing-folding mechanism contains six hinges which are interconnected with carbon rods. The torsion spring (a) is embedded in a POM frame (b)
Fig. 19.9 SMA-based release mechanism. The string (a) is attached to the wing tip and gets rolled up on the spool (b), which is connected to the gear (c). A 37 μm SMA wire pulls the spool laterally to unlock it from the gear and to allow the
wing to unfold. After relaxing the SMA, the spring (d) pushes the spool back and connects it with the gear for the next folding cycle
In order to fold the wing, the string (Fig. 19.9a) attached to the wing tip is rolled onto the spool (Fig. 19.9b). The spool itself is connected to a gear (Fig. 19.9c), which is interfaced with the gearbox of the jumping mechanism, as we will describe in Sect. 19.4. To release the wing, a 37 μm SMA wire contracts and pulls the spool laterally, unlocking it from the gear and allowing the energy stored in the torsion springs of the hinges to unfold the wings. As soon as the unfolding is completed, the spring (Fig. 19.9d) pushes the spool back and reconnects it with the gear (Fig. 19.9c) in order to fix the position and allow the next folding sequence.
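The fold/release cycle just described behaves like a three-state machine: winding the string folds the wings, the spool locks against the gear, and an SMA pulse releases the stored torsion-spring energy. A minimal sketch; the state and event names are illustrative, not taken from any actual firmware:

```python
# Minimal state machine mirroring the fold/release cycle of the wing
# mechanism. States and events are illustrative names, not actual firmware.

UNFOLDED, FOLDING, LOCKED = "unfolded", "folding", "locked"

TRANSITIONS = {
    (UNFOLDED, "wind_spool"): FOLDING,   # gearbox rolls up the string
    (FOLDING, "fully_wound"): LOCKED,    # spool stays engaged with the gear
    (LOCKED, "sma_pulse"): UNFOLDED,     # SMA unlocks spool; springs open wings
}

def step(state, event):
    """Advance the mechanism; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

s = step(step(UNFOLDED, "wind_spool"), "fully_wound")
assert s == LOCKED
assert step(s, "sma_pulse") == UNFOLDED   # the 50 ms unfolding on command
```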
19.3 Jumping
The jumping mechanism must be lightweight and capable of propelling the glider as high as possible into the air. Table 19.3 summarizes the performance of existing jumping robots with onboard energy and control. None of these systems meets the weight and size constraints of our self-deploying microglider, i.e., palm sized with an entire system weight of 10 g. We thus developed a 5 cm jumping mechanism [28] (Figs. 19.10, 19.11) weighing a mere 7 g including electronics and battery. It consists of a gearbox with motor, gearwheels, and cam; the main leg; carbon rods as feet; an infrared receiver; and a 10 mAh lithium polymer battery. Its design allows manual adjustment of the take-off angle, the jumping force, and the force profile during the acceleration phase. This is useful for obtaining a desired trajectory and jumping height, and for optimizing jumping on slippery surfaces or compliant substrates. It has been shown [3, 6, 44] that at small size it is most advantageous to slowly charge an elastic element, release it with a click mechanism, and amplify the take-off velocity using the legs as catapults, rather than performing a squat or countermovement jump. This
principle is used by most small jumping animals, such as frogs [44], desert locusts [6], stick insects [14], froghoppers [13], click beetles [3], and fleas [22]. We applied the same biomechanical design principles and achieved a very high jumping performance compared to existing jumping robots (Table 19.3).

Table 19.3 State of the art on jumping robots with onboard energy and control

  Name                            Mass [g]   Approx. jump height [cm]   Jump height per mass [cm/g]   Approx. jump height per size [-]
  Rescue robot [54]               2000       80                         0.04                          3.5
  Minimalist jumping robot [11]   1300       90                         0.07                          6
  Jollbot [4]                     465        21.8                       0.05                          1.4
  Glumper [4]                     700        50                         0.23                          3.2
  Scout [49]                      200        35                         0.18                          3.5
  Mini-Whegs [30]                 190        22                         0.12                          2.2
  Grillo [46]                     8–80       5                          0.63–0.06                     1
  Jumping robot presented here    7          138                        19.77                         27.6

Fig. 19.10 A 7 g jumping robot prototype capable of clearing obstacles of up to 1.4 m height, shown along with a desert locust, which uses the same biomechanical design principle for jumping

Fig. 19.11 CAD model of the gearbox: (a) brass bearing to reduce friction, (b) distance piece to align the two body plates, (c) cam axis, (d) slot in main leg for the cam, (e) main leg, (f) series of holes for spring setting, and (g) the two body plates. (1), (2) 0.2 mm polyoxymethylene (POM) gears and (3) 0.3 mm POM gear

By changing the proportions of the four-bar leg mechanism (Fig. 19.12) we can generate different foot tip trajectories, which translate into different ground force profiles, acceleration times, and take-off angles, depending on which link length is changed. The amount of energy stored in the springs can be adjusted between 106 and 154 mJ in steps of 6 mJ by changing the spring setting (Fig. 19.11f). The two body plates (Fig. 19.11g) consist of Cibatool, a material commonly used for rapid prototyping that is easily machined and lightweight. The cam and gears are manufactured from polyoxymethylene (POM) due to its low weight and low surface friction
coefficient. For critical structural parts in the body and legs we used polyetheretherketone (PEEK) due to its very high strength-to-weight ratio.

Fig. 19.12 Sketch of the four-bar linkage jumping design and the trajectory of the foot tip P during takeoff: (a) the input link and (b) the ground link. Changing the lengths of these four bars allows one to adjust the take-off angle (change distance (e)), the acceleration time (change distances (a) and (c)), and the trajectory of the foot tip P (change the ratio (b)/(d))

Table 19.4 presents the weight budget of the robot. The entire, fully functional, remote-controlled prototype weighs 6.98 g in its current form. Further weight reduction could be achieved by optimizing the two body plates, e.g., by drilling additional holes in them, and by using a smaller infrared receiver and battery. Figure 19.13 depicts a complete take-off sequence of the jumping prototype carrying a payload of 3 g on top of the 7 g robot. In order to illustrate the adaptability of the jumping force, Fig. 19.14 shows the jump trajectories, extracted from high-speed movies, for the jumping robot without and with a payload of 3 g. The maximal height obtained without additional payload was 138 cm. The acceleration time was 15 ms, the initial take-off velocity 5.96 m/s, and the velocity at the top 0.9 m/s. The complete jump
Table 19.4 Weight budget of the jumping robot

  Part                   Material          Weight [g]
  Body frame             Cibatool/PEEK     1.4
  Cam                    POM               0.78
  Gears                  POM               0.63
  Main leg               Aluminium         0.76
  Plastic parts on leg   PEEK/Carbon       0.32
  Screws and axis        Steel/brass       0.79
  Two springs            Spring steel      0.41
  Motor                                    0.65
  Total mass mechanism                     5.74
  LiPo battery                             0.48
  IR receiver                              0.76
  Total mass prototype                     6.98
Fig. 19.13 Take-off sequence of our jumping robot compared to a desert locust
Fig. 19.14 Jump trajectory at different spring settings for the prototype with and without an additional payload of 3 g
duration is 1.02 s and the distance traveled 79 cm at a take-off angle of 75°. This means that the prototype presented here is capable of overcoming obstacles more than 27 times its own body size. The motor recharges the mechanism for one jump cycle in 3.5 s; using a 0.48 g LiPo battery, the robot can thus perform approximately 108 jumps per charge.
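These figures can be cross-checked with a drag-free ballistic model; a sketch using only the values reported above. The lossless assumption is ours, so the ideal apex and range necessarily overestimate the measured 1.38 m and 0.79 m, the gap being consistent with aerodynamic drag on such a light robot:

```python
import math

# Drag-free cross-check of the reported jump: m = 7 g, v0 = 5.96 m/s at a
# 75 degree take-off angle, spring energy adjustable up to 154 mJ (all
# values from the text). Lossless, so ideal figures exceed measured ones.

G = 9.81
m, v0, theta = 0.007, 5.96, math.radians(75.0)

kinetic = 0.5 * m * v0**2                    # ~0.124 J take-off energy,
assert 0.106 < kinetic < 0.154               # within the 106-154 mJ spring range

apex = (v0 * math.sin(theta))**2 / (2 * G)   # ideal apex, ~1.69 m
rng = v0**2 * math.sin(2 * theta) / G        # ideal range, ~1.81 m
assert apex > 1.38 and rng > 0.79            # drag accounts for the gap
```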
19.4 System Integration
The next step is to combine the jumping mechanism with the wing-folding system in order to allow the robot to jump and unfold the wings at the top of the
jumping trajectory. This, however, poses a number of integration challenges related to the actuation and dynamics of the complete system. Figure 19.15 depicts a CAD model of a possible integration of the two mechanisms. The gear (Figs. 19.9c and 19.15a) of the wing-folding mechanism is interfaced with the third stage of the gear system of the jumping mechanism (Figs. 19.11(3) and 19.15b). While charging the legs for jumping, this coupled system also folds the wings. Once the robot takes off, the wings are opened by the SMA-based release mechanism described in Sect. 19.2.2. An important challenge here is to define the time at which to unfold the wings. Since the forces acting on the system are very small at the top of the jump trajectory, it is difficult to precisely determine when it is best
Fig. 19.15 CAD model of a possible integration of the subsystems. The gear from the wing-folding mechanism (a) is interfaced with the third stage of the gear system of the jumping mechanism (b). While charging the legs for jumping, it also folds the wings, which it can release on command at the top of the jumping trajectory
to unfold the wings. A simple solution may be to open the wings at a fixed time after takeoff. However, this integration of the wing-folding and jumping mechanisms still has the limitation that the attitude of the glider at the top of the jump has to be upright in order for the robot to recover and perform stable gliding flight. Future work will address the question of how to recover from any attitude in mid-air and ensure a proper transition to the subsequent gliding phase.
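A fixed-delay policy of the kind discussed above can be sketched by taking the drag-free time to apex as the preset; the function and its use as a trigger are hypothetical, and since drag brings the real apex somewhat earlier, the constant would be tuned experimentally:

```python
import math

# Fixed-delay wing deployment: release the wings a preset time after the
# jump trigger, here the drag-free time to apex. The take-off state comes
# from the text; the policy itself is a sketch, not the actual controller.

G = 9.81

def apex_delay_s(v0, takeoff_angle_deg):
    """Drag-free time from takeoff to the top of the ballistic arc."""
    return v0 * math.sin(math.radians(takeoff_angle_deg)) / G

# With the measured take-off state (5.96 m/s at 75 degrees), the wings
# would be released roughly 0.59 s after the jump is triggered.
delay = apex_delay_s(5.96, 75.0)
assert 0.55 < delay < 0.62
```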
19.5 Conclusion
In this chapter, we highlighted jumping and gliding as a promising locomotion strategy to move across rough terrain. Although there still remain challenges in the integration of the components that a self-deploying microglider will need, the three subsystems described here may also be used as stand-alone platforms. For example, the jumping prototype could be equipped with an uprighting mechanism, a small communication unit, and sensors such as cameras or chemical sensors to perform environmental or security monitoring. The gliding system may also be equipped with sensors and launched from the roofs of buildings or from airplanes to monitor the environment. The wing-folding mechanism may allow the robot to be thrown high into the air by hand or by a catapult system.
Biological systems were a useful source of inspiration: for example, the wing-folding mechanism adopts the mechanical principle that bats use to fold their wings, which leads to a very lightweight, simple, and stable mechanism. Another example of direct biological inspiration is the jumping robot, which, in the same way as, e.g., locusts or fleas, first slowly charges an elastic element in the legs and then releases it quickly using a click mechanism to perform a catapult jump. On the other hand, such a robot may also be used as a physical platform to test models of jumping and gliding in nature. The parameters that can be adjusted in our robotic platform include (i) the mass, (ii) the strength of the motor and spring in the jumping mechanism, (iii) the ground force profile during the acceleration phase of jumping, (iv) the leg length and flexibility, (v) the wing size and shape, and (vi) the coordination between the jumping and wing-folding subsystems, e.g., when to open the wings and how to stabilize and recover. By modifying these parameters, scale effects in jumping animals, such as the interplay between mass and size, can be investigated. The way in which the addition of wings affects jumping performance and the distance traveled per unit energy may also give insight into the evolution of flight.
Acknowledgments The authors would like to thank Martin Fuchs and Gregory Savioz for their significant contributions to the development of the wing-folding mechanism and the jumping robot. We would also like to thank the Atelier de l'Institut de production et Robotique (ATPR) and André Guignard for their competent advice and endurance in the iterative fabrication process. Many thanks to Hans Ulrich Buri at EPFL for the fruitful discussions and advice on origami structures. This project is funded by EPFL and by the Swiss National Science Foundation, grant number 200021-105545/1.
Chapter 20
Solar-Powered Micro-air Vehicles and Challenges in Downscaling

André Noth and Roland Siegwart
Abstract In many aerial vehicle applications, flight duration is a key factor in the success of a mission. One solution that significantly improves endurance is the use of solar energy. This chapter presents a conceptual design methodology for the sizing of solar-powered airplanes, applicable to MAVs as well as to manned sailplanes, that optimizes the sizing of the different elements. It uses mathematical models, for example for weight prediction, that were established over a very large range of sizes. This makes it possible to clearly point out the problems that occur when scaling down solar MAVs.
20.1 Introduction

Research in bio-inspired robotics is indisputably growing. Examples include fly-inspired optic flow for perception, as in Chaps. 3, 5, and 6; artificial nervous systems for control developed through genetic algorithms; and flight mechanics based on studies of flapping wings using tiny artificial muscles, as in Chap. 16. The autonomous MAVs coming out of this research can rarely be tested for more than 15–20 min, simply because, in terms of energy storage and flight efficiency, robotics is still far away from biology [10]. One solution to extend the endurance of MAVs is the use of solar energy. With solar cells integrated in their structure and dedicated electronics, called a maximum power point tracker (MPPT), to manage them, they would be able to acquire energy from the sun and use it for flight propulsion and control, any surplus being stored for higher power demands.
20.1.1 State of the Art

The history of solar aviation includes numerous very successful airplanes powered only by this abundant and free energy [9]. In 1974, two decades after the development of the silicon photovoltaic cell, R.J. Boucher and his team designed the first solar-powered aircraft, which performed a 20 min flight. The new challenge that fascinated the pioneers was then to achieve manned flight solely powered by the sun. In 1980, this was realized with the Gossamer Penguin, designed by Dr. Paul B. MacCready and AeroVironment Inc. Since then, many projects have been started, most of them aimed at high-altitude long-endurance platforms with wingspans of several tens of meters. Unfortunately, solar flying platforms below 1 m are far less numerous. A few hobbyists have developed such MAVs, like the SolFly, Sol-Mite, and Micro-Mite. In the academic community, Roberts et al. designed the 38 cm wingspan SunBeam [11], and Fuchs and Diepeveen worked on the 77 cm Sun-Surfer. However, they all encountered problems achieving stable long-duration flight.
20.1.2 Objectives and Structure of this Chapter

A. Noth, Autonomous Systems Lab, ETHZ, Zürich, Switzerland. e-mail: [email protected]
D. Floreano et al. (eds.), Flying Insects and Robots, DOI 10.1007/978-3-540-89393-6_20, © Springer-Verlag Berlin Heidelberg 2009

This chapter aims at investigating precisely the problems that occur when targeting solar flight at the MAV
size, by focusing on the global design of such flying platforms. We will not tackle only a precise problem such as airfoil selection or propeller optimization, but rather concentrate on the optimal sizing of all the elements, with particular emphasis on scaling effects. For this purpose, we will introduce the design methodology that was developed within the framework of the Sky-Sailor project1 at the Autonomous Systems Lab of ETHZ in Zürich. A first example will show how it was applied to the 3.2 m wingspan Sky-Sailor UAV. Then, using the mathematical models developed for each subpart, such as the mass and efficiency of a motor with respect to its power, the advantages and drawbacks of downscaling will be presented. This will allow an adaptation of the parameters used in the methodology in order to design a solar MAV.
20.2 Design Methodology

The methodology that we propose is based on two simple balances:
• Weight balance: the lift force has to be equal to the weight of all the elements constituting the airplane.
• Energy balance: the energy collected from the solar panels during a day, or during the light period, has to be equal to or greater than the electrical energy needed by the motor during the entire flight.
These two balances are represented in Fig. 20.1. It is clear that the propulsion group and the airframe cannot be dimensioned without knowing the total mass to lift, but this value is the sum of the masses of all the airplane elements, which will in turn be sized according to the energy required by the propulsion group. From here on, and considering the type of mission and the payload to embed, there are two different methods to solve this loop, i.e., to design the airplane.
1 http://sky-sailor.epfl.ch/
A. Noth and R. Siegwart
Fig. 20.1 Energy and mass balances: the energy needed by the aerodynamics and propulsion group determines the sizing of the solar cells, solar charger, and battery, whose masses, together with the payload and plane structure, make up the mass to be lifted
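To make the energy balance concrete, here is a minimal sketch of the minimum solar-cell area needed for a 24 h flight. It relies on assumptions beyond what the text states explicitly: a sinusoidal irradiance profile peaking at I_max (so a panel of area A collects A·I_max·(2/π)·T_day per day) and night consumption routed through the battery with charge/discharge losses; the 16 W consumption figure below is an illustrative round number.

```python
from math import pi

def min_solar_area(p_elec, t_day, t_night, i_max=950.0,
                   eta_sc=0.169, eta_cbr=0.90, eta_mppt=0.97,
                   eta_chrg=0.95, eta_dchrg=0.95, eta_wthr=0.7):
    """Minimum solar-cell area [m^2] so that the energy collected over one
    day covers daytime consumption plus night consumption via the battery.
    Assumes a sinusoidal irradiance profile with peak i_max [W/m^2];
    default efficiencies follow Table 20.1/20.2."""
    # Energy needed over 24 h: daytime directly, night through the battery
    e_needed = p_elec * (t_day + t_night / (eta_chrg * eta_dchrg))
    # Energy collected per unit area: the sinusoid integrates to (2/pi)*i_max*t_day
    e_per_m2 = (2.0 / pi) * i_max * t_day * eta_sc * eta_cbr * eta_mppt * eta_wthr
    return e_needed / e_per_m2

# Sky-Sailor-like numbers: ~16 W total electrical power, 13.2 h day
area = min_solar_area(16.0, 13.2 * 3600, 10.8 * 3600)
```

With these numbers the result is roughly half a square meter, which is consistent with the solar cell area of the prototype described later in this chapter.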
• The discrete and iterative approach consists in selecting a first set of components (motor, solar panels, battery, etc.) based on a pure estimate of the final required power or on previous designs. Then, knowing their total mass, the wing surface and propulsion group can be sized. Having chosen a precise motor, gearbox, and propeller, one can calculate the power needed for level flight. This value is then compared with the power available from the previously selected solar generator, and so on: an iterative process takes place, each time refining the selection and improving the design, hopefully ending with a converging solution.
• The approach used here is analytical and continuous: it describes all the relations between the components with analytical equations, using models of the characteristics of each of them. This method has the benefit of directly providing a unique and optimized design, but requires very good mathematical models. In the present case, an important effort was made to make these models accurate over a very wide range, so that the methodology can be applied to solar MAVs as well as to manned solar airplanes.
The first steps are thus to establish the expression of the power needed for the aircraft in level flight and of the solar energy available, and then to develop the weight prediction models for all the airplane elements. All the details of these developments can be found in [8] and [9]; they are summarized in compact form in Fig. 20.2, which represents the problem of solar airplane conceptual design graphically and in fact contains the same loop as Fig. 20.1, displayed in a compact mathematical manner.
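The first, iterative approach described above can be sketched as a simple fixed-point loop. The mass and power models below are crude stand-ins invented for illustration, not the chapter's models:

```python
def iterative_sizing(m_fixed, wing_area, mass_model, power_model,
                     m_init=1.0, tol=1e-6, max_iter=100):
    """Iterate between 'mass to lift' and 'energy needed' until the two
    balances agree. mass_model(power) returns the mass of the energy-related
    components; power_model(mass, wing_area) the level-flight power."""
    m = m_init
    for _ in range(max_iter):
        p_level = power_model(m, wing_area)      # power needed to lift m
        m_new = m_fixed + mass_model(p_level)    # resize components for p_level
        if abs(m_new - m) < tol:                 # the two balances agree
            return m_new
        m = m_new
    raise RuntimeError("design loop did not converge")

# Crude stand-in models: power ~ m^1.5 / sqrt(area); component mass ~ power
p_model = lambda m, s: 5.0 * m ** 1.5 / s ** 0.5
m_model = lambda p: 0.05 * p
mass = iterative_sizing(m_fixed=1.0, wing_area=1.0,
                        mass_model=m_model, power_model=p_model)
```

Whether such a loop converges at all depends on the models; this is precisely the question that the analytical approach settles in closed form below.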
Fig. 20.2 Schematic representation of the design methodology: from the fixed masses and the masses of the plane structure, solar panels, MPPT, battery, and propulsion group, the total mass is obtained; it determines the mechanical and electrical power for level flight and the total electrical power consumption, which in turn set the solar cell area and the component masses
In order to extract meaningful information from the 30 parameters that our model contains, it is necessary to distinguish three different classes:
• The first group is composed of parameters that are linked to a technology and are constant, or can be regarded as constant for a very good design. This is, for example, the case of the motor or propeller efficiencies, which should be around 85% when optimized for a specific application [13].
• The second group of parameters is linked to the mission: the air density, given by the flight altitude; the day and night durations, depending on the date and the location; and the mass and power consumption of the payload.
• Finally, the last group is composed of the parameters that we vary during the optimization process in order to determine the airplane layout; for these we should use the term variables rather than parameters. They are the wingspan and the aspect ratio of the wing.
A complete listing of these parameters is presented in Tables 20.1, 20.2, and 20.3. The values mentioned there were used for the design of the 3.2 m wingspan Sky-Sailor prototype.

The process to solve the loop analytically is quite simple. Considering the point where the masses of all elements are summed up in Fig. 20.2, and using the substitution variables ai, we can write

  m − a0 − a1 (a7 + a8 + a9 (a5 + a6)) (1/b) m^(3/2) = a2 (a7 + a9 (a5 + a6)) + a3 + a4 b^x1    (20.1)

Grouping the constants as a10 = a1 (a7 + a8 + a9 (a5 + a6)) and a11 = a0 + a2 (a7 + a9 (a5 + a6)) + a3, this becomes

  m − a10 (1/b) m^(3/2) = a11 + a4 b^x1    (20.2)

or, more compactly, with a12 = a10/b and a13 = a11 + a4 b^x1,

  m − a12 m^(3/2) = a13    (20.3)

Equation (20.3) has a positive, non-complex solution, which makes physical sense, only if

  a12^2 a13 ≤ 4/27    (20.4)
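Given values of a12 and a13, Eq. (20.3) can be solved numerically for the total mass. The sketch below (with illustrative, made-up values for a12 and a13) first checks the feasibility condition of Eq. (20.4) and then finds the smaller positive root by bisection; this is the physically meaningful one, since it reduces to m = a13 as a12 → 0:

```python
def solve_total_mass(a12, a13, tol=1e-9):
    """Solve m - a12 * m**1.5 = a13 for the smaller positive root, the
    physically meaningful total mass. Returns None when condition (20.4),
    a12**2 * a13 <= 4/27, is violated (infeasible design)."""
    if a12 ** 2 * a13 > 4.0 / 27.0:
        return None                            # no physically sensible solution
    f = lambda m: m - a12 * m ** 1.5 - a13
    # f is increasing on [0, m*], with m* = (2/(3*a12))^2, f(0) < 0 <= f(m*)
    lo, hi = 0.0, (2.0 / (3.0 * a12)) ** 2
    while hi - lo > tol:                       # bisection on the bracket
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

m = solve_total_mass(a12=0.2, a13=1.5)        # illustrative values
```

Because f rises monotonically up to its maximum at m* and f(m*) ≥ 0 exactly when condition (20.4) holds, the bracket [0, m*] always contains exactly one root when the design is feasible.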
The conceptual design process can thus be summarized as follows: after having set the mission requirements and chosen the technological parameters, one can try many possible airplane layouts by changing b and AR. The condition on Eq. (20.4) tells directly if the
design is feasible or not with this wingspan and aspect ratio. In the case of a positive answer, the total mass m can be found, which constitutes the starting point for the calculation of the power and characteristics of all the other elements. Hence, this method is not aimed at optimizing a precise and local element like the airfoil or the propeller; its objective is rather to help to choose the best combination and size of the different elements.

Table 20.1 Parameters that are constant or assumed constant

Parameter  Value      Unit      Description
CL         0.8        −         Airfoil lift coefficient
CDafl      0.013      −         Airfoil drag coefficient
CDpar      0.006      −         Parasitic drag coefficient
e          0.9        −         Oswald's efficiency factor
Imax       950        [W/m2]    Maximum irradiance
kbat       190·3600   [J/kg]    Energy density of Li-Ion
ksc        0.32       [kg/m2]   Mass density of solar cells
kenc       0.26       [kg/m2]   Mass density of encapsulation
kmppt      0.00042    [kg/W]    Mass to power ratio of MPPT
kprop      0.0008     [kg/W]    Mass to power ratio of prop. group
kaf        0.44/9.81  [kg/m3]   Structural mass constant
mav        0.15       [kg]      Mass of autopilot system
ηbec       0.65       −         Efficiency of step-down converter
ηsc        0.169      −         Efficiency of solar cells
ηcbr       0.90       −         Efficiency of the curved solar panels
ηchrg      0.95       −         Efficiency of battery charge
ηctrl      0.95       −         Efficiency of motor controller
ηdchrg     0.95       −         Efficiency of battery discharge
ηgrb       0.97       −         Efficiency of gearbox
ηmot       0.85       −         Efficiency of motor
ηmppt      0.97       −         Efficiency of MPPT
ηplr       0.85       −         Efficiency of propeller
Pav        1.5        [W]       Power of autopilot system
x1         3.1        −         Airframe mass area exponent
x2         −0.25      −         Airframe mass aspect ratio exponent

Table 20.2 Parameters determined by the mission

Parameter  Value      Unit      Description
mpld       0.05       [kg]      Payload mass
ηwthr      0.7        −         Irradiance margin factor
Ppld       0.5        [W]       Payload power consumption
ρ          1.1655     [kg/m3]   Air density (500 m)
Tday       13.2·3600  [s]       Day duration

Table 20.3 Variables linked to the airplane shape

Parameter  Value      Unit      Description
AR         12.9       −         Aspect ratio
b          3.2        [m]       Wingspan
m          2.5        [kg]      Total mass

20.3 Methodology Application: The Sky-Sailor UAV
In order to see how the methodology can be concretely applied, we present here the example of the Sky-Sailor airplane. The objective is to design a UAV that can embed a small payload of around 50 g and achieve continuous flight at constant altitude over 24 h using only solar energy. The mission and technological parameters that were used are presented in Tables 20.1 and 20.2. Using these parameters and trying various airplane shapes, i.e., wingspans from 0 to 6 m and different aspect ratios, Eq. (20.4) determines whether a solution is feasible, in which case Eq. (20.3) is solved to find the airplane gross mass (Fig. 20.3). Having found the total mass for each possibility, one can then introduce it into the loop of Fig. 20.2 to calculate precisely all the other airplane characteristics:
Fig. 20.3 Possible size configurations for a solar UAV embedding a 50 g payload for a 24 h flight, depending on the wingspan b and the aspect ratio AR (total mass of the solar airplane [kg] vs. wingspan [m], for aspect ratios 8–20)
powers at the propeller, gearbox, motor, and battery; surfaces of the wing and solar panels; weights of the different subparts; and also the flying speed (Fig. 20.4). Finally, depending on the application, selection criteria will be defined. They can concern, for example, speed or wingspan, if the UAV has to be stowed in a limited volume and launched by hand. With the help of the plot in Fig. 20.4, a final configuration is selected. In the case of the Sky-Sailor project, a wingspan of 3.2 m and an aspect ratio of 13 were selected, which leads to a theoretical mass of 2.55 kg. It is also interesting to plot the mass distribution, as in Fig. 20.5, to observe, for example, that the battery constitutes more than
40% of the entire weight. Between 2005 and 2007, a fully functional prototype was realized to validate this design methodology (Fig. 20.6). The efficiencies and weight prediction models turned out to be very accurate, resulting in a total airplane mass of 2.506 kg and a total electrical power draw of 14 W for level flight, whereas the predicted power consumption was 14.2 W. The wing is covered by a half square meter of silicon solar cells that offer a maximum power of 90 W at noon in summer. The prototype was tested successfully in June 2008 during an autonomous flight of more than 27 h with a flight distance of 874 km, using the sun as the only source of energy. This experiment proved the feasibility of continuous flight at constant altitude without using thermal soaring or storing potential energy by gaining altitude in the afternoon. It, however, requires extremely calm wind conditions, a good irradiance during the day, and especially no clouds at sunrise and sunset. Further information concerning this prototype can be found in [8].
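As a cross-check of the quoted 14.2 W prediction, the level-flight electrical power can be reconstructed from the values of Tables 20.1 and 20.3 using the standard level-flight power relation P_mech = (CD/CL^1.5)·sqrt(2(mg)^3/(ρS)). This is a sketch: the chapter's exact model may differ in detail, and the formula here is the textbook relation rather than a quotation from the text.

```python
from math import pi, sqrt

def level_flight_electrical_power(m, b, ar, rho=1.1655, g=9.81,
                                  cl=0.8, cd_afl=0.013, cd_par=0.006, e=0.9,
                                  eta_ctrl=0.95, eta_mot=0.85,
                                  eta_grb=0.97, eta_plr=0.85):
    """Electrical power for level flight [W]: the standard relation
    P_mech = (CD / CL^1.5) * sqrt(2 (m g)^3 / (rho S)), divided by the
    efficiencies of the propulsion chain (Table 20.1 values as defaults)."""
    s = b ** 2 / ar                                  # wing area from span and AR
    cd = cd_afl + cd_par + cl ** 2 / (pi * e * ar)   # profile + parasitic + induced drag
    p_mech = (cd / cl ** 1.5) * sqrt(2.0 * (m * g) ** 3 / (rho * s))
    return p_mech / (eta_ctrl * eta_mot * eta_grb * eta_plr)

p_elec = level_flight_electrical_power(m=2.55, b=3.2, ar=13.0)  # ≈ 14 W
```

The result lands close to both the predicted 14.2 W and the measured 14 W, which illustrates why the simple analytical models were sufficient for the conceptual design.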
20.4 The Pros and Cons of Downscaling

In the previous section, we applied our methodology and validated it at the UAV size; we can now ask how the feasibility of solar flight evolves with scaling. The analytical character of the design method and its mathematical models allows us to discuss these scaling issues precisely for the different airplane parts, which is the subject of the following subsections.
Fig. 20.4 Aircraft and flight characteristics depending on the wingspan b and the aspect ratio AR (four panels: power at propeller [W], wing area [m2], speed [m/s], and solar area ratio [%], each vs. wingspan [m], for aspect ratios 8–20)
Fig. 20.5 Mass distribution for the solution with AR = 13: payload 0.050 kg, avionics 0.150 kg, airframe 0.870 kg, batteries 1.030 kg, solar panels 0.305 kg, MPPT 0.032 kg, propulsion group 0.113 kg, for a total of 2.550 kg

20.4.1 Airframe

The airplane structure is the only part that scales down very well. In fact, its weight is proportional to the cube of a reference length, the wingspan for example. This property was highlighted by Tennekes, who presented in his 1992 book "The Simple Science of Flight" [14] very interesting correlations covering insects, birds, and airplanes. He summarized these relations in a log–log diagram named "The Great Flight Diagram" where, in his own words, "everything that can fly" is represented. The result is impressive: 12 orders of magnitude in weight, 4 orders of magnitude in wing loading, and 2 orders of magnitude in cruising speed. From the common fruit fly, Drosophila melanogaster, to the Boeing 747, all flying objects follow approximately a line with the following equation (weight W in N, wing loading W/S in N/m2):

  W/S = 47 W^(1/3)
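Taking the Great Flight Diagram trend line at face value, one can sketch how wing loading and cruise speed scale with weight, using the standard level-flight relation v = sqrt(2(W/S)/(ρ·CL)). The air density and lift coefficient below are assumed values for illustration; individual aircraft such as the Sky-Sailor can sit well off the trend line.

```python
from math import sqrt

RHO = 1.225   # air density at sea level [kg/m^3] (assumed)
CL = 0.8      # nominal lift coefficient (assumed, cf. Table 20.1)

def wing_loading(weight_n):
    """Tennekes' trend line: W/S = 47 * W^(1/3), with W in newtons."""
    return 47.0 * weight_n ** (1.0 / 3.0)

def cruise_speed(weight_n):
    """Level flight: L = W = 0.5 * rho * v^2 * S * CL, solved for v."""
    return sqrt(2.0 * wing_loading(weight_n) / (RHO * CL))

# A 2.55 kg solar UAV (~25 N) vs. a hypothetical 2.55 g solar MAV (~0.025 N)
v_uav = cruise_speed(2.55 * 9.81)
v_mav = cruise_speed(0.00255 * 9.81)
```

A thousandfold reduction in weight cuts the trend-line wing loading, and hence the cruise speed, by roughly a factor of ten and three respectively, which is one face of the scaling story developed in this section.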
Fig. 20.6 The Sky-