Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board

David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany
5794
Ulrike Cress Vania Dimitrova Marcus Specht (Eds.)
Learning in the Synergy of Multiple Disciplines
4th European Conference on Technology Enhanced Learning, EC-TEL 2009
Nice, France, September 29 – October 2, 2009
Proceedings
Volume Editors

Ulrike Cress
Knowledge Media Research Center (KMRC)
Konrad-Adenauer-Str. 40, 72072 Tübingen, Germany
E-mail: [email protected]

Vania Dimitrova
University of Leeds, School of Computing
Knowledge Representation and Reasoning Research Group
E.C. Stoner Building, Leeds LS2 9JT, UK
E-mail: [email protected]

Marcus Specht
Open University of the Netherlands
Centre for Learning Sciences and Technologies (CELSTEC)
Valkenburgerweg 177, 6419 AT Heerlen, The Netherlands
E-mail: [email protected]
Library of Congress Control Number: 2009934787
CR Subject Classification (1998): I.2.6, K.3.2, H.5.3, J.1, J.5, K.4
LNCS Sublibrary: SL 2 – Programming and Software Engineering
ISSN 0302-9743
ISBN-10 3-642-04635-5 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-04635-3 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

springer.com

© Springer-Verlag Berlin Heidelberg 2009
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
SPIN: 12766000 06/3180 543210
Preface
This conference on technology enhanced learning is the fourth event in a series that started in 2006. It was held from September 29 to October 2, 2009 in Nice, France. The EC-TEL conference series provides a forum for presenting and promoting high-quality research in the area of technology enhanced learning.

The EC-TEL conference was originally launched by the European network of excellence ProLearn and attracted many people from both the ProLearn and Kaleidoscope networks of excellence. In 2009, a new European network, STELLAR, was launched, which continues the work and success of the former networks and takes a broader multi-disciplinary perspective. A key issue is making the research communities aware of the different projects and activities within Europe and beyond. The aim is to build an integrated research arena in which groups with different backgrounds can build on each other's work and where the synergy between multiple research approaches and disciplines is fostered.

The face of learning is changing substantially, and the topic of technology enhanced learning therefore has to take a broader interdisciplinary perspective. Formal learning is surrounded by a variety of opportunities for informal learning, classroom learning is complemented by workplace learning, and even the frontiers between teaching and learning are disappearing. People learn collaboratively, engage in knowledge communities, and change from knowledge recipients to knowledge producers. These developments are driven by new technologies: large-scale knowledge repositories provide learners with content and support them in an individualized and adaptive way; semantic technologies provide contextualized and task-specific information; the Web 2.0 enables people to participate actively in knowledge communication and knowledge construction; and mobile and ubiquitous computing technologies enable the integration of informal and formal learning support.
These new tools and technical means call for psychological and educational models of learning that take into account the vast diversity of situations in which learning takes place today, as well as the specific needs of individuals, tutors and organisations. The papers submitted to this conference reflect this broad range of topics. A total of 25% of all submissions used the keyword "user-adaptive systems and personalisation", which has been a typical topic of advanced learning environments for many years. The keywords "learning communities and communities of practice" and "collaborative knowledge building" were used by 23% of the submissions. These topics indicate a new perspective on learning and a shift from formal to more informal and natural learning. This tendency is also evident in the strong presence of the keywords "informal learning", "learner motivation and engagement", "problem and project-based learning", "distance learning", "knowledge management and organisational learning", and "instructional design".
One fifth of the submissions exploited the newly emerging technological directions of "semantic web and Web 2.0".

EC-TEL 2009 was truly international and highly competitive. Overall, 136 paper submissions and 22 poster submissions from 469 authors in 43 countries were received. The majority of submissions came from European countries (29 countries), but authors also came from 8 Asian and 4 American countries, as well as one African country; one submission was received from Australia. Program Committee members, coming from 19 countries, represented a broad spectrum of disciplines connected to technology enhanced learning. A rigorous review process was conducted in which each submission was reviewed by at least three reviewers. Out of all submissions, 35 were accepted as full papers (22%), 17 as short papers and a further 35 as posters. In the proceedings, the full papers are allowed up to 15 pages and the short papers and posters up to 6 pages.

The conference programme included three keynote speakers whose talks illustrated the wide range of research in technology enhanced learning. Short abstracts of the keynote talks are included in the proceedings. The contributions presented in this volume show the colourfulness of research in technology enhanced learning. They describe technical innovations, demonstrate creative educational settings, raise exciting research questions and show successful implementations. We are confident that this spectrum of research will promote creativity and synergy.

A conference of this size would not have been possible without the invaluable help of the organising committee: the workshop chairs Nikol Rummel and Peter Dolog, the doctoral consortium chairs Frank Fischer and Stefanie Lindstaedt, the demonstration chairs Alexandra Cristea and Nikos Karacapilidis, and the industrial session chair Volker Zimmermann.
Special thanks go to the head of the local organising team, Katherine Maillet, as well as the publicity chairs Marcela Morales and Mohamed Amine Chatti. The EC-TEL 2009 conference promises to be a stimulating research event, presenting state-of-the-art projects and shaping the future of technology enhanced learning research in Europe and beyond.

September 2009
Ulrike Cress Vania Dimitrova Marcus Specht
Conference Organisation
General Chair
Marcus Specht, Centre for Learning Sciences and Technology, OUNL, NL

Programme Chairs
Ulrike Cress, Knowledge Media Research Center, Germany
Vania Dimitrova, University of Leeds, UK

Local Organisation Chair
Katherine Maillet, Institut Télécom, Telecom & Management SudParis, France

Doctoral Consortium Chairs
Frank Fischer, LMU University of Munich, Germany
Stefanie Lindstaedt, Know Center, Austria

Workshop Chairs
Nikol Rummel, University of Freiburg, Germany
Peter Dolog, Aalborg University, Denmark

Demonstration Chairs
Alexandra Cristea, University of Warwick, UK
Nikos Karacapilidis, University of Patras, Greece

Industrial Session Chair
Volker Zimmermann, IMC, Germany

Publicity Chairs
Marcela Morales, Institut Télécom, Telecom & Management SudParis, France
Mohamed Amine Chatti, RWTH Aachen University, Germany
Programme Committee

Heidrun Allert, Austria
Katrin Allmendinger, Germany
Inmaculada Arnedillo-Sanchez, Ireland
Nicolas Balacheff, France
Maria Bielikova, Slovakia
Zuzana Bizonova, Slovakia
Bert Bredeweg, The Netherlands
Peter Brusilovsky, USA
Daniel Burgos, Spain
Manuel Caeiro, France
Lorenzo Cantoni, Switzerland
Alexandra Cristea, UK
Valentin Cristea, Romania
Paul de Bra, The Netherlands
Carlos Delgado Kloos, Spain
Elisabeth Delozanne, France
Pierre Dillenbourg, Switzerland
Yannis Dimitriadis, Spain
Peter Dolog, Denmark
Benedict du Boulay, UK
Erik Duval, Belgium
Dieter Euler, Switzerland
Christine Ferraris, France
Adina Magda Florea, Romania
Dragan Gašević, Canada
Andreas Gegenfurtner, Finland
Denis Gillet, Switzerland
Monique Grandbastien, France
Jörg Haake, Germany
Päivi Häkkinen, Finland
Peng Han, Germany
Andreas Harrer, Germany
Christoph Held, Germany
Marek Hatala, Canada
Eelco Herder, Germany
Knut Hinkelmann, Switzerland
Ulrich Hoppe, Germany
Patrick Jermann, Switzerland
Nikos Karacapilidis, Greece
Michael D. Kickmeier-Rust, Austria
Barbara Kieslinger, Austria
David Kirsh, USA
Ralf Klamma, Germany
Tomaž Klobučar, Slovenia
Rob Koper, The Netherlands
Nicole Krämer, Germany
Milos Kravcik, The Netherlands
Effie Law, Switzerland
Lydia Lau, UK
Martin Lea, UK
Stefanie Lindstaedt, Austria
Andreas Lingnau, Germany
Chee-Kit Looi, Singapore
Rose Luckin, UK
George Magoulas, UK
Katherine Maillet, France
Alejandra Martínez, Spain
Vittorio Midoro, Italy
Tanja Mitrovic, New Zealand
Riichiro Mizoguchi, Japan
Paola Monachesi, The Netherlands
Wolfgang Nejdl, Germany
Roger Nkambou, Canada
Lucia Pannese, Italy
Jan Pawlowski, Finland
Juan Quemada, Spain
Christoph Richter, Austria
Uwe Riss, Germany
Nikol Rummel, Germany
Maggi Savin-Baden, UK
Tammy Schellens, Belgium
Daniel Schneider, Switzerland
Judith Schoonenboom, The Netherlands
Peter Scott, UK
Evgenia Sendova, Bulgaria
Mike Sharples, UK
Kiril Simov, Bulgaria
Peter Sloep, The Netherlands
Pierre Tchounikine, France
Stefan Trausan-Matu, Romania
Julita Vassileva, Canada
Vincent Wade, Ireland
Armin Weinberger, The Netherlands
Katrin Wodzicki, Germany
Martin Wolpers, Belgium
Volker Zimmermann, Germany
Additional Reviewers

Stamatina Anastopoulou
Benjamin Huynh Kim Bang
Michal Barla
Scott Bateman
Elizabeth Brown
Roman Brun
Wenli Chen
Manuela Delfino
Hendrik Drachsler
María Blanca Ibáñez Espiga
Raquel M. Crespo García
George Gkotsis
Israel Gutierrez
Zoe Handley
Yusuke Hayashi
I-Han Hsiao
Eva Hudlicka
Raija Hämäläinen
Nikos Karousos
Sebastian Kelle
Tom Kirkham
Styliani Kleanthous
Kouji Kozaki
Barbara Kump
Danielle H. Lee
Laurence Vignollet
Derick Leony
Sarah Lewthwaite
Tobias Ley
David Maroto
Sze Ho David Moh
Vlad Posea
Francesca Pozzi
Andreas S. Rath
Traian Rebedea
Riad Saba
Olga C. Santos
Hans-Christian Schmitz
Stefano Tardini
Jozef Tvarozek
Manolis Tzagarakis
Elizabeth Uruchurtu
Luis de la Fuente Valentín
Dominique Verpoorten
Juan Quemada Vives
Michael Yudelson
Sam Zeini
Sabrina Ziebarth
Table of Contents
Keynotes

Making Sense of Sensemaking in the Digital World . . . . . . . . . . . . . . . . . . . Peter Pirolli
1
Towards an Interdisciplinary Design Science of Learning . . . . . . . . . . . . . . Mike Sharples
3
Use and Acquisition of Externalized Knowledge . . . . . . . . . . . . . . . . . . . . . . Friedrich W. Hesse
5
Adaptation and Personalisation

LAG 2.0: Refining a Reusable Adaptation Language and Improving on Its Authoring . . . . . . . . . . . . . . . . . . . Alexandra I. Cristea, David Smits, Jon Bevan, and Maurice Hendrix
7
The Conceptual and Architectural Design of a System Supporting Exploratory Learning of Mathematics Generalisation . . . . . . . . . . . . . . . . . Darren Pearce and Alexandra Poulovassilis
22
Experience Structuring Factors Affecting Learning in Family Visits to Museums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Marek Hatala, Karen Tanenbaum, Ron Wakkary, Kevin Muise, Bardia Mohabbati, Greg Corness, Jim Budd, and Tom Loughin
37
Personalisation of Learning in Virtual Learning Environments . . . . . . . . . Dominique Verpoorten, Christian Glahn, Milos Kravcik, Stefaan Ternier, and Marcus Specht
52
A New Framework for Dynamic Adaptations and Actions . . . . . . . . . . . . . Carsten Ullrich, Tianxiang Lu, and Erica Melis
67
Getting to Know Your User – Unobtrusive User Model Maintenance within Work-Integrated Learning Environments . . . . . . . . . . . . . . . . . . . Stefanie N. Lindstaedt, Günter Beham, Barbara Kump, and Tobias Ley
73
Adaptive Navigation Support for Parameterized Questions in Object-Oriented Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I-Han Hsiao, Sergey Sosnovsky, and Peter Brusilovsky
88
Automated Educational Course Metadata Generation Based on Semantics Discovery . . . . . . . . . . . . . . . . . . . Marián Šimko and Mária Bieliková
99
Searching for “People Like Me” in a Lifelong Learning System . . . . . . . . . Nicolas Van Labeke, George D. Magoulas, and Alexandra Poulovassilis
106
Interoperability, Semantic Web, Web 2.0

Metadata in Architecture Education - First Evaluation Results of the MACE System . . . 112
Martin Wolpers, Martin Memmel, and Alberto Giretti

Phantom Tasks and Invisible Rubric: The Challenges of Remixing Learning Objects in the Wild . . . 127
David E. Millard, Yvonne Howard, Patrick McSweeney, Miguel Arrebola, Kate Borthwick, and Stavroula Varella

Can Educators Develop Ontologies Using Ontology Extraction Tools: An End-User Study . . . 140
Marek Hatala, Dragan Gašević, Melody Siadaty, Jelena Jovanović, and Carlo Torniai

Sharing Distributed Resources in LearnWeb2.0 . . . 154
Fabian Abel, Ivana Marenzi, Wolfgang Nejdl, and Sergej Zerr

SWeMoF: A Semantic Framework to Discover Patterns in Learning Networks . . . 160
Marco Kalz, Niels Beekman, Anton Karsten, Diederik Oudshoorn, Peter Van Rosmalen, Jan Van Bruggen, and Rob Koper
Data Mining and Social Networks

Social Network Analysis of 45,000 Schools: A Case Study of Technology Enhanced Learning in Europe . . . . . . . . . . . . . . . . . . . Ruth Breuer, Ralf Klamma, Yiwei Cao, and Riina Vuorikari
166
Analysis of Weblog-Based Facilitation of a Fully Online Cross-Cultural Collaborative Learning Course . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Anh Vu Nguyen-Ngoc and Effie Lai-Chong Law
181
Sharing Corpora and Tools to Improve Interaction Analysis . . . . . . . . . . . Christophe Reffay and Marie-Laure Betbeder
196
Collaboration and Social Knowledge Construction

Distributed Awareness for Class Orchestration . . . . . . . . . . . . . . . . . . . Hamed S. Alavi, Pierre Dillenbourg, and Frederic Kaplan
211
Remote Hands-On Experience: Distributed Collaboration with Augmented Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Matthias Krauß, Kai Riege, Marcus Winter, and Lyn Pemberton
226
A Comparison of Paper-Based and Online Annotations in the Workplace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ricardo Kawase, Eelco Herder, and Wolfgang Nejdl
240
Learning by Foraging: The Impact of Social Tags on Knowledge Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Christoph Held and Ulrike Cress
254
Assessing Collaboration Quality in Synchronous CSCL Problem-Solving Activities: Adaptation and Empirical Evaluation of a Rating Scheme . . . Georgios Kahrimanis, Anne Meier, Irene-Angelica Chounta, Eleni Voyiatzaki, Hans Spada, Nikol Rummel, and Nikolaos Avouris
267
Learning Communities and Communities of Practice

Facilitate On-Line Teacher Know-How Transfer Using Knowledge Capitalization and Case Based Reasoning . . . 273
Celine Quenu-Joiron and Thierry Condamines

Edushare, a Step beyond Learning Platforms . . . 283
Romain Sauvain and Nicolas Szilas
Design in Use of Services and Scenarios to Support Learning in Communities of Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bernadette Charlier and Amaury Daele
298
Creating an Innovative Palette of Services for Communities of Practice with Participatory Design: Outcomes of the European Project PALETTE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Liliane Esnault, Amaury Daele, Romain Zeiliger, and Bernadette Charlier
304
Learning Contexts

NetLearn: Social Network Analysis and Visualizations for Learning . . . . . . . . . . . . . . . . . . . Mohamed Amine Chatti, Matthias Jarke, Theresia Devi Indriasari, and Marcus Specht
310
Bridging Formal and Informal Learning – A Case Study on Students' Perceptions of the Use of Social Networking Tools . . . . . . . . . . . . . . . . . . . Margarida Lucas and António Moreira
325
How to Get Proper Profiles? A Psychological Perspective on Social Networking Sites . . . . . . . . . . . . . . . . . . . Katrin Wodzicki, Eva Schwämmlein, and Ulrike Cress
338
Collaborative Learning in Virtual Classroom Scenarios . . . . . . . . . . . . . . . . Katrin Allmendinger, Fabian Kempf, and Karin Hamann
344
Review of Learning in Online Networks and Communities . . . . . . . . . . . . . Kirsti Ala-Mutka, Yves Punie, and Anusca Ferrari
350
Self-profiling of Competences for the Digital Media Industry: An Exploratory Study . . . . . . . . . . . . . . . . . . . Svenja Schröder, Sabrina Ziebarth, Nils Malzahn, and H. Ulrich Hoppe
365
PPdesigner: An Editor for Pedagogical Procedures . . . . . . . . . . . . . . . . . . . Christian Martel, Laurence Vignollet, Christine Ferraris, Emmanuelle Villiot-Leclercq, and Salim Ouari
379
Ontology Enrichment with Social Tags for eLearning . . . . . . . . . . . . . . . . . Paola Monachesi, Thomas Markus, and Eelco Mossel
385
Problem and Project-Based Learning, Inquiry Learning

How Much Assistance Is Helpful to Students in Discovery Learning? . . . 391
Alexander Borek, Bruce M. McLaren, Michael Karabinos, and David Yaron

A Fruitful Meeting of a Pedagogical Method and a Collaborative Platform . . . 405
Bénédicte Talon, Dominique Leclet, Grégory Bourguin, and Arnaud Lewandowski
A Model of Retrospective Reflection in Project Based Learning Utilizing Historical Data in Collaborative Tools . . . . . . . . . . . . . . . . . . . . . . Birgit R. Krogstie
418
Fortress or Demi-Paradise? Implementing and Evaluating Problem-Based Learning in an Immersive World . . . . . . . . . . . . . . . . . . . . . Maggi Savin-Baden
433
Project-Based Collaborative Learning Environment with Context-Aware Educational Services . . . . . . . . . . . . . . . . . . . Zoran Jeremić, Jelena Jovanović, Dragan Gašević, and Marek Hatala
441
Learning Design

Constructing and Evaluating a Description Template for Teaching Methods . . . . . . . . . . . . . . . . . . . Michael Derntl, Susanne Neumann, and Petra Oberhuemer
447
Model and Tool to Clarify Intentions and Strategies in Learning Scenarios Design . . . . . . . . . . . . . . . . . . . Valérie Emin, Jean-Philippe Pernin, and Viviane Guéraud
462
Users in the Driver’s Seat: A New Approach to Classifying Teaching Methods in a University Repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Susanne Neumann, Petra Oberhuemer, and Rob Koper
477
Motivation, Engagement, Learning Games

Generating Educational Interactive Stories in Computer Role-Playing Games . . . 492
Marko Divéky and Mária Bieliková

CAMera for PLE . . . 507
Hans-Christian Schmitz, Maren Scheffel, Martin Friedrich, Marco Jahn, Katja Niemann, and Martin Wolpers

Implementation and Evaluation of a Tool for Setting Goals in Self-regulated Learning with Web Resources . . . 521
Philipp Scholl, Bastian F. Benz, Doreen Böhnstedt, Christoph Rensing, Bernhard Schmitz, and Ralf Steinmetz

The Impact of Prompting in Technology-Enhanced Learning as Moderated by Students' Motivation and Metacognitive Skills . . . 535
Pantelis M. Papadopoulos, Stavros N. Demetriadis, and Ioannis G. Stamelos

Creating a Natural Environment for Synergy of Disciplines . . . 549
Evgenia Sendova, Pavel Boytchev, Eliza Stefanova, Nikolina Nikolova, and Eugenia Kovatcheva
Human Factors and Evaluation

Informing the Design of Intelligent Support for ELE by Communication Capacity Tapering . . . . . . . . . . . . . . . . . . . Manolis Mavrikis and Sergio Gutierrez-Santos
556
Automatic Analysis Assistant for Studies of Computer-Supported Human Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Christophe Courtin and St´ephane Talbot
572
Real Walking in Virtual Learning Environments: Beyond the Advantage of Naturalness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Matthias Heintz
584
Guiding Learners in Learning Management Systems through Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Olga C. Santos and Jesus G. Boticario
596
Supervising Distant Simulation-Based Practical Work: Environment and Experimentation . . . . . . . . . . . . . . . . . . . Viviane Guéraud, Anne Lejeune, Jean-Michel Adam, Michel Dubois, and Nadine Mandran
602
Posters

Designing Failure to Encourage Success: Productive Failure in a Multi-user Virtual Environment to Solve Complex Problems . . . 609
Shannon Kennedy-Clark

Revisions of the Split-Attention Effect . . . 615
Athanasios Mazarakis

Grid Service-Based Benchmarking Tool for Computer Architecture Courses . . . 621
Carlos Alario-Hoyos, Eduardo Gómez-Sánchez, Miguel L. Bote-Lorenzo, Guillermo Vega-Gorgojo, and Juan I. Asensio-Pérez

Supporting Virtual Reality in an Adaptive Web-Based Learning Environment . . . 627
Olga De Troyer, Frederic Kleinermann, Bram Pellens, and Ahmed Ewais

A Model to Manage Learner's Motivation: A Use-Case for an Academic Schooling Intelligent Assistant . . . 633
Tri Duc Tran, Christophe Marsala, Bernadette Bouchon-Meunier, and Georges-Marie Putois

Supporting the Learning Dimension of Knowledge Work . . . 639
Stefanie N. Lindstaedt, Mario Aehnelt, and Robert de Hoog

User-Adaptive Recommendation Techniques in Repositories of Learning Objects: Combining Long-Term and Short-Term Learning Goals . . . 645
Almudena Ruiz-Iniesta, Guillermo Jiménez-Díaz, and Mercedes Gómez-Albarrán
Great Is the Enemy of Good: Is Perfecting Specific Courses Harmful to Global Curricula Performances? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Maura Cerioli and Marina Ribaudo
651
Evolution of Professional Ethics Courses from Web Supported Learning towards E-Learning 2.0 . . . . . . . . . . . . . . . . . . . Katerina Zdravkova, Mirjana Ivanović, and Zoran Putnik
657
Towards an Ontology for Supporting Communities of Practice of E-Learning “CoPEs”: A Conceptual Model . . . . . . . . . . . . . . . . . . . . . . . . . . Lamia Berkani and Azeddine Chikh
664
Using Collaborative Techniques in Virtual Learning Communities . . . 670
Francesca Pozzi

Capturing Individual and Institutional Change: Exploring Horizontal versus Vertical Transitions in Technology-Rich Environments . . . 676
Andreas Gegenfurtner, Markus Nivala, Roger Säljö, and Erno Lehtinen

A Platform Based on Semantic Web and Web2.0 as Organizational Learning Support . . . 682
Adeline Leblanc and Marie-Hélène Abel

Erroneous Examples: A Preliminary Investigation into Learning Benefits . . . 688
Dimitra Tsovaltzi, Erica Melis, Bruce M. McLaren, Michael Dietrich, Georgi Goguadze, and Ann-Kristin Meyer

Towards a Theory of Socio-technical Interactions . . . 694
Ravi K. Vatrapu

Knowledge Maturing in the Semantic MediaWiki: A Design Study in Career Guidance . . . 700
Nicolas Weber, Karin Schoefegger, Jenny Bimrose, Tobias Ley, Stefanie Lindstaedt, Alan Brown, and Sally-Anne Barnes

Internet Self-efficacy and Behavior in Integrating the Internet into Instruction: A Study of Vocational High School Teachers in Taiwan . . . 706
Hsiu-Ling Chen

Computer-Supported WebQuests . . . 712
Furio Belgiorno, Delfina Malandrino, Ilaria Manno, Giuseppina Palmieri, and Vittorio Scarano

A 3D History Class: A New Perspective for the Use of Computer Based Technology in History Classes . . . 719
Claudio Tosatto and Marco Gribaudo

Language-Driven, Technology-Enhanced Instructional Systems Design . . . 725
Iván Martínez-Ortiz, José-Luis Sierra, and Baltasar Fernández-Manjón
The Influence of Coalition Formation on Idea Selection in Dispersed Teams: A Game Theoretic Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Rory L.L. Sie, Marlies Bitter-Rijpkema, and Peter B. Sloep
732
How to Support the Specification of Observation Needs by Instructional Designers: A Learning-Scenario-Centered Approach . . . . . . . . . . . . . . . . . . Boubekeur Zendagui
738
Using Third Party Services to Adapt Learning Material: A Case Study with Google Forms . . . . . . . . . . . . . . . . . . . Luis de la Fuente Valentín, Abelardo Pardo, and Carlos Delgado Kloos
744
Virtual Worlds for Organization Learning and Communities of Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C. Candace Chou
751
A Methodology and Framework for the Semi-automatic Assembly of Learning Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Katrien Verbert, David Wiley, and Erik Duval
757
Search and Composition of Learning Objects in a Visual Environment . . . 763
Amel Bouzeghoub, Marie Buffat, Alda Lopes Gançarski, Claire Lecocq, Abir Benjemaa, Mouna Selmi, and Katherine Maillet

A Framework to Author Educational Interactions for Geographical Web Applications . . . 769
The Nhan Luong, Thierry Nodenot, Philippe Lopistéguy, and Christophe Marquesuzaà

Temporal Online Interactions Using Social Network Analysis . . . 776
Álvaro Figueira

Context-Aware Combination of Adapted User Profiles for Interchange of Knowledge between Peers . . . 782
Sergio Gutierrez-Santos, Mario Muñoz-Organero, Abelardo Pardo, and Carlos Delgado Kloos

ReMashed – Recommendations for Mash-Up Personal Learning Environments . . . 788
Hendrik Drachsler, Dries Pecceu, Tanja Arts, Edwin Hutten, Lloyd Rutledge, Peter van Rosmalen, Hans Hummel, and Rob Koper

Hanse 1380 - A Learning Game for the German Maritime Museum . . . 794
Walter Jenner and Leonardo Moura de Araújo
A Linguistic Intelligent System for Technology Enhanced Learning in Vocational Training – The ILLU Project . . . . . . . . . . . . . . . . . . . Christoph Rösener
800
e3-Portfolio – Supporting and Assessing Project-Based Learning in Higher Education via E-Portfolios . . . . . . . . . . . . . . . . . . . Philip Meyer, Thomas Sporer, and Johannes Metscher
806
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
811
Making Sense of Sensemaking in the Digital World

Peter Pirolli
Palo Alto Research Center
3333 Coyote Hill Road, Palo Alto, CA, USA
[email protected]
In this keynote presentation I discuss some of the exciting phenomena and challenges that are emerging as the digital universe evolves to become a more social medium that supports more complex information-seeking and learning activities. This discussion emerges from attempts to extend previous work on Information Foraging Theory [1] to address these new trends in online information-seeking and sensemaking. Information Foraging Theory is a theory of human-information interaction that aims to explain and predict how people will best shape themselves to their information environments, and how information environments can best be shaped to people. The theory has mainly focused on information seeking by the solitary user, but as the Internet and Web have evolved, so too must the theory, and so I will discuss recent studies of sensemaking and the social production, sharing, and use of information in areas such as wikis, social tagging, social network sites, and social search. The opportunities and challenges are enormous for developing a scientific foundation to support online groups and communities that are engaged in creating, organizing, and sharing the knowledge produced through social sensemaking.

Sensemaking is a natural kind of human activity in which large amounts of information about a situation or topic are collected and deliberated upon to form an understanding that becomes the basis for problem solving and action. It goes beyond simply finding information. It is also involved in learning about new domains, solving ill-structured problems, acquiring situation awareness, and participating in social exchanges of knowledge. Sensemaking involves collecting, organizing and creating representations of complex information sets, all centered on the formation and support of mental models involved in understanding a problem that needs to be solved.
Examples of such problems include understanding a health problem to make a medical decision, understanding the weather to make a forecast, intelligence analysis to identify strategic threats, and the collaborative collection and understanding of an emergency by first responders. Seminal papers on this topic emerged quasi-independently in the fields of human-computer interaction [2], organizational science [3], and macrocognition [4]. Making sense of challenging domains of knowledge using the Internet has become a ubiquitous activity in the digital era. For those who have access, the Internet has become the primary resource for learning about science, technology, health and medicine, and current events [5]. As the information environment has become richer, it has become a place to explore and learn over longer periods of time. The Internet and the Web have also become much more social [6] with a variety of technologies to exploit or enhance social information foraging.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 1–2, 2009. © Springer-Verlag Berlin Heidelberg 2009
The Web, blogs, email, Internet groups,
collaborative tagging, wikis, recommender systems, and other technologies are all aimed at supporting cooperative information sharing, and their widespread adoption suggests their effectiveness. The utility of such systems typically depends on having large user bases and higher rates of contribution by individuals. With respect to sensemaking, the utility of such sites additionally depends on such factors as how readily people can judge the credibility of the sources and authors of user-generated content, how knowledge produced by one individual transfers to another, and how well specific tools support content learning. In this presentation, I will discuss research addressing some of these needs. I will also discuss some social phenomena that arise from many interacting users, including: the effects of diversity and social brokerage, the standing-on-the-shoulders-of-giants effect, the effects of social interference, and the role of user interface interaction costs.
Given the increased ease with which it is possible to study social networks and information flow in the electronic world, it is likely that there will be more studies of the effects of technologies on social structure and social capital, hence a need for a suitable theoretical framework. The efflorescence of online social interaction and collective action raises fundamental questions about the conditions and interaction architectures that shape the social and cognitive machinery of people. We need a theoretical framework that is rich and encompassing enough to provide practical guidance on how to design online communities across the space of possible purposes and activities.
The framework must be rich and complex enough to produce integrated models that support (a) decomposition of macroscale phenomena down to microscale mechanisms that are (b) relevant to the understanding and design of online communities that evolve over months to years and encompass large numbers of people, and that (c) predict accurately the effects and tradeoffs of design decisions made at levels ranging from moment-by-moment user interaction to long-term social dynamics. Whether models are developed in agent-based simulations, dynamical systems, or some other approach, there is a great opportunity to integrate them into a new unified theoretical framework.
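The abstract names agent-based simulation as one candidate formalism without committing to any particular model. Purely as an illustration of the style of microscale mechanism it refers to (every name and parameter below is invented, not taken from the talk), a minimal agent-based sketch of social information diffusion might look like:

```python
import random

def diffuse(n_agents=20, rounds=10, p=0.5, seed=1):
    """Each round, every agent samples one random neighbour and, with
    probability p, learns the item if that neighbour already knows it."""
    rng = random.Random(seed)
    informed = {0}  # a single initial discoverer
    for _ in range(rounds):
        for agent in range(n_agents):
            neighbour = rng.randrange(n_agents)
            if neighbour in informed and rng.random() < p:
                informed.add(agent)
    return len(informed)

# with zero rounds only the discoverer is informed
assert diffuse(rounds=0) == 1
# the informed count always stays within the population size
assert 1 <= diffuse() <= 20
```

Even a toy like this exposes the kinds of tradeoffs the keynote mentions, e.g. how interaction cost (here, the probability p) shapes macroscale diffusion.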
References
1. Pirolli, P.: Information foraging theory: A theory of adaptive interaction with information. Oxford University Press, New York (2007)
2. Russell, D.M., Stefik, M.J., Pirolli, P., Card, S.K.: The cost structure of sensemaking. In: INTERCHI 1993 Conference on Human Factors in Computing Systems. Association for Computing Machinery, Amsterdam (1993)
3. Weick, K.: Sensemaking in organizations. Sage, Thousand Oaks (1995)
4. Klein, G., Moon, B., Hoffman, R.R.: Making sense of sensemaking 2: A macrocognitive model. IEEE Intelligent Systems 21(5), 88–92 (2006)
5. Horrigan, J.: The Internet as a resource for news and information about science. Pew Internet & American Life Project (November 20, 2006), http://www.pewinternet.org/Reports/2006/The-Internet-as-a-Resource-for-News-and-Information-about-Science.aspx (cited June 27, 2009)
6. Lenhart, A.: Adults and social network websites. Pew Internet & American Life Project (January 14, 2009), http://www.pewinternet.org/Reports/2009/Adults-and-Social-Network-Websites.aspx (cited June 27, 2009)
Towards an Interdisciplinary Design Science of Learning Mike Sharples University of Nottingham, Jubilee Campus, Wollaton Road Nottingham NG8 1BB, UK
[email protected]
In a world of increasing complexity, confronting global environmental and social challenges, there is an urgent need to enable people of all ages to learn about themselves, their society and their environment. Yet, there is a surprising lack of attention to what this involves. The study of human learning does not form a major part of teacher education programmes and is disappearing from university psychology courses. It is as if human learning is just too diffuse and difficult a topic to be studied and taught.
A central problem with the study of learning is that it is inherently interdisciplinary. Learning as the process of effecting permanent changes to the brain is an aspect of neuroscience; as the acquisition of skills and knowledge, learning forms part of cognitive psychology; as an activity of social and cultural development, it falls under social sciences; as a process of systemic adaptation to societal changes it could be part of history, business or economics. All of these disciplines are essentially descriptive, in that they attempt to understand people and their world. Enabling people to learn more effectively also involves the disciplines of design and engineering. Such complexity has traditionally been simplified, so that researchers can understand or influence one aspect, such as change in behaviour, cognitive development, or the design of teaching machines. The time has now come to put all the pieces together, to form a composite picture of how we learn as individuals, groups and societies, and how to create the conditions for more effective learning, across contexts, throughout a lifetime. If this seems like a daunting task, then much of the groundwork has been or is being done. In addition to studying facets of learning, we need to develop new methods to integrate this knowledge and to harness it for the benefit of learners and society.
The suggestion is to extend educational psychology and learning science research towards a design science of large complex systems. Such an enterprise needs to be international, to build on expertise across many research centres. It should be cross-cultural, respecting and celebrating the diversity of settings and approaches to learning. It needs to be design-based if it is to not only describe how learning is currently achieved, but also to develop new methods for enabling and supporting productive learning. It must embrace multiple technologies, including digital media, traditional media and human knowledge, not just as resources for learning, but as integral parts of a complex learning system. It needs to be multi-level and multi-method, seeking to integrate the neural, cognitive, social and cultural aspects of learning. Methods for design and evaluation of human-technology systems, such as socio-cognitive
engineering [1], can provide a basis for complex systems design, and these need to be complemented with design-oriented theories of technology-enabled learning. Some immediate consequences of such an agenda are that this cannot be done by one researcher, or one lab, alone. Just as the Human Genome Project required the cooperation of many research labs, a long timescale, a shared infrastructure and ethical framework, and a common set of tools, so the development of an Interdisciplinary Design Science of Learning needs a shared effort to integrate facilities for the co-design of technology-enabled learning and cross-cultural studies of learning effectiveness. Such studies are already underway. For example, the Group Scribbles technology developed at SRI (http://groupscribbles.sri.com/) and the Eduinnova method from Pontificia Universidad Católica de Chile (http://www.eduinnova.com/english/) are being developed and tested across multiple sites in a worldwide collaboration. The Kaleidoscope and ProLearn networks have already made substantial advances towards forming a cross-national infrastructure and shared understanding for research in technology-enabled learning. The STELLAR network is ideally placed to take on this challenge.
Reference
1. Sharples, M., Jeffery, N., du Boulay, J.B.H., Teather, D., Teather, B., du Boulay, G.H.: Socio-cognitive engineering: a methodology for the design of human-centred technology. European Journal of Operational Research 136(2), 310–323 (2002)
Use and Acquisition of Externalized Knowledge Friedrich W. Hesse Knowledge Media Research Center at the University of Tuebingen Konrad-Adenauer-Str. 40, 72072 Tuebingen
[email protected]
Knowledge acquisition is no longer mainly restricted to classical institutions and formal learning (as in schools and universities) but is also connected to informal learning settings at home in leisure time or at the workplace. Thus, the interplay between formal and informal learning is developing in a new way, mainly in connection with the development of Web 2.0 and the appearance of "social software". Within these new social software environments different developments are especially interesting, as they offer new ways of learning, knowledge building and use of knowledge. A very special feature has to do with the possibility of externalizing knowledge. Even more, social software (e.g. bookmarking) not only makes externalized knowledge available; together with the externalized knowledge of other people, resources can be created which are most meaningful for oneself. For cognitive psychologists and learning researchers, social software thus raises interesting new demands for further study.
Since the beginning of learning research, one can observe some paradigmatic changes which have had a strong impact on which learning processes have been investigated. In the very beginning, research was interested in the process of learning with regard to the manipulation of observable and measurable behavior, for example in studies of learning by heart by Ebbinghaus [1], in classical conditioning by Pavlov [2] or in the highly influential operant conditioning by Skinner [3]. Around 1960 there was a paradigm shift from "learning" to "knowledge", the so-called "cognitive turn" (Neisser [4]). From then on researchers were interested in investigating the internal mental processes, like organization, acquisition, storing and retrieval of knowledge (e.g. Baddeley [5]). This led to a new type of theory and new results. A more recent paradigm shift moved interests from "knowledge" to "externalized knowledge".
Wegner [6] introduced the theory of transactive memory, according to which people do not have to know everything themselves, but can use the knowledge of other people. Connected to some of the ideas of Wegner, a lot of developments and activities around Web 2.0 in the years from 2000 on allowed researchers to follow his perspective, especially in connection with features like having quick and easy storage of and access to "(externalized) knowledge". However, from a research perspective we only partly understand the nature and mechanisms of these activities. They are mostly related to tools which are associated with terms like "Social Networks" and "Social Software Tools". Using a wider scope, such tools can be categorized into at least three groups: those which are primarily concerned with the social exchange between people
(like Facebook), those which also address a knowledge exchange (like bookmarking systems) and those which are mainly interested in constructing shared knowledge bases (like Wikipedia). When we take a closer look at the category "knowledge exchange" and especially at the bookmarking systems, we will discover in detail the potential of this social software tool in taking over processes which normally have to be carried out by ourselves, so that there is a new division of labor between the human cognitive system and the social software tools.
How is this possible? The processes behind bookmarking are mainly based on tags, which allow all users to individually assign keywords to information or resources (e.g. pictures, websites, videos). These tags can help to structure, classify and filter individual collections of information and resources. These resources can – at the same time – be saved and filed in different categories. Thus information storage and retrieval become very easy.
But there is still the question: what is "social" in social tagging? On the one hand, the individual tags for respective resources are available to all users. On the other hand, tags for a resource can be created by all other users and then aggregated. For a concrete resource this leads to a common description/classification in a bottom-up process, reflecting important connotations and concepts of the resource. In addition, frequently used tags of a resource are weighted more strongly. The whole tagging system additionally makes it possible to create related tags. Such tags allow the discovery of comparable (related) terms and links. Related tags can be used as links in navigation and further search processes. By means of related tags, people with similar interests or specific expertise can also be identified. Thus related tags have a very special potential for knowledge building, because to a certain degree semantic interpretations are carried out by these tools.
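The bottom-up aggregation described above can be sketched in a few lines. The data and names below are invented for illustration only: individual tag assignments are counted per resource (so frequent tags weigh more strongly), and "related tags" are derived from co-occurrence on the same resource.

```python
from collections import Counter
from itertools import combinations

# (user, resource, tag) assignments — hypothetical example data
assignments = [
    ("ann", "nightwatch.jpg", "rembrandt"),
    ("bob", "nightwatch.jpg", "rembrandt"),
    ("bob", "nightwatch.jpg", "baroque"),
    ("eve", "selfportrait.jpg", "rembrandt"),
    ("eve", "selfportrait.jpg", "portrait"),
]

# bottom-up description: tag frequencies per resource
per_resource = {}
for _, resource, tag in assignments:
    per_resource.setdefault(resource, Counter())[tag] += 1

# related tags: pairs of tags that co-occur on the same resource
related = Counter()
for counts in per_resource.values():
    for a, b in combinations(sorted(counts), 2):
        related[(a, b)] += 1

# the most frequent tag becomes the dominant common description
assert per_resource["nightwatch.jpg"].most_common(1) == [("rembrandt", 2)]
# co-occurring tags surface as "related"
assert related[("baroque", "rembrandt")] == 1
```

The co-occurrence counts could then be used exactly as the text suggests: as navigation links, or to find users with overlapping tag vocabularies.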
Such developments, of course, enrich our possibilities in the use of knowledge and even in making shared knowledge available. But they also lead us to ask to what extent these processes are understood by us, and whether we are able to control such automatically carried-out processes.
References
1. Ebbinghaus, H.: Über das Gedächtnis. Untersuchungen zur experimentellen Psychologie. Duncker & Humblot, Leipzig (1885)
2. Pavlov, I.P.: Die Arbeit der Verdauungsdrüsen. St. Petersburg (1897)
3. Skinner, B.F.: A discrimination without previous conditioning. Proceedings of the National Academy of Sciences of the United States of America 20, 532–536 (1934)
4. Neisser, U.: Cognitive psychology. Prentice-Hall, Englewood Cliffs (1967)
5. Baddeley, A., Hitch, G.J.: Working memory. In: Bower, G.A. (ed.) Recent advances in learning and motivation, vol. 8, pp. 47–90. Academic Press, New York (1974)
6. Wegner, D.: Transactive memory: contemporary analysis of the group mind. In: Mullen, B., Goethals, G. (eds.) Theories of Group Behavior, pp. 185–208. Springer, New York (1987)
LAG 2.0: Refining a Reusable Adaptation Language and Improving on Its Authoring Alexandra I. Cristea1, David Smits2, Jon Bevan1, and Maurice Hendrix1 1
Department of Computer Science, University of Warwick, Coventry, CV4 7AL, United Kingdom {A.I.Cristea,J.D.Bevan}@warwick.ac.uk 2 Faculty of Mathematics and Computer Science, Eindhoven University of Technology PB 513, 5600MB, Eindhoven, The Netherlands
[email protected]
Abstract. Reusable adaptation specifications for adaptive behaviour have come to the forefront of adaptive research recently, with EU projects such as GRAPPLE1, and PhD research efforts on designing an adaptation language for learning style specification [1]. However, this was not the case five years ago, when an adaptation language for adaptive hypermedia (LAG) was first proposed. This paper describes the general lessons learnt during the last five years in designing, implementing and using an adaptation language, as well as the changes that the language has undergone in order to better fulfil its goal of combining a high level of semantics with simplicity, portability and flexibility. Besides discussing these changes based on some sample strategies, this paper also presents a novel authoring environment for the programming-savvy adaptation author, which applies feedback accumulated during various evaluation sessions with the previous set of tools, and its first evaluation with programming experts.
Keywords: Adaptive Hypermedia, Adaptation Language, LAG, LAOS.
1 Introduction
Adaptation and personalization are considered to be both useful and desirable, and came to the fore with user modelling [2] and adaptive hypermedia [3] research. However, adaptive environments are notoriously difficult to author for [4]. Amongst all the components in adaptive environments, about which much has been modelled and written [5][6][7][8][9][10][1], the most difficult part is the specification (authoring) of the adaptive behaviour [8][12][13][14][1]. Hence, reusability is desirable especially for the adaptive behaviour specification, in the sense of 'write once, use many' [12]. Since 2003–2004 a few adaptation languages have been proposed; LAG [15] is, as far as we know, the first such language, followed by LAG-XLS [16], which caters for learning styles. Ideally, a single commonly accepted standard would be best, similarly to content descriptions in the educational
1 http://www.grapple-project.org/
domain (e.g., LOM2, SCORM3). In the GRAPPLE project, such an endeavour is being targeted. However, this is beyond the scope of the current paper. Making a language a common standard and reusable is only the first step; the next is to allow different levels of access to the creation process. This targets the different types of authors who will be able to use the language (e.g., computer-savvy or not). One such version of different access levels is given by the LAG framework [17]: adaptation strategy – accessible to all authors, via laymen-level descriptions; adaptation language – accessible mainly to computer-savvy authors; adaptation assembly language – only accessible to 'hard-core' computer-savvy authors. Another dimension is brought about by using visualization (e.g., the Graph Author developed for AHA! [18] uses visualisation to support authors) and handling support. In previous versions of the LAG language implementation, handling support was envisioned as not allowing an author to insert any wrong constructs [13][17]. In the GRAPPLE project, in addition to this restriction, the ultimate language to be used by the non-programming-savvy author will be purely graphical [20]. Whilst this will hide most of the difficulty for the high-level author, it will also reduce the flexibility to some degree. When major changes are needed, or when system-system interaction is required, underlying programming languages will support this. Currently, we consider supporting multiple adaptation language outputs a desirable feature, besides developing new languages targeted at specific levels of access (transformed into wrapping levels). Thus, this paper discusses general lessons learnt during the last five years in proposing, designing, implementing and using an adaptation language, and the changes that the language has undergone in order to better fulfil its role as an adaptation language.
Finally, this paper discusses the LAG language [8] as it currently stands, in view of the new extensions that have been performed, which aim for it to better fulfil its goal of combining a high level of semantics for authors with simplicity and portability as well as flexibility. Besides discussing these changes based on some sample strategies, this paper also presents an XML equivalent of LAG, which is to be used instead of the current language for portability between systems, as well as a novel authoring environment for the programmer or programming-savvy adaptation author, which applies feedback accumulated during various evaluation sessions with the previous set of tools. The outcomes of the tool, the adaptation strategies, can be used by any author [17]. This environment's first evaluation is also presented. The remainder of this paper is organized as follows. Section 2 introduces the new elements in the LAG adaptation language via scenarios for adaptation. It also discusses alternative representations for the LAG language. Section 3 introduces the PEAL environment for authoring, by comparing it with the previous LAG language authoring environment, as well as with alternative solutions. The section concludes with a discussion of this environment and its first evaluation. Section 4 presents related research. In Section 5, we draw general conclusions and pointers for further research.
2 ltsc.ieee.org/wg12/
3 www.adlnet.gov/scorm/
2 The Updated LAG Grammar, via Scenarios for Adaptation
2.1 The LAG Grammar History and Lessons Learnt
The LAG language concept was first introduced in [17], together with the LAG framework (hence, the similarity in name between language and framework, although they are distinct entities). As sketched in Section 1, the LAG framework distinguishes between adaptation strategy – accessible to all authors, via laymen-level descriptions; adaptation language – accessible mainly to computer-savvy authors (an example of such a language is the LAG language, although any adaptation language fits at this level); adaptation assembly language – only accessible to 'hard-core' computer-savvy authors. From the moment it was proposed, the LAG language was supposed to fill in the 'missing link': it had to be an adaptation language, thus at a higher level than what the LAG framework called 'adaptation assembly language': it had to be reusable, whereas adaptation assembly languages at the time were not. To give an example, it was possible then to write:
(a) IF Concept ('The Night Watch') has been visited THEN show Concept ('Rembrandt')
However, it was not possible to write: (b) IF Concept (title) has been visited THEN show Concept (author)
Thus, even such simple generalizations were not easily available to authors, who would have to manually connect all concepts, instead of writing reusable rules. Brusilovsky's taxonomy [3], used for defining the types of adaptation possible, also refers to such an assembly language level4. Take for instance the decision of showing a concept by stretchtext, versus showing it by regular text; or hiding it by removing it, or by graying it out. These are decisions which may be dependent on the capabilities of the adaptation and rendering engine. A given engine may allow for showing concepts or not, but not for applying stretchtext (e.g., the AHA! engine [18]). Using such low-level requirements might make an adaptation strategy impossible to use with different engines. Moreover, such a low-level requirement may have little to do with the pedagogy involved in teaching a course, for instance. A teacher author might decide that a certain piece of information is necessary for a student or not, but may leave the rest to the engine. Thus, another condition for a language to be an adaptation language was that it had to be convertible to lower-level assembly language, as per Brusilovsky's taxonomy, but that this exact conversion is to be left to the interpretation and specifics of the given adaptation engine (hence, the similarity with a programming language which is compiled into assembly language in order to run on a certain system). For the example above, any structure (b) as above, applied to a certain domain, could become something similar to (a). However, an adaptation language may not necessarily have IF-THEN constructs, as they themselves are relatively low level.
4 Although the taxonomy can be used for writing reusable rules, it still only specifies low-level actions performed on (usually specific) concepts.
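The contrast between rule (a) and rule (b) can be made concrete with a small sketch outside any adaptation engine (the domain data and function names below are invented for illustration): the hard-coded rule works for exactly one pair of concepts, while the generalized rule is written once over an attribute name and applies to every concept in the domain.

```python
# Tiny invented domain: concept titles mapped to their attributes.
domain = {
    "The Night Watch": {"author": "Rembrandt"},
    "Guernica": {"author": "Picasso"},
}
visited = {"The Night Watch"}

# (a) non-reusable rule, tied to specific concepts:
def rule_a(visited):
    return ["Rembrandt"] if "The Night Watch" in visited else []

# (b) reusable rule, written once over an attribute name:
def rule_b(visited, attr="author"):
    return [props[attr] for title, props in domain.items() if title in visited]

# both show the same concept here, but only (b) generalizes:
assert rule_a(visited) == ["Rembrandt"]
assert rule_b(visited) == ["Rembrandt"]
assert rule_b({"Guernica"}) == ["Picasso"]  # (a) cannot express this
```

This is exactly the reuse argument of the text: with (b), an author writes one rule for the whole domain instead of one rule per concept pair.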
Still, for compatibility with the engines of the time, the initial LAG language allowed for IF-THEN constructs, corresponding to assembly language constructs. Additionally, however, it also defined higher-level constructs, specific to the adaptation functionality, which are part of the adaptation language level within the LAG framework. Besides this important distinction, and essentially offering an instantiation of adaptation language ideas, it also defined what such a language should have: it should make use of the application domain (adaptive hypermedia) by 1) allowing it to be simpler, with fewer constructs than a regular programming language or a logic-based language (thus lowering the authoring threshold) – accordingly, elements were included in LAG only when considered necessary; and by 2) using constructs specific to the adaptive hypermedia domain, or assumptions that can be safely made in that domain. For instance, at the time it was safe to assume that most adaptive hypermedia applications have an underlying tree-like structure (as they were mainly designed for the educational domain, and, to some extent, inherited the organization into chapters-subchapters from educational books). This meant that, although hypermedia, in theory, are graphs with any desired connections, in practice they were (and many of them still are) trees with given hierarchies. Hence, the GENERALIZE and SPECIALIZE constructs were born, the former to visit concepts higher in the hierarchy, thus of higher generality, and the latter to visit concepts lower in the hierarchy, thus more specialized. It was then not a working language, just a proposal, which in the following year was developed [13] towards introducing, first of all, a tool for supporting this grammar. There, we also introduced the concept of adaptation procedures, i.e.
code snippets that can be reused by other authors, similarly to how procedures or function calls are used in other programming languages – with the significant, simplifying distinction, however, that no parameter exchange would take place, and that the extra code would be pasted in its entirety in place of the 'adaptation procedure' call. This made these procedures simpler than in regular programming languages, as per requirement 1 above. Moreover, these snippets could be used not only by their initial designer, but also by other authors, effectively creating a tool for customized language extension. This allowed for a higher level of reuse of adaptation languages, whilst keeping the processing simpler than in regular programming. The combination of requirements 1 & 2 above generated the following list of minimal constructs that should be present in a high-level adaptation language:
(a) constructs allowing domain structure- and composition-related adaptation: As said above, the domain structure and composition can be used to determine the adaptation process.
i. hierarchy-based adaptation: if a hierarchy is present, and concepts are grouped as concepts-subconcepts (such as the concept 'The Night Watch' being a subconcept of 'Most Famous Paintings'), this hierarchy can be used to determine the order of appearance (such as the concept 'Most Famous Paintings' and its information being shown before the concept 'Night Watch').
ii. other-relation-based adaptation: the most commonly used relation in domain models in adaptive hypermedia is that of concept-subconcept, as above. However, other relations might be possible, especially in systems
importing RDF5 structures, for instance. Adaptation languages should be able to use these relations in the adaptation process.
iii. domain-concept type-based adaptation: frequently, domain concepts have types (or other attributes). These can also be used in the adaptation process, thus should be accessible via the adaptation language.
(b) constructs allowing goal-related adaptation: Adaptive hypermedia goals can be related to the pedagogy involved, if an educational application is envisioned, or to a business goal, in an e-commerce application, for instance. These goals can determine how domain concepts can be used. A simple way of adding such information is via labels and weights overlaid over the domain concepts they refer to. For example, the concept 'The Night Watch' can be labelled 'visual' to be used in a strategy involving visual versus verbal presentation, could be labelled 'advanced' if it is to be used in a drawing and painting class, or 'beginner' if it is to be used in a class on famous paintings and painters. Thus, whilst this information is added to concepts in the domain model, it is independent of the domain. This type of independence between domain and goal (or pedagogic) model was proposed as a basis for adaptation language construction [8] and has found recognition as one of the design concepts in the GRAPPLE project.
i. label-based adaptation: see above.
ii. weight-based adaptation: as an alternative to label-based adaptation, numeric values can be used to label concepts with respect to the goal. This alternative is currently not used very frequently.
(c)
structure of the adaptation program:
i. Constructs defining the 'adaptation loop': Unlike regular procedural programs, the concept-driven adaptation in adaptive hypermedia happens in a loop. Users can visit the same concept several times. It may be that similar, or evolving, behaviour is needed at successive visits. Thus, similar to the collection of rules in expert systems, the programming constructs in adaptation languages can be triggered repeatedly, and in different orders. An adaptation language should allow for an 'adaptation loop' that defines the continuous interaction between user and system, and for mechanisms to ensure that the correct constructs are triggered at the correct time.
ii. Constructs allowing for an 'entry point': As adaptive hypermedia content is often based on the Web, it suffers from the same drawback as regular Web hypermedia: first-time users may visit the site. Thus, an adaptation language needs to be able to define what these users will see. This is different from the 'adaptation loop' above, where users already have some history of recorded behaviour in the system6. The most important difference between the 'entry point' and the 'adaptation loop' is that the 'entry point' is a one-off event. Constructs will be executed here only once.
'High-level language' thus means here a language created from an authoring perspective: an author is concerned with how the content, as well as the goal description for the particular application, can be used to model adaptation. The actual particulars
5 http://www.w3.org/RDF/
6 It is possible for this history of recorded behavior to be imported from a different application. In this case, direct entry into the 'adaptation loop' should be enabled.
of how the adaptation engine searches, retrieves, and renders each of these actions are of lesser relevance to the author, and could potentially add to the authoring complexity7. As will be shown in the following, the LAG language allows for all the constructs envisioned to be present in an adaptation language. A good update on the fundamental elements of the current basic LAG language is provided in [8]. There, handling of overlay variables, as well as independent variables, is shown for the different static representation layers supported: domain layer, goal and constraints layer (for representation of the goal of the application, such as a pedagogical goal for educational presentations, and a business goal for commercial applications), presentation layer, and user layer. These, together with the adaptation layer, which hosts the adaptation language, correspond to the layers as defined by the LAOS authoring framework for adaptive hypermedia [8]. Also there, the use of generic variables (for any domain or other static map) versus specific variables (for a given domain map or other static map) is described. Further extensions comprised authoring extensions for collaboration [13] and for meta-level reuse [21], in the sense of being able to describe meta-strategies triggering strategies [16], thus allowing reuse of strategies in an automatic manner. In the remainder of this section, we illustrate with the help of scenarios8 other recent developments of the basic language, grouped around the different types of adaptation which the language allows.
2.2 Hierarchy-Based Constructs for Adaptation
The generalize-specialize constructs initially proposed in LAG have been replaced with simpler ones that can be used as attributes of a concept, such as parent, child, level and order (inspired by XPath9, in the spirit of using constructs of accepted standards where possible). The strategy shown below is a depth-first strategy, which shows the concept labelled 'start' first, and then the rest of the content in a depth-first manner using the child-parent relations. The exact meaning of the constructs is given as comments in the strategy below:

  initialization(       // 'entry point': this defines what the user first sees
    PM.next = true      // allow for a 'next' button in the presentation;
                        // please note that no information is given as to how
                        // to render this 'next' button; this is up to the engine
    PM.ToDo = false     // don't allow for a 'To Do' list in the presentation
    PM.menu = false     // don't allow for a 'Menu' in the presentation
    while true(
      // show the first, father concept, labelled 'start':
      if GM.Concept.label == start then (
        PM.GM.Concept.show = true
      )
    )
  )
  implementation(       // 'adaptation loop': this defines the continuous
                        // adaptive interaction between user and system
    // if you visited the parent you should be able to visit the child:
    if UM.GM.Concept.parent.access then (
      GM.Concept.show = true
    )
  )
7 This statement is based on previous evaluation experiments and interviews.
8 Available for tryout at: http://prolearn.dcs.warwick.ac.uk/strategies.html
9 www.w3.org/TR/xpath
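For illustration only, the visibility rule of the depth-first strategy above can be simulated in Python; the concept names and the child-to-parent dictionary below are our own assumptions, not part of LAG. A child concept becomes visible once its parent has been accessed, and the concept labelled 'start' is visible from the beginning:

```python
# Minimal simulation of the depth-first visibility rule (illustrative only):
# child -> parent links over a small assumed hierarchy.
parents = {"start": None, "a": "start", "b": "a", "c": "start"}
accessed = set()

def visible(concept):
    """A concept is visible if it is the root or its parent has been accessed."""
    parent = parents[concept]
    return parent is None or parent in accessed  # 'start' has no parent

accessed.add("start")   # the user reads 'start' first...
accessed.add("a")       # ...then descends depth-first into 'a'
```

After these two accesses, 'b' (child of 'a') and 'c' (child of 'start') are both visible, mirroring how the engine progressively opens up the hierarchy.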
LAG 2.0: Refining a Reusable Adaptation Language and Improving on Its Authoring
Similarly, for a breadth-first strategy, the level of a concept can be used to show all concepts of higher or equal level (the rest of the strategy is omitted due to lack of space):

  // if the current concept level is lower than or equal to the current
  // user level, show the current concept (so users are only shown
  // concepts up to their current level):
  if GM.Concept.level <= UM.GM.level then ( PM.GM.Concept.show = true )
2.3 Relation-Based Constructs for Adaptation

Domain models have so far been presumed to be hierarchical. However, more complex domains can have many types of relationships, next to, or instead of, the hierarchical ones. To illustrate the use of related concepts, the LAG language supports a generic 'relatedness' relation between two concepts10. Here, only an excerpt of the strategy is shown (see footnote 11 for the 'enough' construct). The exact meaning of the constructs is given as comments below:

  // for advanced learners, show the related concepts:
  if enough(GM.Concept.access          // if a concept is accessed
            UM.GM.stereotype == adv    // and the user is labelled as 'advanced'
            , 2)                       // (both conditions must hold)
  then PM.DM.Concept.Relatedness.Concept.show = true
                                       // then show the concept related
                                       // to the current concept
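For illustration only, the counting semantics of 'enough' (explained in footnote 11) can be sketched in Python; the function name and the list-of-booleans representation are our own, not part of LAG:

```python
def enough(conditions, n):
    """True when at least n of the given boolean conditions hold."""
    return sum(bool(c) for c in conditions) >= n

# With n equal to the number of conditions, 'enough' acts as a conjunction,
# as in the excerpt above (concept accessed AND user stereotype advanced):
concept_accessed = True
user_is_advanced = False
print(enough([concept_accessed, user_is_advanced], 2))  # False: only 1 of 2 holds
```

Lowering n below the number of conditions turns the conjunction into a "k out of m" test, which is the game-inspired behaviour the footnote describes.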
2.4 Type-Based Constructs for Adaptation

Domain concepts are defined in LAG as having attributes, which can have predefined types or author-created types. Examples of predefined types are: introduction, explanation, conclusion, keywords, text, image, video. These types are intrinsic to the domain, and thus are not changed via the goal model. However, they can be used to guide the adaptation. The implementation below only shows 'introduction' concepts and does not show the concepts of type 'conclusion' until the introductions are read.

  // DESCRIPTION: show introductions first, then conclusions
  initialization(
    while PM.GM.Concept.type != conclusion (   // make only introductions
      PM.GM.Concept.show = true                // readable
    )
    UM.GM.showall = 0
  )
  implementation(
    // if a concept is accessed and it is an introduction:
    if enough(PM.GM.Concept.type == introduction
              UM.GM.Concept.access == true, 2)
    // then increase the showall counter:
    then ( UM.GM.showall += 1 )
    // if the showall counter is greater than a threshold
    // (here, 3, because we had three questions)
10 The LAOS framework [0] allows for multi-multi relations of any type; however, the current language only supports the relatedness relations, for compatibility with the MOT authoring environment [0].
11 The 'enough' construct is inspired by games and provides a way to combine several conditions. In some games, to pass a level, some conditions should be fulfilled, e.g., the collection of objects. The exact objects may not be specified, only that there should be 'enough' of them to move on. Similarly, 'enough' in LAG means that enough conditions should be satisfied. The number after the conditions specifies how many (here, 2).
  // and the type of the current concept is 'conclusion':
    if enough(UM.GM.showall > 3
              PM.GM.Concept.type == conclusion, 2)
    // then show the current concept:
    then ( PM.GM.Concept.show = true )
  )
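The gating behaviour of this type-based strategy can be mirrored in a short Python sketch (the counter and function names are our own illustration, not LAG constructs): conclusions become visible only once the showall counter exceeds the threshold of 3.

```python
showall = 0  # mirrors UM.GM.showall

def access(concept_type):
    """Register a visit; only introductions advance the counter."""
    global showall
    if concept_type == "introduction":
        showall += 1

def visible(concept_type):
    """Introductions are always readable; conclusions only past the threshold."""
    return concept_type != "conclusion" or showall > 3

for _ in range(4):            # the user reads four introductions
    access("introduction")
print(visible("conclusion"))  # True: showall is now 4 > 3
```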
2.5 Weight- and Label-Based Constructs for Adaptation

This strategy shows concepts based on their weights and labels. The idea in the LAOS framework [8] is that weights and labels are added in a layer separate from the domain map, the goal and constraints map (GM), where, for educational applications, they represent pedagogic knowledge. A typical pedagogical division is to label concepts as beginner-intermediate-advanced. Labelling these concepts outside the domain model means that a different labelling is possible for the same domain model concepts, with a different instance of the goal map. Thus, some matrix multiplication concepts can be labelled as beginner concepts for maths students, but as advanced concepts for music students, for instance. The same strategy can be applied to both, as below.

  implementation(
    // count visits for each type of label:
    if UM.GM.Concept.access == true then (
      if (UM.GM.Concept.beenthere == 0) then (
        if (GM.Concept.label == beg) then ( UM.GM.begnum -= 1 )
        if (GM.Concept.label == int) then ( UM.GM.intnum -= 1 )
        if (GM.Concept.label == adv) then ( UM.GM.advnum -= 1 )
      )
      UM.GM.Concept.beenthere += 1
    )
    // --------------------------------------------------------
    // Change stereotype beg -> int -> adv when appropriate:
    // --------------------------------------------------------
    if enough(UM.GM.begnum < 1      // 2 below means all conditions
              UM.GM.knowlvl == beg  // should be satisfied
              , 2)
    then ( UM.GM.knowlvl = int )
    if enough(UM.GM.intnum < 1
              UM.GM.knowlvl == int, 2)
    then ( UM.GM.knowlvl = adv )
    // show the concepts with the appropriate knowledge level:
    if (GM.Concept.label == UM.GM.knowlvl) then ( PM.GM.Concept.show = true )
  )
Thus, weights and labels in the goal model should be the default way for a strategy to express adaptation. However, we have found that some properties that are domain dependent can also be exploited; Section 2.4 showed how the types of domain concepts can be used for adaptation.

2.6 LAG Language versus XML Languages

XML languages can be preferable, especially for system-to-system conversions. Whilst we do not believe they can completely replace programming-based approaches, as they are very verbose if used directly to program by hand, they are definitely useful for interfacing. Thus, export to and import from such formats is desirable. Converting the LAG language to an XML specification is relatively straightforward: for each LAG construct, an XML element can be defined, with sub-elements that enforce the prescribed grammar. Thus, a LAG adaptation strategy will contain:

  <description> the layman description of the strategy </description>
  <initialization> a user's first view of the system </initialization>
  <implementation> the interaction loop user-system </implementation>
A more complex condition, such as the one using 'enough' in the type-based adaptation (Section 2.4), can look as below. Obviously, the XML format is more verbose and should not be used for direct programming by hand.
  <enough number="2">
    UM.GM.showall > 3
    PM.GM.Concept.type == conclusion
    PM.GM.Concept.show = true
  </enough>
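A sketch of such a LAG-to-XML export in Python; the condition and action sub-element names are our assumption (the excerpt above only fixes the enough element and its number attribute):

```python
import xml.etree.ElementTree as ET

def enough_to_xml(conditions, action, number):
    """Serialize an 'enough' construct as an XML element (illustrative only)."""
    elem = ET.Element("enough", number=str(number))
    for c in conditions:
        ET.SubElement(elem, "condition").text = c  # assumed sub-element name
    ET.SubElement(elem, "action").text = action    # assumed sub-element name
    return ET.tostring(elem, encoding="unicode")

xml = enough_to_xml(["UM.GM.showall > 3", "PM.GM.Concept.type == conclusion"],
                    "PM.GM.Concept.show = true", 2)
# round-trip: the XML can be parsed back into the construct's parts
root = ET.fromstring(xml)
```

The round-trip through a parser is what makes the XML form useful for the system-to-system interfacing argued for above.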
3 PEAL Environment for Authoring

3.1 The Problem

A previous online editor had been created12 [15][17] (Fig. 1). It allows insertion of only predefined authoring constructs, thus providing handling help, as mentioned in Section 1 (in Fig. 1, left side, the [add statement] link only allows predefined constructs, such as the generalize construct below; the generalize construct in turn only allows condition insertion, via the [add condition] input; clicking this makes the [attribute][operator][value] construct appear, which can only be populated with generic concepts, such as the 'UM.Concept.type==expert' on the left side of Fig. 1, or with specific concepts, not shown here). Moreover, for the specific concepts, the environment allows direct database access to a domain model and goal and constraints model database, permitting selection of appropriate specific concepts directly from the respective instances13. Thus, it provides ample support for authors, lowering the authoring threshold. In terms of flexibility, it also allows the definition of both adaptation strategies and adaptation procedures (Section 2.1). However, this environment is not up-to-date with the current LAG grammar. Most importantly, it does not allow for the separation of the interaction into initialization (what the user sees at the first interaction with the system, when nothing else is known about them) and the interaction part (called 'implementation' in the LAG language; this describes the loop of interaction between the user and the system and, like a rule list, is triggered as long as the conditions of the rules hold). The environment also does not directly support types of domain concepts, as in Section 2.4, relatedness relations, as in Section 2.3, or other minor extensions of the LAG language. For instance, the update rule for Fig. 1, right side, should read:
12 http://elearning.dsp.pub.ro/motadapt/
13 This idea has been revived in the GRAPPLE authoring tool, where adaptation tools can access the domain model authoring tools via a common shell [0].
Fig. 1. Initial Programming Environment for the LAG language
  PM.DM.Concept.exercise_expert.show = true

thus marking the fact that the 'show' variable (determining whether or not to show something to the user) is part of the presentation model (PM), and 'exercise_expert' is an attribute of the current concept in the domain model (DM). Currently, due to the differences between the grammar of the old online environment and the actual specification, we use simple text editors as adaptation language creation environments. However, this has several drawbacks: errors are only spotted at compilation time; authors receive no help or support whatsoever whilst editing; and authors themselves need to keep track of the version of the language they use, and may thus be working with a version where some of the programming constructs are obsolete.

3.2 Objectives

Thus, the PEAL development set out with the following objectives14:
• Develop an online, AJAX-based programming environment for the LAG language, based loosely upon the existing online editor, and developed from an existing open-source project entitled CodeMirror15.
• Save (to database or to files) and export (to file/database) adaptive hypermedia strategies written in LAG, for either further editing or use in the AHA! delivery system [18].
• Database-driven user access and strategy storage, similar to the existing online system.
• An efficient storage system for the way in which AH strategies are constructed by the user in LAG using this programming environment.
• Use the above storage mechanism to recognise basic LAG grammar (e.g., statement construction, use of operations), and provide a warning system when strategies do not follow the LAG grammar.
• Include basic templates for new adaptive hypermedia strategies when they are created.
• Recognition of complex LAG grammar (e.g., nested statements, complex statements and user-defined 'procedures').
• Colourise reserved words, both those defined by the LAG grammar and user-defined variables, including an effective storage system for these reserved words.
• Drag-and-drop standard code sections and user-defined code sections; list word-completion of standard words and of user-defined words.
• Integration of basic demonstration content with a strategy; integration of user-defined content with a strategy, including a strictly defined required format for user-defined content.

14 Some typical of programming environments, some specific to the LAG language.
15 http://marijn.haverbeke.nl/codemirror/
3.3 Programming Environment for Adaptation Language (PEAL)

The Programming Environment for Adaptation Language (PEAL) is shown in Fig. 2.
Fig. 2. Programming Environment for Adaptation Language (PEAL)
The following components have been realized: User Access and File Storage; Basic Template for Strategies; Strategy Creation Wizard; Tokenizer/Parser Design; Reserved Word Storage; Configuration File; and production of a CAF file for each strategy. The major focus of this project was to create a parser for the LAG language which would recognise and colour correct syntax and grammar, and highlight incorrect syntax and grammar. This is a different take on providing handling help than before, one better known to the programmer community: language constructs are identified and coloured accordingly. If a construct cannot be identified, it is coloured bright red, signalling an error. The storage system with which the strategy document structure is stored is provided by the CodeMirror system in the form of a simplified DOM structure, so we have been working from that basis when designing grammar
recognition. A detailed design of the user access and file storage system has been set up, including the database design to hold user details and a security system ensuring that passwords cannot be obtained and files cannot be accessed by unauthorised users. We have also designed a method by which errors in the code can be easily highlighted. This may be further developed by supplying messages that help users determine the problem more easily. Currently, the design for reserved words in the LAG language requires their storage in the parser and tokenizer. However, for greater extensibility, we will design a method using the object-orientation techniques provided by JavaScript to store keywords externally to the parser and tokenizer, in their own 'class'. Moreover, we have implemented: predictive word completion; MOT2AHA integration support; and drag-and-drop coding. Predictive word completion will be used in the wizard and also in the editor.

3.4 PEAL Evaluation

PEAL was evaluated on a small scale, with the help of five programming-savvy persons, as they represent the type of author at which this tool is targeted. Non-programmers are expected to use ready-made, reusable strategies, without getting into the details of programming. Nielsen [22] showed that 95% of usability problems can be detected with just five users. Users were asked to identify the strengths and weaknesses of PEAL. Amongst the strengths, the following were mentioned: "element suggestion and possible variables"; "syntax highlighting"; "coloured keywords"; "simple screen"; "use of other strategies"; "completion suggestions"; "code highlighting and indentation"; "save and reuse code fragments"; "highlighting and word completion"; "reusable code snippets". Amongst the weaknesses, the following were mentioned: "Sometimes lines of code jump - automatically indented when the cursor moves."; "No search function (or search and replace)";
"If there are multiple errors only one is displayed in the status bar."; "Some display-size bugs exist with the text entry box." The programmers were also asked to state which editor they would prefer for editing the LAG language: regular text editors (used previously for editing the language) or PEAL. Without exception, they all voted for PEAL. These initial evaluations highlight the usability of PEAL and the fact that such an environment is needed, and also immediately pointed to some bugs that are to be fixed in the near future.
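For illustration, the keyword-colouring behaviour of the parser described in Section 3.3 can be sketched in Python (PEAL itself is implemented in JavaScript on top of CodeMirror; the keyword list, token classes and regular expression below are our own assumptions, not PEAL's actual tables):

```python
import re

KEYWORDS = {"initialization", "implementation", "if", "then", "while",
            "enough", "true", "false"}        # assumed reserved words
MODELS = {"UM", "GM", "PM", "DM"}             # model prefixes for variables
TOKEN = re.compile(r"[A-Za-z_][\w.]*|[=!<>+\-]+|[(),]")

def colourise(source):
    """Classify each token; unrecognised constructs get the 'error' class
    (rendered bright red in the editor)."""
    classes = []
    for tok in TOKEN.findall(source):
        if tok in KEYWORDS:
            classes.append((tok, "keyword"))
        elif tok.split(".")[0] in MODELS:
            classes.append((tok, "variable"))
        elif re.fullmatch(r"[=!<>+\-]+|[(),]", tok):
            classes.append((tok, "operator"))
        else:
            classes.append((tok, "error"))
    return classes
```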
4 Related Work

The level of abstraction of semantics we envision in our work, which can lead to the reuse of adaptation strategies, is expressed in our paper via an adaptation language. An alternative is to express such sequences and interaction via workflow languages16. However, workflow languages have previously been shown to be insufficient to express personalization at the level of expressiveness delivered by adaptive hypermedia [20]. Although not shown here, the LAG language can express various personalization strategies (based, e.g., on preferences, learning styles, or goals) and, with the proposed extensions, is more powerful than workflow languages.
16 http://www.yawlfoundation.org/
A popular and growing competitor to adaptation languages and adaptive hypermedia expressivity is IMS-LD17. Research has shown, however, that IMS-LD is not yet capable of delivering all the adaptation functionality defined by adaptive hypermedia [1], [10], and it also has serious limitations when it comes to adaptively supporting collaborative learning [13]. The main question is how this contribution can be beneficial for other members of the community, since LAG introduces various restrictions stemming from assumptions on how several models need to be structured. Logic-based languages or meta-modelling seem to offer a somewhat more flexible option. Other adaptation languages also exist (see, e.g., LAG-XLS [16]). The latter language caters for learning styles, but would need further extensions to cater for more extensive personalization as well as collaborative aspects. Adaptation languages are based on the rule-based approach. Alternatively, reasoning mechanisms can be used to express adaptive behaviour (e.g., description logic). Currently, logic-based languages are too broad, with a multitude of constructs that are not of use in current adaptive hypermedia, thus contradicting the first constraint in Section 2.1. New developments in the Semantic Web offer new vehicles for reasoning, such as RDF and OWL18 (used also by adaptive learning systems, such as the Personal Reader [19]). These may provide viable alternatives in the future, but currently systems based on such mechanisms have serious performance problems compared to rule-based systems. Furthermore, whereas such approaches are very good at expressing modelling information, interrelations, etc., they lack direct and simple support for the programmatic constructs required to express most of the behaviours we have discussed in this paper.
Meta-modelling, the process of designing languages through meta and meta-meta notations, such as a DTD (document type definition) on top of XML, is a useful approach for describing the domain content and the different overlay structures (such as the goal structure). It is less useful, however, for defining adaptation. Yet another direction of adaptation representation is the family of "assembly-level" adaptation languages, as used in systems such as AHA! [18], InterBook (Word-document-based) [25] and WHURLE (LP: lesson plan) [26]. The problem with such languages is that not only do they lack many of the required features outlined in the previous sections, but they are also extremely verbose, making it difficult for non-experts and experts alike to express high-level, reusable adaptation strategies in them, and rendering them a questionable choice as the basis of adaptive specifications. They may, nevertheless, serve as an appropriate "end-of-line representation", in essence a possible "output format" for higher-level languages such as LAG. The same is true for specifications such as IMS LD, already mentioned above. Our approach is also related to pattern languages [23]: extracting snippets of adaptive behaviour (here, for collaborative adaptation) that are to be reused in different contexts (e.g., by different learners or teachers, groups of learners or teachers, with different course materials, etc.).
17 http://www.imsglobal.org/learningdesign/
18 http://www.w3.org/TR/owl-features/
5 Conclusion and Future Work

In this paper, we have described the current developments of the LAG adaptation language, one of the first adaptation languages, in the context of the use and application of adaptation languages in general. We have also sketched an XML equivalent of LAG, which can be used for enhanced portability. Moreover, we have shown current progress in the design, implementation and small-scale evaluation of PEAL, a new environment for LAG language specification, which builds upon lessons learnt from previous implementations. Other lessons learnt, which can be of use to the research community at large and are connected to future work, are as follows:
1) Authors expect the authoring environment to be online, the same as their environments for domain and other static map editing; they do not expect to have to install systems on their own computers. This is why both the initial LAG environment and PEAL are online editors.
2) An adaptation language as a human-computer interface is useful only for programming-savvy authors. Other authors should not be expected to program from scratch, but only to reuse existing strategies. Hence, the strategy description in natural language is vital (strategy metadata, as depicted in the 'description' part of a LAG language strategy).
3) It is easier to ask authors who have some level of programming knowledge to extend or modify existing strategies, thus adapting the strategies to new requirements gradually.
4) Simplifying the language can lower the threshold for authoring, but might cause rejection effects among experts.
5) Similarly, visualization of the adaptation language can significantly lower the threshold for authoring novices.
6) Adaptation languages are also useful for computer-computer or system-system interfaces. Thus, they will not disappear, but will instead be the 'hidden' knowledge behind fancy interfaces, and directly authored from scratch only by experts.
7) XML languages or XML compatibility are desirable, due to the automatic processing provided for web-based systems.
8) Whilst having a generic standard for all adaptive systems might sound ideal, a practical solution might be to mimic the use of standards elsewhere: e.g., LMSs export to and import from various e-learning standards (LOM, IMS-CP, IMS-QTI, etc.). Similarly, adaptive systems, whether authoring systems or not, might need to be able to export various adaptation languages, in order to be compatible with a wide variety of applications.

Acknowledgments. This research is supported by the GRAPPLE IST project IST-2007-215434 and was initiated within the PROLEARN Network of Excellence.
References

1. Stash, N.: Incorporating Cognitive/Learning Styles in a General-Purpose Adaptive Hypermedia System. PhD Thesis, Eindhoven Univ. of Technol., The Netherlands (2007)
2. Rich, E.: User modeling via stereotypes. Cognitive Science 3(4), 329–354 (1979)
3. Brusilovsky, P.: Methods and techniques of adaptive hypermedia. Journal of User Modeling and User Adapted Interaction 6(2-3), 87–129 (1996)
4. Brusilovsky, P.: Developing adaptive educational hypermedia systems: From design models to authoring tools. In: Murray, T., et al. (eds.) Authoring Tools for Advanced Technology Learning Environments, pp. 377–409. Kluwer Acad. Publishers, Dordrecht (2003)
5. Berlanga, A., et al.: Modelling adaptive navigation support techniques using the IMS learning design specification. In: Hypertext 2005, Salzburg, Austria, pp. 148–150 (2005)
6. Cannataro, M., et al.: Modeling Adaptive Hypermedia with an Object-Oriented Approach and XML. In: Proc. of WebDyn 2002, Honolulu, Hawaii (May 2002)
7. Ceri, S., et al.: Web Modeling Language (WebML): a modeling language for designing Web sites. Computer Networks 33(1-6), 137–157 (2000)
8. Cristea, A., De Mooij, A.: LAOS: Layered WWW AHS Authoring Model and their corresponding Algebraic Operators. In: WWW 2003, Budapest, Hungary (2003)
9. Koch, N., Wirsing, M.: Software Engineering for Adaptive Hypermedia Applications? In: AH Workshop at UM 2001, Sonthofen, Germany, July 13-17 (2001)
10. Specht, M., Burgos, D.: Modeling Adaptive Educational Methods with IMS Learning Design. Journal of Interactive Media in Education (2007)
11. Cristea, A.I., et al.: Towards a generic adaptive hypermedia platform: a conversion case study. J. of Digital Info. (JoDI), Spec. Iss. on Personalis. of Comp. & Services 8(3) (2007)
12. Cristea, A.I.C., Stewart, C.D.: Automatic Authoring of Adaptive Educational Hypermedia. In: Ma, Z. (ed.) Web-based Intelligent E-Learning Systems: Technologies and Applications, pp. 24–55. Info. Science Publishing (IDEA Group) (2006)
13. Ohene-Djan, J.: A Formal Approach to Personalisable, Adaptive Hyperlink-Based Interaction. PhD Thesis, Dept. of Computing, Goldsmiths College, Univ. of London (2000)
14. Stash, N., et al.: Adaptation languages as vehicles of explicit intelligence in Adaptive Hypermedia. IJCEEL Journal 17(4/5), 319–336 (2007)
15. Cristea, A.I., Verschoor, M.: The LAG Grammar for Authoring the Adaptive Web. In: ITCC 2004, Las Vegas, US. IEEE, Los Alamitos (2004)
16. Stash, N., et al.: Adaptation to Learning Styles in E-Learning: Approach Evaluation. In: Proceedings of the E-Learn 2006 Conference, Honolulu, Hawaii (2006)
17. Cristea, A.I., Calvi, L.: The three Layers of Adaptation Granularity. In: Brusilovsky, P., Corbett, A.T., de Rosis, F. (eds.) UM 2003. LNCS, vol. 2702. Springer, Heidelberg (2003)
18. De Bra, P., et al.: The Design of AHA! In: Proceedings of the ACM Hypertext Conference, Odense, Denmark, August 23-25, p. 133 (2006)
19. Cristea, A.I.: Adaptive Course Creation for All. In: ITCC 2004 (International Conference on Information Technology), Las Vegas, US, April 2004. IEEE, Los Alamitos (2004)
20. Hendrix, M., et al.: Defining adaptation in a generic multi layer model: CAM: The GRAPPLE Conceptual Adaptation Model. In: EC-TEL 2008 (2008)
21. Hendrix, M., Cristea, A.: A meta level to LAG for Adaptation Language re-use. In: A3H: 6th Int. A3H Workshop, AH 2008, Hannover, Germany (2008)
22. Nielsen, J.: Usability Engineering, p. 165. Academic Press Inc., London (1994)
23. Alexander, C.: A Pattern Language: Towns, Buildings, Construction. Oxford University Press, USA (1977)
24. Dolog, P., et al.: The Personal Reader: Personalizing and Enriching Learning Resources using Semantic Web Technologies. In: De Bra, P.M.E., Nejdl, W. (eds.) AH 2004. LNCS, vol. 3137, pp. 85–94. Springer, Heidelberg (2004)
25. Eklund, J., Brusilovsky, P.: InterBook: An Adaptive Tutoring System. UniServe Science News, vol. 12, pp. 8–13 (March 1999)
26. Moore, A., et al.: WHURLE - an adaptive remote learning framework. In: ICEE 2003, Valencia, Spain, July 22-26 (2003)
The Conceptual and Architectural Design of a System Supporting Exploratory Learning of Mathematics Generalisation Darren Pearce and Alexandra Poulovassilis London Knowledge Lab, Birkbeck College {darrenp,ap}@dcs.bbk.ac.uk
Abstract. The MiGen project is designing and developing an intelligent, exploratory environment to support 11–14-year-old students in their learning of mathematical generalisation. Deployed within the classroom, the system will also provide tools to assist teachers in monitoring students’ activities and progress. This paper describes the conceptual and architectural design of the system, and gives a detailed technical explanation of a working proof-of-concept prototype of the architecture, motivating in particular the technologies and approaches chosen to implement the necessary functionality given the context of the project. We also discuss how the prototype will be used as a basis for developing the first full version of the MiGen system, in the context of ongoing knowledge acquisition and analysis within the project’s iterative, stakeholder-centred design, development and testing methodology. Keywords: Exploratory Learning, Mathematics Generalisation, Intelligent Support, Teacher Assistance Tools.
1 Introduction

The use of algebra to express general concepts lies at the heart of mathematics, and the difficulty that algebraic thinking poses for children has been widely studied (e.g. [1,2]). The MiGen project is aiming to co-design, develop and evaluate, with teachers and teacher educators, a pedagogical and technical environment for improving 11–14-year-old students' learning of mathematical generalisation.1 The idea of 'seeing the general through the particular' is a powerful way of introducing students to generalisation [3]. In the MiGen project, we are adopting a constructivist approach, allowing students to create and manipulate patterns and expressions and to perceive the relationships between them. Our aim is to support students' construction of their own models through exploration, while at the same time fostering their progressive building of knowledge [4]. There has been little previous work on supporting students in a constructivist context (e.g. [5]), and specifically in microworlds (e.g. [6,7]). Conversely, it has been argued that considerable guidance is required to ensure learning with constructivist instruction [8]. The nature of our learning environment requires that feedback be provided to students by the system during their modeling process, as well as at the end of it. Since
1 See http://www.migen.org/ for details of the project's aims and background.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 22–36, 2009. c Springer-Verlag Berlin Heidelberg 2009
students will be undertaking exploratory rather than structured tasks, teachers need to be assisted in monitoring students’ activities and progress by appropriate visualisation and notification tools. These will assist the teacher in focussing her attention across the class and will inform her own interventions to support students in reflecting on their constructions, on the system’s feedback, in setting and working towards goals, and in communicating and working with others [9]. Designing, developing and evaluating a system such as this poses both pedagogical and technical challenges, demanding an interdisciplinary approach. The MiGen project team comprises researchers from the mathematics education, AI in Education, and computer science disciplines. The pedagogical and technical challenges that we are currently addressing in the project are: (i) understanding and modeling the mathematics generalisation domain, the tasks to be undertaken by learners and the learners themselves; (ii) identifying the information about learners’ constructions that needs to be captured in order to underpin the provision of effective feedback by the system to learners and to the teacher; (iii) designing the feedback for the learner, and trialling and deploying appropriate intelligent techniques to generate such feedback; (iv) similarly, investigating, trialling and deploying appropriate visualisation techniques for the presentation of feedback to the teacher; and (v) designing and developing an extensible, scalable client-server architecture which will support multiple concurrent users (students and teachers) in a classroom setting, and will readily allow the incremental development and evaluation of the various components of our system during the course of the MiGen project, and beyond. This paper focuses primarily on (v), including also some aspects of (i) and (iv). 
We refer the reader to [10,11] for discussion of possible techniques underpinning (iii), and to [9] for discussions of (i) and (ii). This paper is structured as follows. Section 2 sets the scene by giving an overview of the envisaged context and functionalities of the MiGen system. Section 3 discusses the Conceptual Model of the system and the methodology we have adopted in developing this. Section 4 presents our overall system architecture, which aims to encompass the system functionalities presented in Section 2 while also meeting the necessary requirements in respect of extensibility, performance and easy installation within schools. Section 5 gives technical details of an architectural proof-of-concept prototype, motivating the technologies and approaches chosen to implement the necessary functionality. We also discuss how this prototype will be used in the coming months as the basis for developing the first full version of the MiGen system. We give our concluding remarks in Section 6.
2 The MiGen System Context and Functionalities

The MiGen system will be deployed in classrooms within schools. During a lesson, students will be working on mathematics generalisation problems as selected by their teacher and presented to them by the system. Students may be working (individually or in groups) on different variants of the same problem or on different problems. During the lesson, the teacher may wish to view real-time representations of the students’ activities and progress. At other times, teachers may also wish to access historical information about their students’ activities and progress as maintained by the system.
D. Pearce and A. Poulovassilis
The MiGen system will comprise a number of tools: (i) An Activity Design and Activity Management Tool. This supports the visual design, storage and enactment of activity sequences targeting maths generalisation. These activity sequences typically include phases such as: introduction to a task; undertaking a task (using the eXpresser tool; see below); reflecting on a task; and discussing constructions with other students (using the Discussion Tool; see below). We are considering using the LAMS system2 to provide this kind of functionality (excluding the construction and discussion functionality provided by the eXpresser and Discussion Tool). Currently, activity sequences are being designed by members of the research team. In the longer term, we expect that teachers will reorganise existing activities and create new ones. (ii) The eXpresser. This is a mathematical microworld [12,13] supporting students in undertaking maths generalisation tasks which have been specified using the Task Design Tool (see below). The current version of the eXpresser supports individual construction; in the longer term we anticipate also supporting design of maths generalisation tasks to be undertaken collaboratively by a number of students. Students’ actions undertaken within the eXpresser are stored by the system within an eXpresser Session Log. In the eXpresser, students are asked to construct ‘generalised patterns’ using a set of facilities that have been co-designed with teachers and teacher educators. Fig. 1a and Fig. 1b show two instances of a pattern that students may be asked to construct. The eXpresser supports them not only in constructing patterns but also in deriving expressions, such as the number of green tiles required (light grey) given the number of blue tiles (dark grey). For example, if a student has constructed their pattern using a series of ‘C’ shapes (as illustrated in Fig.
1c), they may derive an expression such as 5b + 3 for the number of green tiles, where b is the number of blue tiles. Other possible constructions that students may follow are shown in the remaining diagrams of Fig. 1, which also show that the form of the resulting expressions can vary widely (though all of them, if correct, will be equivalent to 5b + 3, of course). The constructions in Figs. 1c–g are general in that they make use of the pattern construction facilities of the eXpresser. As a result, changing the number of blue tiles will lead to the student’s pattern changing appropriately. For example, setting b = 4 will give the pattern of Fig. 1a and setting b = 5 that of Fig. 1b. We refer the reader to [14,15] for further details of the eXpresser. (iii) A Discussion Tool. This will allow students to view and discuss other students’ constructions. (iv) A Task Design Tool. This will support the specification of maths generalisation tasks to be undertaken using the eXpresser. In the first instance, this will be a research-team-oriented tool. In the longer term, it will evolve into a teacher-facing tool which allows the designer/teacher to associate learning objectives with a given task and to construct possible solutions for the task so as to enable comparison against students’ actual constructions by the intelligent components of the system, e.g. to detect which construction approach a student is likely to be following, or whether they are possibly off-task. We refer the reader to [10,11] for discussion of a case-based reasoning approach to undertaking such analyses.
2 See http://www.lamsinternational.com/ for more details.
The Conceptual and Architectural Design of a System
Fig. 1. (a)–(b) Instances of an example pattern and (c)–(g) several possible general constructions where each expression specifies the number of green tiles in terms of b, the number of blue tiles. The expressions shown in the figure are 5b + 3, 3 + 5b, 2(2b + 1) + b + 1, 3(b + 1) + 2b and 6 + 2(2b − 1) + b − 1.
(v) The eGeneraliser. This is a suite of intelligent components which take as their input information from the eXpresser as students are undertaking tasks, as well as information stored in the MiGen database relating to students (their learner model) and to tasks. These components generate real-time feedback for students and update their learner model attributes as appropriate. The eGeneraliser will provide different types of feedback, e.g. prompts to help students start a task, identify errors in their solutions, generalise their solutions, seek help from the teacher, etc. (see [9,11] for further discussion). Information about what feedback has been given to each student will be stored within the MiGen database (in the Feedback Log). This will be used by the Teacher Assistance Tools (see below) to allow the teacher to see how the system is interacting with her students (as well as for analysis purposes by the research team). (vi) The Teacher Assistance Tools. This is a suite of tools aiming to assist the teacher in monitoring students’ activities and progress, and intervening with additional support for students as she decides appropriate. In the first instance, these tools comprise a Classroom Dynamics (CD) Tool and a Student Tracking (ST) Tool. The CD tool will provide a visual overview of students’ locations in the classroom and their progress with respect to ongoing tasks, so that the teacher can view different facets of their activities and use this to inform her choice of interventions within the classroom. The teacher will be able to select from a range of information to visualise, some of which will be derived from the eXpresser Session Log (e.g. which students have started to use variables in their construction, which have submitted their construction, which have created an algebraic expression for their construction, etc.) and some from the Feedback Log and Learner Model data generated by the eGeneraliser (e.g.
which construction approaches students are using, which students may be off-task, which students have achieved a particular “landmark” in their construction such as moving from a specific to a general solution). Similarly, the ST tool will provide teachers with information about individual students’ progress with respect to a given task so that the teacher can follow their progress on this task over time, as well as intervene as appropriate within the classroom to set new goals for an individual student. Some of this information will be derivable from the eXpresser Session Log (e.g. whether the student has completed all aspects of a task) and some from the Feedback Log and Learner Model (e.g. what the system has inferred about the student’s constructions, what feedback the system has given to the student).
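As noted above, all of the Fig. 1 constructions, if correct, yield expressions equivalent to 5b + 3. This equivalence can be checked mechanically; the following sketch is our own illustration, not part of the MiGen codebase:

```java
// Illustrative check (not MiGen code): the Fig. 1 expressions for the
// number of green tiles, each encoded as a function of b (the number of
// blue tiles), all agree with 5b + 3.
public class PatternExpressions {
    interface Expr { int apply(int b); }

    static final Expr[] FIG1_EXPRS = {
        b -> 5 * b + 3,                   // 5b + 3
        b -> 3 + 5 * b,                   // 3 + 5b
        b -> 2 * (2 * b + 1) + b + 1,     // 2(2b + 1) + b + 1
        b -> 3 * (b + 1) + 2 * b,         // 3(b + 1) + 2b
        b -> 6 + 2 * (2 * b - 1) + b - 1  // 6 + 2(2b - 1) + b - 1
    };

    // True iff every expression gives the same count as 5b + 3 for this b.
    static boolean allAgree(int b) {
        for (Expr e : FIG1_EXPRS) {
            if (e.apply(b) != 5 * b + 3) return false;
        }
        return true;
    }
}
```

For instance, for b = 4 (the instance of Fig. 1a) every expression gives 23 green tiles.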
3 The MiGen Conceptual Model

As stated in Section 1, the pedagogical and technical challenges posed in designing, developing and evaluating a system such as MiGen necessitate an interdisciplinary approach. An iterative methodology has been adopted on the project, comprising successive cycles of:
i. pedagogical and technical research;
ii. requirements elicitation within the domains of mathematics generalisation and intelligent support;
iii. requirements analysis and specification in collaboration with students, teachers and teacher educators;
iv. development of activity sequences and tasks to underpin evaluation of the technical deliverables;
v. development (design, implementation and testing) of technical deliverables;
vi. evaluation of the technical and pedagogical environment, both within the research team and with groups of students and their teachers; and
vii. analysis of evaluation results, elicitation of pedagogical and technical outcomes, and planning of the next cycle.
In this section, we discuss the MiGen Conceptual Model, which falls under item (iii) above. In Section 4 we present the MiGen System Architecture, which falls under item (v). Producing a Conceptual Model for the MiGen system has aimed to make explicit the key system entities and the relationships between them, as a necessary first step in the development of the system. The process started from the earlier requirements elicitation activities, reported in [4,9,11], on the basis of which a first version of the Conceptual Model was produced by the authors. Discussions were then held with the rest of the research team, resulting in several refinements, modifications and extensions, and leading to the second version that we report on here.
The overall Conceptual Model (CM) comprises four overlapping subsections referring to the following aspects (breaking down the overall CM into a number of subsections facilitates its description and the inclusion of diagrams within an A4 document): (a) Users and Learner Models; (b) Students’ Constructions; (c) Tasks, Activity Sequences, Learning Objectives and Landmarks; and (d) Task Solutions. For reasons of space, we present just the first three of these here and refer the reader to [16] for the full CM (including entity attributes).

3.1 Users and Learner Models

Fig. 2 shows the entities relating to users and learner models and the relationships between them. There are three types of user of the MiGen system: student, teacher and researcher (this relationship is indicated in the diagram by means of single-headed arrows from these entities to the User entity). For each task undertaken by a student, the eGeneraliser maintains information within a TaskShortTermModel on the student’s ongoing progress through the task (by way of explanation of the notation, at each end of an edge linking two entities is an indication of the cardinality of that particular end of the relationship and an optional verb phrase; so a Student ‘has’ zero or more TaskShortTermModels, while a TaskShortTermModel ‘belongs to’ one Student). The TaskShortTermModel is subsequently used (by the eGeneraliser) to derive a longer-term model of the student’s strategies and outcomes in relation to this task, the TaskLongTermModel. This, in turn, is used to derive a model of the learner’s general understanding of the domain of mathematical generalisation (their DomainLongTermModel). This links with the overall DomainModel of the MiGen system (there is only one instance of this entity), which includes concepts such as ‘constants’, ‘variables’, ‘constructions’ and ‘expressions’. (Thus, a student’s “learner model” consists of their TaskShortTermModels, TaskLongTermModels and DomainLongTermModel.)

Fig. 2. Entity Relationship Diagram for users and learner models
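The ‘has’ / ‘belongs to’ cardinalities between Student and TaskShortTermModel can be sketched in code. The class names below follow the Conceptual Model, but the fields and methods are our own illustrative assumptions, not the project’s implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Fig. 2 cardinalities (entity names from the Conceptual
// Model; fields and accessors are illustrative assumptions).
class Student {
    // A Student 'has' zero or more TaskShortTermModels...
    private final List<TaskShortTermModel> taskShortTermModels = new ArrayList<>();

    TaskShortTermModel startTask(String taskId) {
        TaskShortTermModel m = new TaskShortTermModel(this, taskId);
        taskShortTermModels.add(m);
        return m;
    }

    List<TaskShortTermModel> getTaskShortTermModels() { return taskShortTermModels; }
}

class TaskShortTermModel {
    // ...while each TaskShortTermModel 'belongs to' exactly one Student.
    private final Student student;
    private final String taskId;

    TaskShortTermModel(Student student, String taskId) {
        this.student = student;
        this.taskId = taskId;
    }

    Student getStudent() { return student; }
    String getTaskId() { return taskId; }
}
```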
3.2 Students’ Constructions

Fig. 3 shows the entities involved in students’ construction of patterns. In interacting with the eXpresser, the student generates a sequence of StudentActions, arising from creating patterns and changing the attributes of patterns. This data is used by the eGeneraliser to derive and update the student’s TaskShortTermModel (introduced above), and StudentActions contribute to the student’s overall solution to the task (the TaskStudentSolution). Solutions may include TaskExpressions, which are part of the task specification and require the learner to answer algebraic questions about their constructions. During the task, the learner implicitly transitions through a number of TaskStates as they create and manipulate their construction. Examples of task states are: ‘student is currently constructing a specific instance of the pattern’, ‘student is currently constructing a general solution’ and ‘student is creating an algebraic expression’.
Fig. 3. Entity Relationship Diagram for students’ constructions
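The implicit transitions through TaskStates could be tracked along the lines of the following sketch. The state names paraphrase the examples given in the text; the enum, the action strings and the classification rule are entirely our own placeholder assumptions, not the eGeneraliser’s actual inference:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: recording StudentActions and inferring the current
// TaskState from them. The rule here (introducing a variable or an
// expression moves the student on) is a placeholder, not MiGen logic.
class TaskStateTracker {
    enum TaskState { CONSTRUCTING_SPECIFIC, CONSTRUCTING_GENERAL, CREATING_EXPRESSION }

    private final List<String> studentActions = new ArrayList<>();
    private TaskState state = TaskState.CONSTRUCTING_SPECIFIC;

    // Records an action and updates the inferred state.
    TaskState record(String action) {
        studentActions.add(action);
        if (action.startsWith("create-expression")) {
            state = TaskState.CREATING_EXPRESSION;
        } else if (action.startsWith("create-variable")) {
            state = TaskState.CONSTRUCTING_GENERAL;
        }
        return state;
    }

    TaskState currentState() { return state; }
}
```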
3.3 Tasks, Activities, Learning Objectives, Landmarks

Fig. 4 shows the entities and relationships relating to these aspects of the Conceptual Model. Each ExpresserTask is a member of a TaskFamily, which is a conceptual grouping of tasks along various dimensions, such as the number of variables students need to create for the task, and the nature of the algebraic rule. TaskFamilies are addressed by ActivitySequences, each of which consists of a number of ExpresserTasks that the learner progressively works through. Both ExpresserTasks and ActivitySequences have LearningObjectives associated with them. LearningObjectives are expressed in student-oriented language, and each of them is related to a corresponding TeachingObjective. LearningObjectives of tasks and activity sequences correspond to system-specified EpistemicObjectives, which capture objectives in the domain of mathematical generalisation, e.g. ‘appreciation of the use of variables’. OperationalisedObjectives (also system-specified) serve to contextualise EpistemicObjectives in the context of the MiGen system. For example, an OperationalisedObjective such as ‘can use a variable to create a link between two patterns’ would contribute to the EpistemicObjective ‘appreciation of the use of variables’. PragmaticObjectives correspond to affordances of the eXpresser that are independent of any epistemic basis, e.g. ‘can drag a tile onto the screen’. Both OperationalisedObjectives and PragmaticObjectives can be pre-requisites for OperationalisedObjectives. As a student constructs a solution, the eGeneraliser may infer and create InferredLandmark entities, which indicate events such as the student starting to construct generally rather than specifically. The student’s actions may also generate landmark entities (ExplicitLandmarks) as a result of completing or highlighting points in their construction process that they may wish to reflect on later, or to discuss with their peers or teacher. Both these types of landmarks provide evidence for OperationalisedObjectives and PragmaticObjectives, as well as for LearnerInconsistencies, which are system-specified stumbling blocks, e.g. using more variables than needed.

Fig. 4. Entity Relationship Diagram for Tasks, Activities, Learning Objectives, Landmarks
4 The MiGen System Architecture

The MiGen system architecture has been iteratively designed from ongoing identification and specification of the tools that will comprise the system and of the context in which the system will be used within the classroom (as discussed in Section 2). Fig. 5 gives a high-level component diagram illustrating the various components of our architecture and the information flow between them.

Fig. 5. High-level system architecture

Each of the user-facing tools features uniformly within the architecture, consisting of a user interface (UI) component (within the overall MiGen User Interface), an information layer for managing the client-side data structures necessary to support the UI component, and a server-side component. The tools’ server-side components all make use of the MiGen Data Server, which provides access to the underlying MiGen Database through its ‘data accessor’ Application Programming Interface (API). The data accessor is an abstraction over the data stored in the MiGen database. It provides facilities so that other components can obtain results from specific queries which are updated in real time. This abstracts away low-level details of the underlying data storage from the other components. The MiGen database includes the eXpresser Session Log, Feedback Log and Learner Model data referred to earlier and, more generally, information relating to all the entities of the Conceptual Model presented in Section 3. As students interact with the eXpresser and the Discussion Tool, the eXpresser Information Layer posts their actions to the MiGen Data Server via the eXpresser Server. The CD/ST and eGeneraliser tools work similarly, except that in their case it is the MiGen Data Server that posts information to their information layers, via their servers, and this information is then presented to the teacher or student by their UI components. The Configuration Manager is responsible for the application of system-wide settings, as specified in configuration profiles stored within the MiGen Database. Issues relating to the integration (if any) of a pre-existing Activity Management System with the MiGen architecture are being discussed in ongoing work, so we have shown no explicit communication links between them. We refer the reader to [16] for a more detailed architectural breakdown of the high-level components shown in Fig. 5.
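The ‘data accessor’ idea of components obtaining query results that are updated in real time suggests a publish/subscribe shape. The sketch below is our own assumption of what such an API could look like, not the project’s actual interface:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of a 'data accessor' style API (our assumption): components
// subscribe to a named query and are re-notified whenever the underlying
// data changes, abstracting away the storage details.
class DataAccessor {
    interface QueryListener {
        void resultsChanged(String query, List<String> results);
    }

    private final Map<String, List<String>> results = new ConcurrentHashMap<>();
    private final Map<String, List<QueryListener>> listeners = new ConcurrentHashMap<>();

    void subscribe(String query, QueryListener l) {
        listeners.computeIfAbsent(query, q -> new CopyOnWriteArrayList<>()).add(l);
    }

    // Called by the storage layer when a query's result set changes.
    void publish(String query, List<String> newResults) {
        results.put(query, newResults);
        for (QueryListener l : listeners.getOrDefault(query, List.of())) {
            l.resultsChanged(query, newResults);
        }
    }

    List<String> current(String query) {
        return results.getOrDefault(query, List.of());
    }
}
```

A CD/ST information layer, for example, could subscribe to a query over Learner Model data and redraw its visualisation inside `resultsChanged`.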
5 Architectural Proof-of-Concept

We now present a proof-of-concept implementation of the above architecture which focuses on the communication between the client-side components and the server-side components. It constitutes a significant first step towards building the infrastructure to support the first full version of the MiGen system in the coming months. The context of usage of the system is very specific: the system will be deployed in schools. During the lifetime of the project, we anticipate running a single server instance within our university’s IT infrastructure, facilitating data gathering and iterative development. Once the project has finished, however, we intend to provide the system to schools so that they can run it locally within their own IT infrastructure, since sustainability is an important project aim. These considerations constrain the server-client architecture in a number of ways. Perhaps the most serious constraint is that there is often insufficient technical administrative support within schools. Indeed, many school system administrators are teachers with a technical background who have volunteered for the role. Regardless, they are typically very reluctant to expose the school systems to any more risks than are strictly necessary. This is a potential issue both when running the server centrally, since it would not be feasible to open up a new ‘MiGen Server’ port within school firewalls, and when the schools later run the servers locally, since provision of technical support, complex server installation and/or maintenance would be highly problematic. These considerations serve to preclude possibilities such as GlassFish, JBoss and other J2EE-based application servers, as well as servlet containers such as Tomcat; the networking architecture needs to be lightweight, using a port that is already open. In view of these constraints, a simple solution would therefore be a lightweight server-client technology that works over HTTP.
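For concreteness, a service of this kind can be run on a single HTTP port with no application server at all. The sketch below uses only the JDK’s built-in `com.sun.net.httpserver` package to illustrate the constraint; it is our own illustration, not the project’s implementation, and the resource path and payload are invented:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal framework-free HTTP service on a single port: one text resource
// listing clients, one item per line (path and content are hypothetical).
public class LightweightHttpDemo {
    public static HttpServer start() throws IOException {
        // Port 0 asks the OS for any free port; a deployment would use
        // an already-open port such as 80.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/clients", exchange -> {
            byte[] body = "client-1\nclient-2\n".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```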
An obvious candidate is RPC over XML or SOAP. However, a compelling alternative is to build an architecture based on Representational State Transfer (REST) [17]. REST has many advantages over RPC-based approaches: it is (more) cacheable and scalable, and is also easier to performance-tune and debug. Within a RESTful architecture, all data resources within the system can be made available at their own unique URL. In the context of our system, this has the additional advantage that students would be able to make a construction publicly available at its URL. This would then be accessible not only within the MiGen system but also using standard web browsers (with suitable content negotiation), thus allowing,
for example, the student’s parents to view the work. Given these advantages, we have chosen to use Restlet, a lightweight Java REST framework.3

5.1 Implementation Overview

The aim of our proof-of-concept implementation is to design and develop sufficient server-client infrastructure so as to demonstrate that it can fulfil the requirements of the architecture of Fig. 5. At this stage, therefore, the intention is not to have a complete architecture, and the following discussion of the proof-of-concept implementation focuses on the core of the necessary functionality. Our prototype implements a simple server-client architecture where each client can post textual messages to the server. Fig. 6 provides a UML class diagram showing the salient server-side classes, which utilise Restlet. The server-side functionality of the prototype is managed by an instance of the ServerApplication class, which consists of a set of AbstractRestlets. An AbstractRestlet provides some convenience functionality for calling the appropriate abstract Java method given the HTTP request method. For the purposes of our prototype, only the GET and POST request methods are handled, calling getRepresentation() and addRepresentation() respectively. The ClientsRestlet manages the data resource located at /clients, which lists the clients using the system. The MessagesRestlet manages the data resource located at /clients/{id}/messages, where id corresponds to the (numeric) id of an existing client; this lists the current set of messages for the given client (GET) and allows posting of new messages (POST). Finally, the RegisterRestlet provides a mechanism for each client to register its own server location, to facilitate server-initiated client communication (see below). The DataModel class manages and provides access to the underlying data of the prototype.
Fig. 6. UML class diagram of the core server-side architecture
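The method-dispatch idea behind AbstractRestlet can be sketched in plain Java. The framework plumbing is omitted, so the signatures below are our own simplification rather than Restlet’s API:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the AbstractRestlet dispatch idea (signatures are
// our own, not Restlet's): the HTTP request method selects which abstract
// method handles the call.
abstract class AbstractResource {
    abstract String getRepresentation();          // handles GET
    abstract String addRepresentation(String s);  // handles POST

    final String handle(String httpMethod, String body) {
        switch (httpMethod) {
            case "GET":  return getRepresentation();
            case "POST": return addRepresentation(body);
            default:     throw new UnsupportedOperationException(httpMethod);
        }
    }
}

// Analogue of MessagesRestlet: GET lists the current messages, POST adds one.
class MessagesResource extends AbstractResource {
    private final List<String> messages = new ArrayList<>();

    @Override String getRepresentation() {
        return String.join("\n", messages);  // one item per line, as in the prototype
    }

    @Override String addRepresentation(String msg) {
        messages.add(msg);
        return getRepresentation();
    }
}
```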
The UML class diagram for the core client-side architecture is shown in Fig. 7. For the purposes of this prototype, all data resources consist of a list of items, such as identifiers or messages. The pivotal generic class ResourceList4 is responsible for providing access to the data resources located at one of the URLs given above, ensuring that the local copy is synchronised with the version on the server. Instances of this class are created via the ResourceListManager, which provides a cache of the ResourceLists used by the client. Each ResourceList makes use of a parser for the content of its URL, which is a ListRepresentationParser since all data resources are list-based in this proof-of-concept implementation; also, all representations are text-based and the only parsers defined are based on one item per line of text.
3 See http://www.restlet.org/
4 The generic parameter is indicated in the annotation on the top right corner of the class box.
Fig. 7. UML class diagram of the core client-side architecture
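The client-side synchronisation step, in which a one-item-per-line representation is parsed and diffed against the local copy, can be reconstructed roughly as follows. This is our own sketch of the behaviour described, not the prototype’s code:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of ResourceList-style synchronisation (our reconstruction): the
// server representation is one item per line; the client parses it and
// works out which items were added relative to its local copy.
class ResourceListSync {
    private Set<String> local = new LinkedHashSet<>();

    // Parses a text representation into a list of items, one per line.
    static List<String> parse(String text) {
        List<String> items = new ArrayList<>();
        for (String line : text.split("\n")) {
            if (!line.isEmpty()) items.add(line);
        }
        return items;
    }

    // Synchronises with the server's representation, returning the items
    // that are new since the last call (deletions handled analogously).
    List<String> sync(String serverText) {
        Set<String> remote = new LinkedHashSet<>(parse(serverText));
        List<String> added = new ArrayList<>(remote);
        added.removeAll(local);
        local = remote;
        return added;
    }
}
```

Note that, in keeping with the stateless-server principle discussed below, the diff is computed entirely on the client.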
Fig. 8 presents a UML sequence diagram illustrating the interaction between the client-side and server-side classes. When the MessagesRestlet is first created by the server, it attaches itself as a listener to events coming from the DataModel (message 1). When the client initialises, it creates its ClientServer (2), which registers with the RegisterRestlet (3), which in turn stores the location of the client server for the particular client (4). The client then creates a ResourceListManager (5) and uses it to obtain a ResourceList (6, 7). This concludes the initialisation messages; the remaining messages within the diagram illustrate interaction that happens at various points in response to client activity. In this example, this activity consists of the client posting a new message (8) to the MessagesRestlet. The restlet in turn posts the message to the underlying DataModel (9), which notifies the MessagesRestlet’s listener (10). In response to this event, the MessagesRestlet asks the DataModel to notify all connected clients that its representation has now changed (11). The client’s server is subsequently notified (12) and requests an update to the appropriate ResourceList using the ResourceListManager (13, 14). At this stage, the client’s ResourceList has not been updated; all that has happened is that the server has notified the ResourceList that it must re-check with the server to find out the latest state of its data resource. It therefore asks the MessagesRestlet for the latest representation of its data resource (15). The MessagesRestlet contacts the DataModel (16) in order to obtain the updated data resource content (17). This is returned as plain text, which is subsequently parsed by the ResourceList (18). The ResourceList then determines which elements have been added to or deleted from the list, synchronising with the version of the list held on the server. In this example, a single element has been added, so the client is notified appropriately (19).
Fig. 8. UML sequence diagram illustrating server-client initialisation and communication

This demonstrates how a client is able to post new information and receive updates to the existing information in real time. The example shows how a client posting new information only sees it once the server has been notified (which in turn then notifies other clients as illustrated). In the full system, the infrastructure will be further developed so that the client will be able to function to some extent without assuming the presence of the server. This will allow local manipulation of a ResourceList directly, synchronising in the background with the server as and when it is available. It is important to note here that, consistent with RESTful principles, it is the client, not the server, that is responsible for calculating the difference between the server data resource content and the content of its own local copy; the server maintains no client state. This necessarily places a higher computational load on the client but leads to a scalable architecture: computational power increases with the number of clients. As implemented in this prototype, comparison of a server data resource with a local copy requires bandwidth proportional to the size of the data resource, since each time a client is notified of changes it downloads the entire resource. This scalability issue can be mitigated RESTfully by providing query-able data resources on the server that return the sequence of changes applied to a resource from a certain time onwards. Clients can maintain the timestamp of their last synchronisation and access this data resource as required, thus reducing network load as well as the synchronisation effort. Moreover, in the context of the MiGen system, the data resources that are necessary to support the CD/ST and eGeneraliser clients will be relatively small (of the order of 1–100K).

5.2 Realising the MiGen System Architecture

It is now possible to illustrate how the various components presented in Section 4 can communicate using the infrastructure described above. Fig. 9 begins with a student manipulating their constructions and expressions within their eXpresser User Interface, which updates the data structures managed by the corresponding eXpresser Information Layer (1). The Information Layer posts these events to the MiGen server (2), which then notifies the ClientServers of interested parties (the eGeneraliser server in this example) that the relevant data resources have changed (3). The eGeneraliser server continues to receive notifications of the student’s actions until it infers that a landmark has occurred which requires some feedback to be given to the student. At this point, it posts an update to the learner model and student feedback information stored in the MiGen Database (4). An instance of the CD/ST Information Layer, which has previously registered its interest in receiving information relating to the updated learner model attributes of this student (by creating a ResourceList pointing at the appropriate URL), is then notified (5), updating its corresponding user interface as appropriate (6).

Fig. 9. System sequence diagram

Fig. 10 shows two functional representations for the CD/ST User Interface which have been implemented as part of our proof-of-concept. Fig. 10a shows a CD representation based on a layout of the classroom in which the position of each student is shown, and each student is accompanied by a ‘traffic light’ graphic indicating the status of some aspect of their learner model, as selected by the teacher to view. In this example, it may be the TaskShortTermModel attribute ‘is the student on-task?’, which can take one of three values: yes/no/maybe. These values are visualised as red for ‘no’, green for ‘yes’ and amber for ‘maybe’ (the latter indicating uncertainty on the part of the system). Fig. 10b provides an informationally richer ST representation which displays the progress of a set of students with respect to a set of landmarks, as selected by the teacher. Both of these representations update in real time in response to information posted to the DataModel via the clients. For the purposes of the prototype, the representations are based on messages that clients submit. Although our prototype does not yet encompass the functionality of the eGeneraliser and the MiGen Data Server, the communication mechanism will be identical once these components are in place.

Fig. 10. Prototype CD/ST visualisations. (a) Classroom view; (b) Matrix view.

Our prototype provides crucial functionality required by the architectural design of the MiGen system: implementing the ‘data accessor’ API using Restlet and the ResourceList infrastructure. Following on from the development of this prototype, the next steps are to extend the Restlet-based architecture so as to implement the full server-client architecture of the MiGen system. This will involve determining the overall set of data resources that will be required, designing and implementing the underlying MiGen database, and implementing the various tools’ servers. This will be followed by integration of the existing eXpresser UI and Information Layer with the eXpresser server, and development and integration of the other UIs and Information Layers.
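As a small example of the client-side presentation logic involved, the traffic-light encoding described for the CD representation (Fig. 10a) amounts to a three-way mapping. The enum and method names below are our own, not from the MiGen codebase:

```java
// Sketch of the CD tool's traffic-light encoding for the 'is the student
// on-task?' attribute (names are our own illustrative choices).
class TrafficLight {
    enum OnTask { YES, NO, MAYBE }

    static String colourFor(OnTask status) {
        switch (status) {
            case YES: return "green";
            case NO:  return "red";
            default:  return "amber";  // MAYBE: uncertainty on the part of the system
        }
    }
}
```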
6 Conclusions

We have described the conceptual modelling and architectural design of the MiGen system, which aims to support 11–14-year-old students’ learning of mathematical generalisation. MiGen aims to provide students with personalised feedback based on their recent actions rather than just on their knowledge levels, balancing students’ freedom to explore and discover with sufficient support to foster the progressive building of knowledge. Teachers will be assisted in monitoring their students’ activities and progress by appropriate visualisation and notification tools, in order to support the teacher in focussing her attention across the class and in formulating her interventions. The Conceptual Model presented in Section 3 has been derived after a considerable requirements elicitation effort and represents a significant step towards the development of a common vocabulary and understanding for the multiple disciplines engaged in the MiGen project. It also underpins the ongoing logical design of the MiGen database. The System Architecture presented in Section 4 is simple, modular and scalable, aiming to meet the project’s objectives of incremental development and evaluation of the various tools of the system, and also easy deployment in schools. In Section 5 we described a working proof-of-concept implementation of the architecture and discussed how this prototype will be extended in order to implement the first full version of the MiGen system. In the coming months, the project team will continue with iterative design, development and evaluation of the various tools of the system, continuing to follow the methodology outlined in Section 3 and aiming for the first full system evaluation to take place in Autumn 2009 with groups of students and their teachers.

Acknowledgements. The MiGen project is funded by the ESRC/EPSRC Technology-Enhanced Learning programme (RES-139-25-0381).
We thank the other members of the MiGen team for our ongoing collaborative research on the project and for many stimulating discussions.
D. Pearce and A. Poulovassilis
Experience Structuring Factors Affecting Learning in Family Visits to Museums
Marek Hatala1, Karen Tanenbaum1, Ron Wakkary1, Kevin Muise1, Bardia Mohabbati1, Greg Corness1, Jim Budd2, and Tom Loughin1
1 Simon Fraser University, Canada
2 Emily Carr University of Art and Design, Canada
{mhatala,ktanenba,rwakkary,kmuise,mohabbati,gcorness,tloughin}@sfu.ca, [email protected]
Abstract. This paper describes the design and evaluation of an adaptive museum guide for families. In the Kurio system, a mixture of embedded and tangible technology imbues the museum space with additional support for learning and interaction, accessible via tangible user interfaces. Families engage in an educational game in which family members are assigned individual challenges, and their progress is monitored and coordinated by the family member carrying a PDA. After each round of challenges, the family returns to a tabletop display to review their progress. In this paper we present the overall evaluation results of Kurio and, using a model discovery approach, we determine which experience structuring factors have a substantial influence on the learning experience. Keywords: Social interaction, learning, user modeling, tangible user interface, museum guides, family.
1 Introduction
The Kurio system facilitates social interaction in the museum by giving museum visitors personalized tasks and unique tangible user interfaces that they use in coordination with other family members to complete group activities. A mixture of embedded and tangible technology as part of an educational game facilitates novel learning and interaction opportunities in the museum space. By modeling both individuals and the group, an adaptive reasoning engine attempts to intelligently guide the flow of the visit to suit members on a personal as well as aggregate level. The main goal of the system is to select tasks for the individual family members that are the most appropriate for their knowledge level and will contribute the most to their experience and learning about the museum. The Kurio system extends our own prior research on the use of adaptivity and tangible user interfaces within a museum environment in a project known as ec(h)o [1, 2]. Museums and cultural heritage spaces have provided fertile ground for a number of projects investigating how to engage people with electronic guides or audio tours aimed at augmented information retrieval, novel museum visit interactions, and new approaches to learning in museums.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 37–51, 2009. © Springer-Verlag Berlin Heidelberg 2009
38
M. Hatala et al.
1.1 Learning in the Museum
Family groups comprise more than half of all visitors to museums [3]. One common interaction pattern is that in many groups, individual members will go off to explore the museum on their own for periods of time, and then return to the group and share what they have found [3]. This return-and-share strategy is frequently enacted by younger family members exploring and then coming back to talk with a member of an older generation. Several studies have noted that parents tend to informally take on the role of a teacher within the museum, guiding the learning of the children in the group, often in subtle, almost unnoticeable ways. Hilke's 1989 study of family behaviour in the museum revealed that children and adults alike spent a lot of time exploring their own interests, with children taking the lead slightly more than adults [4]. In another study, discussion with adult family members revealed that they often cited their children as a reason for spending time in the museum [5]. However, Hilke's study showed that if parents do undertake a teaching role for their children, they do it "with such subtlety that the spontaneous pursuit of individual agendas to learn and share was not visibly disrupted" [4]. Woodruff et al., in their study of the use of an electronic guidebook, noted that parent-child dyads tended to share a guide, whereas adult dyads took individual guides. In the parent-child groups, the children controlled the guide and made the choices for the most part, with the parents looking on and offering suggestions [6]. Some recent museum systems have taken this group dynamic into account and created guides that are meant to be shared. The Sotto Voce system by Aoki et al. [7] is one of the first museum guide systems to actively support group interaction. Sotto Voce contains an audio sharing application called eavesdropping that allows paired visitors to share audio information related to the exhibit with each other via PDA devices.
This system allows for open-ended interaction, as visitors are free to follow whatever path they like and share as little or as much as they want to. The CoCicero project, implemented in the Marble Museum in Carrara [8], is a group guide system that introduces a goal state to encourage group interaction. The visit is structured through a series of games, including multiple-choice questions that require visitors to gather clues from the exhibits within the museum. Each member of the group has a personal digital assistant (PDA) on which they can complete individual games or answer questions that contribute to the completion of a shared puzzle. There are also stationary large displays in the space, which can be shared by more than one person instead of using the PDAs. A final example of a museum system addressing the issue of social interaction is the ARCHIE system at the Gallo-Romano Museum in Belgium [9]. ARCHIE is a PDA-based trading game that invites small groups to learn about social differentiation and exchange in Western Europe around the year 825 BC. Within each group, one person is designated as the "leader" and the others are "farmers". By exploring the museum and answering multiple-choice questions, the farmers earn resources like cattle and sheep while the leader attempts to balance goods across the farmers and trade for other resources. Each of these systems encourages the kind of group interaction that can support the natural learning behaviors of families in the museum.
1.2 User Modeling for Groups
While the systems described above encourage family members to interact and share information, none includes any sort of adaptive component to tailor the situation to a particular group. Many individual guide systems have intelligent components containing user models which can adapt the system's responses to suit the user better, allowing them to accomplish their task more efficiently, steering them in the right direction if they are unsure how to proceed, or recommending things that the user is likely to want or need based on past experience. To do this, the system might extract assumptions about the user that are not explicit in the individual's own data, or use the data to make predictions about the user's likely preferences or future actions. This task becomes even more difficult when dealing not with one user but with a group of users. While there are no other museum systems that contain an adaptive component for groups, there are a number of web- or desktop-based systems that look at group modeling with a learning focus. Suebnukarn & Haddawy [10] have devised a system for overseeing group problem solving in the medical domain. The group model monitors student progress in reasoning from a patient's symptoms to a cause; when the students become stuck or miss important steps, the system gives hints and clues to nudge them back on track. Similarly, Harrar et al. [11] are developing a cognitive tutor to monitor direct online collaboration between student dyads. Although their system is not fully implemented yet, they have presented preliminary work that involves studying interactions between student pairs and modeling the types of reasoning paths frequently seen in actual problem solving behaviour. In the same domain, but with a different focus, Read et al. [12] present a language tutor where students have individual models which capture their linguistic abilities and background.
When collaborative tasks are undertaken, a group model is created to represent that particular collaboration. The effectiveness of the group interaction is assessed by a human expert, and the rating of the group work, as well as the model created, is associated with each individual user after the group dissolves. Knowledge of previous group formations and their efficacy is used to assign users to groups in future collaborations. Unlike the other two tutor systems, Read et al. do not include the ability for the group model to supervise and direct the student activities. Their model is used primarily to achieve optimal group formation rather than being actively involved in shaping the group's behaviour once formed. Alfonseca et al. [13] studied the effect of student learning styles on collaboration in an online learning system. The system has an adaptive component which recommends activities based on each individual's learning style and knowledge level. When the recommended activities are collaborative tasks, there is an option to have the system recommend a group to work together based on the combination of learning styles. While the group itself is not modeled or adapted to during the course of the collaboration, the recommendations for grouping arise out of the individual user models. The systems described above take a number of different approaches to modeling learning, but most have detailed individual models that capture information about a single person's knowledge, skill level, or cognitive capacity, while the group model exists to find optimal combinations of group members or to guide group performance.
2 The Kurio System
In Kurio, a family imagines themselves as time travelers from the future whose time map is broken, stranding them in the present. Family members complete a series of challenges that encourage them to learn certain concepts from the museum in order to fix the map and continue their time travels. The interactive guide itself comprises a tangible user interface that is distributed over several tangibles with different functions, a tabletop display, and a PDA. A constructivist learning model guided decisions about the interaction, user model, and system content. A discussion of the design strategies used in developing the system can be found in [14].
2.1 Technical Details
The Kurio system has four main components: the handheld tangibles, a tabletop system, a PDA, and a server containing the reasoning engine. Fig. 1 shows the information that is exchanged between these parts of the system. The tangibles, PDA and table system all communicate wirelessly with the server using an XML-based message exchange protocol.
Fig. 1. The information flow between Kurio system components
The tangible user interfaces, or tangibles, are custom-designed devices with shells produced on a 3D printer. Inside the shells, processing is done on a Gumstix prototyping board running Linux and programmed in Python, together with a MiniArduino programmed in the Arduino language. Multi-colored light-emitting diodes (LEDs) provide confirmation and feedback to the user. The tangibles identified objects in two ways, depending on the device. The pointer, listener, and finder used infra-red (IR) sensors that detected IR beacons placed next to museum artifacts. The reader incorporated an embedded radio frequency identification (RFID) reader that read RFID tags encased in small icons fastened to the didactics in the museum.
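As an illustration of the kind of XML message exchange such a protocol might use, the sketch below builds and parses a hypothetical "object selected" message in Python. The actual Kurio message schema is not published, so the element names and attributes here are assumptions, not the real protocol:

```python
import xml.etree.ElementTree as ET

def make_selection_message(device_id, user, object_id):
    """Build an illustrative XML message reporting that a tangible has
    selected a museum object (element names are assumptions)."""
    msg = ET.Element("message", type="object-selected")
    ET.SubElement(msg, "device").text = device_id
    ET.SubElement(msg, "user").text = user
    ET.SubElement(msg, "object").text = object_id
    return ET.tostring(msg, encoding="unicode")

def parse_message(xml_text):
    """Server-side decoding of the same message into a plain dict."""
    root = ET.fromstring(xml_text)
    return {"type": root.get("type"),
            **{child.tag: child.text for child in root}}
```

For example, a pointer selection would travel as `make_selection_message("pointer-1", "Simon", "artifact-42")` and be decoded on the server with `parse_message`.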
The monitor was an HP iPAQ running MS Windows Mobile 5.0. The tabletop display was designed by our team and was connected to a Mac Mini. Both the monitor and tabletop applications were developed in the mobile and desktop versions of Adobe Flash. The rule-based reasoning engine was implemented in Jess (an embedded Java reasoning engine). The rules operated on an ontological conceptual model in the Web Ontology Language (OWL) representing the learning and user models, challenges, game, and artifacts.
2.2 Example Experience
The flow of the Kurio experience is best described by narrating a prototypical account of a family's interaction. The family begins at the tabletop display, where they are introduced to the time-travel narrative of the game and view a video that introduces each of the tangible user interfaces and the PDA. From there, they view the broken time map and select the first mission. There are five possible missions, each of which relates to a specific exhibit area, and each family needs to complete three of the five to fix the time map. When the first mission is selected, the table sends a message to the server and receives back the first set of individual challenges. The server is preset with each member's age and name, allowing it to select the appropriate starting level of the challenges. For example, Kim, age 9, has been assigned a challenge with the listener, while Simon, age 7, has a pointer challenge. Each tangible has a specific function in terms of the information it can access: the pointer selects museum artifacts, the reader selects text from museum didactics, the listener plays audio clips in different locations in the exhibit, and the finder provides directional information for particular exhibits. When tangibles are assigned to each child, they glow green, indicating that they are ready to be used. The children's mother, Sheila, is asked to collaborate with and help her children for the duration of the first mission.
She is given the PDA and her role is to coordinate and facilitate the completion of the challenges by the other group members. Once everyone has been assigned a role for the mission, the family leaves the table and heads out into the museum space. Simon's challenge asks him to "Find objects in the First Nations area made out of parts of animal." As he walks through the First Nations exhibit, he notices that the tip of the pointer glows blue instead of green when he points at certain objects. When he presses the button on the pointer, the PDA that Sheila is carrying chimes and displays the object that Simon has just picked up. Together, she and Simon can look at a photo and short description of the object and decide whether or not it is the answer he was looking for. They can also call up Simon's question if he's forgotten what it is. Simon decides he wants to select something different, so Sheila deletes the object from the PDA and Simon's pointer is re-activated. Meanwhile, Kim has continued exploring and discovers that her listener turns blue at the tip when she stands near the canoe. She plays the listener by pressing a button on the tangible. She hears a sound clip describing how First Nations fishermen would hunt using nets and canoes. She selects this clip as the answer to her challenge, and then goes over to Sheila to see if her mom agrees with her decision. They both think
it is the correct answer, so Sheila selects the "review" button at the bottom of Kim's challenge screen. She answers two quick questions about how difficult Kim found the question and whether anyone helped her answer it. Kim is assigned a new challenge and goes back to the table to exchange the listener device for the reader. Kim and Simon continue doing challenges until the monitor informs them that they should return to the table. Once there, they can view the results of their work. Challenges that were completed successfully are displayed on one side of the screen, while objects that are incorrect are displayed on another, along with the correct answer. The family can review this information and discuss their progress. Next, depending on how much time they have spent so far, the system either assigns a new round for the same mission or moves them on to the selection of the next mission. At the end of each mission, they are able to view a short "reward" video that gives them more information about the area of the museum they just finished exploring. When the next mission begins, the monitor is switched over to Kim and Sheila gets to try out the tangibles. In this manner, they complete three missions, fix their time map, and are able to continue on to the future.
3 Modeling Family and Its Members
The above scenario gives a flavor of how the Kurio experience progresses. The reasoning engine on the server is what guides the course of the game, keeping track of everything that happens and making decisions based on that information. We had two main goals in mind in terms of how to customize the game experience for each family:
1. To find the appropriate challenge level for each individual, and
2. To manage the length of the mission rounds to suit the pace of the group.
At its core, the reasoning engine is a rules-based expert recommender system, supported by a knowledge base consisting of an ontology of the available missions and challenges. A set of individual models, as well as a group model, is maintained throughout the course of each family's interaction with Kurio. Because of space limitations, we provide here only an overview of the user and group models.
3.1 Individual Models
The individual models consist of some basic demographic information (name, age, family name) and a set of values for specific learning-related skills. To structure the learning model, we used Bloom's taxonomy [15], which progresses through six levels of learning: Remember, Understand, Apply, Analyze, Create and Evaluate. Each individual challenge is categorized according to the level of the taxonomy it relies on most. The age of the individual participant is used to set the starting values for the skills. When a new challenge is to be assigned, the reasoning engine ranks all possible challenges and chooses from amongst the ones that are the best fit. Three criteria are used to automatically rule out certain challenges: current mission, device availability, and age.
1. Current mission: If the challenge is not part of the current mission, it is not considered for assignment. The missions have between 18 and 24 challenges in total.
2. Tangible user interface availability: Any challenge requiring a tangible that is already in use is discarded from the pool of candidates. In each mission, there are between 3 and 7 challenges for each device.
3. Age: The listener device was more difficult to use than the others, both in terms of its interface and in terms of the cognitive demand of listening to and extracting information from the audio. An age limit was therefore set so that children under the age of 9 were not assigned challenges using that device.
Once these hard criteria have narrowed the pool to challenges from the current mission whose device is available, a ranking algorithm assigns each candidate a value based on three other factors: skill progression, skill reinforcement, and variety.
1. Skill progression: If the new challenge is one skill level higher than the last challenge the person completed, it is given a high ranking. If it is more than one level higher, it is given a low ranking. If it is at the same level or lower, it is given a neutral ranking. This factor creates a pull towards increasing the level of challenge.
2. Skill reinforcement: This factor compares the skill used by the candidate challenge to the stored value for that skill, and for the skill one level lower on the taxonomy, in the individual's model. It ensures that the challenge level does not increase too quickly, requiring the individual to have a certain number of "points" in the lower-level skills before being assigned higher-level ones.
3. Variety: A slight preference is given for switching to a new tangible, to prevent boredom or the perception that one person is "hogging" a specific tangible. This factor mostly functions as a tie-breaker between challenges that are evenly matched on the previous two factors.
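A minimal sketch of how the three soft factors might combine into a single score follows. The numeric weights, the point threshold, and the integer encoding of the taxonomy levels are illustrative assumptions, not the values used in Kurio:

```python
# Bloom's taxonomy levels in the order given in the paper, encoded 1-6.
BLOOM = {"Remember": 1, "Understand": 2, "Apply": 3,
         "Analyze": 4, "Create": 5, "Evaluate": 6}

def rank(candidate, last_skill, skill_points, last_device):
    """Score one candidate challenge; weights are assumptions."""
    score = 0
    level, last = BLOOM[candidate["skill"]], BLOOM[last_skill]
    # 1. Skill progression: one level up is best, a bigger jump is worst.
    if level == last + 1:
        score += 2
    elif level > last + 1:
        score -= 2
    # 2. Skill reinforcement: require some "points" one level below
    #    before moving up (threshold of 3 is an assumption).
    lower = level - 1
    if lower >= 1 and skill_points.get(lower, 0) < 3:
        score -= 1
    # 3. Variety: mild bonus for switching devices (tie-breaker).
    if candidate["device"] != last_device:
        score += 1
    return score
```

The engine would then pick, from the hard-filtered candidates, one of the challenges with the highest score.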
Each time a challenge is completed, the value for the skill associated with that challenge is updated in the user model. Three factors determine how the value is changed: whether or not the individual answered correctly, whether or not they received help in completing the challenge, and whether they rated the challenge Easy, Just Right, or Hard.
3.2 Family Model
Compared to the calculations involved in the individual models, the group model of the family was quite simple. The primary thing the group model tracked was progress through each mission. Progress was calculated as a percentage out of 100. Each time a task was completed, or five minutes passed, the progress was incremented by a certain amount. When the progress exceeded 60%, challenges stopped being assigned to individuals and the group was instructed to return to the table once all current challenges were completed. The group would thus typically return to the table with 60-80% of the mission completed. After reviewing the challenges completed during the first round, a second round would be assigned to bring the total up to 100% of the mission. The amount of progress that each challenge counted for was
determined by the number of people in the group to keep smaller groups from having to do more tasks to finish a single round. Using both challenge completion and time passage to increment the progress counter helped prevent slower families from getting frustrated or feeling trapped within a single round.
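The progress mechanism just described can be sketched as follows. The per-challenge and per-minute increments are assumptions chosen only to show the shape of the model, not the tuned values used in the study:

```python
CUTOFF = 60.0  # past 60% progress, no new challenges are assigned

class MissionProgress:
    """Hedged sketch of the group progress model (increments assumed)."""

    def __init__(self, family_size, per_minute=1.0):
        self.progress = 0.0
        # Smaller families get a larger per-challenge increment, so a
        # round needs roughly the same number of tasks per person.
        self.per_challenge = 24.0 / family_size
        self.per_minute = per_minute

    def challenge_done(self):
        self.progress += self.per_challenge

    def tick(self, minutes=5):
        # Time-based increment keeps slower families moving forward.
        self.progress += self.per_minute * minutes

    def assign_more(self):
        return self.progress <= CUTOFF
```

With these assumed values, a family of four completing ten challenges reaches the 60% cutoff, after which the group is sent back to the table.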
4 User Studies
In our evaluation, families tested Kurio in a local history museum. In total, 58 parents and children from 18 families participated. Family sizes ranged from 2 to 4 people, and in a few cases a family friend joined the group. In most cases a single parent accompanied one or more children, but in one case two parents participated. There were 35 children between the ages of 7 and 12 (20 boys, 15 girls), 4 children between the ages of 13 and 17 (2 boys, 2 girls), and 19 parents (15 mothers, 4 fathers) ranging in age from 24 to 57. The evaluation of the system was performed in two phases three months apart, using the same protocol. The second phase differed from the first in that it used a more robust version of the system. We also made a few small adjustments to the design of the tangible devices, such as a recessed on/off switch and a reduction in the number of control buttons on the listener from three to two. We also updated the user modeling component in several ways: we adjusted the bootstrapping values and some parameters in the group user model, especially those related to the timing of the session, to achieve roughly the same experience for families with different numbers of people. The families were recruited from the local area by way of mailing lists and notices circulated at the local schools and homeschooling groups. A user session consisted of the family completing the game by repairing the time map (on average 45 minutes). This was preceded by a short tutorial on the system and a brief interview and questionnaire on previous experiences with museums and technologies. Following the session, participants completed questionnaires and a semi-structured interview. Two separate questionnaires were administered: one for children aged 7 to 12 and one for parents and children aged 13 and older. The sessions were both videotaped and audio recorded.
Lastly, 2-4 weeks after the study, and on a volunteer basis, families conducted self-administered audio-recorded interviews based on a script we provided.
4.1 Questionnaire Data for 7-12 Year Olds
Table 1 shows the responses to Likert scale questions using smiley faces, converted to a scale of 1-5, with 5 being best. The questionnaire was based on Read and MacFarlane's "Fun Toolkit" for children [16]. The questions assessed general perceptions of use, fit of the system with the family, and benefits with respect to learning and enjoyment. We first used a mixed model to test for within-family (intra-class) correlations in the responses, which would invalidate standard statistical analysis methods if present [17]. These were all found to be negligible. We then tested the significance of the difference between the means of the two phases using a 2-sample t-test. Cases with a significant difference (5% level) between Phases 1 and 2 are indicated in Table 1.
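The phase comparison above rests on a standard two-sample t statistic. The paper does not state whether pooled or unpooled (Welch's) variances were used, so the sketch below shows the Welch form, using only the Python standard library:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (unpooled sample variances)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    return (mean(sample_a) - mean(sample_b)) / sqrt(va / na + vb / nb)
```

In practice one would compare the statistic against the t distribution (e.g. via scipy.stats.ttest_ind) to obtain the 2-tailed p-value at the 5% level.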
Table 1. Questionnaire for 7-12 year olds. Each cell gives the mean and standard deviation. Means are on a 5-point Likert scale converted from smiley faces, 5 being best. *Mean difference between Phases 1 and 2 is significant at the 0.05 level (2-tailed).

Question | Phase 1 (N=14) | Phase 2 (N=19) | Phases 1+2 (N=33)
A. Did you have fun with Kurio? | 3.64, 0.92 | 4.15, 0.89 | 3.93, 0.93
B. Was Kurio easy or hard to use? | 3.42, 1.01 | 3.15, 0.50 | 3.27, 0.76
C. Were the challenges given by Kurio easy or hard? | 3.28, 0.72 | 3.47, 0.77 | 3.39, 0.74
D. Were you excited or bored about the next challenge given to you by Kurio? | *3.07, 0.73 | *3.84, 1.21 | 3.51, 1.09
E. Was Kurio helpful in learning about things in the museum? | *3.42, 0.64 | *4.10, 0.80 | 3.81, 0.80
F. Was Kurio fun to use with your family? | *3.42, 0.85 | *4.10, 0.93 | 3.81, 0.95
G. Is using Kurio a good way to visit a museum? | 3.92, 0.64 | 4.15, 0.89 | 4.06, 0.80
4.2 Questionnaire Data for Ages 13+ The second questionnaire was given to parents and children 13 years old and older. The questionnaire included twenty-four Likert scale questions from 1-5, with 5 being best. The results are in Table 2. Again, we tested for the significance of the difference between the two phases using a 2-sample t-test, after finding intra-class correlations to be negligible. Cases with significant difference between Phases 1 and 2 are indicated in the last column. Table 2. Questionnaire for ages 13+. The values in the second and third columns represent M, SD, and N. The mean is from 5-scale Likert scale converted from smiley faces, 5 being the best. *Mean difference between phases 1 and 2 is significant at 0.05 level (2-tailed). Question A.1 How much fun was Kurio? A.2 How confident do you feel about using Kurio after the evaluation? A.3 Did Kurio require a large effort to learn? A.4 Did your attitude toward Kurio become more or less positive as the evaluation progressed? B.1 How well did Kurio help you in exploring the museum in ways that interested you? B.2 How well did Kurio let you enjoy the museum with your family and or friends? B.3 How well did Kurio help you learn about the museum exhibition and the artefacts?
Phase 1 *3.7, 0.94, 10
Phase 2 *4.58, 0.66, 12
Phases 1+2 4.18, 0.9, 22
4.11, 1.16, 9
4.41, 0.66, 12
4.28, 0.9, 21
3.3, 1.41, 10
3.75, 0.86, 12
3.54, 1.14, 22
4.1, 1.19, 10
4.66, 0.49, 12
4.4, 0.9, 22
*3.6, 1.07, 10
*4.5, 0.67, 12
4.09, 0.97, 22
3.77, 1.2, 9
4.5, 0.67, 12
4.19, 0.98, 21
*3.6, 1.07, 10
*4.58, 0.66, 12
4.13, 0.99, 22
46
M. Hatala et al.

Table 2. (continued)

Question | Phase 1 | Phase 2 | Phases 1+2
B.4 How well did Kurio let you learn together with your family and or friends about the museum exhibition and the artefacts? | *3.55, 1.5, 9 | *4.66, 0.49, 12 | 4.19, 1.16, 21
C.1 How well does Kurio fit in with the exhibition environment? | 3.8, 1.03, 10 | 4.5, 0.79, 12 | 4.18, 0.95, 22
C.2 How integral a part did Kurio feel with the exhibition? | 3.4, 1.26, 10 | 4.08, 0.9, 12 | 3.77, 1.1, 22
C.3 Did using an interactive system like Kurio benefit your experience of the museum exhibition? | *4.11, 1.05, 9 | *4.91, 0.28, 12 | 4.57, 0.81, 21
C.4 How well does an interactive system like Kurio fit with how your family would like to visit museums similar to the Surrey Museum? | 4.2, 1.31, 10 | 4.91, 0.28, 12 | 4.59, 0.95, 22
C.5 If you were the monitor (used the PDA), how much do you feel you helped others in exploring the museum? | *4.12, 0.83, 8 | *4.75, 0.45, 12 | 4.5, 0.68, 20
D.1 Is using Kurio a good way to visit the museum? | 4.3, 0.94, 10 | 4.83, 0.38, 12 | 4.59, 0.73, 22
D.2 How easy was Kurio to use? | 3.5, 1.17, 10 | 4.25, 0.45, 12 | 3.9, 0.92, 22
D.3 Were the challenges Kurio assigned helpful in exploring and learning in the museum? | *3.8, 0.78, 10 | *4.5, 0.52, 12 | 4.18, 0.73, 22
D.4 How interested were you to get the next challenges assigned? | *4.11, 0.6, 9 | *4.75, 0.45, 12 | 4.47, 0.6, 21
D.5 If you or a member of the family completed a challenge successfully, how difficult was the next one? | *3.62, 1.4, 8 | *4.72, 0.46, 11 | 4.26, 1.09, 19
D.6 If you or a member of the family did not complete a challenge successfully, how difficult was the next one? | *4.0, 0.89, 6 | *4.87, 0.35, 8 | 4.5, 0.75, 14
E.1 How much did you discover that you did not know previously? | 3.8, 0.78, 10 | 3.91, 0.66, 12 | 3.86, 0.71, 22
E.2 How much more did you learn about things you already knew? | 3.5, 1.06, 8 | 3.83, 0.83, 12 | 3.7, 0.92, 20
E.3 How much more curious did the museum experience make you? | 4, 0.81, 10 | 4.33, 0.65, 12 | 4.18, 0.73, 22
E.4 How exciting was it to learn? | *3.8, 0.78, 10 | *4.58, 0.51, 12 | 4.22, 0.75, 22
E.5 Some of what I learned will be useful to me? | 2.88, 1.36, 9 | 4.16, 0.71, 12 | 3.61, 1.2, 21
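The starred phase differences in Table 2 can be checked from the published summary statistics alone. The sketch below redoes the pooled two-sample t-test for question A.1; the helper function and its name are ours, and the paper does not state which software ran the original analysis.

```python
import math

def pooled_t_from_stats(m1, s1, n1, m2, s2, n2):
    """Two-sample pooled-variance t statistic from summary stats (M, SD, N)."""
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    return (m1 - m2) / se, n1 + n2 - 2  # t statistic, degrees of freedom

# Question A.1 "How much fun was Kurio?" (Table 2): Phase 1 vs Phase 2
t, df = pooled_t_from_stats(3.7, 0.94, 10, 4.58, 0.66, 12)

# |t| on 20 df exceeds the two-tailed 0.05 critical value (2.086),
# consistent with the asterisk on this row of Table 2.
print(round(abs(t), 2), df)
```

The same check can be repeated for any row of the table from its M, SD and N values.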
4.3 Experience Structuring Factors In addition to the interview and questionnaire data, we also saved the system log data for every study session. The log captured the fine-grained interaction activities of each individual and family, including the challenge assignments, selection and de-selection of objects, activity at the tabletop, and responses to the post-challenge feedback questions. To better understand the individual interaction sessions, we extracted interaction characteristics, which we call experience structuring factors, from the log data (see Table 3 for their description).

Experience Structuring Factors Affecting Learning in Family Visits to Museums
47

Table 3. Experience structuring factors extracted from log data

Factor | Description
Number of Challenges | Number of successfully completed challenges.
Number of Quits | Number of challenges the user quit. As this was also the way to overcome some technical glitches, it largely represents failures of the system.
Being Helped | Number of challenges where the user responded that s/he received help.
Relative Time Helping Others | Ratio of the time without an assigned challenge, when the user was asked to help other family members, to the time spent solving their own challenge.
Average of Difficulty | Weighted average over the five last challenges, with the latest challenge weighted 5 and the weights decreasing backwards to 1.
Deviation From Just Right Difficulty | Ratio between the challenges rated Easy or Hard and the total number of challenges.
Sequential Change | Average change in difficulty between subsequent tasks, measured in absolute values (e.g., a change from Hard to Easy is 2).

The last three factors in Table 3 are derived from the patterns of the difficulty ratings given by the user after completing each challenge. Users were asked to rate difficulty as "Easy", "Just Right", or "Hard". To compute the factors that represent the difficulty of a varying sequence of challenges, we assigned these ratings the values 0, 1, and 2 respectively. In the following section we explore which factors have the strongest influence on user learning in the museum.

4.4 Assessing Importance of Experience Structuring Factors We wish to identify experience structuring factors that are strongly associated with each of the questionnaire response variables. In this context, bivariate correlations between response and explanatory variables are of limited utility, as they do not adequately account for multicollinearity (hidden multivariable correlations) among the explanatory factors [18]. Model-selection procedures such as stepwise selection are also inappropriate on several levels: they select a single combination of variables that may not be best by any objective measure, and they ignore the effects of random variability on the selection process [19]. We do not seek a model for predicting responses. Instead, we use a variable-selection approach that considers all explanatory factors simultaneously and assesses their relative importance in explaining variability in the response variables. Method. For each response variable we fit models consisting of all possible combinations of explanatory variables, including the empty model, and calculated a model-assessment criterion, the corrected Akaike Information Criterion (AICc), for each model.
We converted the AICc values to model weights so that the relative sizes of the weights reflect the relative fit of the models: good-fitting models have large weights, and poor-fitting models have weights close to zero. The weights were normalized to sum to 1 over the set of all models. We then calculated a factor weight for each explanatory factor by summing the weights of all models in which that factor appears. In this manner, factors that are consistently part of the best-fitting models have factor weights close to 1, while those not generally included in good-fitting models have factor weights near zero. Details of this factor-weighting procedure can be found in [19]. Finally, we applied a strict threshold (0.95 out of 1 for a '++' or '--' rating, and 0.90 out of 1 for a '+' or '-' rating, with the sign determined by the sign of the corresponding regression coefficients) to consider a factor important for explaining the variability in a question, and thus worth attention in further research and in the design of systems like Kurio.

Model for ages 8-12. Factors found to be important using the method above are listed in Table 4. For the other questions in the questionnaire for 8-12 year olds (Table 1), no variables came out as sufficiently important; in those cases, the means and standard deviations listed in Table 1 provide the best guidance for future research.

Table 4. Experience structuring factors model for ages 8-12

Question | Factors
E. Was Kurio helpful in learning about things in the museum? | Number of challenges completed (--)
F. Was Kurio fun to use with your family? | Difference from just right (--), Relative time to help others (--)
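To illustrate the weighting procedure: the sketch below fits all subsets of candidate factors by ordinary least squares, scores each subset with AICc, converts the scores to normalized Akaike weights, and sums the weights of the models containing each factor, in the spirit of [19]. The data and factor names are synthetic, not the study's actual log variables.

```python
# All-subsets factor weighting: fit every combination of candidate factors
# by OLS, score with AICc, convert to normalized Akaike weights, then sum
# the weights of all models containing each factor.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 40
X = rng.normal(size=(n, 3))                        # three candidate factors
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)  # only factor 0 drives y
factors = ["num_challenges", "num_quits", "time_helping"]

def aicc(y, cols):
    """Corrected AIC of an OLS fit of y on the given factor columns (+ intercept)."""
    A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((y - A @ beta) ** 2))
    k = A.shape[1] + 1                             # coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k + 2 * k * (k + 1) / (len(y) - k - 1)

models = [m for r in range(len(factors) + 1)
          for m in itertools.combinations(range(len(factors)), r)]
scores = np.array([aicc(y, m) for m in models])
w = np.exp(-0.5 * (scores - scores.min()))
w /= w.sum()                                       # model weights sum to 1
factor_weights = {f: float(sum(wi for wi, m in zip(w, models) if i in m))
                  for i, f in enumerate(factors)}
print(factor_weights)  # the informative factor gets a weight close to 1
```

Applying the thresholds above, only `num_challenges` would pass the 0.90/0.95 cut on this synthetic data.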
The first finding is very surprising. It indicates that as the number of completed challenges increases, children can be expected to judge Kurio's ability to help them learn about things in the museum lower. To put this into perspective, the average number of challenges completed in this age group was 4.22 (SD=2.16, N=31), with a maximum of 10 challenges and a minimum of 1. This is a strong indicator that although the children had fun with the system and considered Kurio a good way to visit the museum (Table 1), actual learning can be hindered by an excessive number of challenges. As a guideline, the pace of interaction has to be carefully tested before any serious deployment of the system. The second finding indicates that two factors negatively influence the fun factor in the context of the family visit. First, difference from just right is a good confirmation that if the system assigns challenges at the right level, this increases the value of the visit as a family. The relative time to help others factor is a relative measure of the time without challenges against the time spent solving one's own challenge in each mission. As rounds of challenges were wrapped up, individuals who completed their individual assignments were not given new ones and were instead asked to help the others in their family. Depending on how quickly they finished their challenge and how long their family members took, they could spend a significant amount of time without anything specific to do. The interpretation of the finding is fairly straightforward: when individuals have nothing to do, they become bored and start to feel negatively about the experience; alternatively, helping others learn is valued less than learning oneself.
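As an illustration of how the difficulty-derived factors in Table 3 are obtained from the log, the sketch below applies the paper's encoding (Easy=0, Just Right=1, Hard=2) to an invented rating sequence; the variable names are ours.

```python
# Computing the three difficulty-derived factors of Table 3 from one
# user's post-challenge ratings, encoded as Easy=0, Just Right=1, Hard=2.
# The rating sequence is invented for illustration.
ratings = [1, 2, 2, 1, 0, 1]

# Deviation From Just Right Difficulty: share of challenges rated Easy or Hard
deviation = sum(1 for r in ratings if r != 1) / len(ratings)

# Average of Difficulty: weighted mean over the five last challenges,
# the latest weighted 5, decreasing backwards to 1
last5 = ratings[-5:]
weights = range(1, len(last5) + 1)        # oldest of the five -> 1, latest -> 5
avg_difficulty = sum(w * r for w, r in zip(weights, last5)) / sum(weights)

# Sequential Change: mean absolute change between subsequent ratings
# (e.g. a change from Hard to Easy is |0 - 2| = 2)
seq_change = sum(abs(b - a) for a, b in zip(ratings, ratings[1:])) / (len(ratings) - 1)

print(deviation, round(avg_difficulty, 3), seq_change)
```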
Although a high amount of time helping others may have caused boredom for some users, others commented explicitly in the interviews that they enjoyed working together to complete the missions. In one of the self-administered interviews, completed a couple of weeks after the Kurio interaction, one family had the following exchange in response to the question "What did you like best about Kurio?"

Jenna: I liked trying to help other people do it. I don't know why.
Sharon: I know, that's a good reminder, I liked that too, working together...
Jenna: Not just, oh it's all about me, I can only do it.
Sharon: Yeah, the problem solving was kind of fun, I liked figuring it out together.

No one in the interviews complained that the Kurio system isolated them from their family members or inhibited social interaction, so in that regard at least the system was successful in correcting issues observed in individually focused systems.

Model for ages 13+. Factors found to be important for the 13+ age group are listed in Table 5. Users had the option to quit a challenge and be assigned a new one when they found the challenge too difficult or uninteresting. Unfortunately, we also used the quit mechanism as a way to fix technical glitches with the tangibles, and this became its predominant use. The higher number of quits, due to technical problems rather than difficulty, influenced the variability in the answers to several questions. A second uncontrollable factor that strongly influenced some questions was age (mean 38.9, SD=12.6, N=19).

Table 5. Experience structuring factors model for ages 13+

Question | Factors
A.4 Did your attitude toward Kurio become more or less positive as the evaluation progressed? | Age (++), Number of challenges completed (++), Being helped (--), Relative time to help others (--)
C.4 How well does an interactive system like Kurio fit with how your family would like to visit museums similar to the Surrey Museum? | Number of challenges completed (--)
C.5 If you were the monitor (used the PDA), how much do you feel you helped others in exploring the museum? | Number of quit challenges (-)
D.2 How easy was Kurio to use? | Number of quit challenges (-)
D.4 How interested were you to get the next challenge assigned? | Number of quit challenges (-)
D.5 If you or a member of the family completed a challenge successfully, how difficult was the next one? | Age (--), Number of quit challenges (-)
D.6 If you or a member of the family did not complete a challenge successfully, how difficult was the next one? | Number of quit challenges (-)
E.5 Some of what I learned will be useful to me? | Number of quit challenges (-), Relative time to help others (--)
The attitude toward Kurio improves with the number of challenges completed, which is an expected result. However, the positive view is negatively affected by the relative time spent helping others: the more time spent helping, the less positive the attitude. The explanation presented earlier for the 8-12 group seems to apply in this case as well. Another aspect that negatively affects attitude toward the system is being helped, suggesting that adults do not like being helped. This aspect is interesting, and we plan to investigate it further. There is also a significant finding with respect to learning in question E.5 that reflects the similar finding in the model for the 8-12 age group.
5 Conclusions We presented Kurio, a system that supports family visits to museums with a mixture of technologies, including specialized tangible devices, a personal digital assistant (PDA), and a tabletop computer. Families engage in an educational game in which family members are assigned individual challenges and their progress is monitored and coordinated by the family member with the PDA. After each round of challenges, the family returns to the tabletop computer to review its progress, obtain rewards, and be guided to the next round of challenges. We evaluated Kurio with 18 families (54 participants) in a local museum. In addition to the session using Kurio, which lasted 45 minutes on average, the participants filled in pre- and post-test questionnaires, gave post-session structured interviews, and completed self-administered audio interviews 2-4 weeks after the session. The evaluation was organized in two phases. In this paper we reported the questionnaire results for the age groups 8-12 and 13+. The overall evaluation of Kurio was very positive. We also extracted several factors from the log data and applied a model discovery method to determine which factors play important roles in shaping users' perception of the Kurio system and in supporting learning in the museum.
References 1. Hatala, M., Wakkary, R.: Ontology-Based User Modeling in an Augmented Audio Reality System for Museums. User Modeling and User-Adapted Interaction 15, 339–380 (2005) 2. Wakkary, R., Hatala, M.: Situated Play in a Tangible Interface and Adaptive Audio Museum Guide. Journal of Personal and Ubiquitous Computing 11, 257–301 (2007) 3. Ellenbogen, K.M.: Museums in Family Life: An Ethnographic Case Study. In: Leinhardt, G., Crowley, K., Knutson, K. (eds.) Learning Conversations in Museums. Erlbaum, Mahwah (2002)
4. Hilke, D.D.: The Family as a Learning System: An Observational Study of Families in Museums. In: Butler, B.H., Sussman, M.B. (eds.) Museum Visits and Activities for Family Life Enrichment. Haworth Press, New York (1989) 5. Hooper-Greenhill, E.: Museums and their Visitors. Routledge, New York (1994) 6. Woodruff, A., Aoki, P., Grinter, R., Hurst, A., Szymanski, M., Thornton, J.: Eavesdropping on Electronic Guidebooks: Observing Learning Resources in Shared Learning Environments. In: Proc. of Museums and the Web, pp. 21–30 (2002) 7. Aoki, P., Grinter, R., Hurst, A., Szymanski, M., Thornton, J., Woodruff, A.: Sotto Voce: Exploring the Interplay of Conversation and Mobile Audio Spaces. In: Proc. SIGCHI, pp. 431–438 (2002) 8. Dini, R., Paternò, F., Santoro, C.: An Environment to Support Multi-User Interaction and Cooperation for Improving Museum Visits through Games. In: Proc. Mobile HCI, pp. 515–521 (2007) 9. Van Loon, H., Gabriels, K., Luyten, K., Teunkens, D., Robert, K., Coninx, K.: Supporting Social Interaction: A Collaborative Trading Game on PDA. In: Museums and the Web (2007) 10. Suebnukarn, S., Haddawy, P.: Modeling Individual and Collaborative Problem-Solving in Medical Problem-Based Learning. User Modeling and User-Adapted Interaction 16, 211–248 (2006) 11. Harrer, A., McLaren, B.M., Walker, E., Bollen, L., Sewell, J.: Creating Cognitive Tutors for Collaborative Learning: Steps toward Realization. User Modeling and User-Adapted Interaction 16, 175–209 (2006) 12. Read, T., Barros, B., Barcena, E., Pancorbo, J.: Coalescing Individual and Collaborative Learning to Model User Linguistic Competences. User Modeling and User-Adapted Interaction 16, 349–376 (2006) 13. Alfonseca, E., Carro, R.M., Martin, E., Ortigosa, A., Paredes, P.: The Impact of Learning Styles on Student Grouping for Collaborative Learning: A Case Study. User Modeling and User-Adapted Interaction 16, 377–401 (2006) 14. Wakkary, R., Hatala, M., Muise, K., Tanenbaum, K., Corness, G., Mohabbati, B., Budd, J.: Kurio: A Museum Guide for Families. In: Proc. Tangible and Embedded Interaction, pp. 215–222. ACM Press, New York (2009) 15. Read, J.C., MacFarlane, S.: Using the Fun Toolkit and Other Survey Methods to Gather Opinions in Child Computer Interaction. In: Proc. IDC, pp. 81–88 (2006) 16. Anderson, L.W., Krathwohl, D.R.: A Taxonomy for Learning, Teaching and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Complete Edition. Longman, New York (2001) 17. Littell, R.C., Milliken, G.A., Stroup, W.W., Wolfinger, R.D., Schabenberger, O.: SAS for Mixed Models, 2nd edn. SAS Press, Cary (2006) 18. Neter, J., Kutner, M.H., Nachtsheim, C.J., Wasserman, W.: Applied Linear Statistical Models, 4th edn. Irwin, Chicago (1996) 19. Burnham, K.P., Anderson, D.R.: Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, 2nd edn. Springer, New York (2002)
Personalisation of Learning in Virtual Learning Environments Dominique Verpoorten, Christian Glahn, Milos Kravcik, Stefaan Ternier, and Marcus Specht CELSTEC, Open University of the Netherlands, Valkenburgerweg 177, 6419 AT Heerlen, The Netherlands {dve,cgl,mkv,str,spe}@ou.nl
Abstract. Personalization of learning has become a prominent issue in the educational field, at various levels. This article elaborates a different view on personalisation than what usually occurs in this area. Its baseline is that personalisation occurs when learning turns out to become personal in the learner’s mind. Through a literature survey, we analyze constitutive dimensions of this inner sense of personalisation. Here, we devote special attention to confronting learners with tracked information. Making their personal interaction footprints visible contrasts with the back-office usage of this data by researchers, instructors or adaptive systems. We contribute a prototype designed for the Moodle platform according to the conceptual approach presented here. Keywords: Personalisation, self-regulation, VLE, learner support, learner tracking.
1 Introduction Good pedagogy is commonly assumed to be related to individualized learning. This perspective sees learners as separate entities with unique learning goals and needs requiring customized support. In contrast to individualized learning, personalised learning emphasizes the notion that learners consider given settings for learning as personally relevant. The personal perspective implies that learners take ownership of and responsibility for their learning processes and the tools they use. This perspective allows developing courses and services for personalised learning without taking the individual differences of each learner as a starting point. Personalised learning relies on three interrelated theories:

• Constructivism understands learning as the process in which persons actively construct knowledge, concepts, and competences through interacting with their environment [1];
• Reflective thinking stresses that instructional practice should not simply aim at engaging learners at the level of presenting information for understanding and use, but should also direct them to meta-levels of learning [2];
• Self-regulated learning focuses on the cognitive and communication processes through which learners control their learning [3].
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 52–66, 2009. © Springer-Verlag Berlin Heidelberg 2009
One key concept of self-regulated learning is motivation [4, 5]. Therefore, supporting the learners' motivation is a goal of personalised learning. Motivation rests on three key factors: perceived controllability, perceived value of the learning task, and perceived self-efficacy for it [6, 7]. These aspects depend critically on learners' understanding of their own process of learning and their personal situation in the learning task. It is therefore necessary to support learners' awareness of the learning goals, of their progress, and of the context in which their learning is situated. Feeding back personal tracked data is a way to enhance the appraisal of these personal dimensions of learning. However, such an approach to autonomous learning support has so far received little attention from research. Mining learners' interactions is a common concern in adaptive system improvement: such systems harness the tracking of various parameters to produce adaptive presentation, learning paths, and content selection [8]. In all cases, the process entails a backstage treatment of personal tracked data but seldom involves presenting it to its owners. Central to this paper is an analysis of the use of this data for personalising learning experiences and supporting self-regulation in virtual learning environments (VLEs). The analysis is preceded by and grounded in a review of concepts of personalisation from education- and technology-related domains. It is followed by the description of a prototype, developed for the Moodle platform, which gives concrete expression to the reconsidered perspective on personalisation outlined here.
2 Background Personalisation of learning has become prominent in the educational field at various levels: social [9], government policy [10, 11], school management [12, 13], and course/lesson design [14, 15, 16]. Definitions of personalisation vary greatly [17], from the perfectly acceptable but vague "antithesis of impersonal" to the technical and hyper-focused "automatic learning paths structuring to meet the needs of the learner" [18]. The latter view of personalisation is typical of Adaptive Hypermedia Systems [19], whose underlying idea is the automatic production of educational adjustments based on the learner profile. The breed of personalisation featured in this article is different. It suggests that personalised learning stems from stimulating and supporting learners in self-regulating their learning processes. This has implications for the design of learning environments: learners do not only have access to material to read, websites to explore, or assignments and tests to perform, but also to tools for monitoring these activities. From our point of view, personalised learner support based on this approach has a stronger impact on the key variables creating ownership of and responsibility for personal learning. Irrespective of the type of learning, there is only scant research on what makes a student feel that a learning experience is personalised. Waldeck [20, 21] undertook such a study in regular face-to-face classrooms. Her study identified factors that students considered meaningful and relevant for characterizing an educative experience as personalised: the instructor shares his/her time outside of class, provides counsel to students, exhibits competent communication, cultivates social and personal relationships with students, and exhibits flexibility with course requirements.
The literature surveyed for the present article reveals no similar research in the field of eLearning. The factors that contribute to effective personalised learning experiences in the eyes of students are still to be elucidated, and they might turn out to be very different from those found by Waldeck. Moreover, in Waldeck's approach the teacher remains central. As self-regulated learning leaves more control to the learner, it is reasonable to consider that the factors influencing the sense of personalisation may be conceptually and theoretically linked to elements enhancing or hampering the controllability of the learning situation. Perceived controllability is, in addition, a key factor of motivation [6, 7]. The following sections review important dimensions of personalised learning that are in most cases related to control.

2.1 Personalised Learning and Control Control is a central aspect of personalisation. In virtual learning environments (VLEs), four types of control can be distinguished:

• System control occurs while designing a VLE and is represented by the design decisions of its architects and developers. This includes the look and feel of a VLE as well as its functions and the workflows it enforces;
• Organizational control includes all restrictions, customizations and regulations that are specific to an instance of the VLE. This includes the reflection of the organizational identity as well as the tools and functions available to all users of the VLE instance;
• Teacher control defines the actual educational structure of learning units. This includes the type and availability of learning material, the availability of tools learners can use, and the arrangement of these tools, which also encompasses their intended usage. This type of control is often called instructional design;
• Learner control reflects the ways in which learners can take control over their learning processes.
As this typology shows, learners' personal initiatives (the lowest level of control) do not take place in a vacuum. Formal learning usually occurs thanks to externally pre-structured elements combined with a space of possibilities opened up only in the actual moment of learning. Personalised learning does not require that learners have all control over their learning environment, but it requires some control for the learners [22]. This can be as simple as providing explicit, updated and understandable information useful to monitor and analyze one's learning (see section "Tracking for Mirroring").

2.2 Dimensions of Personalised Learning Several dimensions are interconnected in the notion of personalised learning experiences. These dimensions can be structured as follows:

• Ownership [10];
• Participation [23];
• Diversity [24];
• Regulation [4, 25];
• Reflection [26, 27].
These dimensions reflect different aspects of control and determine what is possible in the learning process. Personal information can provide contextual beacons and support successful management of these aspects. 2.3 Personalised Learning and Personal Information Personal information is not only information about the learner; it also comprises contextual information that characterizes the learner's situation. This includes basic learner information (such as name or student number), information resulting from monitoring a learner's activity, achievement of predefined learning goals, etc. Based on the considerations made for context-aware systems [28] and context-adaptive learning support [29], control over personal information can be analyzed on five levels: data collection, information selection, arrangement, application, and presentation. It is suggested that the decisions, the levels of control, and the availability of personal information at these levels influence learners' control of and commitment to their own learning. In so-called personal learning environments (PLEs) [30, 31, 32], learners are supposed to have full control over their personal information, while in VLEs learners often have limited or no access to and control over it. This is particularly the case for tracked information.
3 Tracking for Mirroring Mirroring, i.e., displaying tracked interaction footprints for the benefit of learners [33], is not a trivial task. It raises pedagogical, interface-related, and technical issues. One of them relates to tracking facilities. User tracking is a key process for user and learner modeling [34]; it involves recording user interactions with the intention of storing them for further processing. This information is exploited to develop assumptions about the user, to generalize interaction histories into patterns, and to classify or cluster these patterns. Many VLEs create interaction histories as part of their standard functionality. Such monitoring of personal learning actions is usually accessible only to lecturers, tutors, administrators, or researchers. Furthermore, the related monitoring functions are often detailed but complex transcripts of learner activity. Therefore, many approaches exist for better structuring and presenting this information in order to support teaching staff in controlling the activities in the online environment; an extensive overview of the different approaches is provided by Romero et al. [35]. Only a few systems make tracking information available to learners (see details in the next section). These systems can be a form of navigation support [36]: student tracking is used to show which competences and course topics have been successfully completed and which topics are still to be learned. Such educational tools usually require an additional concept and competence model that is tightly coupled with the course material. Interactions with learning footprints raise privacy issues that are frequently discussed in the context of user modeling [37, 38]. These topics are closely related to the control of information. Such concerns should be balanced against the benefits for learning that the release of this information might yield.
Prior research has suggested that learners depend on external information about their own activity to analyze, organize, and orientate their actions in complex environments [4, 25]. Hence, the personal information represented in a learner's interaction history might be related to personalised learning experiences. Previous work [39, 40] suggests that personal information can serve as feedback that helps learners to reflect on the learning process. It can therefore be assumed that information from user tracking supports learners in examining their position in the learning process and in regulating their learning activity. This conceptual claim guides the whole work presented here. In concrete terms, personal tracked data can be harnessed to various instructional purposes that must be analyzed in the specific learning contexts. However, both the abstract and the practical levels presuppose the availability of personal learning footprints. In order to balance justified privacy concerns against the need for personalisation, it is crucial to understand the structure and organization of personal information across the five levels of personal-information control given in the section "Personalised Learning and Personal Information". Table 1 maps these five levels against the architecture for context-aware systems contributed in [28].

Table 1. Comparison of architectural layers and personalisation levels

Architecture layer | Personalisation level
1. Sensor layer | Data collection
2. Semantic layer | Information selection
3. Control layer | Information arrangement
3'. Control layer | Information application
4. Indicator layer | Information presentation
The levels "arrangement" and "application" are two facets of the architecture's control layer. Arrangement refers to the organization of sets of personal information from a learner's interaction history. The level "application" describes higher-level processing of the personal information; examples of such higher-level processing are recommendation systems and adaptation engines.
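The layering of Table 1 can be made concrete with a small sketch: raw tracked events (sensor layer) are selected and aggregated into per-activity counts (semantic layer) and rendered back to the learner (indicator layer). The event log, field names, and user are invented for illustration; this is not the Moodle prototype's actual data model.

```python
# Sensor layer -> semantic layer -> indicator layer, in miniature:
# raw tracked events are aggregated into per-action counts and then
# presented back to the learner as a simple textual indicator.
from collections import Counter

# Sensor layer: raw interaction footprints as a VLE might log them
events = [
    {"user": "anna", "action": "view_resource", "object": "intro.pdf"},
    {"user": "anna", "action": "view_resource", "object": "quiz1"},
    {"user": "anna", "action": "submit_quiz", "object": "quiz1"},
    {"user": "anna", "action": "view_resource", "object": "forum"},
]

# Semantic layer: select this learner's footprints and arrange them
activity = Counter(e["action"] for e in events if e["user"] == "anna")

# Indicator layer: mirror the footprint back to its owner
indicator = ", ".join(f"{a}: {c}" for a, c in sorted(activity.items()))
print(f"Your activity so far -> {indicator}")
```

The same pipeline with a richer arrangement step (time spent, goals reached, peer comparison) yields the kinds of indicators discussed in the next section.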
4 Mirroring for Personalising The view on personalisation examined in this paper focuses on learner control. One of its influencing factors is asserted to be the availability of personal information, together with the ways that enable the stakeholders of the learning processes to use this information according to their needs. Personal information, properly fed back to the persons it comes from, can document their development of knowledge and skills in a learning environment and their course of action at the task level. Viewed in this manner, personalised learning quite often implies developing different ways of organizing and presenting information about:

• Situation-related aspects: these concern the fixed components of the learning tasks (targeted learning goals, available learning resources, mandatory and optional tasks, needed and trained skills, time allocations, marks, etc.);
• Self-related aspects: these relate to learning behaviors and achievements and to personal learner information in general (teacher's marks and remarks, tasks completed, achieved learning goals, resources consulted, time spent, skills self-assessment, note-taking tools like journals or learning diaries, etc.);
• Social-related aspects: these cover social awareness clues (including comparison processes with data coming from peers or from an expert). As Web 2.0 gains momentum, this social information increases in quantity and availability, inviting a systematized observation of its potential for promoting self-regulated personalised learning.
Agents depend on this personal and contextual information [4] to organize themselves and to orientate and navigate through complex environments. By exerting their understanding of these three sources of information, in order to support decision-making for self-regulation, learners personalize their learning. The design of tools stimulating the appraisal of contextual and personal information is highly dependent on the system's capacity to track interaction footprints and to feed them back to the learner in appropriate presentation modes. This presentation to individual end-users of what the system has captured from their learning episodes is called "mirroring". In the type of personalisation exposed here, tracking tools and techniques are oriented towards this mirroring and receive their value from it. Personalising learning therefore flows partly from an appropriate integration of personal information into the learner's environment. This might sound obvious, but a literature review shows it is not. A few articles show interest in learning traces, but they usually see learners as indirect beneficiaries of their treatment. The direct users are systems, instructors or researchers, as detailed now.

4.1 Tracked Data for Adaptive Systems

Adaptive systems make use of mining techniques. The learner is observed through a grid of parameters. Different values of these parameters trigger the automatic application of rules leading to the production or adaptation of personal learning paths. The goal remains a background treatment of this data, hardly ever the mirroring of learners' actions back to them. The principle of presenting tracked traces to learners does not deny the value of research in automatic customization procedures, but invites investigation of its possible and desirable complementarities [41] with an approach to personalisation concerned with autonomy development and knowledgeable personalisation.
In this respect, several authors from the adaptive learning field itself have recently emphasized the importance of scrutability, namely the explicit communication to students of the pedagogical aspects framing the personalised learning experience designed for them by adaptive learning technology. By stressing learners' awareness of the automated personalisation process they are committed to, scrutability and inspectable open learner models [42, 43, 44, 45, 46, 47] convey a view of ownership and personalisation close to the one outlined in the present article. Advocating the explanation of system decisions to the learner posits that learning and autonomy development require some sense of control over the learning environment. It also acknowledges the importance of reflecting about oneself in a defined learning context.
D. Verpoorten et al.
4.2 Tracked Data for Instructors and Researchers

Some authors have expressed interest in the exploitation of different kinds of interaction footprints by researchers [48, 49]. Others speculated about their benefit for instructors [50]. Among them, Nagi & Suesawaluk [51] recommended that tutors make use of the student data tracked by the Moodle e-learning platform in order to better regulate their courses. With a tool called CourseVis, Mazza & Dimitrova [52] took student tracking data collected by content management systems and generated graphical representations that help instructors understand what is happening in distance learning classes. This work led to the production of GISMO, a tool managing the visualization of data tracked in Moodle [53]. In a similar vein and on the same platform, Zhang et al. [54] developed a VLE log analysis tool, called Moodog, to track students' online learning activities. The goals of Moodog were twofold: first, to provide instructors with insight about how students interact with online course materials, and second, to allow students to easily compare their own progress to others in the class. The latter objective sounds congruent with the approach defined in this article; however, the authors eventually postponed its achievement to a subsequent study. Scheuer and Zinn [55] developed an interesting tracking system called the Student Inspector. In their conclusion, they only evoke the possibility of opening the tool to students. The presentation of personal data to learners in a context of self-regulated learning does not preclude a parallel use of user tracking data by instructors. Azevedo's [56] findings show that external regulation by human tutors enhances learning via hypermedia. However, increased awareness of the learning process (making learning an object of attention and reflection), obtained on the learner's side by mirroring personal information, is desirable as well.
It also has the potential to boost the relevance of tutor action.

4.3 Tracked Data for Learners

Attempts to place learning traces in the hands of lifelong learners, who thereby become agents and researchers in their own learning processes [3], are not numerous. In addition, they give contrasting results [57]. For instance, in the StudyDesk [58] and ACE [59] systems, the use of available personal footprints by the learners appears to remain close to zero. This means that the mere presence of such tools is not enough to improve personalised self-regulated learning, unless students are somehow motivated to use them. Johnson & Sherlock [60] also observe that self-analytics tools can be unwelcome because they represent an incentive to change learning habits, which is hard for many learners. Nevertheless, they conclude that this kind of personal data mirroring amplifies conversations about learning, which might be a condition for initiating the self-changing process. Aside from these exploratory studies, however, the benefits that mirroring interaction with the course might yield for the student have not received much attention. Glahn et al. have nevertheless initiated a systematic investigation of the use of personal traces. They analyzed the support of self-directed learning with Web 2.0 services [29, 39, 40]. These studies focused on how the presentation of recorded user activity supports reflection and engagement in personal learning. Their finding is that mirroring of personal learning activity depends on two design principles:
- Perspective of learners in their current learning context;
- Contrasting information that allows learners to evaluate their own actions.
These results suggest that appropriate tooling can support the personalization of learning through information that is suitable for reflecting on the learning process. One tool, coined by the authors "smart indicators", displays contextualized indicators about achievements, incentives, and progress.
5 Personalisation in Virtual Learning Environments – A Prototype

Based on the system architecture for context-aware systems [28], a prototype for personalisation in virtual learning environments is being developed. This prototype instantiates concepts, concerns, requirements and design principles conveyed by the different view on personalisation elaborated above. It capitalizes on an existing system [29, 39] that adopted an architecture for mirroring learner activity using Web 2.0 services, and transfers and expands it to the Moodle platform (http://www.moodle.org). A small-scale online course has been specially designed to embed the prototype and to serve as a playground for further developments and experimental studies. The topic of this course is web usability principles. It has been designed according to a pedagogical pattern called "Reading – Questions & Answers – Test" (RQAT) [61]. The prototype, whose first components are presented below, aims to ascertain the technical feasibility and to substantiate the pedagogical value of the approach. A series of studies is planned in order to investigate possible benefits of the mirroring, among them: increased awareness about learning actions and behaviours, training of self-analytic behaviours as a situated learner, enhanced ownership of learning processes, an increased sense of personalisation, and an improved mental model of the learning context [62] and of oneself inside this context. The personalisation of a generic course that the prototype operates takes advantage of the conjunction of tracking, mirroring and visualization. Visualizations have different degrees of complexity and interactivity:
- Mirroring visualization presents information about different components of the learning task and the learner's actions within it. It can be responsive or not, interactive or not;
- Responsive visualization dynamically reacts to user activity;
- Interactive visualization is a visualization the learner can act upon.
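As a toy illustration of this three-way distinction, the visualization types can be seen as increasingly capable specialisations. The class and method names are invented for the sketch; the prototype itself is implemented on Moodle, not in Python.

```python
class Visualization:
    """Mirroring visualization: shows task components and learner actions."""
    def __init__(self, data):
        self.data = data

    def render(self):
        return f"actions: {self.data['actions']}"

class ResponsiveVisualization(Visualization):
    """Responsive: dynamically reacts to new user activity."""
    def on_activity(self, event):
        self.data["actions"] += 1

class InteractiveVisualization(ResponsiveVisualization):
    """Interactive: the learner can also act upon the visualization itself."""
    def toggle_peers(self):
        self.data["show_peers"] = not self.data.get("show_peers", False)

viz = InteractiveVisualization({"actions": 3})
viz.on_activity("page_view")   # the display updates with the activity
viz.toggle_peers()             # the learner reconfigures the display
print(viz.render())  # actions: 4
```

The hierarchy mirrors the text: every responsive visualization also mirrors, and every interactive one also responds, but not vice versa.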
The prototype mostly concentrates on mirroring responsive visualization.

5.1 Moodle

The prototype provides a modular approach that allows system developers, instructors, and learners to apply the concepts for personalisation described above. Additionally, it allows stakeholders to hook their preferred information processing tools onto the system. The key function of the prototype is to visualize learning activity, stored in Moodle's activity log, back to the learners. The present prototype has two distinguishing features compared to other activity-visualization plug-ins for Moodle.
Firstly, it implements the different levels of personal information processing as independent services. Secondly, it is fully integrated into the Moodle platform. Three additional requirements were formulated for the prototype. First, it has to be possible to add new perspectives on personal information; that is, new ways of information selection can be added without much effort. In the terminology of the underlying architecture this means that new aggregation rules for the user tracking can be added at any time. Second, it has to be possible to create new arrangements of the selected information. Finally, a flexible information visualization approach is needed that allows adding new visualizations and replacing existing ones without affecting the underlying data. This requirement means that different visualizations can be used for the same personal information, and that the same visualization can be used for different types of personal information. The tight integration into the Moodle platform assures that all system functions for authentication and authorization are appropriately applied for the role of the current user. Moreover, it assures that the prototype framework uses the same data as other components of Moodle. This is an advantage compared to the approaches of the systems for visualizing user tracking information for Moodle discussed earlier in this paper. Instead of using a proxy repository for analyzing the learner activity, the framework uses Moodle's internal learner tracking and can provide live information on the learning activity. In addition to the shared data, the prototype framework is part of the Moodle system and can therefore use the Moodle interfaces for authentication and authorization of incoming data requests. The four system layers of the architecture are reflected by the framework as follows.

The Sensor Layer. The purpose of this layer is to collect and to store traces of actions.
These traces can be accesses to a learning resource, the writing of a forum posting, or the results of a test. Moodle implements detailed action logging that is automatically integrated into the different plug-ins of the system. Additionally, some Moodle plug-ins allow a more detailed view of the learners' activities. Therefore, it is not necessary to implement a separate sensor layer for tracking learner actions in Moodle.

The Semantic Layer. This layer processes the data collected by the sensor layer into semantically meaningful information. At the level of the semantic layer, several aggregation rules can be active to process the traces of learning activity. The current system implements the semantic layer as a REST service through which the different aggregation rules have unique names and can be directly accessed through a URL. At this stage, each aggregation rule of the semantic layer is an SQL statement that processes Moodle's user tracking. Each aggregation rule returns its result data in the JSON format, which can be easily interpreted by web front-ends. XML output of the data is planned for future releases. Each aggregation rule can be limited to different social contexts of the learner and to a specific course. So far, the social contexts "self", "course fellows" and "collaborators" are implemented. The context "self" includes only the data of the learner who is currently logged in while accessing the Moodle system. The context "course fellows" includes the data of all other learners who are enrolled in the same courses as the learner. The context "collaborators" includes all learners
who were directly collaborating with the learner in at least one of the different collaboration tools of Moodle.

The Control Layer. This layer defines the arrangement of the aggregators and the visualizations that are used for mirroring. The control layer is implemented as a plug-in that provides several widgets that can be independently integrated into the user interface of a course. Each widget contains a set of aggregators and visualizations, which can be configured by the instructor of a course.

The Indicator Layer. This layer provides different presentation modes for the data of the semantic layer. The indicator chooses the presentation mode based on the configuration of the indicator layer and receives the data from the semantic layer. The indicator layer is embedded into the user interface of Moodle through a JavaScript module. It fetches the data from the semantic layer through service requests.

5.2 Personal Information Management

The aggregation rules of the semantic layer offer direct support for the design principle of contrast. This principle states that visualized information is more easily understood if it is presented in the context of comparable information. In other words, the other information contrasts with the presented information and highlights its specific qualities. The contrasting information can be considered as a reference that allows the learner to assess the presented information and to relate it to mental concepts of the context in which the learning takes place. The present prototype supports two types of contrasting information: yardstick references and social context references. A yardstick reference is a value that provides a contextual reference for the presented personal information, typically information on goal achievement. The current activity of a learner can then be described as a percentage of the yardstick reference.
A yardstick reference is typically defined in the arrangement of the aggregators and presentation modes. The present framework supports two types of yardsticks: predefined yardsticks that are defined at design time of a course, and dynamic yardsticks that use a different aggregation rule to determine the yardstick from the current activity. An example of a dynamic yardstick can be found in Glahn et al. [29]. The second type of contrasting information is given by social context references. These references can be applied as special filters on an aggregation rule. Such a filter can cause an aggregation rule to return, for instance, the average activity of a learner's peers instead of the activity of the learner. For Moodle, four relevant social contexts were identified: group members, contacts, course fellows, and peers. Contrasting information shows that contextual information is considered part of the personal information that can be controlled by the learner. In Figure 1, two pieces of personal information are visualized: the total time spent on the course by the learner (box 1) and the number of learning actions he performed (box 2). The mirrored information about the total time is simply fed back to the learner. The mirrored information about the number of learning actions is enriched through a comparative setup which also mirrors the number of actions by the peers. Another yardstick could be a number of actions suggested by the teacher.
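The interplay of aggregation rules, social-context filters and yardstick references might be sketched as follows. This is an illustration under assumed names: the real framework expresses aggregation rules as SQL over Moodle's log tables and serves JSON via REST, whereas here a plain Python list stands in for the log.

```python
import json

# Toy activity log standing in for Moodle's user tracking (invented format).
LOG = [
    {"user": "me",    "course": "usability", "action": "post"},
    {"user": "me",    "course": "usability", "action": "view"},
    {"user": "peer1", "course": "usability", "action": "post"},
    {"user": "peer1", "course": "usability", "action": "post"},
    {"user": "peer2", "course": "usability", "action": "post"},
]

def count_actions(course, action, context, user):
    """Aggregation rule: count actions, limited by a social-context filter,
    returning JSON as the framework's rules do."""
    if context == "self":
        users = {user}
    else:  # "course fellows": everyone in the course except the learner
        users = {e["user"] for e in LOG if e["course"] == course} - {user}
    n = sum(1 for e in LOG
            if e["course"] == course and e["action"] == action
            and e["user"] in users)
    return json.dumps({"context": context, "count": n, "users": len(users)})

def contrast(value, yardstick):
    """Personal activity as a percentage of a yardstick reference."""
    return 100.0 * value / yardstick if yardstick else 0.0

mine = json.loads(count_actions("usability", "post", "self", "me"))
peers = json.loads(count_actions("usability", "post", "course fellows", "me"))

# Predefined yardstick, fixed at course design time: 4 required posts.
print(contrast(mine["count"], 4))  # 25.0

# Dynamic yardstick: the peers' average, computed by another aggregation.
print(round(contrast(mine["count"], peers["count"] / peers["users"])))  # 67
```

The same aggregation rule serves both purposes: filtered to "self" it yields the mirrored value, filtered to "course fellows" it yields the dynamic yardstick that contrasts it.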
Fig. 1. The Moodle interface mashes up information centered on the course and information about the personal experience of the course by the learner (time spent on the course {box 1}; student's actions compared to peers' actions {box 2})
5.3 Validation of the Approach

This research emphasizes interaction traces in order to examine to what extent feeding them back to the learner can be beneficial to him and to the design of his learning environment. The main benefits of this personal history of learning begin with the mirroring of personal information. Sometimes, or with some learners, contemplating it will be enough to generate some kind of diagnosis (for instance: "the assignment requires me to post 12 messages in the forum and I have only 3 so far") and to self-administer appropriate remediation. On other occasions, the way to achieve the improvement will not be so straightforwardly inferred from mirroring, even if it brings valuable information (for instance: "compared to the number of learning activities performed by my peers, I am slower"). This means that students must be prepared to interpret their personal data and, in some cases, must receive help in doing so. An underpinning hypothesis of this approach is that making learning processes explicit and comparable, through their mirroring, can affect students' attributions of learning (locus of control) and increase what is advocated by all promoters of lifelong learning: responsibility for and ownership of one's own learning. Tracking might be related to self-efficacy aspects. A number of studies indicate that high-mastery students are more successful overall because they persevere, experience less anxiety, use more strategies, and attribute their success to controllable causes. Other students could therefore benefit from an explicit, reified view of their actions and realize that they are in control. Additionally, it is hypothesized that the provision of personal information can play the role of an involvement factor.
6 Conclusion

This paper delineated and documented a perspective on personalisation based on the mirroring of personal tracked data. This approach advises not merely using personal tracked data for backstage adaptation, but mirroring it back to the learner. It therefore entails the availability of tools that automatically collect and aggregate selected information on personal learning activities and interactions and make it visible to the user. In its last part, the article contributed a prototype which instantiates the concepts, concerns, requirements and design principles conveyed by this different view
on personalisation. Further elaboration of the prototype, as well as experimental settings meant to substantiate the pedagogical value and possible benefits of the approach, are under way.

Acknowledgements. This paper is sponsored by the TENCompetence project (www.tencompetence.org), funded by the European Commission's 6th Framework Programme, and by the GRAPPLE project (www.grapple-project.org), funded by the European Commission's 7th Framework Programme.
References

1. Terhart, E.: Constructivism and teaching: a new paradigm in general didactics? Journal of Curriculum Studies 35(1), 25–44 (2003)
2. Ertmer, P.A., Newby, T.J.: The expert learner: Strategic, self-regulated, and reflective. Instructional Science 24(1), 1–24 (1996)
3. Winne, P.: A Perspective on State-of-the-art Research on Self-regulated Learning. Instructional Science 33(5-6), 559–565 (2005)
4. Butler, D.L., Winne, P.H.: Feedback and self-regulated learning: a theoretical synthesis. Review of Educational Research 65(3), 245–281 (1995)
5. Ley, K., Young, D.B.: Instructional principles for self-regulation. Educational Technology Research and Development 49(2), 93–103 (2001)
6. Ryan, R.M., Deci, E.L.: Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 68–78 (2000)
7. Viau, R.: La motivation: condition au plaisir d'apprendre et d'enseigner en contexte scolaire. In: 3e congrès des chercheurs en Éducation, Brussels, Belgium (2004)
8. Brusilovsky, P.: Adaptive Hypermedia. User Modeling and User-Adapted Interaction 11(1-2), 87–110 (2001)
9. Bonal, X., Rambla, X.: The Recontextualisation Process of Educational Diversity: new forms to legitimise pedagogic practice. International Studies in Sociology of Education 9(2), 195–214 (1999)
10. DfES: A National Conversation about Personalised Learning (2004), http://www.standards.dfes.gov.uk/personalisedlearning/downloads/personalisedlearning.pdf
11. Leadbeater, C.: Pamphlet - Learning About Personalisation (2004), http://www.demos.co.uk/publications//learningaboutpersonalisation
12. Lambert, M.B., Lowry, L.K.: Knowing and being known: Personalisation as a foundation for student learning. Small Schools Project, Seattle (2004)
13. West-Burnham, J., Coates, M.: Personalising learning: transforming education for every child. Network Educational Press, Stafford (2005)
14. Martinez, M.: Designing learning objects to personalize learning. In: Wiley, D.A. (ed.) The Instructional Use of Learning Objects, pp. 151–173. Agency for Instructional Technology, Bloomington (2002)
15. Polhemus, L., Danchak, M., Swan, K.: Personalised Course Development Techniques for Improving Online Student Performance. In: Conference of Instructional Technologies (CIT), Stonybrook, New York (2004)
16. Tomlinson, C.A.: Mapping a route toward differentiated instruction. Educational Leadership 57(1), 12–16 (1999)
17. Noss, R.: Foreword. In: Magoulas, G., Chen, S. (eds.) Advances in Web-based Education: Personalised Learning Environments. Information Science Publishing (2006)
18. Schmoller, S.: FE White Paper - the personalisation virus is spreading (2006), http://fm.schmoller.net/2006/04/fe_white_paper_.html
19. Primus, N.J.C.: A generic framework for evaluating Adaptive Educational Hypermedia authoring systems - Evaluating MOT, AHA! and WHURLE to recommend on the development of AEH authoring systems. Doctoral dissertation, University of Twente, Twente (2005)
20. Waldeck, J.H.: What Does "Personalised Education" Mean for Faculty, and How Should It Serve Our Students? Communication Education 55(3), 345–352 (2006)
21. Waldeck, J.H.: Answering the Question: Student Perceptions of Personalised Education and the Construct's Relationship to Learning Outcomes. Communication Education 56(4), 409–432 (2007)
22. Dron, J.: Controlling learning. In: Kinshuk, K.R., Kommers, P., Kirschner, P., Sampson, D.G., Didderen, W. (eds.) 6th IEEE International Conference on Advanced Learning Technologies, pp. 1131–1132. IEEE Computer Society, Los Alamitos (2006)
23. Lave, J., Wenger, E.: Situated learning: Legitimate peripheral participation. Cambridge University Press, Cambridge (1991)
24. Wilson, S., Liber, O., Johnson, M., Beauvoir, P., Sharples, P., Milligan, C.: Personal Learning Environments: Challenging the Dominant Design of Educational Systems. In: Tomadaki, E., Scott, P. (eds.) Proceedings of the Workshop on Innovative Approaches for Learning and Knowledge Sharing at EC-TEL 2006, pp. 173–182 (2006)
25. Garries, R., Ahlers, R., Driskel, J.E.: Games, motivation, and learning: a research and practice model. Simulation & Gaming 33, 441–467 (2002)
26. Attwell, G., Chrzaszcz, A., Hilzensauer, W., Hornung-Prahauser, V., Pallister, J.: Grab your future with an e-portfolio – Study on new qualifications and skills needed by teachers and career counsellors to empower young learners with the e-portfolio concept and tools. MOSEP Summary Report (2007), http://www.mosep.org/study
27. Schön, D.A.: The Reflective Practitioner: How Professionals Think in Action. Maurice Temple Smith, London (1983)
28. Zimmermann, A., Specht, M., Lorenz, A.: Personalisation and context management. User Modeling and User-Adapted Interaction 15(3-4), 275–302 (2005)
29. Glahn, C., Specht, M., Koper, R.: Smart indicators on learning interactions. In: Duval, E., Klamma, R., Wolpers, M. (eds.) EC-TEL 2007. LNCS, vol. 4753, pp. 56–70. Springer, Heidelberg (2007)
30. Agustiawan, M.: PLEF: A conceptual framework for Personal Learning Environments. Dissertation, Rheinisch-Westfälische Technische Hochschule Aachen (2008)
31. Anggraeni: PLEM: A Web 2.0 Driven Service for Personal Learning Management. Dissertation, Rheinisch-Westfälische Technische Hochschule Aachen (2008)
32. Moedritscher, F., Wild, F.: Why not Empowering Knowledge Workers and Lifelong Learners to Develop their own Environments? In: 9th International Conference on Knowledge Management and Knowledge Technologies (I-KNOW), Graz, Austria (2009)
33. Jermann, P., Soller, A., Mühlenbrock, M.: From mirroring to guiding: A review of state-of-the-art technology for supporting collaborative learning. In: Euro-CSCL, pp. 324–331 (2001)
34. Kobsa, A.: Generic User Modeling Systems. In: Brusilovsky, P., Kobsa, A., Nejdl, W. (eds.) The Adaptive Web. LNCS, vol. 4321, pp. 136–154. Springer, Heidelberg (2007)
Personalisation of Learning in Virtual Learning Environments
35. Romero, C., Ventura, S., García, E.: Data mining in course management systems: Moodle case study and tutorial. Computers & Education 51, 368–384 (2008)
36. Dieberger, A.: Supporting social navigation on the world wide web. International Journal of Human-Computer Studies 46, 805–825 (1997)
37. Kobsa, A.: Tailoring privacy to users' needs. In: Bauer, M., Gmytrasiewicz, P.J., Vassileva, J. (eds.) UM 2001. LNCS (LNAI), vol. 2109, pp. 303–313. Springer, Heidelberg (2001)
38. Schreck, J.: Security and privacy in user modeling. Dissertation, FB Mathematik & Informatik, Gesamthochschule Essen, Germany (2001)
39. Glahn, C., Specht, M., Koper, R.: Reflecting on web-readings with tag clouds. In: 11th International Conference on Interactive Computer aided Learning (ICL), Villach, Austria (2008)
40. Glahn, C., Specht, M., Koper, R.: Visualization of interaction footprints for engagement and motivation in online communities. Journal of Educational Technology and Society (in press, 2009)
41. Verpoorten, D.: Adaptivity and adaptation: which possible and desirable complementarities in a learning personalisation process? Policy Futures in Education (in press, 2009)
42. Ahn, J., Brusilovsky, P., Grady, J., He, D., Syn, S.Y.: Open user profiles for adaptive news systems: help or harm? In: 16th International World Wide Web Conference (WWW 2007), Banff, Canada (2007)
43. Bull, S., Nghiem, T.: Helping Learners to Understand Themselves with a Learner Model Open to Students, Peers and Instructors. In: International Conference on Intelligent Tutoring Systems 2002 - Workshop on Individual and Group Modeling Methods that Help Learners Understand Themselves, San Sebastián, Spain (2002)
44. Czarkowski, M., Kay, J.: How to give the user a sense of control over the personalisation of adaptive hypertext? In: Workshop on Adaptive Hypermedia and Adaptive Web-Based Systems - User Modeling Session, Budapest, Hungary, pp. 121–133 (2003), http://wwwis.win.tue.nl/ah2003/proceedings
45. Kay, J.: Learner Know Thyself: Student Models to Give Learner Control and Responsibility. In: Halim, Z., Ottmann, T., Razak, Z. (eds.) International Conference on Computers in Education, ICCE 1997, pp. 18–26. AACE, Charlottesville (1997)
46. Kay, J.: An exploration of scrutability of the user models in personalised systems. In: Engineering Personalised Systems (2002)
47. Zapata-Rivera, J.D., Greer, J.E.: Externalising Learner Modelling Representations. In: Workshop on External Representations in AIED at the International Conference on AI and Education (AIED 2001), San Antonio, Texas (2001)
48. Leclercq, D., Fernandez, A., Prendez, M.P.: OLAFO, hypermédia destiné à entraîner à l'apprentissage de l'espagnol écrit. STE, Liège (1992)
49. Perry, N.E., Winne, P.H.: Learning from learning kits: gStudy traces of students' self-regulated engagements using software. Educational Psychology Review 18, 211–228 (2006)
50. Diagne, F.: Instrumentation de la supervision de l'apprentissage par la réutilisation d'indicateurs: Modèles et Architecture. Dissertation, Université Joseph Fourier, Grenoble (2009)
51. Nagi, K., Suesawaluk, P.: Research analysis of Moodle reports to gauge the level of interactivity in elearning courses at Assumption University, Thailand. In: International Conference on Computer and Communication Engineering, Kuala Lumpur (2008)
52. Mazza, R., Dimitrova, V.: Visualizing student tracking data to support instructors in web-based distance education. In: 13th International World Wide Web Conference (WWW 2004) - Educational Track, New York (2004)
53. Mazza, R., Botturi, L.: Monitoring an Online Course With the GISMO Tool: A Case Study. Journal of Interactive Learning Research 18(2), 251–265 (2007)
54. Zhang, H., Almeroth, K., Knight, A., Bulger, M., Mayer, R.: Moodog: Tracking Students' Online Learning Activities. In: ED-MEDIA, Vancouver (2007)
55. Scheuer, O., Zinn, K.: How did the e-learning session go? - The Student Inspector. In: 13th International Conference on Artificial Intelligence in Education (AIED 2007), Los Angeles (2007)
56. Azevedo, R.: Computer Environments as Metacognitive Tools for Enhancing Learning. Educational Psychologist 40(4), 193–197 (2005)
57. Perrenoud, P.: Le désir de ne pas savoir - Ambivalences et résistances face à la posture réflexive. In: Troisième journée pédagogique de l'IFRES, "Innovations pédagogiques dans les pratiques réflexives", Liège, Belgium (2009)
58. Narciss, S., Proske, A., Koerndle, H.: Promoting self-regulated learning in web-based learning environments. Computers in Human Behavior 23, 1126–1144 (2007)
59. Specht, M., Oppermann, R.: ACE - adaptive courseware environment. The New Review of Hypermedia and Multimedia 4 (Special Issue on Adaptivity and User Modeling in Hypermedia Systems), 141–161 (1998)
60. Johnson, M., Sherlock, D.: Personal Transparency and Self-analytic Tools for Online Habits. In: TENCompetence Workshop on Stimulating Personal Development and Knowledge Sharing, Sofia, Bulgaria (2008)
61. Verpoorten, D., Poumay, M., Delcomminette, S., Leclercq, D.: From Expository Teaching to First e-Learning Course Production: Capture in a 17 Online Course Sample of a Pedagogical Pattern Facilitating Transition. In: 6th IEEE International Conference on Advanced Learning Technologies (ICALT 2006), Kerkrade, The Netherlands (2006)
62. Seel, N.M.: Epistemology, situated cognition, and mental models: 'Like a bridge over troubled water'. Instructional Science 29(4), 403–427 (2001)
A New Framework for Dynamic Adaptations and Actions

Carsten Ullrich1, Tianxiang Lu2, and Erica Melis2

1 Shanghai Jiao Tong University, Haoran Building, 6/F
ullrich [email protected]
2 German Research Center for Artificial Intelligence
{Tianxiang.lu,Melis}@dfki.de
Abstract. Adaptive course generation is more flexible if it includes mechanisms deciding just-in-time which exercises, which external resources, and which tools to include for an individual student. We developed such a novel delivery framework (called Dynamic Items) that is used by the web-based platform ActiveMath. We describe the framework and discuss several new applications of Dynamic Items for an individual student. Keywords: User-adaptive systems and personalization, Course generation and adaptation.
1 Introduction
ActiveMath [9,10] is a Web-based intelligent learning environment for mathematics whose course generator, Paigos, uses pedagogical knowledge to generate a sequence of learning objects that is adapted to the learner's competencies and other variables such as learning goals [11]. It uses metadata of the learning content as well as information from ActiveMath's student model that is available at generation time. In course generation, the course is generated completely before it is presented to the learner. This early generation has the advantage that the course and its structure can be visualized to the learner. In addition, the student can navigate freely through the course. In a generated course, the structure and order of the learning objects do not change, which avoids confusing the learner, as reported in [4]. This differs from course sequencing, which dynamically selects the most appropriate resource at any moment, i.e., step by step. The benefit of this approach is that it can react to the student's progress. However, this local approach makes it hard to convey information about the structure of a course, and the learning sequence cannot be presented to the learner. Moreover, it prevents the generation of essentially equal courses which only differ in places, e.g., for students in one classroom.
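The contrast between up-front course generation and step-by-step sequencing can be illustrated abstractly. This is hypothetical code, not ActiveMath's implementation; the learning-object pool and the competency model are invented for the sketch.

```python
# Course generation: plan the whole sequence once, before presentation.
def generate_course(pool, learner):
    return [lo for lo in pool if lo["level"] <= learner["competency"]]

# Course sequencing: pick only the next resource, reacting to progress.
def next_resource(pool, learner, done):
    candidates = [lo for lo in pool
                  if lo["level"] <= learner["competency"]
                  and lo["id"] not in done]
    return candidates[0] if candidates else None

pool = [{"id": "a", "level": 1}, {"id": "b", "level": 2},
        {"id": "c", "level": 3}]
learner = {"competency": 2}

course = generate_course(pool, learner)   # stable, navigable structure
print([lo["id"] for lo in course])        # ['a', 'b']

learner["competency"] = 3                 # the student model changes...
step = next_resource(pool, learner, done={"a"})
print(step["id"])                         # 'b' -- sequencing reacts per step
```

The generated course can be shown and navigated as a whole but ignores progress made after generation; the sequencer reacts to progress but never exposes more than the next step.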
This work was supported by LeActiveMath, funded under FP6 of the European Community (Contract No. IST-2003-507826), and by the DFG project ALoE. Corresponding author.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 67–72, 2009. c Springer-Verlag Berlin Heidelberg 2009
C. Ullrich, T. Lu, and E. Melis
In this article, we describe the Dynamic Item framework and the applications we developed to combine the advantages of course sequencing with those of early generation of a complete course. The article starts with a brief introduction of the framework and then presents the implemented instances of Dynamic Items and their educational purposes. Finally, related work and conclusions summarize how our approach differs from others and what was achieved.
2 Overview of the Dynamic Item Framework
The result of course generation is a table of contents that contains references to learning objects as well as Dynamic Items. Dynamic Items are abstract learning objects that are instantiated at presentation time by a component of the Dynamic Item Framework. The framework consists of three stages: generation, adaptation, and presentation. Dynamic Item elements are either fetched from a persistent repository of pre-authored content or generated by the learning services introduced in Section 3; they serve as intermediate objects before the presentation takes place. When the user opens a page that contains a Dynamic Item, the Dynamic Item Transformer renders the Dynamic Items and transforms them into standard learning objects, taking into account up-to-date user information. The resulting learning objects are then transformed into the presentation format selected by the user, e.g., HTML.
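The three-stage pipeline can be sketched as follows. The class and function names are our own illustration, not ActiveMath's actual API; the registered "service" stands in for the learning services described in Section 3.

```python
from dataclasses import dataclass, field

@dataclass
class DynamicItem:
    """Abstract placeholder in a table of contents, resolved at presentation time."""
    item_type: str                       # e.g. "task", "service", "text"
    parameters: dict = field(default_factory=dict)

def transform(item, student_model, services):
    # generation/adaptation stage: fetch or generate concrete learning objects,
    # taking up-to-date student-model information into account
    objects = services[item.item_type](item.parameters, student_model)
    # presentation stage: render into the user's chosen format (HTML here)
    return ["<div class='lo'>%s</div>" % o for o in objects]

# usage sketch: a hypothetical task service that adapts to the learner's mastery
services = {"task": lambda p, sm: ["exercise-easy" if sm["mastery"] < 0.5
                                   else "exercise-hard"]}
html = transform(DynamicItem("task"), {"mastery": 0.3}, services)
# -> ["<div class='lo'>exercise-easy</div>"]
```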
3 Applications of Dynamic Items
In this section, we illustrate the applications of Dynamic Items in ActiveMath.
3.1 Dynamic Tasks
The most frequent application of Dynamic Items is dynamic course generation based on dynamic task expansion. Course generation can stop at a level that (abstractly) specifies what kind of learning objects should be selected, a dynamic task, but does not specify which ones. Later, at presentation time, when the learner first visits a page that contains a dynamic task t, the task is passed to the course generator, which then assembles the sequence of resources that achieves t. The resulting learning objects replace the dynamic task in the course structure. Hence, when the page is revisited, the elements do not change. This means a course is partly static and partly dynamic, which achieves the goal of presenting the complete course to the learner while preserving dynamic adaptivity. One advantage of dynamic tasks is that they can be used by authors as well. Authors can manually compose courses in which some parts are predefined and others are dynamically computed. In this way, an author can use the best of both worlds: she can compose parts of the course by hand and at the same time profit from the adaptive capabilities of the course generator. This also addresses situations in a classroom, where a teacher mostly wants to provide the same material (e.g., definitions, examples) for every student (important for
A New Framework for Dynamic Adaptations and Actions
communication about the material with and among students) and at the same time wants to take advantage of individually selected exercise sequences in places (for more or less training as well as for adjusting the difficulty of problems). This is something a teacher can hardly manage for 20–30 students in parallel, but it is easily realized with Dynamic Items.
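The expand-on-first-visit behaviour of dynamic tasks can be sketched as follows; class and function names are our own illustration, and the course generator is reduced to a callback, not the actual Paigos interface.

```python
class DynamicTask:
    """Abstract specification of what should be learned, without concrete objects."""
    def __init__(self, goal):
        self.goal = goal     # e.g. "train(slope)"

class Course:
    def __init__(self, toc):
        self.toc = toc       # table of contents: learning-object ids or DynamicTask markers

    def open_page(self, index, course_generator, student_model):
        entry = self.toc[index]
        if isinstance(entry, DynamicTask):
            # first visit: let the course generator assemble concrete resources ...
            resources = course_generator(entry.goal, student_model)
            # ... and replace the task in the course structure, so that
            # later visits show the same, now static, elements
            self.toc[index] = resources
        return self.toc[index]

# usage sketch: the first visit fixes the selection; a later change in the
# student model no longer alters the already expanded page
gen = lambda goal, sm: ["easy-1", "easy-2"] if sm["mastery"] < 0.5 else ["hard-1"]
course = Course(["definition-1", DynamicTask("train(slope)")])
first = course.open_page(1, gen, {"mastery": 0.2})
second = course.open_page(1, gen, {"mastery": 0.9})   # same elements as `first`
```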
3.2 Learning Services
Within advanced learning environments such as ActiveMath, the student is able to use external services embedded in the course. When invoking an external tool, the system should be able to parameterize the call according to the current performance of the student. This is achieved by Dynamic Items in the following way. When the learner visits a page that contains a Dynamic Item for a learning-support service, the presentation system converts it into a link and displays it. The link is generated based on the information enclosed in the Dynamic Item element and on values obtained from the student model. The following describes how three services were integrated using the service Dynamic Item: an Exercise Sequencer, a Concept Mapping tool, and an Open Learner Model.
Exercise Sequencing. The Exercise Sequencer presents to the learner a dynamically selected sequence of exercises whose selection strategy is parametrized, e.g., as mastery learning that leads the student towards a higher competency level. This functionality differs from the exercise selection of Paigos because the Exercise Sequencer selects an exercise at the time of first visit, presents it to the learner in a separate window, provides information about the learner's problem-solving progress, and terminates or selects a new exercise for a new cycle. The selection algorithm is based on competency levels [7]. Within this interactive sub-environment, the learner can interact with a dynamically selected sequence of exercises until he/she reaches a set learning goal.
External Learning Tools. External learning tools are also integrated into the courses generated by ActiveMath using Dynamic Items. An example is the interactive Concept Mapping Tool, iCMap [8], which helps the learner to reflect on his mathematical knowledge and to visualize and construct structures for a mathematical domain.
It supports the learning process by verifying the concept map constructed by a student and by suggesting reasonable changes to the created map. It is called with an instantiated, parametrized exercise that is chosen dynamically by Paigos. Another service that can be included is the Open Learner Model (olm), which allows learners to inspect and modify the beliefs that the learner model holds about their mastery and competencies.
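Conceptually, such a service link is assembled from the Dynamic Item's authored attributes plus current student-model values at presentation time. The field names and URL below are illustrative assumptions, not ActiveMath's actual schema.

```python
from urllib.parse import urlencode

def service_link(dynamic_item, student_model):
    """Turn a service-type Dynamic Item into a parametrized hyperlink,
    filled with up-to-date learner information from the student model."""
    params = dict(dynamic_item["parameters"])                 # authored parameters
    params["user"] = student_model["user_id"]                 # current user ...
    params["competency"] = student_model["competency_level"]  # ... and performance
    return dynamic_item["service_url"] + "?" + urlencode(params)

# usage sketch with a hypothetical Exercise Sequencer endpoint
item = {"service_url": "https://example.org/exercise-sequencer",
        "parameters": {"strategy": "mastery"}}
link = service_link(item, {"user_id": "laura", "competency_level": 2})
```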
3.3 Dynamic Text Generation
Narrative Bridges. Similar to an advance organizer [1], a dynamically generated text should prepare the student for what to expect and how
this is connected to his previous study. These texts cannot be authored manually, since it is practically impossible to cater for all possible choices and histories of students. Therefore, ActiveMath includes a template-based dynamic generation of bridging texts, which (1) explain the purpose of a course or a section at an abstract level: they make the intent of sections and the structure of a course explicit, provide cues that learners can use to remember, and set the stage for the learning processes; and (2) connect neighboring sections and thereby provide coherence that a mere sequence of educational resources might lack. In the case of Dynamic Items of type text, the service uses parameters to determine the adequate text template t and returns an OMDoc element whose text body consists of t. If a template is available in several languages, a specific text body is generated for each language (in case the user changes his/her language profile later). Based on the template of a bridging text, a controller responsible for the presentation calls the service specified in the Dynamic Item and passes the remaining attributes and sub-elements as parameters.
Fig. 1. A transformed bridging text
Fig. 1 shows a bridging text after HTML transformation. The text is highlighted by a frame box in order to convey to the learner that it is on a different level of abstraction than the remaining content displayed on the page.
External Resources. In order to provide an opportunity for self-regulated learning, a student should be able to include additional learning objects in his personal course on demand. Since Dynamic Items can provide automatically generated text, including hyperlinks, according to given parameters, they can also be used to include external learning resources referenced by a link. A student can easily add an external resource she found (e.g., a Wikipedia entry) to the current course. ActiveMath's assembly tool [5] uses this functionality to add user-selected content. This includes not only texts but also multimedia content (e.g., videos) whose link is dragged from the Internet. The technological means for this functionality are Dynamic Items for generating text that includes hyperlinks.
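The template-based text generation described in this section can be sketched as follows. The template store and its keys are hypothetical; the actual ActiveMath templates produce OMDoc elements rather than plain strings.

```python
# Hypothetical multilingual template store for bridging texts.
TEMPLATES = {
    "section_bridge": {
        "en": "In the previous section you studied {previous}. "
              "This section builds on it and introduces {concept}.",
        "de": "Im vorigen Abschnitt haben Sie {previous} kennengelernt. "
              "Dieser Abschnitt baut darauf auf und führt {concept} ein.",
    },
}

def bridging_text(template_id, params):
    """Instantiate the template in every available language, so the text can
    still be served if the user changes the language profile later."""
    template = TEMPLATES[template_id]
    return {lang: body.format(**params) for lang, body in template.items()}

# usage sketch
texts = bridging_text("section_bridge",
                      {"previous": "the difference quotient",
                       "concept": "the derivative"})
```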
4 Related Work
Previous course sequencing approaches such as the Dynamic Courseware Generator [12] select the next page dynamically at the time the student requests it. While this allows better reactivity, the learner cannot see and use the structure of the complete course for learning.
Our work differs from Adaptive Hypermedia systems such as AHA! [2], which focus on adapting an individual hypertext document: whenever the user accesses a concept, a set of rules adapts the resulting document. Our approach uses a book metaphor: a complete course is generated and navigation is unrestricted, so the user can visit each page of the course at any time. In such a setting, our mechanism can add parts dynamically to a previously generated course. Selector [6] first determines the skills/concepts to be taught and then selects or constructs the required learning objects. This is very similar to our approach, with the exception of dynamic tasks, which allow Paigos to interrupt the planning process and select the specific learning objects at a later time. KnowledgeTree [3] and its extension ADAPT2 form a distributed architecture for adaptive e-learning that integrates different learning services. A teacher can author a course and add references to static and dynamic learning objects (service calls). Our framework allows the automatic generation of courses, including the selection of such services. Automatic generation in KnowledgeTree might be possible, too, but to our knowledge has not been investigated. Compared to existing work, our approach focuses on an abstract representation of service invocation that is easily authorable and that can be created manually by human authors as well as automatically during course generation.
5 Conclusion
This paper describes how Dynamic Items are used to provide just-in-time adaptivity to a student in a technology-enhanced learning environment. The idea is to separate the generation of appropriate constraints from the inclusion of the actual learning material. The course generator decides where a Dynamic Item should be added and what kind of Dynamic Item it should be, i.e., its type and further constraints. Dynamic Items enable a persistent storage of information about pedagogical goals and constraints processed during course generation. Usually, pedagogical information is available at generation time only and lost afterwards. Dynamic Items can store this information, which provides a context for each learning object eventually presented in the course. The implemented Dynamic Item framework applies this idea and integrates several features into the ActiveMath system, such as dynamic tasks, learning services, and text generation. ActiveMath, including its Dynamic Items, has been used by hundreds of students so far.
References
1. Ausubel, D.: The Psychology of Meaningful Verbal Learning. Grune & Stratton, New York (1963)
2. De Bra, P., Smits, D., Stash, N.: Creating and delivering adaptive courses with AHA! In: Nejdl, W., Tochtermann, K. (eds.) EC-TEL 2006. LNCS, vol. 4227, pp. 21–33. Springer, Heidelberg (2006)
3. Brusilovsky, P.: KnowledgeTree: a distributed architecture for adaptive e-learning. In: Proceedings of the 13th International World Wide Web Conference on Alternate Track Papers & Posters, pp. 104–113. ACM Press, New York (2004)
4. De Bra, P.: Pros and cons of adaptive hypermedia in web-based education. Journal on Cyber Psychology and Behavior 3(1), 71–77 (2000)
5. Homik, M.: Assembly Tool. Deliverable D37, LeActiveMath Consortium (June 2006)
6. O'Keeffe, I., Brady, A., Conlan, O., Wade, V.: Just-in-time generation of pedagogically sound, context sensitive personalized learning experiences. International Journal on E-Learning 5(1), 113–127 (2006)
7. Klieme, E., Avenarius, H., Blum, W., Döbrich, P., Gruber, H., Prenzel, M., Reiss, K., Riquarts, K., Rost, J., Tenorth, H., Vollmer, H.J.: The development of national educational standards - an expertise. Technical report, Bundesministerium für Bildung und Forschung / German Federal Ministry of Education and Research (2004)
8. Melis, E., Kärger, P., Homik, M.: Interactive Concept Mapping in ActiveMath (iCMap). In: Haake, J.M., Lucke, U., Tavangarian, D. (eds.) DeLFI 2005: 3. Deutsche e-Learning Fachtagung Informatik, Rostock, Germany, September 2005. LNI, vol. 66, pp. 247–258. Gesellschaft für Informatik e.V. (2005)
9. Melis, E., Andrès, E., Büdenbender, J., Frischauf, A., Goguadze, G., Libbrecht, P., Pollet, M., Ullrich, C.: ActiveMath: A generic and adaptive web-based learning environment. International Journal of Artificial Intelligence in Education 12(4), 385–407 (2001)
10. Melis, E., Goguadze, G., Homik, M., Libbrecht, P., Ullrich, C., Winterstein, S.: Semantic-aware components and services of ActiveMath. British Journal of Educational Technology 37(3), 405–423 (2006)
11. Ullrich, C.: Courseware Generation for Web-Based Learning. LNCS (LNAI), vol. 5260. Springer, Heidelberg (2008)
12. Vassileva, J., Deters, R.: Dynamic courseware generation on the WWW. British Journal of Educational Technology 29(1), 5–14 (1998)
Getting to Know Your User – Unobtrusive User Model Maintenance within Work-Integrated Learning Environments
Stefanie N. Lindstaedt 1,2, Günter Beham 1,2, Barbara Kump 1,2, and Tobias Ley 2,3
1 Knowledge Management Institute, TU Graz, Inffeldgasse 21a, 8010 Graz
{slind,gbeham,bkump}@tugraz.at
2 Know-Center, Inffeldgasse 21a, 8010 Graz
{slind,gbeham,tley}@know-center.at
3 Cognitive Science Section, University of Graz, Universitätsplatz 2, 8010 Graz, Austria
[email protected]
Abstract. Work-integrated learning (WIL) poses unique challenges for user model design: on the one hand, users' knowledge levels need to be determined based on their work activities – testing is not a viable option; on the other hand, users interact with a multitude of different work applications – there is no central learning system. This contribution introduces a user model and corresponding services (based on SOA) geared to enable unobtrusive adaptability within WIL environments. Our hybrid user model services interpret usage data in the context of enterprise models (semantic approaches) and utilize heuristics (scruffy approaches) in order to determine knowledge levels, identify subject matter experts, etc. We give an overview of different types of user model services (logging, production, inference, control), provide a reference implementation within the APOSDLE project, and discuss early evaluation results.
Keywords: user model, service-oriented architecture, work-integrated learning, adaptivity.
1 Work-Integrated Learning
The goal of most approaches to supporting work-integrated learning (WIL) is to assist knowledge workers in advancing their knowledge and skills directly within their ‘real’ work tasks instead of in dedicated (artificial) learning situations. Work-integrated learning focuses on seamlessly integrating working and learning [1]. It is relatively brief and unstructured (in terms of learning objectives, learning time, or learning support), and its main aim is mostly to enhance task performance. From the learner’s perspective, WIL is spontaneous and often unintentional; learning in this case is a by-product of the time spent at the workplace. This especially requires that WIL support is embedded within the ‘real’ computational work environment of the
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 73–87, 2009. © Springer-Verlag Berlin Heidelberg 2009
user and not provided in dedicated eLearning systems. Moreover, it is essential that WIL support utilizes ‘real’ content from the organizational memory (such as project reports, calculations, presentations) and repurposes it for learning. WIL is a learning concept developed for supporting continuous competence development at the workplace. It assumes that its users have basic knowledge of the learning domain in question and the ability to guide their own learning processes [2]. The WIL concept has been applied to the domains of requirements engineering, software simulations, innovation management, and intellectual property rights management [3]. Consider a scenario within the learning domain of requirements engineering. Laura is a software engineer busy creating human activity models (work task) based on user interviews she has performed in the past (previous work task). She uses a variety of different unrelated applications such as MS Word, Visio, and Requisite Pro to accomplish the task (computational work environment). Even though she is unsure how best to approach human activity modelling, she is not interested in switching to a dedicated eLearning system. Instead, she relies on different WIL services which sit silently in the background and offer their help only when needed. In her current situation, a WIL service identifies her missing knowledge in human activity modelling and recommends learning content about it as well as examples of human activity models previously created within the organization. Another WIL service identifies people within the organization who are more experienced in human activity modelling than she is and recommends them for collaboration. We propose an adaptive approach to supporting WIL. Adaptive systems have been developed for various purposes such as helping users to find information, supporting learning, and supporting collaboration [4].
Therefore, we consider adaptivity a highly promising means to support a variety of different learning practices [2] at the workplace, e.g., searching for relevant knowledge or trying to find knowledgeable colleagues. In order to make an environment adaptive to the user, it needs to ‘know’ what the user is able to do and what she is not able to do. For this purpose, the environment contains a user model, which can be understood as a representation of “the knowledge about the user, either explicitly or implicitly encoded, that is used by the system to improve the interaction” ([5], p. 6). The goal of this contribution is to present our conceptual approach to user model design and maintenance, which addresses the specific challenges of WIL (Section 2). In Section 3, we show its relevance by introducing a reference implementation which was developed as part of the EU-funded integrated project APOSDLE (www.aposdle.org, Advanced Process-Oriented Self-Directed Learning Environment). After a brief discussion of related work (Section 4), we report on a pilot evaluation in Section 5. APOSDLE is now at the end of its third year (of four years in total). Within the first two years, two prototypes were developed. The third (and last) prototype is currently being implemented. We report here mainly on lessons learned from the evaluations of the first two prototypes and give an outlook on the third prototype.
2 WIL User Models and User Model Services
A number of design and usability challenges have to be tackled so that they do not outweigh the benefits of adaptation to the individual user. [4] has identified
predictability and transparency, controllability, unobtrusiveness, privacy, and breadth of experience as critical challenges for adaptive systems. For WIL environment design, unobtrusiveness and privacy constitute the hardest challenges. In line with [4], we use the term obtrusiveness to refer to the extent to which the system places demands on the user’s attention that reduce the user’s ability to concentrate on his or her primary tasks. We argue that the main source of obtrusiveness related to the user model lies in the ways in which information about individual users is acquired and maintained. In systems that support learning, it is often natural to administer tests of knowledge or skill. The main advantages of testing are that it can be used in many domains and is easy to implement. However, testing is highly obtrusive and cannot be applied to WIL for many reasons, including the absence of the one correct solution for most work tasks (consider Laura’s scenario above). One of the major challenges for WIL therefore is finding ways to update the user model in an unobtrusive manner. The second challenge for an adaptive system, which is tougher for WIL than for other application cases, is the issue of user privacy. To enforce user privacy in a WIL system which maintains a user model, appropriate organisational and technical measures have to be applied. Enhancing privacy in adaptive systems is a complex task, as it depends on the organisational environment, the data collected, privacy regulations, etc. Additionally, there is no standard approach to enhancing privacy in adaptive systems [6]. Here we only want to point out the importance of privacy considerations in the WIL context but do not report on possible approaches. A third challenge for adaptivity within WIL situations is the need for seamless integration of the adaptive learning support into existing work environments of the user.
Typically, an adaptive system adapts its own functionality to the user based on her interactions with this very same system. Within WIL, the challenge is to utilize the interactions of a user with potentially all her applications (e.g., MS Word and Requisite Pro in the above scenario) in order to adapt learning support functionality. Thus, we face a situation in which a user interacts with a large variety of applications and expects support for learning within the current work situation.
2.1 Designing a WIL User Model
There are several ways to diagnose user skills and maintain a user model. The knowledge represented in the user profile can be elicited explicitly from the user, but it can also be acquired implicitly from inferences made about the user [7]. In our view, implicit acquisition means tracking naturally occurring actions [8]. Naturally occurring actions include all of the actions that the user performs with the system that do not have the express purpose of revealing information about the user. These actions may range from major actions, like contacting a knowledgeable person about a certain topic, to minor ones, like scrolling down a page. A number of interesting approaches have been suggested in other adaptive systems. For instance, researchers interested in adaptive hypertext navigation support have developed a variety of ways of analyzing the user’s navigation actions to infer his or her interests or to propose navigation shortcuts (see, e.g., [9]). [10] came up with an unobtrusive approach for deriving user learning interest profiles implicitly from user observations
only. The problem is that such approaches cannot easily be re-used for adaptive work-integrated learning. We therefore suggest tackling the challenge of user model maintenance by observing naturally occurring actions of the user [4], which we interpret as knowledge indicating events (KIE). KIE denote user activities which indicate that the user has knowledge about a certain topic. In the context of Laura’s scenario, the repeated execution of the task user interviews can be seen as a KIE for domain concepts such as structured interviews and card sorting. Another KIE for card sorting could be that Laura has been contacted repeatedly about this topic in the role of an expert. KIE are thus based on usage data. Our approach goes in a similar direction as [11], who suggest using attention metadata for knowledge management and learning management approaches. It is also related to the approach of evidence-bearing events (e.g., [12]). So far, both approaches have been discussed from a rather technical point of view, e.g., the technological infrastructure necessary to identify and collect attention metadata, and there has been speculation about how they could be applied in different settings. Our work provides a holistic framework for the use of KIE for the maintenance of WIL user models: starting with the identification of relevant KIE, their use for updating a WIL user model, and its technical realization through WIL user model services (see below). In order to interpret KIE, an underlying model is needed in the WIL user model which allows relating user actions to knowledge and skills and drawing conclusions about the user’s knowledge level. Research into organizational structures has identified that many companies create and maintain different types of formal models, so-called enterprise models, of their work domain [13].
The three most popular models are work domain models (typically represented as an ontology), process or task models (typically represented as a workflow or process model), and competency (or skill) structures (typically represented as a simple list or matrix). Such models provide a comprehensive representation of the whole domain. Based on these insights, we propose to structure the WIL user model as an overlay (for a definition see [5]) of existing enterprise models of the application domain in question.
2.2 Designing WIL User Model Services
Integrating learning support into work practices does not only mean running a WIL system and applications already deployed in organizations side by side, but also implies the possibility to extend and enrich existing applications. In order to meet this requirement, we propose a service-oriented architecture (SOA) approach to WIL user model design and maintenance based on the OASIS reference model1. Firstly, the paradigm of SOAs allows us to split a kaleidoscope of adaptive functionality into different subgroups (services) that can be used independently from each other. Secondly, services can easily be integrated in existing applications, which makes them especially attractive for the WIL situation. In Laura’s scenario their functionality could include: predicting Laura’s performance in the (for her novel) task stakeholder analysis, which involves previously mastered domain concepts such as structured interviews; detecting Laura’s current learning need based on her missing experience in human activity modeling;
1 http://docs.oasis-open.org/soa-rm/v1.0/soa-rm.pdf
and recommending a learning path for how to acquire skills in human activity modeling, optimized based on her prior experiences. The latter has been seen as a crucial function in the context of work-integrated learning [14]. Thirdly, services are formally described, which provides an overview of service functionality, protocols, etc. Eventually, existing services can be used for implementing new services (service mash-up). With the term WIL user model services, we refer to all kinds of WIL functionality that maintains and utilizes the data stored within the WIL user model. Despite their advantages, the main limitation of KIE is that they are imprecise and hard to interpret [4]. In order to draw meaningful conclusions based on KIE, we propose to use a hybrid approach – utilizing available semantic structures (such as enterprise models) as well as scruffy methods (e.g., heuristics) to interpret the user’s actions. These challenges are met with the design of hybrid WIL user model services [15], which maintain and interpret the WIL user model. We have identified four core types of services, covering the basic needs of a WIL environment. Logging services are responsible for updating the WIL user model with newly observed KIE, and thus provide the basis for all other services. Sensors within the WIL environment (possibly from many different applications) send detected user activities (such as task executions or collaboration events) to logging services to be added to the user model. Pre-processing of incoming user activities is handled here. This could involve the transformation of user activities into a format required by the user model, or enriching incoming data with timestamps and other system-related information. Production services make the stored KIE available to other (client) services within the WIL environment. Based on the specific requirements of the client, production services filter or aggregate KIE – they provide specialized views on the KIE.
For example, one such service could produce a list of all tasks executed by one user. The receiving client could then provide visualizations of task executions over time. Views also offer a way to retrieve usage data associated with a specific enterprise model. Besides providing predefined views filtering usage data, production services could also allow querying the user model with individual parameters. Inference services process and interpret KIE to draw conclusions about different aspects of users, such as levels of knowledge. Inferences are then utilised to adapt the functionality of the service itself, or are provided as outcomes to other services. A WIL user model allows generating inferences in different ways. Heuristics could be applied directly to KIE to generate aggregated information about users. Exploitation of KIE with regard to enterprise models, or a hybrid approach combining heuristics with organisational models, could also lead to inferences. Control services provide ways to control the KIE stored in the user model. Controlling usage data is important for handling privacy issues and imprecise KIE collected in the user model. Privacy issues could be addressed by applying organizations’ privacy policies to KIE. An example would be a policy about data retention, demanding the deletion of KIE after a certain period of time. The aspect of imprecise data can be addressed by presenting users with an overview of the KIE associated with them. Based on this overview, users could then use a control service to manually delete or modify the collected data.
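The four service types can be sketched as follows. This is a minimal illustration under our own naming assumptions, not the APOSDLE implementation; the inference heuristic (repeated task executions as evidence of knowledge) is one example of a "scruffy" rule as described above.

```python
import time

class UserModel:
    """Stores knowledge indicating events (KIE) as plain usage-data records."""
    def __init__(self):
        self.events = []

class LoggingService:
    """Adds observed KIE, pre-processing them (here: enriching with a timestamp)."""
    def log(self, model, user, kind, topic):
        model.events.append({"user": user, "kind": kind,
                             "topic": topic, "time": time.time()})

class ProductionService:
    """Provides filtered views on the stored KIE for client services."""
    def tasks_of(self, model, user):
        return [e for e in model.events
                if e["user"] == user and e["kind"] == "task-execution"]

class InferenceService:
    """Interprets KIE with a scruffy heuristic: repeated activity on a
    topic is taken as evidence of knowledge about that topic."""
    def knows(self, model, user, topic, threshold=3):
        n = sum(1 for e in model.events
                if e["user"] == user and e["topic"] == topic)
        return n >= threshold

class ControlService:
    """Supports privacy control, e.g. deleting a user's KIE on request."""
    def forget(self, model, user):
        model.events = [e for e in model.events if e["user"] != user]
```

In this design the user model itself stays a dumb event store; all interpretation lives in the services, so heuristics can be swapped without touching the logged data.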
3 The Second APOSDLE Prototype
The aim of the adaptive WIL system APOSDLE is to improve knowledge worker productivity by supporting learning situations within everyday work tasks. The understanding of a user’s knowledge level and her learning goals is a central part of the APOSDLE environment. A comprehensive overview of APOSDLE and its functionality has been given in [16]. In this contribution, we only describe the mechanisms that are related to the user model and user model services.
3.1 APOSDLE Enterprise Models
As mentioned above, we suggest designing a WIL user model as an overlay of existing organizational models. Within APOSDLE, we have chosen to implement three organizational models for one and the same application domain: the domain model, the task model, and the learning goal model. In order to build these models, we have developed a Modeling Methodology [17] which supports the creation of integrated models (instead of separate ones). All three models and the meta-schema are represented in OWL and are stored within a component referred to as the knowledge base. The purpose of the domain model is to provide a semantic and logical description of the work domain (e.g., requirements engineering), which also constitutes the learning domain of an APOSDLE deployment environment. The domain is described in terms of concepts (e.g., requirements) and relations (e.g., is part of) that are relevant for this domain. Technically speaking, the domain model is an ontology that defines a set of meaningful terms which are relevant for the domain and which are used to classify and retrieve knowledge artifacts. The objective of the task model is to provide a formal description of the tasks (e.g., human activity modeling) the knowledge worker can perform in a particular domain. The task model identifies and groups tasks and their interdependencies and determines a formalization of patterns and procedures occurring in a business domain. The very core of a process model is a control flow.
For the sake of consistency with the domain model, we have also translated the control flow into an OWL ontology.

The learning goal model establishes a relation between the domain model and the task model: it maps tasks of the task model to concepts of the domain model. A learning goal describes the knowledge and skills needed to perform a task with respect to a certain topic in the domain model. For example, the learning goal 'apply card sorting to task user interviews' means that in order to perform user interviews Laura needs to know how to apply the card sorting method. In other words, each learning goal refers to one topic in the domain model. This formalism is necessary for a number of functionalities provided by the APOSDLE user model services. For example, it enables the determination of user skills from past task executions (task-based knowledge assessment, see the people recommender service below), or the identification of a user's learning need within a certain task (see the learning need service below). Within APOSDLE, the formalisms employed for achieving these functionalities are based on competence-based knowledge space theory (e.g. [18]), which in turn builds on Doignon & Falmagne's knowledge space theory [19]. Competence-based knowledge space theory is a framework that formalizes the relationship between overt behavior (e.g. task performance) and latent variables (the knowledge and skills needed for performance) and
Getting to Know Your User – Unobtrusive User Model Maintenance
that has several advantages for WIL environments. One such advantage is that the mappings afford the computation of prerequisite relationships between learning goals (see e.g. [20]). This allows us to identify learning goals which should be mastered by the user on the way to reaching a higher-level learning goal.

3.2 APOSDLE Workflow

In the second APOSDLE prototype, recommendations of learning goals, (learning) material, and knowledgeable colleagues are always provided depending on a user's current work task. Since the learning system is not the central application here but is integrated into the work environment, we need a way of observing what users are doing in order to identify their current task (and potentially other KIE). In APOSDLE, this task detection is realised by a specialized agent [21]. This agent observes the user's interactions (e.g. keystrokes, mouse movements, application-specific actions) with typical MS Office and Internet applications and compares them to previously learned task-specific interaction patterns of the organization. Whenever a new task execution is detected, the APOSDLE logging service is invoked.

The role of the user's current task in APOSDLE's second prototype is twofold. On the one hand, the task serves as a trigger for learning, as it determines the knowledge and skills that a user needs in order to perform the task successfully. The knowledge and skills required for performing the task are compared to the knowledge and skills of the user (learning need analysis), and a learning need is identified. The learning need of a user is a (possibly empty) set of learning goals (based on domain concepts) about which the user needs to learn. In order to facilitate the learning process, the learning goals are presented to the user as a learning path, i.e. an optimized sequence in which the learning goals should be tackled in order to maximize learning transfer.
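Sequencing a learning need into a learning path presupposes an ordering over learning goals. One simple way to realize such an ordering, assuming acyclic prerequisite relations like those derived from the mappings above, is a topological sort. The class, method, and goal names below are our own illustration, not APOSDLE's actual API.

```java
import java.util.*;

/** Illustrative sketch: order learning goals so prerequisites come first. */
public class LearningPath {

    /**
     * goals: the learning goals in the user's learning need.
     * prerequisites.get(g): goals that must be mastered before g.
     * Assumes the prerequisite relation is acyclic (as in knowledge space theory).
     */
    public static List<String> order(Set<String> goals,
                                     Map<String, Set<String>> prerequisites) {
        List<String> path = new ArrayList<>();
        Set<String> placed = new HashSet<>();
        // Repeatedly emit any goal whose prerequisites (within the need) are placed.
        while (placed.size() < goals.size()) {
            for (String g : goals) {
                if (placed.contains(g)) continue;
                boolean ready = true;
                for (String p : prerequisites.getOrDefault(g, Set.of())) {
                    if (goals.contains(p) && !placed.contains(p)) { ready = false; break; }
                }
                if (ready) { path.add(g); placed.add(g); }
            }
        }
        return path;
    }

    public static void main(String[] args) {
        // Hypothetical goals: card sorting presupposes interview basics.
        Map<String, Set<String>> pre = Map.of(
            "apply card sorting", Set.of("know interview basics"));
        List<String> path = order(
            new HashSet<>(Set.of("apply card sorting", "know interview basics")), pre);
        System.out.println(path); // prerequisite goal first
    }
}
```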
On the other hand, the task constitutes the (currently only) KIE in the second APOSDLE prototype. In line with competence-based knowledge space theory [18], the underlying heuristic is the following: if a user is able to perform a task, she has all knowledge and skills required for this task.

If the user selects a learning goal, APOSDLE triggers a search for knowledge artifacts relevant to the learning goal, and a search for people relevant to the learning goal. The results of these searches are displayed unobtrusively to the user in the form of resource and people lists. The people list contains a number of knowledgeable persons with respect to the learning goal. The decision of who counts as a knowledgeable person is based on the data in the APOSDLE user model and is made by the people recommender service.

At any time, the user can view her task executions logged by APOSDLE, and she can delete them. Additionally, the user can choose between three pre-defined privacy levels (public, private, anonymous), which define the visibility of usage data presented to other users. For instance, if the privacy level public is selected, other users have access to the task history of the user.

3.3 The APOSDLE User Model

The APOSDLE user model is an overlay of the topics in the domain model. Whenever a user executes a task (e.g. user interviews) within the APOSDLE environment, the counter of that task within her user model is incremented. The APOSDLE user model thus counts how often the user has executed the task in question. It therefore
constitutes a simple numeric model of the tasks (KIEs), which are related to one or several topics in the domain. Based on the learning goal model we can infer that the user has knowledge about all the topics related to that task (e.g. structured interviews and card sorting). Therefore, by means of an inference service (see below), information is propagated along the relationships defined by the learning goal model, and the counters of all topics related to the task are also incremented. Consequently, the APOSDLE user model contains a value for each user and each topic at any time during system usage.

As mentioned above, within the first two APOSDLE prototypes we have focused on task executions as the only KIE. On the one hand, this was for reasons of simplification; on the other hand, the task execution was seen as the most relevant KIE. We report on our lessons learned in Section 6.

3.4 APOSDLE User Model Services

The APOSDLE system provides service implementations for all types of WIL user model services proposed previously. Figure 1 presents an overview of the APOSDLE user model services and of how data is exchanged with the user model and corresponding APOSDLE client applications.

The APOSDLE system implements two different logging services. The work context logging service is dedicated to collecting executions of tasks corresponding to the task model (delivered by the task detection agent). Logging information consists of a user identifier, a task identifier, and an optional timestamp (depending on privacy settings). The second logging service, the resource activity logging service, collects all activities related to resources presented to users. Such actions are reading documents, engaging in learning events, or contacting another user. Both services receive their data from work context observation components running on the APOSDLE client applications (see Figure 1).
Incoming data is transformed into the format required by the APOSDLE user model and stored in a database backend. Taking the scenario introduced in Section 1, the work context logging service would update the user model with Laura's current task (human activity models). Other actions she might take while performing the task are logged by the resource activity logging service.

In order to allow users to examine the information WIL services have gathered, APOSDLE offers two production services. The usage data history service delivers a history of task executions and all resource-based actions. The output of this service is basically a history of all events, including all KIEs. Another feature is that relations between events are also preserved. It provides a way to visualize which steps users have undertaken when executing a certain task. It also offers a more in-depth view into the data generated by different services; for example, it can provide an overview of how learning goals evolved over time based on the tasks users have executed. In our scenario, the usage data history service provides Laura with an overview of the knowledge the user model assumes she has acquired about her current task. It could also help her get to know the people who have been recommended to her by an inference service (by showing her a top-ten list of tasks performed by a recommended user). The evaluation service is another kind of production service. It is specially designed to export different aspects of usage data for evaluation outside the APOSDLE system. In APOSDLE, this service generates files containing detailed information about task executions, system usage, and information from inference services.
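The counter-based user model maintenance described in Section 3.3 — incrementing a task counter and propagating the increment to all topics mapped to the task by the learning goal model — might look roughly as follows. The class and method names are our own illustration, not APOSDLE's actual interfaces.

```java
import java.util.*;

/** Illustrative overlay user model: counters per task and per domain topic. */
public class OverlayUserModel {
    private final Map<String, Integer> taskCounts = new HashMap<>();
    private final Map<String, Integer> topicCounts = new HashMap<>();
    // Learning goal model (simplified): task -> topics it requires knowledge about.
    private final Map<String, Set<String>> taskTopics;

    public OverlayUserModel(Map<String, Set<String>> taskTopics) {
        this.taskTopics = taskTopics;
    }

    /** Invoked whenever the logging service records a task execution. */
    public void logTaskExecution(String task) {
        taskCounts.merge(task, 1, Integer::sum);
        // Propagate: the user is assumed to know all topics mapped to the task.
        for (String topic : taskTopics.getOrDefault(task, Set.of())) {
            topicCounts.merge(topic, 1, Integer::sum);
        }
    }

    public int taskCount(String task)   { return taskCounts.getOrDefault(task, 0); }
    public int topicCount(String topic) { return topicCounts.getOrDefault(topic, 0); }

    public static void main(String[] args) {
        // Example from the text: 'user interviews' maps to two topics.
        OverlayUserModel m = new OverlayUserModel(Map.of(
            "user interviews", Set.of("structured interviews", "card sorting")));
        m.logTaskExecution("user interviews");
        m.logTaskExecution("user interviews");
        System.out.println(m.topicCount("card sorting")); // 2
    }
}
```

This keeps a value for every topic a user has touched; topics never encountered simply default to zero, matching the idea that the model holds a value for each user and topic at any time.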
Fig. 1. Interaction of the APOSDLE user model and user model services with APOSDLE client applications
As one of the most important inference services, the learning need service computes a learning need for a user. Its design is driven by the goal of supporting knowledge workers based on their knowledge level. A user's learning need is inferred in three steps. Starting with the user's current task, the user model is queried to retrieve the required learning goal vector r for this task. The vector r represents, for all learning goals of the domain, whether or not they are required to perform the task. The user model is then queried again, with the required learning goal vector as parameter, to retrieve the current knowledge levels vector k for the user. The vector k consists of numeric counters for all learning goals of the domain. Each counter represents the knowledge gained by the user for a learning goal: the higher the count, the more knowledge has been gained. The second step calculates the knowledge gap the user might have for the task. A knowledge gap vector g is obtained by normalizing the current knowledge levels vector k and subtracting it from the required learning goal vector r. The resulting vector g provides values ranging from 0 (learning goal might have been reached, great experience) to 1 (learning goal was not addressed until now, little to no experience). The third step generates the learning need based on the knowledge gap calculated in step two. The less experience a user has acquired for a learning goal (high value in g), the higher the rank of the learning goal. The 'most required' learning goal is therefore listed at the top of the learning need. The learning need is used by the APOSDLE system in two ways. An application running in the working environment of the user visualizes the result as a ranked list. The first learning goal is automatically pre-selected, which invokes an information retrieval service to find resources relevant for the learning need. In the case of Laura the learning need service
recommends learning goals that help her accomplish the task human activity modelling. Based on these learning goals, an information retrieval system provides her with resources previously created within the organization. The learning need service also provides other services with the current knowledge levels of users. This feature is utilized, for example, by the people recommender service described below as the basis for its inferences.

The people recommender service aims at finding people within the organization who have expertise related to the current learning goal of the user. This service provides functionality similar to the expert finding systems described in [22]. Users specialised in certain topics are represented in the user model with high knowledge levels for these topics. Other users can then be individually provided with colleagues having equal or higher experience. Compared to the MetaDoc system [23], this service uses a more dynamic way of identifying experts. Knowledgeable users are identified by comparing the current knowledge levels vectors of all users with the knowledge level vector of the user who will receive the recommendation. To infer knowledgeable users, the people recommender service utilises the learning need service to retrieve the current knowledge levels vectors of all users. The next step removes all users with lower knowledge levels than the user receiving the recommendations. The remaining users are then ranked according to their knowledge levels in the current knowledge level vectors; the most knowledgeable user is ranked highest. The service can be configured to use the availability status of users as a ranking criterion, which allows recommending only users who are currently available. Returning to the scenario, the people recommender service recommends a list of people within Laura's organization who are more experienced in human activity modelling than the service assumes Laura currently is.
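The three-step learning need computation could be sketched as follows. The max-normalization of k and the clamping of g to [0, 1] are our assumptions, since the text only states that k is normalized and subtracted from r; the names are illustrative, not APOSDLE's actual code.

```java
import java.util.*;

/** Illustrative sketch of the learning need service's gap computation. */
public class LearningNeedSketch {

    /**
     * r[i] = 1 if learning goal i is required for the current task, else 0.
     * k[i] = knowledge counter for goal i (from the overlay user model).
     * Returns the indices of required goals, ranked by descending gap
     * g = r - normalize(k): the least-mastered goal comes first.
     */
    public static List<Integer> rankLearningNeed(int[] r, int[] k) {
        int max = Arrays.stream(k).max().orElse(0);
        double[] g = new double[r.length];
        for (int i = 0; i < r.length; i++) {
            double kNorm = (max == 0) ? 0.0 : (double) k[i] / max; // assumption: max-normalization
            g[i] = Math.max(0.0, r[i] - kNorm);                    // assumption: clamp at 0
        }
        List<Integer> goals = new ArrayList<>();
        for (int i = 0; i < r.length; i++) if (r[i] == 1) goals.add(i);
        goals.sort((a, b) -> Double.compare(g[b], g[a])); // least experience ranked first
        return goals;
    }

    public static void main(String[] args) {
        int[] r = {1, 1, 0};  // goals 0 and 1 are required for the current task
        int[] k = {4, 0, 2};  // the user already has experience with goal 0
        System.out.println(rankLearningNeed(r, k)); // goal 1 ranked before goal 0
    }
}
```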
APOSDLE implements two control services. The usage data control service allows users to modify and delete any usage data. APOSDLE clients present users with a task history provided by the usage data history service, and invoke the usage data control service to delete task executions selected by users. A dedicated privacy component (part of the APOSDLE server) also accesses this service to enforce certain privacy policies on usage data.

The APOSDLE environment is implemented as a Java client-server architecture applying the SOA paradigm to structure the server functionality into services. A dedicated component on the server exposes all services as web services. Client applications [24] can connect to one or more services depending on the features needed. Within the server, all services are connected to one or more components implementing the actual functionality. All user model services run independently and communicate with the user model or other services through their exposed interfaces. In the second APOSDLE prototype, orchestration is done by manually specifying, for each service, where the other services are located.
4 Related Systems

Throughout the previous sections we have discussed related research with respect to our research approach. In this section we provide a short overview of other research projects dealing with related user modelling architectures. As we are not aware of any
adaptive learning systems specifically dedicated to WIL, we briefly discuss similar approaches in the area of adaptive e-learning and compare them to APOSDLE.

KnowledgeTree [12] is a distributed architecture for adaptive e-learning that separates its functionality into different servers and services. Like APOSDLE, KnowledgeTree utilizes a centralised, event-based user model to track student activities. Adaptations of functionality and content are separated from the user model into servers (similar to the APOSDLE user model services). A main difference lies in the way user events are collected. KnowledgeTree collects usage data from users interacting with web sites (portals) providing, e.g., learning courses about programming. APOSDLE does not provide dedicated sites to record data, but focuses on collecting usage data from the users' working environments. Following this approach, APOSDLE is open to a large set of data sources (applications) providing input to refine user models.

ELENA [25] is another architecture providing personalized support for learners by following the paradigm of a service-oriented architecture. ELENA describes its services in great detail in an ontology and uses several interoperability standards. ELENA integrates all services into a central top-level service communicating with client applications. Services are also adapted based on a learner profile, and offered as web services using WSDL descriptions. We see the approach of ELENA as complementary to the APOSDLE approach, as it focuses more on interoperability and the integration of existing learning repositories rather than on using the document repositories available in companies as sources of learning material.
5 Pilot Evaluations

For APOSDLE's second prototype, our aim was to formatively evaluate the user model and its maintenance mechanisms. The outcomes of these evaluations now inform the re-design of the user model for APOSDLE's third prototype.

Evaluating a user model and its maintenance is difficult for several reasons [26] (for example, in most cases a control group using a version of the system without a user model or with a different user model is not available), and a number of preconditions have to be fulfilled: A heterogeneous group of persons with different levels of knowledge of the learning domain in question needs to be available. Typically, in the case of evaluations within organizations, such a group of persons is hard to find. In order to compare the user model with the 'true' knowledge of the users, the knowledge of all participants in the evaluation study must be known. This again is a hard challenge in the context of WIL. For the sake of external validity of the evaluation study, i.e. in order to generalize the outcomes to real-world WIL situations, the participants should execute genuine work tasks and interpret the usefulness of the offered learning content with respect to their specific situation.

With these challenges in mind, three pilot evaluations were designed and conducted to assess the WIL user model and WIL user model services for APOSDLE's second prototype: (i) a paper-based lab study with students of requirements engineering, (ii) an analysis of user log data from the application of APOSDLE's second prototype in different application domains, and (iii) an observation of students in the domain of statistical data analysis trying to learn with the prototype. The three pilot evaluations are briefly described below. Due to the low
validity of the external criteria in study (i), the non-authentic behaviour of users in study (ii), the low number of participants in study (iii), and because the participants in studies (i) and (iii) were students in a lab and not workers at their workplaces, the outcomes of these studies cannot be generalized. Still, they serve as valuable input for the further development of the APOSDLE user model and the KIE approach in general.

A paper-based lab study (i) was conducted with 17 students of the learning domain requirements engineering in order to compare different algorithms for updating the user model based on the students' task performance. Self-appraisal, appraisal by a supervisor, and a personal learning need assessment were used as criteria against which the outcomes of the different updating algorithms were tested. Self-appraisal was included in the study in order to investigate whether it would be a useful criterion for comparison in realistic WIL settings (where performance tests of the workers cannot be applied). The results caution against the use of self-appraisal information in WIL. A low correlation was found between predicted task performance (based on the algorithms) and self-assessed task performance, which, however, might be due to the low validity of the criterion variable (self-assessed task performance). Therefore, no definite conclusions could be drawn on the usefulness of the algorithms for the APOSDLE user model.

The second APOSDLE prototype suggests (based on the current work task) a ranked list of learning goals the user should tackle in order to improve her performance. For each learning goal the user can select between different learning activities (reading a text, performing a learning event, or contacting a person). Each learning activity of a user is logged by the evaluation service together with the rank position of the learning goal the learning activity is related to. In study (ii), this information was analysed.
Our hypothesis was that the rank position of a learning goal should be correlated with the frequency with which a learning activity was performed; that is, the closer the rank position to the top of the list, the more often a learning activity should be performed. Limited log data is available for 35 users in four application domains, with at least 8 different users in each domain. The results of our log data analysis show low correlations between the rank position of a learning goal and the frequency with which a learning activity was executed. However, since the users did not use the APOSDLE prototype regularly during their work but instead used it to explore the possibilities of WIL, the log data represents non-authentic user behaviour. Therefore, we are not ready to reject our ranking algorithm; instead, further examination is needed.

Despite the severe limitations of study (ii), we learned that the approach of task-based maintenance of the user model is extremely sensitive to misuse. If a user does not use the system 'seriously' but just 'plays around with it' (or if the user unintentionally clicks on several tasks), the task-based approach quickly leads to inappropriate user models and inappropriate rankings of learning goals. For the further development of the KIE approach, this points to the necessity of basing the inferences on more reliable input data than just the information that the user 'clicked on a task'. This could be realized by adding a condition, e.g. regarding a task execution as a KIE only if the user did not click on any other task within a certain time span (e.g. 10 sec). A further possibility for improving the reliability of the KIE would be to look at additional KIE (e.g. the user carries out a learning activity about a topic).

Finally, in study (iii), 5 students were observed and interviewed while they were trying to learn with APOSDLE in the learning domain of statistical data analysis. Our
aim was to investigate the effects of the learning goal ranking on the actual performance of users in realistic tasks which they had not been able to solve before the study (pretest). In the pilot study, control groups were used to compare three different versions of the ranking algorithm: (a) the ranking algorithm as designed for APOSDLE (taking into account both the requirements of the task and the knowledge of the user), (b) a shuffled list of the learning goals required for the task at hand (taking into account the requirements of the task but not the knowledge state of the user), and (c) a set of randomly selected learning goals (taking into account neither the requirements of the task nor the knowledge state of the user). Each participant had to solve three different tasks, one for each version of the algorithm. With versions (a) and (b), the previously unknown tasks could be solved by all participants, whereas the task could be solved by none of them when algorithm (c) was applied. Additionally, a slight difference in the users' behavior was found between versions (a) and (b): in the case of version (a), users tended to select fewer learning goals and more frequently carried out learning activities for learning goals at the top of the list than with version (b) of the ranking algorithm. This serves as a first indication that the ranking algorithm is useful. Of course, further experimentation with larger samples is needed.
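The condition proposed after study (ii) — counting a task selection as a KIE only if the user does not click on another task within a certain time span — could be realized with a simple dwell-time filter in front of the logging service. The 10-second threshold follows the example in the text; everything else (class, method names) is our own illustration.

```java
/** Illustrative dwell-time filter: a task click counts as a KIE only if no
 *  other task is clicked within a minimum dwell time (e.g. 10 s). */
public class TaskKieFilter {
    private final long minDwellMillis;
    private String pendingTask;
    private long pendingSince;

    public TaskKieFilter(long minDwellMillis) { this.minDwellMillis = minDwellMillis; }

    /** Returns the previous task to log as a KIE, or null if it is discarded. */
    public String onTaskClick(String task, long nowMillis) {
        String confirmed = null;
        if (pendingTask != null && nowMillis - pendingSince >= minDwellMillis) {
            confirmed = pendingTask; // user stayed on it long enough: count it
        }
        pendingTask = task;          // otherwise the earlier click is dropped
        pendingSince = nowMillis;
        return confirmed;
    }

    public static void main(String[] args) {
        TaskKieFilter f = new TaskKieFilter(10_000);
        System.out.println(f.onTaskClick("A", 0));      // null (nothing pending yet)
        System.out.println(f.onTaskClick("B", 3_000));  // null (A dropped: only 3 s)
        System.out.println(f.onTaskClick("C", 20_000)); // B (held 17 s, counts as KIE)
    }
}
```

A filter like this would make the user model robust against 'playing around' and accidental clicks, at the cost of delaying the KIE until the dwell time has elapsed.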
6 Conclusion and Outlook

This contribution has presented our approach to user model design based on KIE for WIL environments, in which unobtrusive assessment of users' knowledge levels is essential. A variety of hybrid user model services operate on this user model in order to add observed KIE, to provide its information (possibly in a filtered and aggregated manner) to other WIL applications, to infer knowledge levels and learning needs, and to allow users to examine and adapt their user model data. The APOSDLE environment serves as a reference implementation of the proposed concepts.

In APOSDLE's first and second prototypes, the maintenance of the user profile was based solely on past tasks performed. While there is some evidence that in fact most learning at the workplace is connected to performing a task, and that task performance is a good indicator of available knowledge in the workplace, this restriction to tasks performed certainly limits the types and number of assessment situations that are taken into account. It is evident that a user's knowledge and skills also manifest themselves through other types of user interactions with the WIL system. For example, a user who seeks help while performing a task might be in a different knowledge state than a user who provides help to others. Additionally, the tasks a user performs may be driven by organizational constraints or simply by task or job assignments, and they may therefore draw only a partial picture of the knowledge and skills a user has available. Moreover, in study (ii) the approach of using tasks as the only basis for user model maintenance turned out to be extremely error-prone and vulnerable to fallacious user behavior, such as accidentally clicking on tasks or 'playing around with the system'. In order to improve the knowledge level assessment of the APOSDLE environment, we are currently working on including a variety of different KIE, such as collaboration events and document creation.
In addition, we plan to incorporate negative KIE, such as unsuccessful task executions. In doing so, instead of inferring
the minimum competency state, i.e., the competencies a worker has available at the minimum, the 'real' competency state of a worker could be approximated.
Acknowledgements

APOSDLE is partially funded under the FP6 of the European Commission within the IST Workprogramme (project number 027023). The Know-Center is funded within the Austrian COMET Program - Competence Centers for Excellent Technologies - under the auspices of the Austrian Federal Ministry of Transport, Innovation and Technology, the Austrian Federal Ministry of Economy, Family and Youth, and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.
References

1. Lindstaedt, S.N., Ley, T., Mayer, H.: Integrating Working and Learning in APOSDLE. In: Proceedings of the 11th Business Meeting of the Forum Neue Medien, University of Vienna, Austria, November 10-11 (2005)
2. Eraut, M., Hirsh, W.: The Significance of Workplace Learning for Individuals, Groups and Organisations. SKOPE, Oxford & Cardiff Universities (2007)
3. Christl, C., Ghidini, C., Guss, J., Lindstaedt, S., Pammer, V., Scheir, P., Serafini, L.: Deploying semantic web technologies for work integrated learning in industry - A comparison: SME vs. large sized company. In: Sheth, A.P., Staab, S., Dean, M., Paolucci, M., Maynard, D., Finin, T., Thirunarayan, K., et al. (eds.) ISWC 2008. LNCS, vol. 5318, pp. 709–722. Springer, Heidelberg (2008)
4. Jameson, A.: Adaptive interfaces and agents. In: Jacko, J.A., Sears, A. (eds.) Human-computer interaction handbook, pp. 305–330. Erlbaum, Mahwah (2003)
5. Kass, R., Finin, T.: Modeling the User in Natural Language Systems. Computational Linguistics 14(3), 5–22 (1988)
6. Wang, Y., Kobsa, A.: Respecting Users' Individual Privacy Constraints in Web Personalization. In: Conati, C., McCoy, K., Paliouras, G. (eds.) UM 2007. LNCS (LNAI), vol. 4511, pp. 157–166. Springer, Heidelberg (2007)
7. Benyon, D.R., Murray, D.M.: Adaptive systems; from intelligent tutoring to autonomous agents. Knowledge-Based Systems 6(4), 197–219 (1993)
8. Jameson, A.: Adaptive interfaces and agents. In: Sears, A., Jacko, J.A. (eds.) Human-computer interaction handbook, pp. 305–330. Erlbaum, Mahwah (2003)
9. Goecks, J., Shavlik, J.: Learning users' interests by unobtrusively observing their normal behavior. In: IUI 2000: International Conference on Intelligent User Interfaces, pp. 129–132 (2000)
10. Schwab, I., Kobsa, A.: Adaptivity through Unobtrusive Learning. KI Special Issue on Adaptivity and User Modeling 3, 5–9 (2002)
11. Wolpers, M., Martin, G., Najjar, J., Duval, E.: Attention Metadata in Knowledge and Learning Management. In: Proceedings of I-Know 2006 (2006)
12. Brusilovsky, P.: KnowledgeTree: A Distributed Architecture for Adaptive E-Learning. In: WWW 2004, New York, USA, May 17-22, pp. 104–113 (2004)
13. Fox, M., Grueninger, M.: Enterprise modeling. AI Magazine 19(3), 109–121 (1998)
14. Billett, S.: Constituting the Workplace Curriculum. Journal of Curriculum Studies 38(1), 31–48 (2006)
15. Lindstaedt, S.N., Ley, T., Scheir, P., Ulbrich, A.: Applying Scruffy Methods to Enable Work-integrated Learning. Upgrade: The European Journal of the Informatics Professional 9(3), 44–50 (2008)
16. Lindstaedt, S.N., Scheir, P., Lokaiczyk, R., Kump, B., Beham, G., Pammer, V.: Knowledge Services for Work-integrated Learning. In: Proceedings of the European Conference on Technology Enhanced Learning (EC-TEL) 2008, Maastricht, The Netherlands, September 16-19, pp. 234–244 (2008)
17. Ghidini, C., Rospocher, M., Serafini, L., Kump, B., Pammer, V., Faatz, A., Zinnen, A., Guss, J., Lindstaedt, S.: Collaborative Knowledge Engineering via Semantic MediaWiki. In: Proceedings of I-Semantics 2008, Graz, Austria, September 3-5, pp. 134–141 (2008)
18. Korossy, K.: Extending the theory of knowledge spaces: A competence-performance approach. Zeitschrift für Psychologie 205, 53–82 (1997)
19. Doignon, J., Falmagne, J.: Spaces for the assessment of knowledge. International Journal of Man-Machine Studies 23, 175–196 (1985)
20. Ley, T., Ulbrich, A., Scheir, P., Lindstaedt, S.N., Kump, B., Albert, D.: Modelling Competencies for Supporting Work-integrated Learning in Knowledge Work. Journal of Knowledge Management 12(6), 31–47 (2008)
21. Lokaiczyk, R., Godehardt, E., Faatz, A., Goertz, M., Kienle, A., Wessner, W.M., Ulbrich, A.: Exploiting Context Information for Identification of Relevant Experts in Collaborative Workplace-Embedded E-Learning Environments. In: Proceedings of EC-TEL 2007, Crete, Greece, September 15-20, pp. 217–231 (2007)
22. Yimam-Seid, D., Kobsa, A.: Expert finding systems for organizations: Problem and domain analysis and the DEMOIR approach. Journal of Organizational Computing and Electronic Commerce 13(1), 1–24 (2003)
23. Boyle, C.: An adaptive hypertext reading system. User Modeling and User-Adapted Interaction 4(1), 1–19 (1994)
24. APOSDLE Consortium: Second Prototype APOSDLE (2008), http://www.aposdle.tugraz.at/media/multimedia/files/second_prototype_aposdle
25. Dolog, P., Henze, N., Nejdl, W., Sintek, M.: Personalization in distributed e-learning environments. In: Proceedings of the 13th International World Wide Web Conference, Alternate Track Papers & Posters, pp. 170–179 (2004)
26. Chin, D.N.: Empirical Evaluation of User Models and User-Adapted Systems. User Modeling and User-Adapted Interaction 11, 181–194 (2001)
Adaptive Navigation Support for Parameterized Questions in Object-Oriented Programming

I-Han Hsiao, Sergey Sosnovsky, and Peter Brusilovsky

School of Information Sciences, University of Pittsburgh, USA
{ihh4,sas15,peterb}@pitt.edu
Abstract. This paper explores the impact of adaptive navigation support on student work with parameterized questions in the domain of object-oriented programming. In the past, we developed the QuizJET system, which is able to generate and assess parameterized Java programming questions. More recently, we developed the JavaGuide system, which enhances QuizJET questions with adaptive navigation support. This paper introduces QuizJET and JavaGuide and reports the results of classroom studies, which explored the impact of these systems and assessed the added value of adaptive navigation support. The results of the studies indicate that adaptive navigation support encourages students to use parameterized questions more extensively. Students are also 2.5 times more likely to answer parameterized questions correctly with adaptive navigation support than without such support. In addition, we found that adaptive navigation support especially benefits weaker students, helping to close the gap between strong and weak students.

Keywords: adaptive navigation support, parameterized quizzes, self-assessment, object-oriented programming.
1 Introduction
Parameterized questions and exercises [1] emerged as an active research area in the field of e-learning. This technology allows generating many objective questions from a relatively small number of templates created by content authors. Using randomly generated parameters, every question template is able to produce many similar, yet sufficiently different questions. As demonstrated by a number of projects such as CAPA [2], WebAssign [3], EEAP282 [4], and Mallard [5], parameterized questions can be used effectively in a number of domains, making it possible to increase the number of assessment items, decrease authoring effort, and reduce cheating. The work of our research group focused on exploring the value of parameterized questions in the area of computer programming. We have developed and explored QuizPACK [1], a system which is able to generate and assess parameterized questions for C programming. Unlike the majority of modern systems for automatic assessment of programming exercises [6-8], which focus on program-writing exercises, QuizPACK focused on program-tracing questions. This kind of question is known to be very important [9]; however, there are almost no question generation and assessment
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 88–98, 2009. © Springer-Verlag Berlin Heidelberg 2009
systems that can work with these questions. QuizPACK was evaluated in a series of classroom studies, which confirmed the educational value of this technology [1, 10]. We also found that parameterized questions can be successfully combined with the technology of adaptive navigation support. The use of adaptive navigation support to guide students to the most appropriate questions was found to increase student ability to answer questions correctly and to encourage them to use the system more extensively (which, in turn, positively impacted their knowledge) [11]. While we confirmed the value of adaptive navigation support for parameterized questions in several studies, our earlier research left a number of questions unanswered. First, parameterized questions in the area of C programming were not as diverse, from the complexity point of view, as questions in other areas such as physics [2]. As a result, it remained unclear whether adaptive navigation support can work in the context of a broader range of question difficulty, from relatively simple to very difficult. Second, due to the decreased number of students in C programming classes, we were not able to separately assess the impact of this intelligent guidance technology on stronger and weaker students, which is an important and typical research question. To answer these questions, we expanded our work on parameterized questions to the more sophisticated domain of object-oriented Java programming, which is now the language of choice in most introductory programming classes. This domain allows us both to introduce questions of much broader complexity and to explore our ideas in larger classes. Capitalizing on our experience with QuizPACK, we developed QuizJET (Java Evaluation Toolkit), which supports authoring, delivery, and evaluation of parameterized questions for Java [12].
A preliminary evaluation demonstrated that parameterized questions work well in this domain: we found a significant relationship between the amount of work done by students in QuizJET and their performance. Working with QuizJET, students were able to improve their in-class weekly quiz scores. We also found that their success in QuizJET (a higher percentage of correct answers) correlates with high scores on the final exam. This paper presents the results of our second study of parameterized questions for object-oriented Java programming. The goal of this study was to assess the added value of adaptive navigation support for student work with parameterized questions in this domain. In addition to exploring this technology combination in a new domain, the study specifically attempted to assess the impact of adaptive navigation support on student work with questions of different complexity, as well as the impact of this technology on weaker and stronger students. In this study we compared the impact of parameterized questions in two introductory Java programming classes featuring the same instructor, syllabus, and student cohort (undergraduate information science students at the University of Pittsburgh). One of these classes used the QuizJET system and the other used the JavaGuide system, which is QuizJET enhanced with the adaptive navigation support service QuizGuide [13, 14]. In the following sections we briefly present the QuizJET and JavaGuide systems and report the results of our classroom studies. We conclude with a summary of results and some discussion of future work.
2 QuizJET: Parameterized Questions for Object-Oriented Programming in Java
The QuizJET system was developed to explore the technology of parameterized questions in the challenging domain of object-oriented programming. QuizJET supports authoring, delivery, and evaluation of parameterized questions for the Java programming language. It covers a broad range of Java topics, from Java basics to such advanced topics as objects, classes, polymorphism, inheritance, and exceptions.
Fig. 1. The presentation (top) and the evaluation results (bottom) of a QuizJET question
The delivery component of QuizJET allows students to access each question pattern through a Web browser using a unique link (in the original QuizJET, these links were included in the course portal). Once a link to the question is accessed, QuizJET generates a unique instantiation of the question (Figure 1), which features a small Java program. The student's challenge is to mentally execute the program and answer a question such as: "What will be the final value of the specific variable?" or "What will be printed in the console window?" A tabbed interface design supports straightforward access to the full code of the problem, one Java class per tab. The driver class, named Tester Class and containing the main function, is presented on the first tab, while the other tabs show supporting classes (such as BankAccount in Figure 1). The tabbed arrangement is characteristic of object-oriented programming, where even a simple object-oriented program may require one or more imported classes. To answer a question, students fill in the answer in the input field and hit Submit. The system immediately reports the evaluation results and the correct answer (Figure 1, bottom). Whether their results were correct or not, students can hit the Try Again button to assess the same question pattern with different parameters. This function helps students achieve mastery of the topic. While it looks relatively simple from the user's point of view, this functionality is supported by sophisticated authoring (performed with a separate authoring component), generation, and evaluation mechanisms, which are described in [12].
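The paper does not detail QuizJET's internal template format; purely as an illustration of the mechanism described above, a parameterized program-tracing question can be sketched as a template whose parameters are drawn at random and whose correct answer is computed from the same parameters (all names and the template format here are assumptions, not QuizJET's actual implementation):

```python
import random

# Hypothetical sketch (not QuizJET's actual format): a question template
# pairs a parameterized program text with a function computing the answer.
class QuestionTemplate:
    def __init__(self, code_template, answer_fn, param_ranges):
        self.code_template = code_template
        self.answer_fn = answer_fn          # computes the correct answer
        self.param_ranges = param_ranges    # ranges for random parameters

    def instantiate(self):
        """Generate one question instance with fresh random parameters."""
        params = {name: random.randint(lo, hi)
                  for name, (lo, hi) in self.param_ranges.items()}
        code = self.code_template.format(**params)
        return code, self.answer_fn(params)

# Example: "What will be printed in the console window?"
template = QuestionTemplate(
    code_template=(
        "BankAccount acc = new BankAccount({start});\n"
        "acc.deposit({amount});\n"
        "System.out.println(acc.getBalance());"
    ),
    answer_fn=lambda p: p["start"] + p["amount"],
    param_ranges={"start": (10, 99), "amount": (1, 9)},
)

code, correct = template.instantiate()
```

Hitting "Try Again" would simply call `instantiate()` again, yielding the same pattern with different parameters.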
3 JavaGuide: Adaptive Navigation Support for QuizJET Questions
JavaGuide is an adaptive hypermedia system which provides adaptive navigation support for QuizJET questions. It is not a part of QuizJET, but an independent system, which simply hosts links to QuizJET questions and guides the students to the most appropriate links using adaptive link annotation. In this sense, JavaGuide offers an alternative way for students to access the same set of QuizJET questions. A student may choose to go through a regular course portal, select one of the lectures, and choose one of the questions which a teacher posted to this lecture. Alternatively, a student can go to JavaGuide and select a question with the help of adaptive navigation support. The question presentation and evaluation processes are the same in JavaGuide and "plain" QuizJET. The interface of JavaGuide consists of the quiz navigation area and the quiz presentation area (Figure 2). The navigation area provides the hyperlinks to QuizJET questions, which are grouped into topics. By clicking on the topic name, a user can expand or collapse the questions for the topic. A click on a question loads the question into the quiz presentation area. To assist students in navigating to the appropriate topics and questions, the links to topics are annotated with adaptive icons. JavaGuide uses the "target-arrow" annotation mechanism of QuizGuide [13], which has been successfully used in a number of C programming courses [13, 14]. Each adaptively generated target icon expresses two layers of meaning: knowledge adaptation and goal adaptation. For knowledge adaptation, topic-based modeling is adopted. Each topic contains several educational activities identified by the course instructor. Student progress with these activities defines the user's understanding of the topic. The number of arrows in the target represents the growth of the student's
knowledge of the topic. Goal adaptation is supported by a time-based mechanism which presents the relevant topics according to the course lecture sequence. The color of the target expresses the relevance of the topic to the current course goal. The icon for the current topic is shown as bright blue, its prerequisites are light blue, while target icons for other, earlier topics are gray. A crossed icon indicates that the student is not ready for the topic yet. It is easy to notice that the JavaGuide annotation approach integrates prerequisite-based adaptation, which advises whether an item is ready or not ready, and progress-based annotation, which displays the amount of knowledge already acquired by the student. Both approaches are relatively popular and well explored in adaptive educational hypermedia. For example, the prerequisite-based approach is used in AHA [15], ELM-ART [16], and KBS-Hyperbook [17], while progress-based annotation is used in INSPIRE [18] and NavEx [11]. An interesting feature of JavaGuide and its predecessor QuizGuide is the use of both annotation approaches in parallel.
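The two annotation layers described above can be sketched roughly as follows (a simplified illustration only; the four-step arrow scale and the function signature are assumptions, not JavaGuide's actual implementation):

```python
# Simplified sketch of target-arrow annotation (the arrow scale and the
# signature are assumed, not JavaGuide's code). Arrows encode knowledge,
# color encodes relevance to the current course goal, and a cross marks
# a topic whose prerequisites are not yet met.
def annotate_topic(progress, topic, current_topic, current_prereqs,
                   prerequisites_met):
    """Return (arrows, color, crossed) for one topic link.

    progress          -- fraction of the topic's activities mastered (0..1)
    current_prereqs   -- set of prerequisite topics of the current goal
    prerequisites_met -- whether the student is ready for this topic
    """
    # Knowledge layer: more arrows in the target = more knowledge.
    arrows = min(3, int(progress * 4))      # 0..3 arrows (assumed scale)

    # Goal layer: color follows the course lecture sequence.
    if topic == current_topic:
        color = "bright blue"               # current course goal
    elif topic in current_prereqs:
        color = "light blue"                # prerequisite of the goal
    else:
        color = "gray"                      # earlier / other topic

    crossed = not prerequisites_met         # student not ready yet
    return arrows, color, crossed
```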
Fig. 2. JavaGuide Interface
4 Classroom Studies and Evaluation Results The classroom study of our technology was performed in two undergraduate introductory programming classes offered by the School of Information Sciences, University of Pittsburgh. Online self-assessment quizzes were used as one of the non-mandatory course tools. QuizJET was used in the Spring semester of 2008 and JavaGuide was used in the Fall semester of 2008. Both tools featured the same set of quizzes. All student activity with the system was recorded. For every student attempt to answer a question, the system stored a timestamp, the user’s name, the question, quiz, and session ids, and the correctness of the answer.
4.1 Basic Statistics
In both classes, student work with the systems was analyzed on two levels: overall and within a session. On each level we explored the following performance parameters: Attempts (the total number of questions attempted by the student), Success Rate (the percentage of correctly answered questions), and Course Coverage (the number of distinct topics and the number of distinct questions attempted by the student). In addition, we decided to examine all performance parameters separately for students with weak and strong starting knowledge of the subject. This decision was guided by the results of earlier research [19, 20], which demonstrated that the starting level of knowledge of the subject may affect the impact of adaptive navigation support on student performance. To achieve this separation, the students were split into two groups based on their pre-test scores (ranging from a minimum of 0 to a maximum of 20). Table 1 compares student performance in QuizJET and JavaGuide. Strong students scored 10 or more points on the pre-test and weak students scored less than 10 points. The table shows active use of JavaGuide by both strong and weak students. It also indicates a remarkable increase in all performance parameters in the presence of adaptive navigation support. These results confirm that the impact of adaptive navigation support on student performance, which was originally discovered in the domain of C programming, is sufficiently universal to be observed in a different domain and with a larger variety of question complexity.

Table 1. System Usage Summary
Parameters                          JavaGuide (Fall 2008)                        QuizJET (Spring 2008)
                                    Strong (n=5)  Weak (n=17)  All Users (n=22)  All Users (n=31)
Overall User Statistics
  Attempts                          131.2         123.82       125.50            41.71
  Success Rate                      58.87%        58.20%       58.31%            32.15%
  Distinct Topics                   11.00         12.00        11.77             4.94
  Distinct Questions                41.60         47.53        46.18             17.23
Average User Session Statistics
  Attempts                          29.82         30.51        30.34             21.50
  Distinct Topics                   2.50          2.95         2.85              2.55
  Distinct Questions                9.45          11.71        11.16             8.88
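The performance parameters above can be computed from the interaction log in a straightforward way; the following sketch (with an assumed log format, not the systems' actual schema) illustrates the overall statistics and the split by pre-test score:

```python
# Sketch: compute Attempts, Success Rate, and Course Coverage from an
# interaction log, split by pre-test score. The log format is assumed.
def user_stats(log):
    """log: list of (user, topic, question, correct) tuples."""
    stats = {}
    for user, topic, question, correct in log:
        s = stats.setdefault(user, {"attempts": 0, "correct": 0,
                                    "topics": set(), "questions": set()})
        s["attempts"] += 1
        s["correct"] += int(correct)
        s["topics"].add(topic)
        s["questions"].add(question)
    return {u: {"attempts": s["attempts"],
                "success_rate": s["correct"] / s["attempts"],
                "distinct_topics": len(s["topics"]),
                "distinct_questions": len(s["questions"])}
            for u, s in stats.items()}

def split_by_pretest(pretest_scores, threshold=10):
    """Strong students scored `threshold` or more points on the pre-test."""
    strong = {u for u, score in pretest_scores.items() if score >= threshold}
    weak = set(pretest_scores) - strong
    return strong, weak
```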
4.2 The Impact of Guidance on Student Work with Questions of Different Complexity
Our next goal was to explore the significance of the obtained data and the impact of adaptive navigation support on user work with questions of different complexity. To do that, all questions were categorized into three complexity levels (Easy, Moderate, and Hard) based on the number of involved concepts (which ranged from 4 to 287). A question with 15 or fewer concepts is considered Easy, one with 16 to 90 concepts Moderate, and one with more than 90 concepts Hard. In total, both systems included 41 easy, 41 moderate, and 19 hard questions. We conducted two separate 2 × 3 ANOVAs to evaluate student performance, measured by Attempts and Success Rate, across the two systems and three complexity levels. The means and standard errors for each group are reported in Table 2.
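The categorization rule can be expressed directly (thresholds as stated in the text; a trivial sketch):

```python
# Complexity level from the number of concepts involved in a question
# (thresholds from the text: <=15 Easy, 16-90 Moderate, >90 Hard).
def complexity_level(num_concepts):
    if num_concepts <= 15:
        return "Easy"
    elif num_concepts <= 90:
        return "Moderate"
    return "Hard"
```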
Table 2. Means and standard error of Attempts and Success Rate, by system and complexity level
DV                       Complexity Level   QuizJET (Spring 2008)   JavaGuide (Fall 2008)
                                            (n=31) M±SE             (n=22) M±SE
Total Attempts           Easy               38.52 ± 8.40            75.77 ± 9.98
                         Moderate           25.06 ± 8.40            41.32 ± 9.98
                         Hard                5.58 ± 8.40             8.41 ± 9.98
Attempts (per question)  Easy                 .94 ± .22              1.85 ± .26
                         Moderate             .61 ± .22              1.01 ± .26
                         Hard                 .29 ± .22               .44 ± .26
Success Rate             Easy               38.00% ± 5.70%          68.73% ± 6.70%
                         Moderate           28.23% ± 5.70%          67.00% ± 6.70%
                         Hard               11.90% ± 5.70%          39.32% ± 6.70%
The first 2 × 3 between-subjects ANOVA was performed on Attempts as a function of System (QuizJET vs. JavaGuide) and Complexity Level (Easy, Moderate, and Hard). We found that JavaGuide received a significantly higher number of Attempts than QuizJET, F(1, 153) = 6.042, p = .015, partial η2 = .038. This result shows that adaptive navigation support encourages students to work with parameterized questions. We also found that students made significantly more Attempts on the easy quizzes in JavaGuide than in QuizJET, F(1, 153) = 7.081, p = .009, partial η2 = .044 (Figure 3, left). No significant differences were found between the systems for the moderate and hard quizzes. The second 2 × 3 between-subjects ANOVA was performed on Success Rate. We found that with JavaGuide, students achieved a significantly higher Success Rate than with QuizJET, F(1, 153) = 40.593, p < .001, partial η2 = .210. For the three complexity levels, the Success Rate was respectively 1.81 (p = .001), 2.37 (p < .001), and 3.30 (p = .002) times higher than in QuizJET. This means that regardless of the complexity of the quizzes, students were, on average, 2.5 times more likely to answer a question correctly when it was accessed with adaptive navigation support than without such support. As shown in Figure 3 (right), the Success Rate for JavaGuide is dramatically higher than for QuizJET. The analysis of the impact of adaptive navigation support on student work with questions of different complexity leads to some interesting observations. First, it seems that adaptive guidance encourages students to do more work early in the course, when the questions are relatively easy, while also preventing them from venturing too fast into the area of very hard questions. Second, the investment of student effort in work with easy questions pays off across all three complexity levels.
The knowledge gained while working with easy questions helped students to achieve better success in dealing with moderate and hard questions as well. This effect is most pronounced in the area of hard questions. While the number of attempts on hard questions is similar in the two groups, the success rate with hard questions is
Fig. 3. The total Attempts (left) and the Success Rate (right) of the two systems at different complexity levels
more than three times larger in the JavaGuide group. Apparently, the prerequisite-based guidance of JavaGuide prepared the students to face complex questions by exploring easier ones. 4.3 The Impact of Guidance on Weak and Strong Students As mentioned above, students were categorized as strong if they scored 10 or more points on the pre-test and as weak otherwise. We discovered that stronger students had a significantly higher Success Rate with the QuizJET system than weaker students did, F(1, 87) = 4.760, p = .032, partial η2 = .052. However, we did not find any significant differences between strong and weak students' Success Rate with the JavaGuide system, F(1, 60) = .007, p = .931, partial η2 < .001. With adaptive navigation support, both strong and weak students achieved similar performance at each complexity level of quizzes. Without such support, there was a greater gap between strong and weak students. Thus, adaptive navigation support can, indeed, adapt to the student's starting level of knowledge, guiding students of both levels to appropriate quizzes. An analysis of the Attempts per question uncovers the mechanism behind this observation. The statistics show that weak students using JavaGuide made significantly more Attempts on easy questions than they did in QuizJET, F(1, 147) = 7.658, p = .006, partial η2 = .050, while stronger students using JavaGuide made significantly more Attempts on harder quizzes, F(1, 147) = 4.089, p = .045, partial η2 = .028. This suggests that JavaGuide indeed guides students to quizzes that match the students' knowledge: weaker students were more often guided to work on easy quizzes, while stronger students were usually led to work on harder quizzes. Figure 4 shows the pattern of differences found between strong and weak students at various complexity levels for the two systems. The means and standard errors for each group are reported in Table 3.
Table 3. Means and standard error of Attempts per question and Success Rate by system, complexity level, and knowledge level

DV                       Knowledge Level  Complexity Level  QuizJET (Spring 2008)  JavaGuide (Fall 2008)
                                                            M±SE                   M±SE
Attempts (per question)  Strong           Easy              1.11 ± .32             1.43 ± .553
                                          Moderate           .59 ± .32             1.32 ± .553
                                          Hard               .39 ± .32              .97 ± .553
                         Weak             Easy               .78 ± .309            1.97 ± .300
                                          Moderate           .63 ± .309             .92 ± .300
                                          Hard               .20 ± .309             .29 ± .300
Success Rate             Strong           Easy              48.27% ± 8.10%         67.80% ± 14.00%
                                          Moderate          39.40% ± 8.10%         59.80% ± 14.00%
                                          Hard              12.07% ± 8.10%         49.00% ± 14.00%
                         Weak             Easy              28.38% ± 7.80%         69.00% ± 7.60%
                                          Moderate          17.75% ± 7.80%         69.12% ± 7.60%
                                          Hard              11.75% ± 7.80%         36.47% ± 7.60%
Fig. 4. The pattern of differences in the Attempts per Question & Success Rate for QuizJET and JavaGuide, on a variety of knowledge and complexity levels
4.4 Subjective Evaluation To evaluate the students' subjective attitudes toward the systems, we administered questionnaires at the end of both semesters. Overall, 97.37% of the students strongly agreed or agreed that the system should be used again in teaching this course and that the online self-assessment quizzes were relevant to what was presented in class. In terms of learning, 91.12% of the students considered that the self-assessment quizzes contributed to their learning in this course, and 71.71% of the students thought that the online self-assessment quizzes provided useful feedback. For further improvement of the systems, we also collected students' opinions on system features: 23% of them hoped that future systems would be available for handheld computers so that the quizzes could be taken anywhere.
5 Summary and Future Work
This paper explored the impact of adaptive navigation support on student work with parameterized questions for object-oriented programming. The results demonstrated that adaptive navigation support encourages students to use parameterized questions more extensively and significantly increases their success rate. Students were, on average, 2.5 times more likely to answer a question correctly with adaptive navigation support than without such support. Most pronounced was the increase in student work with easy questions. By encouraging students to work more on easier questions, the navigation support also prepared them much better to face complex questions. By categorizing the students' initial knowledge levels into strong and weak, we found that adaptive navigation support effectively guided both strong and weak students to the appropriate quizzes and contributed to a uniformly high success rate. According to the subjective evaluation, students perceived the online self-assessment quizzes as helpful to their learning. Most of them appreciated the systems. However, about a quarter of the users considered feedback from the self-assessment quizzes to be lacking. These results pose several new challenges and give us new directions for improvement in the future. Since JavaGuide and QuizGuide were developed from the same set of theories, we are also interested in investigating the differences between online self-assessment quizzes in C and Java. To understand more about the educational value in this context, we plan to perform a more exhaustive evaluation of the systems. Acknowledgments. This material is based upon work supported by the National Science Foundation under Grant No. 0447083.
References 1. Brusilovsky, P., Sosnovsky, S.: Individualized Exercises for Self-Assessment of Programming Knowledge: An Evaluation of QuizPACK. ACM Journal on Educational Resources in Computing 5(3) (2005) 2. Kashy, E., Thoennessen, M., Tsai, Y., Davis, N.E., Wolfe, S.L.: Using networked tools to enhance student success rates in large classes. In: 27th ASEE/IEEE Frontiers in Education Conference, Pittsburgh, pp. 233–237 (1997) 3. Titus, A.P., Martin, L.W., Beichner, R.J.: Web-based testing in physics education: Methods and opportunities. Computers in Physics 12, 117–123 (1998) 4. Merat, F.L., Chung, D.: World Wide Web approach to teaching microprocessors. In: Frontiers in Education Conference, Pittsburgh, PA (1997) 5. Graham, C.R., Swafford, M.L., Brown, D.J.: Mallard: A Java Enhanced Learning Environment. In: WebNet 1997, World Conference of the WWW, Internet and Intranet, Toronto, Canada (1997) 6. Higgins, C., Gray, G., Symeonidis, P., Tsintsifas, A.: Automated assessment and experiences of teaching programming. ACM Journal on Educational Resources in Computing 5(3) (2005) 7. Douce, C., Livingstone, D., Orwell, J.: Automatic test-based assessment of programming: A review. ACM Journal on Educational Resources in Computing 5(3) (2005)
8. Ala-Mutka, K.M.: A survey of automatic assessment approaches for programming assignments. Computer Science Education 15(2), 83–102 (2005) 9. Lister, R., Adams, E.S., Fitzgerald, S., Fone, W., Hammer, J., Lindholm, M., McCartney, R., Moström, J.E., Sanders, K., Seppälä, O., Simon, B., Thomas, L.: A multi-national study of reading and tracing skills in novice programmers. ACM SIGCSE Bulletin 36(4), 119–150 (2004) 10. Sosnovsky, S., Shcherbinina, O., Brusilovsky, P.: Web-based parameterized questions as a tool for learning. In: World Conference on E-Learning, pp. 309–316. AACE (2003) 11. Brusilovsky, P., Sosnovsky, S., Yudelson, M.: Addictive links: The motivational value of adaptive link annotation in educational hypermedia. In: Wade, V.P., Ashman, H., Smyth, B. (eds.) AH 2006. LNCS, vol. 4018, pp. 51–60. Springer, Heidelberg (2006) 12. Hsiao, I., Brusilovsky, P., Sosnovsky, S.: Web-based Parameterized Questions for Object-Oriented Programming. In: E-Learn 2008. AACE, Las Vegas (2008) 13. Brusilovsky, P., Sosnovsky, S., Shcherbinina, O.: QuizGuide: Increasing the Educational Value of Individualized Self-Assessment Quizzes with Adaptive Navigation Support. In: Nall, J., Robson, R. (eds.) World Conference on E-Learning, E-Learn 2004, Washington, DC, USA, pp. 1806–1813 (2004) 14. Brusilovsky, P., Sosnovsky, S., Yudelson, M.: Adaptive Hypermedia Services for E-Learning. In: Proceedings of Workshop on Applying Adaptive Hypermedia Techniques to Service Oriented Environments at the Third International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems, Eindhoven, the Netherlands (2004) 15. De Bra, P., Calvi, L.: AHA! An open Adaptive Hypermedia Architecture. The New Review of Hypermedia and Multimedia 4, 115–139 (1998) 16. Weber, G., Brusilovsky, P.: ELM-ART: An adaptive versatile system for Web-based instruction. International Journal of Artificial Intelligence in Education 12(4), 351–384 (2001) 17. Henze, N., Nejdl, W.: Adaptation in open corpus hypermedia.
International Journal of Artificial Intelligence in Education 12(4), 325–350 (2001) 18. Grigoriadou, M., Papanikolaou, K., et al.: INSPIRE: An INtelligent System for Personalized Instruction in a Remote Environment. In: Third workshop on Adaptive Hypertext and Hypermedia, Sonthofen, Germany, Technical University Eindhoven (2001) 19. Merat, F.L., Chung, D.: World Wide Web approach to teaching microprocessors. In: Frontiers in Education Conference, Pittsburgh, PA (1997) 20. Brusilovsky, P.: Adaptive navigation support in educational hypermedia: the role of student knowledge level and the case for meta-adaptation. British Journal of Educational Technology 34(4), 487–497 (2003)
Automated Educational Course Metadata Generation Based on Semantics Discovery
Marián Šimko and Mária Bieliková
Institute of Informatics and Software Engineering, Faculty of Informatics and Information Technology, Slovak University of Technology, Ilkovičova 3, 842 16 Bratislava, Slovakia {simko,bielik}@fiit.stuba.sk
Abstract. Current educational systems use advanced mechanisms for adaptation by utilizing available knowledge about the domain. However, describing a domain area in sufficient detail to allow accurate personalization is a tedious and time-consuming task. Only a few works address support for teachers by discovering knowledge from educational material. In this paper we present a method for automated metadata generation addressing the educational knowledge discovery problem. We employ several techniques of data mining with regard to the e-learning environment and evaluate the method on a functional programming course.
Keywords: Educational knowledge discovery, metadata generation, domain model, adaptive educational course authoring.
1 Introduction and Related Work
The domain model of an adaptive course represents the area that is the subject of learning. It consists of interlinked concepts – domain knowledge elements related to learning content [1]. The concepts are mutually interconnected, forming a structure similar to a lightweight ontology. In educational systems, concepts are also connected to learning objects, i.e., portions of learning material containing concept instances. Let us consider a programming course containing a textbook chapter describing the Fibonacci sequence. With this learning object, concepts like Fibonacci, recursion, cycle, etc. are associated. Concepts are not restricted to terms appearing within the text, nor to topics of the textbook chapters. We also refer to the concept space, including its relationships, as course metadata, as it contains data about the content being taught. The bottleneck of adaptive educational systems lies in the complexity of authoring. Such systems may contain thousands of presentation pages and hundreds of other fragments of learning material such as examples, explanations, animations, and questions. These numbers are certainly sufficient for the study of a particular subject, but defining relationships between concepts in such a space is not only difficult but practically impossible for a human being. U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 99–105, 2009. © Springer-Verlag Berlin Heidelberg 2009
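The domain model just described – concepts linked to each other and to learning objects – can be sketched as a small data structure (an illustration only; the class and field names are assumptions, not the authors' implementation):

```python
# Illustrative sketch of the domain model described above (names are
# assumptions): concepts are linked to each other and to learning
# objects, with weights expressing relatedness.
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    identifier: str                 # e.g. a textbook chapter
    text: str = ""

@dataclass
class Concept:
    name: str
    # weighted links to learning objects containing the concept's instances
    learning_objects: dict = field(default_factory=dict)
    # weighted links to related concepts (lightweight-ontology structure)
    related: dict = field(default_factory=dict)

fib_chapter = LearningObject("ch-fibonacci", "The Fibonacci sequence ...")
fibonacci = Concept("Fibonacci", learning_objects={"ch-fibonacci": 0.9})
recursion = Concept("recursion", learning_objects={"ch-fibonacci": 0.7})
fibonacci.related["recursion"] = 0.8    # a discovered relationship
```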
Our goal is to support adaptive educational course authoring by means of knowledge discovery techniques. In this paper we propose a method for automated metadata generation by revealing semantics hidden within the text. We show that the generated metadata are useful for e-learning needs, especially for recommendation. Furthermore, the teacher's effort is reduced, since we are able to create a promising number of concepts and relationships automatically. Work related to metadata generation in the area of adaptive e-learning by means of knowledge discovery is presented in [5]. Concept similarities are computed based on the comparison of concepts' domain attributes. In contrast to our notion, the concept in [5] also holds a textual representation. This can be considered an intensional description, but then the reusability of such concepts is arguable. We are not aware of any other approaches to automated concept relationship generation in the adaptive e-learning field. Finding relations between concepts is a subtask of the ontology learning field. Relations are induced based on linguistic analysis relying on preceding text annotation [2], incorporating formal concept analysis [3], or using existing resources such as WordNet [4]. The drawback of these approaches is their dependency on precise linguistic analysis. They rely on lexico-syntactic annotations, powerful POS taggers, existing domain ontologies, huge corpora, or external semantic resources. Such knowledge is often not available during e-course authoring. The solution for content authors should involve unsupervised approaches to unburden them from additional work. We address this need in the method we propose. The task of structuring the concept space is also present in the area of topic maps. In this field, topics can be considered analogous to concepts. The authors of [6] generate relations between topics by analyzing the HTML structure of Wikipedia documents.
Categorization methods are used in [8], where similar topics are discovered by latent semantic indexing (LSI) and K-means clustering. Unsupervised methods serve as guidance in topic ontology building. A similar approach is missing in the area of adaptive e-learning. Hence, our method is based on statistical unsupervised text processing and knowledge discovery.
2 Method Description
The goal of the proposed method is the automated creation of the domain model. The metadata are created automatically under the supervision of the adaptive course author (i.e., the teacher), thus reducing their effort in the authoring process. The automated steps include concept extraction and relationship discovery.

2.1 Learning Objects Preprocessing
At the beginning we create the representation of learning objects relevant for further processing. We utilize a vector space model (VSM) based on the so-called term relevance, which is the degree of importance of a term in the text (learning object). Besides term frequency, it also comprises other qualitative characteristics of the term. The learning object preprocessing steps are as follows:
Vector representation composition. In this step we perform a lexical analysis of learning objects. Lexical units – tokens – are identified. We remove stop words, which have almost no semantic significance. Then we retrieve the tokens' lemmas – canonical forms. From the lemmas we compose vectors containing term frequencies. At this point we have the standard bag-of-words model. Vectors adjustment. In this step we tune the actual vector weights (relevance), considering factors not related to the learning object content. The adjustment consists of two steps: (1) available index processing, and (2) formatting processing. An index of domain keywords is often available in a learning environment (in textbooks, as course outcomes, etc.). We increase the relevance of such terms by multiplying it by a coefficient empirically set to 5.0. Formatting processing covers the relevance adjustment according to formatting in the source document. In this step we utilize rules similar to those presented in [6].
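The preprocessing steps can be sketched as follows (a simplified illustration: the tokenizer, stop-word list, and lemma lookup are stand-ins for real linguistic tools, and only the index-based boost with the coefficient 5.0 from the text is shown, not the formatting rules):

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "is", "of", "and"}    # illustrative subset
INDEX_BOOST = 5.0   # coefficient for domain-index terms (from the text)

def preprocess(text, lemma_lookup, index_terms):
    """Build an adjusted term-relevance vector for one learning object."""
    # 1. Lexical analysis: tokenize and drop stop words.
    tokens = [t for t in re.findall(r"[a-z]+", text.lower())
              if t not in STOP_WORDS]
    # 2. Lemmatization: map tokens to canonical forms (stand-in lookup).
    lemmas = [lemma_lookup.get(t, t) for t in tokens]
    # 3. Bag-of-words model: term frequencies.
    relevance = Counter(lemmas)
    # 4. Adjustment: boost terms present in the available keyword index.
    for term in relevance:
        if term in index_terms:
            relevance[term] *= INDEX_BOOST
    return dict(relevance)
```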
Pseudoconcepts Extraction
After the preprocessing step the representation allowing concept candidates – pseudoconcepts extraction is prepared. This step consists of three substeps: Relevant domain terms (RDT) selection. From the set of all learning object terms we select only those whose relevance exceeds a particular threshold k, empirically set to be equal to coefficient increasing the relevance of domain keyword. This way we find terms that represent certain semantic potential. Relevant domain terms weight computation. Using extended tf-idf measure we compute the degree of RDT relatedness to learning objects: reli,j |LO| wi,j = · idfi · log |{loj : ti ∈ loj }| k relk,j
(1)
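As an illustration, the weight computation of Eq. (1) can be sketched as follows; the function and variable names are ours, and the relevance values are assumed to come from the preprocessing step described above:

```python
import math

def domain_term_weights(rel, num_objects):
    """Compute w[i][j] per Eq. (1): the relevance of term t_i in learning
    object lo_j, normalised within that object, multiplied by the idf of
    t_i over the whole course.

    rel: dict mapping learning-object id -> dict mapping term -> relevance
    num_objects: |LO|, the number of learning objects in the course
    """
    # document frequency: in how many learning objects each term occurs
    df = {}
    for terms in rel.values():
        for t in terms:
            df[t] = df.get(t, 0) + 1

    weights = {}
    for lo, terms in rel.items():
        total = sum(terms.values())  # sum over k of rel[k][j]
        weights[lo] = {
            t: (r / total) * math.log(num_objects / df[t])
            for t, r in terms.items()
        }
    return weights
```

A term occurring in every learning object gets idf = log(1) = 0 and therefore weight 0, which matches the intuition that course-wide terms carry little discriminating power.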
where w_{i,j} is the relatedness of domain term t_i to learning object lo_j, rel_{i,j} is the relevance of domain term t_i in learning object lo_j, and LO is the set of learning objects in the whole course.

Pseudoconcepts extraction and relationships creation. For an RDT to be promoted to a pseudoconcept, we introduce a minimal relatedness threshold r ∈ [0, 1]. In our experiments it is a very small number (≈ 0.05), but it effectively filters out irrelevant domain terms. Between pseudoconcepts and learning objects we create relationships weighted by the computed relevance w_{i,j}.

2.3 Relationship Discovery
Relationship discovery is the crucial step of our method. We apply several knowledge discovery techniques to the current domain model (consisting of pseudoconcepts connected with learning objects only) in order to obtain the degree of mutual pseudoconcept relatedness. For each pseudoconcept we choose its most relevant neighbors – the neighbors most related to that pseudoconcept.
M. Šimko and M. Bieliková
Concept-to-concept similarity computation. For this step we proposed and experimented with three concept-to-concept similarity computation variants: a vector approach, spreading activation, and PageRank-based analysis. Each variant provides a unique view of the current domain model state and employs a specific approach to knowledge discovery. A detailed description is beyond the scope of this paper and can be found in [9].

Most relevant neighbors selection. Finding the appropriate number of relevant neighbors is important for the quality of the generated domain model. In our experiments we select the neighbors that accumulate m% of the sum of all neighbors' similarity scores to a given concept.
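The m% cut-off for neighbor selection might be implemented as follows; this is a sketch under the assumption that the per-neighbor similarity scores have already been computed by one of the three variants, and the function name and default value of m are illustrative:

```python
def most_relevant_neighbors(scores, m=80.0):
    """Select the neighbors that together accumulate m% of the sum of
    all neighbors' similarity scores to a given concept.

    scores: dict mapping neighbor concept -> similarity score
    """
    total = sum(scores.values())
    if total == 0:
        return []
    selected, accumulated = [], 0.0
    # walk neighbors from the most to the least similar
    for neighbor, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        selected.append(neighbor)
        accumulated += score
        if accumulated / total >= m / 100.0:
            break
    return selected
```

The cut-off adapts to the score distribution: a concept with one dominant neighbor keeps only that neighbor, while a concept with many weakly related neighbors keeps several.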
3 Experimental Evaluation
We evaluated the proposed method in the programming learning domain, using a functional programming course. We performed the adaptive e-course creation process using the CourseDesigner authoring tool, which implements the automated metadata generation method. The subject of the experiment was a half-term course consisting of 70 learning objects on the functional programming paradigm and programming techniques in the Lisp language. Learning objects were organized hierarchically and represented using the DocBook language.

The resulting course structure was compared to functional programming metadata created manually by a randomly chosen sample of 2007/08 course students. Manual creation of metadata comprised the assignment of weighted values to concept relationships. As assigning continuous values from the interval [0, 1] is a non-trivial task, weight values were taken from {0, 0.5, 1}, meaning: 0 – concepts are not related to each other (no relation); 0.5 – concepts are partially (maybe) related to each other ("weak" relation); 1 – concepts are highly (certainly) related to each other ("strong" relation). There were 366 relationships created, of which 216 were weak and 150 were strong.

Prior to applying the method we assumed that the learning objects were loaded into a newly created course. The dictionary of domain keywords was also provided. During the concept extraction step, 76 concepts were extracted. The relationship discovery step was performed separately for each similarity computation variant; between the 76 concepts, 420, 442, and 316 relationships were retrieved, respectively (see Fig. 1).

To evaluate the obtained results we tracked the number of correct relationships retrieved by the method in relation to the total number of relationships retrieved (precision), and the number of correct relationships retrieved in relation to the total number of relevant relationships defined manually (recall).
To compare the results, we combined both into the F-measure, the weighted harmonic mean of precision and recall. In order to gain a more accurate evaluation, we extended the original recall measure to take the relationship types of the manually constructed domain model into account:

\[
R^{*} = \frac{|retrieved \cap (relA \cup relB)|}{|relA \cup (relB \cap retrieved)|} \tag{2}
\]
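Assuming our reading of R* – all "strong" relations always count as relevant, while "weak" ones count only when actually retrieved – precision, R* and the combined F*-measure can be sketched as:

```python
def evaluate(retrieved, rel_strong, rel_weak):
    """Precision, extended recall R* and F*-measure over sets of
    relationships (any hashable relationship identifiers will do).

    In R*, every "strong" relation counts as relevant, whereas a "weak"
    relation enters the denominator only if the method retrieved it, so
    missed weak relations do not penalise the recall.
    """
    relevant = rel_strong | rel_weak
    correct = retrieved & relevant
    precision = len(correct) / len(retrieved) if retrieved else 0.0
    denominator = rel_strong | (rel_weak & retrieved)
    r_star = len(correct) / len(denominator) if denominator else 0.0
    f_star = (2 * precision * r_star / (precision + r_star)
              if precision + r_star else 0.0)
    return precision, r_star, f_star
```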
Fig. 1. Example of a domain model fragment after the relationship discovery step, taken from CourseDesigner. The "read" concept is selected and its direct neighbors are colored grey. (The functional programming course is taught in Slovak.)
where R* is the extended recall measure, retrieved is the set of all relationships retrieved by the method, relA is the set of manually created "strong" relationships, and relB is the set of manually created "weak" relationships.

The experiments yielded the best results with the PageRank-based analysis (F* = 0.652). The analysis of the generated relationships highlighted common NLP problems: none of the relationship discovery variants was able to significantly overcome natural language ambiguities. Less suitable results were obtained for concepts represented by terms that occur frequently, carry more than one meaning, or are diffused over the whole course. A similar problem affected concepts associated with a small, relatively independent group of learning objects, as they were unable to create relevant connections with other semantically related concepts.

A legitimate question is what exactly the F*-measure indicates in our experiment. We interpret it as the "completeness" of the generated metadata. Throughout the experiment, generated relationships not contained in the manually created relations were considered incorrect. Although the manual relationship creators made their best effort to match real-world relations, relationships retrieved automatically need not be irrelevant: they might represent bindings which were not explicitly realized even by the most attentive authors.
4 Conclusions
In this paper we presented an approach to the automated creation of a semantic layer over learning objects in an adaptive educational course. Our goal is
to support adaptive educational course authoring and reduce the teacher's involvement. We proposed and evaluated a method for automated metadata generation based on educational content processing. The method produces interconnected semantic elements – concepts.

Unfortunately, universal solutions enabling automatic metadata acquisition probably do not exist. To a certain degree, complex domain ontologies may be used; however, they are currently not available to the extent we would need. Furthermore, it is questionable whether we can ever produce course metadata (relationships between the concepts in particular) at the desired level of granularity.

The main contribution of this paper is the proposed method and the corresponding framework for preprocessing, discovering and finalizing the domain model structure, which is crucial for subsequent reasoning on the semantically enriched content. As opposed to most current approaches, which are limited to content annotation when creating metadata, we go one step further by discovering both concepts and links, ultimately creating a metadata layer above the learning objects. Without a proper metadata structure it is not possible to reason and to adapt navigation and presentation in large information spaces. The proposed approach is not limited to learning objects represented by text: we can work with media content, employing similarity measures for their interconnection and tagging for interconnections between the metadata layer and the content layer.

Acknowledgments. This work was partially supported by the Cultural and Educational Grant Agency of the Slovak Republic, grant No. KEGA 3/5187/07, and by the Scientific Grant Agency of the Slovak Republic, grant No. VG1/0508/09.
References

1. Brusilovsky, P.: Developing adaptive educational hypermedia systems: From design models to authoring tools. In: Murray, T., Blessing, S., Ainsworth, S. (eds.) Authoring Tools for Advanced Technology Learning Environments, pp. 377–409. Kluwer, Dordrecht (2003)
2. Buitelaar, P., Olejnik, D., Sintek, M.: A Protégé plug-in for ontology extraction from text based on linguistic analysis. In: Bussler, C.J., Davies, J., Fensel, D., Studer, R. (eds.) ESWS 2004. LNCS, vol. 3053, pp. 31–44. Springer, Heidelberg (2004)
3. Cimiano, P., Hotho, A., Staab, S.: Learning Concept Hierarchies from Text Corpora using Formal Concept Analysis. Journal of AI Research 24, 305–339 (2005)
4. Cimiano, P., et al.: Learning Taxonomic Relations from Heterogeneous Evidence. In: Proc. of ECAI Workshop on Ontology Learning and Population (2004)
5. Cristea, A.I., de Mooij, A.: Designer Adaptation in Adaptive Hypermedia. In: Proc. of Int. Conf. on Information Technology: Computers and Communications, ITCC 2003, Las Vegas. IEEE Computer Society, Los Alamitos (2003)
6. Dicheva, D., Dichev, C.: Helping Courseware Authors to Build Ontologies: the Case of TM4L. In: 13th Int. Conf. on Artificial Intelligence in Education, pp. 77–84 (2007)
7. Diedrich, J., Balke, W.-T.: The Semantic GrowBag Algorithm: Automatically Deriving Categorization Systems. In: Kovács, L., Fuhr, N., Meghini, C. (eds.) ECDL 2007. LNCS, vol. 4675, pp. 1–13. Springer, Heidelberg (2007)
8. Fortuna, B., Mladenić, D., Grobelnik, M.: Semi-automatic Construction of Topic Ontologies. In: Ackermann, M., Berendt, B., Grobelnik, M., Hotho, A., Mladenić, D., Semeraro, G., Spiliopoulou, M., Stumme, G., Svátek, V., van Someren, M. (eds.) EWMF 2005 and KDO 2005. LNCS (LNAI), vol. 4289, pp. 121–131. Springer, Heidelberg (2006)
9. Šimko, M., Bieliková, M.: Automatic Concept Relationships Discovery for an Adaptive E-Course. In: Proc. of 2nd Int. Conf. on Educational Data Mining, EDM 2009, Cordoba, Spain, pp. 171–179 (2009)
Searching for “People Like Me” in a Lifelong Learning System Nicolas Van Labeke, George D. Magoulas, and Alexandra Poulovassilis London Knowledge Lab, Birkbeck, University of London 23-29 Emerald Street, London WC1N 3QS, UK {nicolas,gmagoulas,ap}@dcs.bbk.ac.uk
Abstract. The L4All system allows learners to record and share learning pathways through educational offerings, with the aim of facilitating progression from Secondary Education through to Further Education and on to Higher Education. This paper describes the design of the system’s facility for searching for “people like me”, presents the results of an evaluation session with a group of mature learners, and discusses outcomes arising from this evaluation. Keywords: lifelong learning, learning communities, personalisation.
1 Introduction

Supporting the needs of lifelong learners is increasingly at the core of the learning and teaching strategies of Higher Education and Further Education institutions, and poses a host of new challenges. In particular, face-to-face careers guidance and support has been found to be uneven [1], leading some to consider the role of online support in providing some form of careers guidance [2,3]. Communication and collaboration tools provide new opportunities for exploring the role that social networks and factors play in making career decisions, and for supporting educational choices [4].

The need for better support for lifelong learners, the patchy provision of careers and educational guidance at critical points, and the potential of ICT to support these needs provided the rationale that underpinned the development of the L4All system. We refer the reader to [5] for details of the aims, research and development methodology, technical approach and evaluation of the original system. In brief, the L4All system aims to support lifelong learners in exploring learning opportunities and in planning and reflecting on their learning. It allows learners to create and maintain a chronological record of their learning, work and personal episodes – their timeline. Learners can share their timelines with other learners, with the aim of encouraging collaborative formulation of future learning goals and aspirations. The focus is particularly on those post-16 learners who traditionally have not participated in higher education. Among this group, social factors are found to have a significant influence on educational choices and career decisions [5].

The L4All user interface provides screens for the entry of personal details, for creating and maintaining a timeline of past and future learning, work and personal episodes, and for searching over courses and timelines made available by other users, based on a variety of search criteria. This paper describes the design of a new facility which supports personalised searching for timelines of "people like me" (Section 2), summarises the results of an evaluation session held with a group of mature learners (Section 3), and discusses outcomes arising from this evaluation (Section 4).

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 106–111, 2009. © Springer-Verlag Berlin Heidelberg 2009
2 Search for Timelines of "People Like Me"

The initial prototype of the L4All system supported several search functionalities over users and their timelines. However, searching over timelines returned matches based solely on the occurrence of specified keywords in one or more episodes of the timeline and did not exploit the structure of the timeline; also, the search results were not personalised to the user performing the search. An alternative approach was needed that could take both these issues into account, i.e., some form of comparison between a user's own timeline and the other timelines in the L4All repository.

Similarity metrics offer such a possibility. They have been widely used in information integration and in applied computer science [6,7]. In Intelligent Tutoring Systems, they have been used to compare alternative sequences of instructional activities produced by authors [8]. In our context, our starting point was to encode the episodes of a timeline into a token-based string, and we refer the reader to [9] for details of our encoding method. We also report in [9] on a comparison of ten different similarity metrics that we considered for trialling in the system. For the version of the system that we evaluated as discussed below, four of the metrics were deployed: Jaccard Similarity, Dice Similarity, Euclidean Distance, and Needleman-Wunsch Distance – see www.dcs.shef.ac.uk/~sam/stringmetrics.html. These are identified as Rules 1–4, respectively, together with a brief description, in the user interface.

A dedicated interface for the new search for "people like me" facility was designed, providing users with a three-step process for specifying their own definition of "people like me" – see Figure 1. In the first step, the user specifies those attributes of their own profile that should be matched with other users' profiles; this acts as a filter of possible candidates before applying the timeline similarity comparison.
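For illustration, two of the deployed token-set metrics can be sketched as follows; the episode tokens shown are invented for the example, and the actual encoding of episodes into tokens is the one described in [9]:

```python
def jaccard(a, b):
    """Jaccard similarity between two token-encoded timelines:
    |intersection| / |union| of the token sets."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def dice(a, b):
    """Dice similarity between two token-encoded timelines:
    2 * |intersection| / (|A| + |B|)."""
    sa, sb = set(a), set(b)
    return 2 * len(sa & sb) / (len(sa) + len(sb)) if sa or sb else 1.0

# Episodes encoded as category:classification tokens (illustrative)
mine = ["Edu:Computing", "Work:IT", "Pers:Travel"]
other = ["Edu:Computing", "Work:Finance", "Pers:Travel"]
print(jaccard(mine, other))  # 0.5
```

Both metrics ignore episode order, which is why sequence-aware metrics such as Needleman-Wunsch were also trialled.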
In the second step, the user specifies which parts of the timelines should be taken into account for the similarity comparison, by selecting the required categories of episode (there are several categories of episode; some are annotated with a primary and possibly a secondary classification, drawn from standard U.K. occupational and educational taxonomies). In the final step, the user specifies the similarity measure to be applied, by selecting the "depth" of episode classification to be considered by the system and the search method, i.e., one of Rules 1–4 above.

Once the user's definition of "people like me" has been specified, the system returns a list of all the candidate timelines, ranked by their normalised similarity. The user can select one of these timelines to visualise in detail; the selected timeline is shown in the main page as an extra strip below the user's own timeline (see Figure 2). The two timelines can be synchronised by date, at the user's choice. Episodes within the selected timeline that have been designated as public by their owner are visible, and the user can select these and explore their details.
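The three-step search might be sketched as follows; the data layout and function names are our own illustration, not the L4All implementation:

```python
def people_like_me(me, candidates, profile_attrs, categories, similarity):
    """Three-step search sketch: (1) keep only candidates whose profile
    matches the user's own on the chosen attributes, (2) restrict both
    timelines to the selected episode categories, (3) rank the remaining
    candidates by the chosen similarity metric."""
    def tokens(user):
        # token-encoded episodes of the selected categories only
        return [ep["token"] for ep in user["timeline"]
                if ep["category"] in categories]

    pool = [c for c in candidates
            if all(c["profile"].get(a) == me["profile"].get(a)
                   for a in profile_attrs)]
    ranked = [(c["name"], similarity(tokens(me), tokens(c))) for c in pool]
    return sorted(ranked, key=lambda pair: -pair[1])
```

The profile filter runs first because it is cheap, leaving the comparatively expensive timeline comparison for the reduced candidate pool.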
3 Evaluation

The aim of this first design of a personalised search for "people like me" was to gather information about users' usage of, and expectations from, such functionality. An evaluation session was undertaken with a group of mature learners on the Certificate in IT Applications at Birkbeck College, organised around three activities.
Fig. 1. Searching for “people like me”
Activity 1 was a usability study of the extended system, focusing on participants building their own timelines and also exploring other aspects of the system. Activity 2 was an evaluation of the new searching for "people like me" functionality, focusing on participants exploring different combinations of search parameters and reporting on the usefulness of the results returned by the system. Activity 3 was a post-evaluation questionnaire and discussion session. We refer the reader to [10] for details of these activities.

Activity 2 in particular required a significant amount of preparatory work, due to the need for an appropriate database of timelines to search over. Which profile and timeline participants would use was also an issue. The best option would have been for users to have maximum familiarity with their profile and timeline, and
therefore to use the profile and timeline they had created during Activity 1. However, since we did not know in advance what the participants' profiles and timelines would be, it would have been difficult to build an appropriate database of "similar" timelines for supporting Activity 2. We therefore opted for an artificial solution: providing participants with an avatar, i.e., a ready-to-use artificial identity, complete with its own profile and timeline, and generating beforehand a database of other timelines with various degrees of similarity to these avatars.
Fig. 2. Displaying the user’s timeline with another user's timeline
Of the 10 people who had agreed to participate in the evaluation session, 9 came on the day; we refer to them as bbk1–bbk9 below. They represented a variety of learners in terms of experience and background, as extracted (after the session) from the profile and timeline data recorded within the L4All system: gender – 3 female, 6 male; age – 1 in their 20s, 3 in their 30s, 4 in their 40s, 1 in their 50s; background – a mean of 3 educational episodes (SD 1.7), 2.75 occupational episodes (SD 2.0) and 2.1 personal episodes (SD 1.2).

Activity 1 indicated overall satisfaction of this group with the main functionalities of the system. It also identified a number of usability issues, ranging from low-level interface inconsistencies (improper labels, lack of contextual help) to higher-level usage obstacles (difficulty of first-time access to the system), most of which have now been addressed. However, Activity 2 did not fulfil our expectations of identifying user-centred definitions of "people like me". Most participants took this activity at face value, selecting some parameters, exploring one or two of the timelines returned, and starting again. They saw no reason to try different combinations of search parameters, as their first try returned relevant results.
The participants' responses on searching for "people like me" in the self-report forms were 58% Poor/Mostly Poor. Participants could not see the benefits of this functionality: "you need to convey the benefit of finding people with similar timelines; this is CRUCIAL: what does it tell me if I find someone who is like me, based on criteria provided? Can I conclude anything from this? Need to create a set of examples to demonstrate how this timeline comparison is useful" – bbk3.

Two factors seem to have had a negative impact on the outcome of this activity: the artificiality of the database used for the search (not enough variability in the generated timelines) and difficulties in grasping the meaning of some of the search parameters, notably the classification level and the search method ("search methods – rule 1-4 – are not clear"; "level of classification not clear at all" – bbk1, bbk3, bbk4).

However, during the subsequent discussion in Activity 3, it became apparent that participants could appreciate what this functionality could deliver if it were applied in a real context: "search needs to be based on aspiration/wish" – bbk4. Moreover, the post-evaluation question on the search for "people like me" functionality had only 8% Poor/Mostly Poor responses, seemingly contradicting participants' experience while actually doing the task. This contradiction may be explained by the difference between reporting on the task on the spot and answering a post-evaluation questionnaire after having had time to reflect on the task. This was also illustrated by the discussion at the end of the session, in which participants identified the potential of searching for "people like me" despite the usage difficulties.
4 Outcomes and Concluding Remarks

Lifelong learners need to be supported in reflecting on their learning and in formulating their future goals and aspirations. In this paper we have described a facility for searching for "people like me" as part of a system that aims to support the planning of lifelong learning. The aim of this first design of a personalised search for "people like me" was to gather information from users about their potential usage of, and expectations from, such functionality, and we have reported here on the results of an evaluation session held with a group of mature learners.

A critical issue highlighted by evaluation participants is the question of providing learners with support for exploiting the results of a similarity search. In this paper we have reported on a purely visualisation-based approach, which displays the user's own timeline together with a timeline selected by the user from those returned by the search. A specifically designed dynamic widget allows the user to scroll backwards and forwards across each timeline and to access individual episodes. Such an interactive visualisation of timelines certainly helps users to explore different timelines and episodes, but more proactive support is also required. In particular, users need to be able to identify the reasons why the system deems two timelines similar. Metrics such as Needleman-Wunsch do in fact offer the possibility for such identification, by enabling backtracking of the similarity computation and showing the alignments of the sequences of tokens, i.e., the alignments between pairs of episodes in the two timelines. This opens up the possibility of a more contextualised usage of timeline similarity matching, which explicitly identifies possible future learning and professional possibilities for the user by indicating which
episodes of the target timeline have no match within the user’s own timeline and therefore potentially represent episodes that the user may be inspired to explore or may even consider for their own future personal development. We will discuss such a contextualised usage, and its evaluation with two groups of mature learners, in a forthcoming paper. Acknowledgments. This work was undertaken as part of the MyPlan project, funded by the JISC Capital e-Learning programme.
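The alignment-with-traceback idea discussed in this section can be sketched as follows; this is a standard textbook Needleman-Wunsch implementation over episode tokens, with illustrative scoring parameters rather than those used in L4All:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment of two token sequences with traceback, so that
    episodes present only in the target timeline can be pointed out."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + d,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # traceback: recover one optimal alignment as (a_token, b_token) pairs
    aligned, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0:
            d = match if a[i - 1] == b[j - 1] else mismatch
            if score[i][j] == score[i - 1][j - 1] + d:
                aligned.append((a[i - 1], b[j - 1]))
                i, j = i - 1, j - 1
                continue
        if i > 0 and score[i][j] == score[i - 1][j] + gap:
            aligned.append((a[i - 1], None))   # episode only in timeline a
            i -= 1
        else:
            aligned.append((None, b[j - 1]))   # episode only in timeline b
            j -= 1
    return score[n][m], aligned[::-1]
```

Pairs of the form (None, token) mark episodes of the other timeline with no counterpart in the user's own, which is exactly the information a contextualised presentation would surface.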
References

1. Bimrose, J., Hughes, D.: IAG Provision and Higher Education. IAG Review: Research and Analysis Phase. Briefing Paper for DfES. University of Warwick (2006)
2. Cogoi, C. (ed.): Using ICT in Guidance: Practitioner Competencies and Training. Report of an EC Leonardo project on ICT Skills for Guidance Counsellors. Outline Edizione, Bologna (2005)
3. Cych, L.: 'Social Networks'. In: Emerging Technologies for Learning. Becta, Coventry, 32–40 (2006)
4. de Freitas, S., Yapp, C. (eds.): Personalizing Learning in the 21st Century. Network Continuum Press, Stafford (2005)
5. de Freitas, S., Harrison, I., Magoulas, G.D., Mee, A., Mohamad, F., Oliver, M., Papamarkos, G., Poulovassilis, A.: The development of a system for supporting the lifelong learner. British Journal of Educational Technology 37(6), 867–880 (2006)
6. Cohen, W.W., Ravikumar, P., Fienberg, S.E.: A comparison of string distance metrics for name-matching tasks. In: Proc. IIWeb 2003 – Workshop on Information Integration on the Web, at IJCAI 2003, pp. 73–78 (2003)
7. Gusfield, D.: Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press, Cambridge (1997)
8. Ainsworth, S.E., Clarke, D.D., Gaizauskas, R.J.: Using edit distance algorithms to compare alternative approaches to ITS authoring. In: Cerri, S.A., Gouardères, G., Paraguaçu, F. (eds.) ITS 2002. LNCS, vol. 2363, pp. 873–882. Springer, Heidelberg (2002)
9. Van Labeke, N., Poulovassilis, A., Magoulas, G.D.: Using Similarity Metrics for Matching Lifelong Learners. In: Woolf, B.P., Aïmeur, E., Nkambou, R., Lajoie, S. (eds.) ITS 2008. LNCS, vol. 5091, pp. 142–151. Springer, Heidelberg (2008)
10. Van Labeke, N.: Preliminary Evaluation Report, MyPlan Project Deliverable 5.1 (August 2008), http://www.lkl.ac.uk/research/myplan/
Metadata in Architecture Education – First Evaluation Results of the MACE System

Martin Wolpers¹, Martin Memmel², and Alberto Giretti³

¹ Fraunhofer Institute for Applied Information Technology, Schloss Birlinghoven, D-53754 Sankt Augustin, Germany, [email protected]
² Knowledge Management Department, DFKI GmbH, Trippstadter Str. 122, D-67663 Kaiserslautern, Germany, [email protected]
³ Università Politecnica delle Marche, P.zza Roma 22, I-60121 Ancona, Italy, [email protected]
Abstract. The paper focuses on the MACE (Metadata for Architectural Contents in Europe) system and its usage in architecture education at the university level. We report on the various extensions made to the system, describe some of the new functionality, and give first results of the evaluation of the MACE system. Several universities were involved in the evaluation with significant student groups, so that the indications described here are already quite reliable. First results show that using MACE increases student performance significantly.
1 Introduction
Architecture education, specifically in higher education, relies strongly on the paradigm of 'learning by example', meaning here that students use existing entities like buildings and projects, but also other objects, as sources of inspiration [1,2]. Consequently, students need simple and personalized access to vast amounts of architectural information. Access and navigation need new forms of visually based, discovery-oriented mechanisms for accessing the provided learning material [3]. Thus, simple keyword search and result link presentation are not sufficient, but need to be extended to, e.g., image- and location-based search and classification browsing. Such advanced methods of access require rich information about the learning resources.

Many different repositories exist that address aspects of the needs of students in the architecture domain, unfortunately not in a homogeneous way. Instead, educational material is scattered over many repositories, like the Dynamo repository (http://dynamo.asro.kuleuven.be/dynamovi/) providing information about architectural projects, or ICONDA (http://www.iconda.org/) providing access to legislative documents important to building construction and design.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 112–126, 2009. © Springer-Verlag Berlin Heidelberg 2009
Within the European project MACE (Metadata for Architectural Contents in Europe), we enable students to search for and find learning resources appropriate to their context in a more discovery-oriented way. By automatically and manually linking related architecture learning resources from various unrelated repositories, we establish relations among them, which in turn are used to enable new, simple and unified access to architectural learning resources scattered throughout repositories world-wide. Consequently, users are provided with the ability to discover new learning resources that can serve as additional sources of inspiration.

In order to ease access to relevant learning resources, and to create new connections between learning resources from various repositories, MACE also draws on the communities that use the MACE portal. In particular, users can annotate learning resources with tags, comments and ratings, they can build up personal portfolios, and they can participate by contributing new learning resources. This allows for richer descriptions of resources and enables 'social browsing', i.e., new ways to navigate the learning resources. Furthermore, the users' activities are analyzed to establish new relations between learning resources.

This paper reports how the MACE system has evolved over time and the results of first evaluations. Section 2 introduces the MACE system with its new front- and backend features, focusing on relating learning resources across repository boundaries and on cooperative learning through the usage of Web 2.0 community features. Section 3 outlines the experimental setup and evaluation. First results indicate that the combination of explorative and cooperative learning significantly improves architectural education at the university level.
2 The MACE System
This section describes the MACE system in general. It builds on a distributed, service-oriented architecture with a three-tier structure. The frontend, with its graphical user interfaces and widget support, forms the client tier. The business logic that is responsible for the provision of functionalities is organized in the application-server tier, while the metadata stores form the data-server or backend tier. Moreover, the MACE system incorporates the ALOE system: while the MACE system is responsible for the overall portal, the storage of metadata and the basic business logic, the ALOE system specifically deals with participation facilities for end users and the respective data that is generated.

The overall architecture and functionality of the MACE system has already been described in [4]. We therefore present here the extensions and the reasons for them, to explain the system basis on which the evaluation was carried out. We highlight the filtered search facility in the frontend, the introduction of a social layer with ALOE, and the distributed backend structure, before outlining the evaluation and its results in the following section.
http://www.mace-project.eu
2.1 Filtered Search in the MACE Frontend
The MACE portal is online and publicly accessible at http://portal.mace-project.eu/. The portal offers several means to access the contents in MACE, e.g., users can Browse by Classification and conduct a Social Search or Filtered Search.
Fig. 1. Screenshot of the MACE filtered search portal
Figure 1 shows the filtered search portal of the MACE system. Here, the user is able to qualify the keyword search with several additional facets: the repositories in which to search, the language of the results, the resource mediatype, the resource classification, and the associated competency. When choosing a respective facet, the interface is dynamically changing, providing the numbers of results for each facet that match the selected criteria. The results of a submitted query are displayed at the bottom of the page. A small overview for each result is
presented, containing information such as the resource title, a short description, and the repository where it was found. The user can either go directly to the result or view more metadata about the resource on the respective MACE detail page. An example of such a detail page is presented in Figure 2.
Fig. 2. Screenshot of a detail page in MACE
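The facet behaviour described above (narrowing results while updating per-facet counts) can be sketched as follows. This is a minimal in-memory illustration; the field names and sample records are invented and are not the actual MACE metadata schema.

```python
from collections import Counter

# Hypothetical resource metadata records; fields are illustrative only.
RESOURCES = [
    {"repository": "winds", "language": "en", "mediatype": "image", "title": "Villa Savoye plan"},
    {"repository": "winds", "language": "fr", "mediatype": "text", "title": "Le Corbusier notes"},
    {"repository": "dynamo", "language": "en", "mediatype": "text", "title": "Structural glossary"},
]

def filtered_search(resources, filters):
    """Return the matching resources plus, for every facet, the number of
    results each facet value would yield given the currently selected filters."""
    matches = [r for r in resources
               if all(r.get(f) == v for f, v in filters.items())]
    facet_counts = {}
    for facet in ("repository", "language", "mediatype"):
        facet_counts[facet] = Counter(r[facet] for r in matches)
    return matches, facet_counts

matches, counts = filtered_search(RESOURCES, {"language": "en"})
```

Selecting the `language = en` facet here leaves two results, and the counts for the remaining facets are recomputed against that reduced set, which is what drives the dynamically updating interface.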
2.2 Introducing a Social Layer with ALOE
MACE focuses not only on architectural content but also on the users interested in that content. We therefore aim to attract users and encourage them to contribute information. Such contributions can be tags, comments, and ratings, but also new resources (added via the MACE bookmarklet) as well as formal classifications of MACE resources. Furthermore, building communities around MACE and its contents is key to our sustainability efforts. Therefore, social media functionality that allows end users to contribute had to be provided. ALOE is a web-based social media sharing platform that allows users to contribute, share, and access arbitrary types of digital resources such as text documents, music, or video files. Users can either upload resources (using the system as a repository) or reference them by URL (using the system as a referatory; called bookmarking). Users can tag, rate, and comment on resources, and they can create
http://aloe-project.de
Fig. 3. Screenshot of the social search MACE portal
collections of resources, join and initiate groups, etc. Furthermore, arbitrary additional metadata can be associated with resources. ALOE can be accessed either via a rich user interface or via a Web Service API. This SOAP API offers access to data and the complete range of functionality, making it possible to introduce social media paradigms into existing (heterogeneous) infrastructures. See [5,6] for more information about ALOE and the underlying system architecture.

Integrating ALOE in MACE. To enable community functionality in MACE, an instance of the ALOE system used exclusively for MACE was set up at DFKI. The MACE components interact with this instance via the ALOE Web Service API. Apart from information such as tags and ratings, the usage metadata of each instance is passed on to the MACE system, where it contributes to the usage metadata analysis and hence to establishing associations between learning resources. In the following, we briefly describe how MACE and ALOE interact to enable the desired community functionality.

User Management: When a user signs up in MACE, the ALOE Web Service is used to register the same user in the ALOE system. Consequently, a login in the MACE system always automatically triggers a login in ALOE, and ALOE returns a session id that grants access to the user's information in ALOE for any subsequent actions.
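The sign-up and login flow described under User Management can be sketched as follows. The class and method names are assumptions for illustration; the real integration goes through the ALOE SOAP API rather than an in-memory stand-in.

```python
import uuid

class AloeClient:
    """In-memory stand-in for the ALOE Web Service API described in the text."""
    def __init__(self):
        self._users = {}
        self._sessions = {}

    def register_user(self, username):
        # Create the mirrored ALOE account with empty social metadata.
        self._users[username] = {"bookmarks": [], "tags": {}}

    def login(self, username):
        # Return a session id that later calls use to access user data.
        session_id = str(uuid.uuid4())
        self._sessions[session_id] = username
        return session_id

class MacePortal:
    """Signing up in MACE also registers the user in ALOE; every MACE
    login triggers an ALOE login and keeps the returned session id."""
    def __init__(self, aloe):
        self.aloe = aloe
        self.sessions = {}

    def sign_up(self, username):
        self.aloe.register_user(username)

    def log_in(self, username):
        self.sessions[username] = self.aloe.login(username)
        return self.sessions[username]

portal = MacePortal(AloeClient())
portal.sign_up("student1")
sid = portal.log_in("student1")
```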
Resource Annotation: When users visit the detail page of a resource in MACE, the respective service checks whether social metadata for this resource exists in ALOE. If so, information such as tags, ratings, and comments is read and presented to the user. When users contribute social metadata, it is stored in ALOE together with the respective MACE id and the id of the contributing user. Furthermore, the resource is added to the user's bookmarks in ALOE.

Resource Contribution: MACE offers a bookmarklet that allows users to easily contribute new resources to the system. When users find a web page they want to store in the MACE system, they can simply click on the bookmarklet, and the information about the resource as well as the corresponding URL is stored in ALOE. Note that these user-driven contributions are labeled accordingly to allow a distinction between user contributions and contributions derived from repositories. This distinction is necessary to enable further research on the quality of user-driven content provision as opposed to repository-driven provision.

Social Search: The MACE system presents the most popular tags contributed by MACE users as a tag cloud (see Figure 3; clicking on a tag links to the respective learning resources) and also allows a keyword search over user-provided content. The necessary metadata is generated by the ALOE system and then streamed into the MACE system.

2.3 The MACE Backend
We summarize the general setup of the MACE backend before discussing some notable extensions in detail. The backend comprises the application-server and data-server tiers. The application tier contains the business logic that provides the MACE functionality required by front-end applications and widgets. This tier offers, for example, filtered search, communication, and user management services. The ALOE system is integrated at this level. The data-server tier comprises all stores used in MACE. As the goal of MACE is to provide simplified access to architectural learning resources, the system deals only with the metadata describing the learning resources and their usage. This is clearly visible in Figure 1, which shows the filtered search front-end. Figure 2 shows a screenshot of a detail page, illustrating the various metadata used in the search to describe the respective learning resource.

MACE domain metadata store. As a consequence of the focus on metadata, the MACE system relies heavily on the metadata descriptions of the learning resources provided by the various associated repositories. The learning resource metadata is harvested into the central MACE domain metadata store using the OAI-PMH protocol. The MACE store contains all relevant learning resource
http://www.openarchives.org/pmh/
metadata, including, for example, the title, the abstract, and the location of the learning resource, but also information about the classification of the learning resource within the MACE classification, relations with other learning objects, and competences associated with the learning resource. The complete storage mechanism is described in [7].

MACE usage metadata store. In addition to the MACE domain metadata store, the MACE system also records how users interact with the system. The observations, captured as usage metadata, include search and access as well as communication activities. Usage metadata is stored in the contextualized attention metadata (CAM) format [8]. We use usage metadata for a variety of purposes, including support for self-regulated learning, context elicitation, and learning path elicitation. For self-regulated learning approaches, one main tool is the reflection of learning activities [9] back to the learner (and possibly to the learning resource author as well). Within MACE, using usage metadata, we provide a simple time-based overview of activities to support reflection on the learning path. The learner can analyze which learning resources she accessed when, how she found them, and which topics were relevant to her and when. This reflective information can then be used to make learning paths more explicit, e.g., to modify and control them so that learning activities are better targeted to the respective learning goals. Following the ideas on context and context elicitation in [10], we use usage metadata in combination with learning resource domain metadata to identify the context in which a learning resource has been used. Reusing ideas from discourse and text processing [11], we define the pre-context of an action as the (currently time-based) sequence of actions before the respective action. The post-context is correspondingly defined as the (currently time-based) sequence of actions after the respective action (see also [12]).
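The pre- and post-context definitions can be illustrated with a small sketch over a hypothetical list of CAM-style events; the event fields and the window length are assumptions, not the actual CAM schema.

```python
def pre_post_context(events, index, window=3):
    """Time-based pre- and post-context of the action at position `index`
    (after sorting by time): the `window` actions immediately before and
    after it. The window length is a free parameter, as noted in the text."""
    events = sorted(events, key=lambda e: e["time"])
    pre = events[max(0, index - window):index]
    post = events[index + 1:index + 1 + window]
    return pre, post

events = [
    {"time": 1, "action": "search", "query": "brick facades"},
    {"time": 2, "action": "view", "resource": "r17"},
    {"time": 3, "action": "tag", "resource": "r17"},
    {"time": 4, "action": "view", "resource": "r23"},
]
# Context of the tagging action (index 2): the two actions before and after.
pre, post = pre_post_context(events, 2, window=2)
```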
The lengths of these sequences, as well as the element defining the sequence (currently, time), are subject to further experiments.

MACE application profile. The metadata description is based on an extension of the Learning Object Metadata (LOM) standard [13], thus forming a LOM application profile [14], which we call the MACE application profile. Extending the application profile described in [4], we now include classifications, competences, locations, and relations. We briefly outline these modifications and the reasons for them in the following paragraphs.

MACE classifications of learning resources. The (mostly manually conducted) classification of the MACE learning resources enables the intelligent aggregation and linkage of information about them through the classification terms. Expert architects assigned the appropriate terms from the MACE classification vocabulary [15] to the learning resources. The group of expert architects will continue to classify new learning resources through an incentive-generating process that continuously motivates participation and
contribution. Incentives are, for example, reputation or revenue, depending on the business model of the MACE system. The MACE system also allows for competence classifications in addition to the architectural classification. The competence classification distinguishes between competences for engineering and architecture, following the European Qualification Framework. The definition of the engineering competences is based on the Dublin Descriptors elaborated by the Joint Quality Initiative network and on the results of the EUCEET II (European Civil Engineering Education and Training II) project, which is part of the Tuning Project (2005-2006). The definition of the architecture competences is based on Directive 2005/36/EC of the European Parliament and the Council on the recognition of professional qualifications and on the Architects' Directive 85/384/EEC.

Real world objects in MACE. Architectural education relies, apart from theoretical knowledge, on the examination and analysis of buildings and projects realized in the real world. Therefore, the MACE system also needs to be able to represent physical manifestations, which we call real world objects (RWOs). They are realized as non-information resources (following the W3C recommendation [16]) and are used to bind together learning resources about one specific object or physical manifestation, e.g., grouping images, texts, and plans about a building in one instance using the relation category, usually as a part-of relation. In addition, RWOs capture properties of the physical manifestation such as the creation date of a building (in the lifecycle category) or its geographical location (in the geolocation element, which extends the Technical category and has KML-XML as its value space). Where possible, we associate each RWO with its counterpart in DBpedia or Freebase to facilitate interoperability with existing and future repositories of architectural knowledge.
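A possible shape for an RWO record, under element names chosen here for illustration (they loosely echo the LOM categories mentioned above but are assumptions, not the actual MACE application profile), might look like this:

```python
# Illustrative RWO record: a non-information resource that groups learning
# resources about one physical manifestation via part-of relations.
rwo = {
    "id": "rwo:villa-savoye",
    "type": "non-information-resource",
    "lifecycle": {"creation_date": "1931"},
    # Geolocation with a KML fragment as its value space.
    "geolocation_kml": "<Point><coordinates>2.0286,48.9245</coordinates></Point>",
    # Links to counterparts in general-purpose repositories.
    "same_as": ["http://dbpedia.org/resource/Villa_Savoye"],
    "relations": [
        {"kind": "part-of", "resource": "mace:lr-1021"},  # e.g. a plan drawing
        {"kind": "part-of", "resource": "mace:lr-3377"},  # e.g. a photo set
    ],
}

def resources_of(rwo):
    """All learning resources bound to this real world object."""
    return [r["resource"] for r in rwo["relations"] if r["kind"] == "part-of"]
```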
RWOs are created from the learning resources that relate to them. We use data and text mining technologies, specifically an application of the GATE (General Architecture for Text Engineering) framework [17] with appropriate gazetteers, to extract named entities from the learning resource metadata. For example, we identify names of buildings, places, and architects. Combining this entity information with information from accessible general-purpose repositories like DBpedia and GeoNames, we add the names of architects and buildings as well as their locations to the learning resource descriptions.
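The gazetteer-based extraction can be illustrated with a toy lookup; this is a stand-in for the GATE pipeline, and the entity lists are invented:

```python
# Toy gazetteers: known surface forms per entity type (illustrative only).
GAZETTEERS = {
    "architect": {"le corbusier", "renzo piano"},
    "building": {"villa savoye", "centre pompidou"},
    "place": {"paris", "poissy"},
}

def extract_entities(text):
    """Return (type, name) pairs for every gazetteer entry found in the text."""
    found = []
    lowered = text.lower()
    for entity_type, names in GAZETTEERS.items():
        for name in sorted(names):
            if name in lowered:
                found.append((entity_type, name))
    return found

entities = extract_entities(
    "The Villa Savoye near Paris was designed by Le Corbusier.")
```

A real pipeline would add tokenization, case and inflection handling, and disambiguation against DBpedia/GeoNames entries, but the lookup step is the core idea.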
3 Experimental Feedback
The principal objective of the experimental evaluation of the MACE project is to survey the system's performance under real conditions of use. The scope of the MACE evaluation framework is limited to the first two levels of the Kirkpatrick assessment model, reaction and learning [18], involving both the end-users' attitude
http://dbpedia.org http://www.freebase.org http://www.geonames.org
towards the system features and an objective assessment of their learning performance. The innovative architecture of the MACE system and the unique features of the architectural design process required careful design of the evaluation framework. Subsection 3.1 discusses the analysis carried out to identify the system features that could affect the learning process and the performance figures that could be consistently measured. Subsection 3.2 reports the results of the evaluation of the overall MACE system functionality in terms of the quality perceived by end-users. Subsection 3.3 discusses the results of a learning assessment of a test user group. Clearly, all the results reported in this paper relate to the current status (initial deployment) of the MACE system. A more exhaustive assessment will be possible only after the project start-up phase, when the number of repositories will have increased significantly.

3.1 Performance Figures
In order to identify the system features that can be related to the learning process, a comprehensive scenario analysis was carried out. Seven MACE usage scenarios were identified and a detailed protocol analysis was performed. This analysis made clear that, from the standpoint of learning support, the key MACE system features are:

Searching tools: the availability of a varied set of search tools that allow quick searching on topics directly related to the current design focus, providing alternative knowledge paths that reflect the structure of the issues relevant to the project focus. In a typical fragment, for example, the student notices that the project is located in Paris and decides to search for other buildings in the same city, using the location map widget. The availability of the map widget lets the student straightforwardly enrich the search focus (i.e., Villa dall'Ava) with spatially related projects. From a cognitive perspective, interactions such as this foster the learner's construction of problem-oriented knowledge networks, which are at the basis of any learning process. The same can be said for the MACE hierarchy browser, which provides visual cues on semantically related information, allowing the learner to selectively address information sources and easily access alternatives and/or similarities.

Focused and structured contents: the quality (i.e., appropriateness, consistency, soundness, correctness) and the quantity of the provided contents. It is well known that distracting information is one of the main factors hindering knowledge acquisition with personal e-learning tools. Therefore, the availability of focused information at every step of the user-system interaction is mandatory for an information provider to qualify as a learning support system.
Usability: the effectiveness, efficiency, and satisfaction with which end-users can reach specific goals in the MACE environment.

Learning: the quality of the learning results of common educational tasks supported by MACE, compared with the learning results of the same tasks supported by traditional web search tools.
On this basis, an evaluation framework was designed that includes both subjective and objective analysis:

1. A student profiling questionnaire was administered to obtain the basic information for characterizing each student in terms of progress in the course of studies and his/her mastery of the English language and of Internet technology.
2. An evaluation of the system quality perceived by students was implemented by means of questionnaires covering the first three points of the analysis.
3. A learning assessment process comparing students using MACE with students working with traditional means on a design task was established, to obtain objective data on the impact of MACE on the architectural design learning process.

3.2 Evaluating the Perceived Quality of MACE
The model we adopted is based on four basic features describing the quality of a learning-supporting web environment: content, functionality, usability, and accessibility [19]. Hereafter we detail each of these features, specifying their meaning in the MACE context.

1. Functionality: We evaluated functionality in terms of students' general interest, information retrieval, usefulness of the special functions, and perceived effectiveness of the learning support. This feature sums up all aspects of the interaction between MACE and its end-users and reflects the system's functional and conceptual structure.
2. Content: We evaluated the perceived quality of the provided contents in terms of quantity, adequacy, completeness, and soundness: Is the quality of the information provided by MACE adequate to users' learning goals? Is the information content of MACE relevant, complete, and reliable?
3. Usability: We evaluated this feature in terms of the perceived quality of the information retrieval process and the effectiveness of the interface design and of the special functions.
4. Accessibility: This aspect concerns several questions: Is connectivity adequate? Are the availability and capacity of the server adequate? Is the product browser-dependent? Is the web address easy to recall? Is it findable through the more common search engines?

The survey was performed by administering a questionnaire consisting of 12 Likert-scale items to 40 students from two universities in different EU countries. Each item of the questionnaire generally saturates more than one of the four basic features. For each item, the user had to express agreement with a given statement on a 5-point scale. A simple Kiviat graph (see Figure 4) summarizes the value of each feature, ranging from 1 to 5 on the following scale: ‘very bad’, ‘bad’, ‘good’, ‘very good’, ‘excellent’.
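Aggregating such a questionnaire, where one item can saturate several features, might be sketched as follows; the item-to-feature mapping and the sample ratings below are hypothetical, not the study's actual instrument.

```python
from statistics import mean

# Hypothetical mapping: each Likert item loads on one or more features.
ITEM_FEATURES = {
    "q1": ["functionality", "usability"],
    "q2": ["content"],
    "q3": ["accessibility", "usability"],
}

def feature_scores(responses, item_features):
    """responses: {item: [1..5 ratings from all students]}.
    A feature's score is the mean of the mean ratings of its items."""
    per_feature = {}
    for item, features in item_features.items():
        item_mean = mean(responses[item])
        for f in features:
            per_feature.setdefault(f, []).append(item_mean)
    return {f: round(mean(vals), 2) for f, vals in per_feature.items()}

scores = feature_scores(
    {"q1": [4, 3, 4], "q2": [3, 3, 2], "q3": [4, 4, 5]},
    ITEM_FEATURES,
)
```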
The averaged acceptance value of MACE is 3.3, which approaches the ‘very good’ level. Analyzing the single features on a relative scale, it emerges
Fig. 4. The Kiviat graph as resulting from end user perceived quality assessment of the MACE system
that ‘usability’, which relates mainly to the interface design, scores the highest value, while ‘contents’ scores the lowest. ‘Functionality’, which concerns the conceptual structuring of the content organization and of the navigation tools, scores approximately two tenths of a point more than ‘contents’ and one tenth less than ‘usability’. This means that the perceived value of the conceptual structuring of the application domain (i.e., architecture), which was the principal strategy for supporting meaningful learning during user-system interaction, emerges as one of the most positive aspects. The low score of the ‘contents’ feature is mostly due, we believe, to the current development status of the project, which in this phase does not yet offer enough content to cover all the information needs of a typical student design task. These evaluations reflect a general trend of the MACE system that is independent of the students' affiliation, nationality, and, consequently, language. Figure 5 shows the same Kiviat graphs for the two groups of students from different universities, in Spain and Italy. The only significantly varying feature is ‘accessibility’, which is related to contingent technical parameters such as bandwidth.
Fig. 5. The Kiviat graph as resulting from end user perceived quality assessment of the MACE system. The two graphs are related to groups of students belonging to two universities of different EU countries.
3.3 Assessment of MACE in a Real Learning Process
The third step of the experimentation phase is to quantify and qualify the impact of the MACE system on learning performance. The proposed assessment methodology concerns the project's outcome in terms of the improvement (support and enrichment) of common learning processes in real educational environments. To accomplish this goal, we needed to define experimental strategies and methodologies suitable for obtaining consistent and useful data. Since MACE is not a complete e-learning platform but is limited to information access, the assessment procedure and the impact evaluation focused on knowledge acquisition rather than on the achievement of a whole competence.

Experimentation setup. The proposed MACE assessment methodology is basically an ex-post evaluation. It follows four steps:

1. Present students with real design tasks requiring them to apply relevant knowledge and skills;
2. Capture objective usage data of the user-system interaction during the information gathering phase, through direct observation and system logs;
3. Evaluate the degree of learning by examining the learners' work results;
4. Analyze and compare the data logs to correlate the various system features with the learning performance.

The students were assigned a task that concerned, depending on the nature and needs of the course, either the investigation of a given subject or the development of a design fragment. In the first case, the students had to prepare a PowerPoint presentation (5-10 slides) illustrating the results of the research/investigation carried out in the classroom. In the second case, the students had to produce a set of drawings and technical documents for their solution. Both tasks had to be performed in a well-defined time span, which was set by the teacher on the basis of the task's difficulty and scope.
A preliminary questionnaire aimed at evaluating the learners' entry level of competence and experience was filled in by all students before starting the assignment. The evaluation of the students' work was carried out for the whole class, in a blind fashion, and was based on an evaluation grid of six items: the focus of the work, the richness of the contents, the consistency of the contents, the structure of the contents (interrelationships), the originality and elaboration level, and the variety of the origins of the contents (repositories). The MACE experimentation started on September 1st, 2008 and will last until June 30th, 2009, encompassing two semesters organized in three groups of students belonging to three different universities in two European countries.

Preliminary results. The results presented in this section concern 40 students belonging to two universities in two different EU countries. They are limited
to the comparison of MACE vs. non-MACE users in terms of learning performance. The analysis was carried out by statistically correlating students' features with their learning performance. Bayesian networks of increasing complexity and structure [20] were used to build statistical models suiting the different requirements of the analysis. Despite its simplicity, the model allows us to conclude that the MACE students performed better than the reference group: the MACE students' average performance grade is 4.14, while that of the reference group is 3.65. In the following, we examine whether the usage of the MACE system was the strongest conditioning factor or whether other student qualities unintentionally played a relevant role, even though the students were selected randomly. In order to clarify this aspect, the basic Bayesian model was extended with other key aspects of the students' profiles recorded in the questionnaire administered at the beginning of the experimentation. More specifically, the following aspects were taken into account:

1. The student's level of English practice: to test whether the foreign language was an access barrier.
2. The student's domain competences: expressed as the number of exams passed, which is proportional to the progress in the curriculum and, on average, to the competences acquired by the student.
3. The level of Internet usage: this captures the student's familiarity with Internet navigation.
4. The number of search engines used most frequently: this further specifies the Internet node, focusing on searching and browsing.

Based on the sensitivity analysis, we can conclude that the positive trend in students' performance observed in the previous model depends essentially on the fact that the students used MACE. Finally, we discuss in what sense the MACE system affects students' performance. To this end, a more complex model that correlates the student's TYPE with the more specific evaluation figures: Consistency, Clearness,
Fig. 6. Histogram showing the percentage increase in the different aspects of students' performance due to the use of MACE
Focus, Originality, Structure, and Richness of the work, and the number of Sources used, was built. The interpretation is straightforward: there is a positive increment in the expected values of all parameters. In particular, the Consistency, Clearness, and Structure of the work appear to be the qualities most affected (see Figure 6). The Focus of the work is less affected because it is already close to the maximum of the scale and is therefore ‘saturated’. Sources and Richness register lower increments. One plausible conclusion, which confirms the results of the previous analysis, is that the MACE information structuring (search interface, taxonomy, visual aids, etc.) performs well and significantly helps students' meaningful learning, but the MACE system still lacks a content base sufficient to boost the richness of the students' work.
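The kind of group comparison and sensitivity check described above can be sketched with synthetic data; the records below are invented, and the stratified difference is only a crude stand-in for the Bayesian sensitivity analysis actually used in the study.

```python
from statistics import mean

# Synthetic student records (invented grades and profile fields).
students = [
    {"group": "MACE", "grade": 4.5, "english": "high"},
    {"group": "MACE", "grade": 3.8, "english": "low"},
    {"group": "reference", "grade": 3.9, "english": "high"},
    {"group": "reference", "grade": 3.4, "english": "low"},
]

def mean_grade(rows, **conditions):
    """Mean grade over the rows matching all given field=value conditions."""
    sel = [r["grade"] for r in rows
           if all(r[k] == v for k, v in conditions.items())]
    return round(mean(sel), 2)

# Overall effect: MACE group vs. reference group.
overall_gap = mean_grade(students, group="MACE") - mean_grade(students, group="reference")

# Sensitivity check: does the gap survive within a stratum of a profile
# variable (here, English level)? If it does, that variable alone does
# not explain the difference.
gap_high = (mean_grade(students, group="MACE", english="high")
            - mean_grade(students, group="reference", english="high"))
```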
4 Conclusion
This paper briefly presents the new extensions to the MACE system and portal, through which we enable access to learning resources from various, highly heterogeneous repositories. The explorative learning styles used in architecture education are supported through new ways of accessing learning resources, based on unified, comprehensive descriptions of the learning resources. In addition, the MACE social layer enables social browsing and Web 2.0-style community functionality. The evaluation of the MACE portal, and thereby of the MACE system, clearly indicates that the targeted provision of learning resources increases the quality of the results produced by university students. We conclude that by improving access to learning resources in a way that addresses students' needs, students can focus on the activity of learning instead of wasting time finding appropriate learning resources tailored to their needs.
References

1. Beckmann, J.: Virtual Dimension: Architecture, Representation, and Crash Culture. Princeton Architectural Press (1998)
2. Condotta, M., Ponte, I.D.: Digipolazione architettonica, nuovi software convertiti. Master's thesis, Universita IUAV di Venezia (2002)
3. Marchionini, G.: Exploratory search: from finding to understanding. Commun. ACM 49(4), 41–46 (2006)
4. Stefaner, M., Vecchia, E.D., Condotta, M., Wolpers, M., Specht, M., Apelt, S., Duval, E.: MACE – enriching architectural learning objects for experience multiplication. In: Duval, E., Klamma, R., Wolpers, M. (eds.) EC-TEL 2007. LNCS, vol. 4753, pp. 322–336. Springer, Heidelberg (2007)
5. Memmel, M., Schirru, R.: Sharing digital resources and metadata for open and flexible knowledge management systems. In: Tochtermann, K., Maurer, H. (eds.) Proceedings of the 7th International Conference on Knowledge Management (I-KNOW), Know-Center, Graz, September 2007, pp. 41–48. Journal of Universal Computer Science (2007), ISSN 0948-695x
6. Memmel, M., Schirru, R.: ALOE white paper. Technical report, DFKI GmbH (2008)
7. Prause, C., Ternier, S., de Jong, T., Apelt, S., Scholten, M., Wolpers, M., Eisenhauer, M., Vandeputte, B., Specht, M., Duval, E.: Unifying learning object repositories in MACE. In: Massart, D., Colin, J.N., Assche, F.V. (eds.) LODE. CEUR Workshop Proceedings, vol. 311. CEUR-WS.org (2007)
8. Wolpers, M., Najjar, J., Verbert, K., Duval, E.: Tracking actual usage: the attention metadata approach. International Journal of Educational Technology and Society 10(3) (2007)
9. Moon, J.A.: Reflection in Learning & Professional Development. Routledge (1999)
10. Zimmermann, A., Lorenz, A., Oppermann, R.: An operational definition of context. In: Kokinov, B.N., Richardson, D.C., Roth-Berghofer, T., Vieu, L. (eds.) CONTEXT 2007. LNCS (LNAI), vol. 4635, pp. 558–571. Springer, Heidelberg (2007)
11. Kamp, H., Reyle, U.: From Discourse to Logic: Introduction to Model-theoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory (Studies in Linguistics and Philosophy). Springer, Heidelberg (1993)
12. Manning, C., Schuetze, H.: Foundations of Statistical Natural Language Processing. MIT Press, Cambridge (1999)
13. IEEE Learning Technology Standards Committee: IEEE Standard for Learning Object Metadata (draft). IEEE Standard 1484.12.1 (2002)
14. Duval, E., Hodgins, W., Sutton, S., Weibel, S.L.: Metadata principles and practicalities. D-Lib Magazine 8(4) (April 2002)
15. Niemann, K., Wolpers, M.: Modeling vocabularies in the architectural domain. In: ICDIM, pp. 314–319. IEEE, Los Alamitos (2008)
16. Jacobs, I., Walsh, N.: Architecture of the World Wide Web, Volume One. W3C Recommendation (2004)
17. Cunningham, H., Maynard, D., Bontcheva, K., Tablan, V.: GATE: A framework and graphical development environment for robust NLP tools and applications. In: Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics (2002)
18. Kirkpatrick, D.L.: Evaluating Training Programs: The Four Levels. Berrett-Koehler Publishers (1998)
19. Fresen, J.W.: Assurance of web-supported learning: Processes, products and services. In: Proceedings of the 6th Annual Conference on World Wide Web Applications (WWW) (September 2004)
20. Korb, K.B., Nicholson, A.E.: Bayesian Artificial Intelligence. Chapman & Hall, New York (2004)
Phantom Tasks and Invisible Rubric: The Challenges of Remixing Learning Objects in the Wild

David E. Millard1, Yvonne Howard1, Patrick McSweeney1, Miguel Arrebola3, Kate Borthwick2, and Stavroula Varella3

1 School of ECS, University of Southampton
{dem,ymh,pm5}@ecs.soton.ac.uk
2 School of Humanities, University of Southampton
[email protected]
3 School of Languages and Area Studies, University of Portsmouth
{Miguel.Arrebola,Stavroula.Varella}@port.ac.uk
Abstract. Learning Objects are atomic packages of learning content with associated activities that can be reused in different contexts. However, traditional Learning Objects can be complex and expensive to produce, and as a result relatively few are available. In this paper we describe our work to create a lightweight repository for the language-learning domain, called the Language Box, where teachers and students can share their everyday resources and remix and extend each other's content, using collections and activities to create new Learning Objects more easily. However, in our interactions with the community we have discovered that practitioners find it difficult to abstract their teaching materials from their teaching activities and experiences; this results in Phantom Tasks and Invisible Rubrics that can make it difficult for other practitioners to reuse their content and build new Learning Objects.

Keywords: Open Educational Resources, Remixing, Learning Objects.
1 Introduction

Teachers and lecturers rely on good teaching resources to support their teaching activities. Many of these resources are created to support specific courses at specific institutions, and for many years researchers have been exploring ways in which teachers could share the effort of creating resources by building reusable Learning Objects that wrap up a set of complementary resources in an atomic package [1]. Learning Objects are typically deployed through a Virtual Learning Environment (VLE), and as a result are relatively heavyweight, with sophisticated internal structure and metadata designed for experts. But increasingly we are seeing a trend of practitioners rejecting the formality and overhead of using a VLE and turning to resources in-the-wild: content that is online and shared through public sites such as Wikipedia, YouTube and iTunes. Unlike traditional Learning Objects, these resources are relatively simple – often single files with content that is contextualized to its original use (for example, slides that are explicitly part of a larger course). Many institutions, such as MIT in the USA

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 127–139, 2009. © Springer-Verlag Berlin Heidelberg 2009
128
D.E. Millard et al.
and the Open University in the UK, are embracing this more open lightweight approach, and the term Open Educational Resources (OER) has emerged to refer to this type of shared content [2]. We have been involved in developing a repository for OER content for the Language Learning community. The Language Box is a lightweight repository based on the EPrints platform; it encourages users to share their everyday teaching resources without the overhead of complex meta-data or structures [3]. The Language Box includes a number of mechanisms to support the reuse and remixing of content, by allowing users to create and share collections of resources, and augment each other’s materials with new activities. We hoped that we would see new types of lightweight Learning Objects emerge from community interaction, as users grouped together useful content with instructions for its use. However, although the mechanisms are well received in workshops, they are underused in practice, and we have seen users struggle to express their resources in anything other than the most simplistic way. In this paper we describe how the Language Box supports resource reuse and remixing, and explore why users seem unable to take advantage of these systems. Section 2 puts our work in context with other efforts to support reuse of educational resources. Section 3 describes the history and design behind the Language Box and Section 4 presents the data model that underpins our simple remixing facility. Section 5 explores why users have difficulty abstracting their resources into reusable parts, and we introduce Phantom Activities and Invisible Rubrics as examples of this problem. Finally Section 6 concludes the paper and describes how we plan to address these issues in future Language Box updates.
2 Background

The IEEE describes a Learning Object as "any entity, digital or non-digital, that may be used for learning, education or training" [4]. This broad definition is supported by formal specifications of how to describe digital Learning Objects in the IEEE LOM standard [5]. Despite interest from the educational and educational research communities, Learning Objects have not been as successful as their proponents hoped. Learning Objects have been criticized for using complex terminology that is not meaningful to ordinary practitioners [6], and their sophisticated structure (for example, using packages with manifests to describe their contents) has also meant that ordinary practitioners do not usually have the skills to create a formal Learning Object that could be deployed on a VLE [7]. Those Learning Objects that have been created can also be difficult to find; Learning Object repositories address this by storing Learning Objects in an open and accessible way [8] so that they can be easily browsed and deployed [9]. Research on Learning Object repositories has explored how they might be standardized [10], and also how graphical browsing interfaces might help users find Learning Objects that match their requirements more quickly [11].

Phantom Tasks and Invisible Rubric: The Challenges of Remixing Learning Objects

As well as formal Learning Objects, teaching and learning repositories can deal with tutor-created content, and with shared resources that have been discovered on the wider web [12]. For example, Merlot is a repository of several thousand online resources that have been peer-reviewed for quality [13]. Some repositories allow their users to create virtual structures to help manage the Learning Objects; for example, the ResourceCenter allows users to collect resources together into SCORM-compliant structures [14]. However, the Web has evolved a much more lightweight remix culture that encourages users to be flexible with authorship and experiment with each other's content [15]. It has been argued that this attitude could be successfully extended to teaching materials, and might help create a culture of sharing that is more successful than the Learning Object economy [16].
3 The Language Box

The Language Box is an attempt to re-imagine a Learning Object repository in a Web 2.0 way. We have embraced a lightweight sharing and remixing approach and have opted for a simple repository that allows users to store their everyday materials, view them online in a browser, leave feedback and suggestions, and use simple collection and extension mechanisms to help evolve new resources over time.

3.1 Motivation and History

We have been involved in creating teaching and learning repositories for several years; our early work focused on providing a repository for Learning Objects for the Language Teaching community. This repository, which was called CLARE, was evaluated with teachers and lecturers, and although they appreciated the repository as a way of obtaining learning resources, it was clear that there was a mismatch between the sophistication of Learning Objects and their own digital assets and skills. Our workshops highlighted four key problems with the Learning Object approach:

1. Complex Metadata – the complexity of the deposit process was a significant barrier to practitioners; the problem lies in the need to specify a large number of meta-data fields. CLARE used a variation of the UK LOM Core, a schema which includes 25 required fields (and a further 27 recommended fields) [7]. It was clear that while professional Learning Object developers were prepared to take the time to understand and complete the schema, everyday sharers would not be.

2. Unfamiliar Terms – LOM also uses pedagogical terms that are sometimes unfamiliar to practitioners. For example, the schema would talk about scaffolding, but teachers would talk about supporting materials. An everyday repository needs to use simple, clear terms that relate to practice.

3. Content Packaging – CLARE made Learning Objects available as compressed zip files, containing an XML manifest.
Most of the teachers in our workshops had encountered compressed zip files before, but many did not really understand what they were, or how to open them. Those that did were confused by the internal structure of the Learning Object and baffled by the XML manifest. Teachers expect the materials downloaded from a repository to come in a familiar format, which matches the digital resources that they create themselves.
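The packaging barrier can be made concrete with a short sketch. The snippet below (our illustration, not part of CLARE) uses Python's standard library to open such a package and pull the declared files out of its XML manifest; the `imsmanifest.xml` name and the `resource`/`href` layout follow one common packaging convention and are assumptions here.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

def list_package(zip_bytes):
    """Return (all files in the package, resources declared in the
    manifest). XML namespaces are stripped to keep the sketch short."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        entries = zf.namelist()
        resources = []
        if "imsmanifest.xml" in entries:
            root = ET.fromstring(zf.read("imsmanifest.xml"))
            resources = [el.get("href") for el in root.iter()
                         if el.tag.split("}")[-1] == "resource"
                         and el.get("href")]
    return entries, resources
```

Even this two-step unpacking (unzip, then interpret a manifest) is more than many practitioners were comfortable with, which is why the Language Box stores plain files instead.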
4. Lack of Contextual Information – despite the amount of meta-data on each Learning Object, they still lack contextual information about how they have been used by other practitioners. Unstructured feedback from other users, such as simple comments, is far more important to teachers and lecturers, in terms of helping them decide if a resource will be useful, than the formal descriptions created by the Learning Object author.

We concluded that the requirement for simplicity outweighed other needs, such as cross-indexing and quality control. When we revised CLARE into the Language Box, we wanted to make it as cheap as possible to add materials to the repository, and wanted complexity and detail to emerge through use, rather than being specified up front.

3.2 Design Methodology

When designing the Language Box we turned to popular Web 2.0 sharing sites such as YouTube and Flickr for inspiration. We concluded that whereas traditional repositories (as typically used for research publications) are about Archiving content, a sharing-style repository is more about Hosting materials online with the minimum overhead to the user (such as YouTube's inline video tool), allowing users to Organise their own materials alongside others (such as Flickr's albums) and creating a Community of users (through profiles, tagging, statistics and commentary mechanisms that can be found on both sites). We found it interesting that Sharing is not the key service: users place content online with a specific audience in mind, but often this is an act of communication rather than sharing (for example, someone uploading holiday photographs may be trying to reach their extended family; they are not placing photos online for others to repurpose). Sharing in the greater altruistic sense seems to be a side effect of more pragmatic, selfish motives.

We listed three key objectives for the Language Box based on the services of Hosting, Organisation and Community:

1. Hosting: Ability to preview online
2. Organisation: Ability to create public collections and extensions
3. Community: More prominent user presence through profiles
We concentrated on simple atomic resources with no content packaging, and used a minimum set of manual metadata to describe them (only the title is a required field). Because we have an inline preview tool, users can use resources from within their web browser without having to download them. We also encourage users to make their materials as public as possible through the use of Creative Commons licenses.

3.3 Implementation

The Language Box is based on the EPrints repository platform, heavily modified through client-side JavaScript and a Flash-based preview tool (Figure 1).
Fig. 1. The Language Box Profile Page (left) helps create identity, and provides a central page for users to access all of their uploads. The Resource Page (right) allows users to view the metadata and download files, but also includes a cover flow style preview tool for inline viewing of multimedia formats (for example, video, audio, slides and documents).
4 Simple Remixing – The Language Box Model

As part of our efforts to support organisation and build community we wanted to include some simple remixing tools in the Language Box. Some sharing sites, most notably video editing sites such as Jumpcut (http://www.jumpcut.com/), include quite sophisticated remixing tools that allow audio, video and images to be layered together in complex ways; for the Language Box we needed something that was simpler and more granular, but which fitted the sort of pedagogical activities that our users would be interested in.
Early in our project we undertook an extensive co-design process with a number of language teachers and e-learning specialists in order to create an appropriate remix model. Very early on we identified the need for Activities: instructions on how to use a resource for a particular teaching or learning task. Initially we modeled these as a type of comment, but it quickly became apparent that many teachers see activities not as ethereal instructions, but as concrete items in their own right, often with files directly related to them (for example, in the form of a task sheet). As a result, in our later iterations Activities became explicit items in the repository with their own page. This way they can have additional files uploaded to support them, and their URL can be circulated independently of the resource that they are based on. Figure 2 shows the data model that came out of this process. It consists of three types of EPrint (objects in the repository): Resources, Activities and Collections.
Fig. 2. The Language Box Data Model. The top three types are specializations of an EPrint (a deposit in the repository with its own metadata and page); they represent the three different types of deposit supported by the repository.
The key item in the repository is a Resource; this is an atomic unit of teaching material such as a set of slides or a video. EPrints uses a Document object to represent component files such as HTML with CSS, but this is invisible to users. Resources can include multiple documents if those files should be considered together, for example the text of a newspaper article and accompanying scanned copy. Resources contain no information about how they should be used in teaching or learning. This makes it easy to repurpose Resources without modifying them.
Fig. 3. A Collection page on the Language Box (left), and an Activity page (right). At the bottom of the Activity page there is a link through to the original resource.
Resources can be extended with an Activity (shown in Figure 3). An Activity is a set of instructions (possibly in additional files) that describe one potential use of a specific Resource. For example: “Read the article and then answer the questions on this worksheet”. Resources can have multiple Activities, and while Resources can only be edited by their authors, anyone can annotate a Resource with their own Activity. Activities have their own page in the repository that is independent from (although linked to) the original resource. This makes it possible for teachers to create pages in the repository for their own activities, even if those activities are based on someone else’s resources. Finally, Resources and Activities can be brought together into a Collection (also shown in Figure 3). Users can create collections containing both their own material and items uploaded by others. The Language Box doesn’t make any assumptions about how a user will use a collection, for example, it could be used to gather together useful items on a topic, or to organize resources for a course.
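The model in Figure 2 can be summarized in code. The sketch below is our own shorthand for the three deposit types, with invented field names rather than the actual EPrints schema; its point is simply that a Resource carries only content, an Activity points at one Resource (optionally with files of its own), and a Collection may mix deposits from any user.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EPrint:
    """A deposit in the repository with its own metadata and page.
    Only the title is required in the Language Box."""
    title: str
    depositor: str = ""

@dataclass
class Resource(EPrint):
    """An atomic unit of teaching material, e.g. slides or a video.
    Several component files may be grouped if they belong together."""
    files: List[str] = field(default_factory=list)

@dataclass
class Activity(EPrint):
    """Instructions for one potential use of a specific Resource;
    anyone may attach an Activity to someone else's Resource."""
    resource: Optional[Resource] = None
    files: List[str] = field(default_factory=list)

@dataclass
class Collection(EPrint):
    """A user-curated grouping of Resources and Activities."""
    items: List[EPrint] = field(default_factory=list)
```

For example, a scanned newspaper article and its transcript would be one Resource with two files, and a worksheet based on it would be an Activity linking back to that Resource.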
Fig. 4. Initial survey results. Top: Where do practitioners find resources? Middle: How do practitioners create new activities? Bottom: What type of resources do they use?
5 Challenges for Practitioners

We have held a number of workshops with members of the Language Teaching community in order to understand their current use of technology, their attitudes to it, and their responses to the Language Box. Despite a high level of technology use, we have discovered that many practitioners are not used to managing their own digital assets, and find it difficult to adapt their practices for the online world.

5.1 Digital Resource Use

At the beginning of our project we surveyed the community in order to explore their current levels of technology use; we sent out a questionnaire to over a thousand practitioners and received 201 responses. Figure 4 shows some of our initial findings. We discovered that practitioners are very proactive in locating resources for themselves, with less than half relying on traditional sources such as books to direct them. They also used a wide range of multimedia types – this may be because of the nature of language teaching, where video and audio sources have always been important. Most practitioners also generated digital activities themselves; most used the VLE, but many also used Hot Potatoes.
Fig. 5. Usability Evaluation. Left: How clear did users find the terms used in the Language Box? Right: How easy was it to do various activities in the Language Box?
This made us hopeful that the users of the Language Box would be able to use the facilities we had provided to create activities based on other people's content. However, although we had 200 deposits in the first few months after the Language Box went live, users created only 22 activities. In all cases these activities were created by the same person who deposited the original resource, so we have no instances of a user creating an Activity based on someone else's deposit. At a later workshop we surveyed the attendees in order to find out if the problem lay with the technology; the results are shown in Figure 5. Although this was only a small usability study (11 participants), we discovered that users were mostly happy with our use of terms (with all users rating Activity and Resource between OK and Very Clear), and that most users (8/10) found that creating an Activity was Easy or Very Easy. From this it was clear that the issue was not one of usability.
5.2 Resources vs. Activities

At our most recent workshop we spent some time discussing the issue of Activity creation with participants. We suspected that the issue might be that participants were unable to decide whether something should be uploaded as a resource or an activity, so we ran a small exercise (12 participants) to see if they could match up materials that they might use with our terms. The results are shown in Table 1.

Table 1. Classification of Items by Practitioners as either a Resource or an Activity

No.  Description of Item                                  Resource  Activity  Both  Other
 1   Video of a conversation between two French              10        -       2    -
     students about university life
 2   A selection of images of Polish food                    11        -       1    -
 3   A transcript of an audio recording of a                  9        -       3    -
     lecture on Spanish history
 4   A Hot Potatoes activity (html web page)                  2        9       -    1 (Resource Exploitation)
     which practices an element of grammar
 5   A powerpoint file which accompanied a                    6        2       4    -
     lecture on British politics
 6   A handout explaining how to write a literature           4        1       7    -
     review, including some revision questions
 7   A set of exam questions for first-year Italian           7        1       3    1 (Reference)
 8   A set of guidelines on how to produce a podcast          7        1       3    1
 9   Some teaching notes to accompany a video of              6        3       3    -
     bull-fighting in Spain, which you found on the
     Language Box
10   An audio file containing discussion questions            2        5       5    -
     for a seminar on linguistics
11   A quiz sheet on the environment, inspired by a           1        9       1    1
     powerpoint file you found on Language Box
12   A reading list for a first-year Russian course          12        -       -    -
13   A poster advertising a Language-learning café            7        -       1    4 (advert)
14   A grammar exercise website with lots of                  4        2       6    -
     interactive grammar games
If it were clear how to classify activities then we would expect to see all 12 practitioners classifying in the same way, but instead we see a great deal of variation. We followed the questionnaire with a discussion session in order to understand the reasoning behind some of their choices. We discovered that the problem lies in practitioners’ ability to abstract activities from the resources that they use.
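One simple way to put a number on that variation is the share of respondents behind the most popular choice for each item. The helper below and its sample counts are our own illustration; the counts echo the shape of rows in Table 1 but should not be read as the study's exact figures.

```python
def agreement(counts):
    """Fraction of practitioners behind the modal category for an item;
    1.0 would mean everyone classified it the same way."""
    return max(counts.values()) / sum(counts.values())

# Illustrative rows: a near-unanimous item and a highly split one.
images_item = {"Resource": 11, "Both": 1}
audio_item = {"Resource": 2, "Activity": 5, "Both": 5}
```

Under these counts the images item scores about 0.92, while the audio item drops to about 0.42, well below the 1.0 that a clear-cut classification scheme would produce.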
5.2.1 Phantom Tasks

A key issue is that many resources have activities that are not specified in an explicit way, but are heavily implied by the resource; we call these Phantom Tasks. This is clearest in items 1–3 of Table 1, which are straightforward resources with no explicit tasks. Despite this, several practitioners identified them as both an activity and a resource, and later told us that this was because the resource implied an activity – for example, a video of a conversation implies a simple comprehension task. Phantom Tasks exist for many types of resources, but for resources that strongly imply a task they can become overpowering, and become confused with the resource itself. For example, item 4 is a Hot Potatoes page; strictly speaking this is a resource that could be used in a number of different activities, but because of its structure it strongly implies that students should work through the exercises on the page on their own (perhaps as a revision or personal study task), and thus most of our participants thought it was an activity.

5.2.2 Invisible Rubrics

Another problem is that sometimes the activity part of a resource – the instructions on how to use it – can be so slight that practitioners do not see it as a separate item; we call these Invisible Rubrics. For example, item 7 is an exam paper; strictly speaking this is a set of questions (the resource) and a rubric (the activity), but practitioners do not perceive the rubric as an independent object, and are therefore confused about how the exam paper should be handled in the system. In general we found that unless the activity had a file associated with it (such as a class handout in the form of a PDF) it was likely that it would be invisible to practitioners.
For some resources this means that invisible rubrics become phantom tasks; for example, practitioners would upload an exam paper as a resource, but ignore the rubric (or include it in the exam paper) because a set of exam questions on its own strongly implies that it should be used for formal assessment or revision. Another aspect of invisible rubrics is that tasks that are instructions for teachers, such as the teaching notes of item 9, are considered resources by practitioners, and not activities. In contrast, the quiz sheet of item 11 is strongly identified as an activity because it is something for students to do. This was part of a general feeling that items intended for teachers are resources, while those intended for students are activities.
6 Conclusions

In this paper we have described our attempts to create a teaching and learning repository for the Language Teaching community that learns from the best practices of Web 2.0 sharing sites. Rather than heavyweight, pre-designed Learning Objects, our repository is built around lightweight sharing of everyday resources; however, we included a number of tools that allow users to extend and remix other people's resources, with the intention that this would result in more complex Learning Objects that emerge in the wild, over time and through real use. Although we have been pleased with the reception the Language Box has received, we have been disappointed with the amount of reuse of resources, and in particular with the low number of activities that have been created.
Through community engagement workshops we have discovered that the problem is not with practitioners' abilities to create digital content, nor with the usability of the tool, but with the level of abstraction that we ask of them. Teachers and lecturers already have a level of abstraction that they are familiar with working at: they talk at a business-object level about 'exam scripts', 'PowerPoint presentations' and 'lecture notes', and if we require them to further dissect these items and upload the parts separately then this is an additional overhead that confuses some users and is a disincentive to all.

Our intention is to simplify the Language Box data model so that users do not have to make an explicit choice as to whether something is a Resource or an Activity. Instead they will be able to upload anything as a resource, and then create variations that the system will link with the original. Rather than an item type, an activity then becomes a relationship between two resources.

If we want practitioners to use teaching and learning repositories then we not only have to streamline the depositing process and make using the system as easy as possible, but we also have to make sure that the object types in the repository match up with people's everyday experiences. Reuse and remixing of educational resources is possible, but only if we support it in the same messy and inconsistent way that it occurs in real life. We cannot all be information engineers; Phantom Tasks do exist, some Rubrics are Invisible, and our systems must be able to support them.
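As a sketch of our reading of this proposal (names and API invented for illustration, not the shipped Language Box), an activity becomes a typed link recorded between a new deposit and the deposit it is based on:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Resource:
    title: str

@dataclass
class Repository:
    resources: List[Resource] = field(default_factory=list)
    # An activity is no longer an item type: it is a relationship
    # between a derived resource and the resource it builds on.
    links: List[Tuple[Resource, str, Resource]] = field(default_factory=list)

    def deposit(self, title):
        resource = Resource(title)
        self.resources.append(resource)
        return resource

    def derive(self, original, title, relation="activity"):
        """Upload anything as a resource and record how it relates to
        the original, instead of forcing an up-front type choice."""
        derived = self.deposit(title)
        self.links.append((derived, relation, original))
        return derived
```

An exam paper and its rubric would then both be plain resources, with the rubric linked to the paper; nothing forces the rubric to be declared as an Activity up front.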
Acknowledgements

The work described in this paper was part of the JISC-funded Faroes project. The authors would like to thank the extended Faroes team, including Julie Watson, Marcus Ramsden and Adam Field, and also the Language Teaching community.
References

1. Downes, S.: Learning Objects: Resources for distance education worldwide. The International Review of Research in Open and Distance Learning 2(1) (2001)
2. Caswell, T., Henson, S., Jensen, M., Wiley, D.: Open educational resources: Enabling universal education. The International Review of Research in Open and Distance Learning 9(1) (2008)
3. Millard, D., Howard, Y., McSweeney, P., Borthwick, K., Arrebola, M., Watson, J.: The Language Box: Re-imagining Teaching and Learning Repositories. In: International Conference on Advanced Learning Technologies, Riga, Latvia, July 14-18 (2009)
4. IEEE LTSC: IEEE standard for learning technology - learning technology systems architecture (LTSA). IEEE Std 1484.1-2003, pp. 1–97 (2003)
5. IEEE LOM: The learning object metadata standard. IEEE, Tech. Rep. (2005)
6. Morrison, I., Currie, M.: What is a learning object, technically? In: Proceedings of WebNet (2000)
7. Bratina, T., Hayes, D., Blumsack, S.: Preparing Teachers To Use Learning Objects. The Technology Source (November/December 2002)
8. Nash, S.S.: Learning objects, learning object repositories, and learning theory: Preliminary best practices for online courses. Interdisciplinary Journal of Knowledge and Learning Objects (2005)
9. Neven, F., Duval, E.: Reusable learning objects: a survey of LOM-based repositories. In: Proceedings of the Tenth ACM International Conference on Multimedia, Juan-les-Pins, France, December 1-6, pp. 291–294. ACM, New York (2002)
10. Hatala, M., Richards, G., Eap, T., Willms, J.: The interoperability of learning object repositories and services: standards, implementations and lessons learned. In: Proceedings of the 13th International World Wide Web Conference (2004)
11. Klerkx, J., Duval, E., Meire, M.: Using information visualization for accessing learning object repositories. In: Proceedings of the Eighth International IEEE Conference on Information Visualisation (2004)
12. Thomas, A., Rothery, A.: Online repositories for learning materials: the user perspective. Ariadne (45) (October 2005)
13. Schell, G.P., Burns, M.: A Repository of e-Learning Objects for Higher Education. e-Service Journal, 53–64 (2002)
14. Hoermann, S., Hildebrandt, T., Rensing, C., Steinmetz, R.: ResourceCenter - A Digital Learning Object Repository with an Integrated Authoring Tool Set. In: Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2005, pp. 3453–3460. AACE, Chesapeake (2005)
15. Diakopoulos, N., Luther, K., Medynskiy, Y.E., Essa, I.: The evolution of authorship in a remix society. In: Proceedings of the Eighteenth ACM Conference on Hypertext and Hypermedia, pp. 133–136 (2007)
16. Lamb, B.: Dr. Mashup or, Why Educators Should Learn to Stop Worrying and Love the Remix. EDUCAUSE Review, 12–24 (July/August 2007)
Can Educators Develop Ontologies Using Ontology Extraction Tools: An End-User Study

Marek Hatala¹, Dragan Gašević², Melody Siadaty¹, Jelena Jovanović³, and Carlo Torniai⁴

¹ Simon Fraser University, Canada
² Athabasca University, Canada
³ University of Belgrade, Serbia
⁴ University of Southern California, USA
Abstract. Recent research has demonstrated several important benefits of using semantic technologies in the development of technology-enhanced learning environments. One underlying assumption of most current approaches is that a domain ontology exists. A second, unspoken assumption follows: that educators will build domain ontologies for their courses. However, ontologies are hard to build, and ontology extraction tools aim to overcome this problem. We have conducted an empirical study in which educators used current ontology tools to extract ontologies from their existing course material. The results are reported for IT and non-IT educators.

Keywords: ontology building, domain ontologies, ontology extraction, end-user study.
1 Introduction

The Semantic Web technologies seem to be a promising technological foundation for the next generation of e-learning systems [1]. Many authors have proposed the usage of ontologies in different aspects of e-learning, such as adaptive educational hypermedia, adaptive content authoring, personalization, user model sharing, and context capturing [2, 3, 4, 5]. This is an expected reaction, since e-learning is highly dependent on effective mechanisms for knowledge management capable of integrating the various activities that e-learning involves, such as course authoring and adaptation, and the provision of reliable and timely feedback to both students and teachers. In the European Union (EU) and Canada, considerable investment has already been put into research projects aimed at enhancing e-learning environments with Semantic Web technologies, such as LUISA (http://luisa.atosorigin.es), REWERSE (http://rewerse.net/), ProLearn (http://www.prolearn-project.org/), and Kaleidoscope (http://www.noe-kaleidoscope.org/). In Canada, the leading project addressing the use of ontologies in e-learning was the LORNET Research Network (2003-2008, http://www.lornet.org).

In our previous research, we have proven and exemplified the advantages of ontology-supported e-learning systems. In particular, in [6] we demonstrated how a combined use of a content structure ontology, a content type ontology and a domain ontology leads to significant improvements in searching repositories with learning content. In addition, we have also shown in [7] that if these three kinds of ontologies are complemented with a user model ontology and an ontology formally specifying the learning path to be followed by a student, then advanced levels of learning content personalization can be achieved as well. Finally, in our most recent research efforts [8, 9] we demonstrated the relevancy of the integrated use of these different kinds of e-learning ontologies for providing online educators with reliable, fine-grained and semantically rich feedback about the learning process.

However, the main problem with all approaches that have shown the benefits of ontology adoption in e-learning systems is that they assume that the required ontologies are available. This is not a realistic assumption, at least not for domain ontologies, i.e. ontologies that formalize the subject matter of learning courses. The lack of these ontologies is to be attributed to their creation process, which is overly difficult and time consuming for educational practitioners. Based on the experience of the knowledge capture and learning technology communities, and our experience from the abovementioned projects, the major obstacle to widespread use of ontologies in e-learning systems lies in the complexity of the ontology development process, especially when considered from the perspective of teachers and content authors who are typically unaware of ontology existence and relevancy altogether. Although, in recent years, the Semantic Web community has been showing a constantly increasing interest in automating the process of ontology development and thus reducing the required human effort, fully automatic ontology development is still in the distant future [6].

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 140–153, 2009. © Springer-Verlag Berlin Heidelberg 2009
Our experience, as well as the experience of other researchers in the field [10, 11, 12], has shown that unlike other kinds of ontologies relevant for e-learning, a domain ontology (i.e. an ontology formally specifying the concepts and relationships of a specific subject domain) cannot be reused across different subject domains, but has to be created anew for each domain. Another feature that makes domain ontologies distinct from the other aforementioned ontologies relevant for e-learning is the need for their constant evolvement, so that the semantics they capture do not lag behind the courses they are aimed to support.

A significant topic in our current research is to investigate how to reduce the efforts required for creating domain ontologies in educational systems, and thus implicitly enable easier and wider acceptance of ontology-based systems among educational practitioners. Our first step towards achieving this goal was to explore the existing approaches (and related tools) for ontology development, and we distinguished the following three general approaches:

1. Handcrafting ontologies from scratch. Even though this is currently the predominant approach, it is suitable only for ontological engineers (i.e., experts in the field of ontology development), since presently available tools, such as Protégé (http://protege.stanford.edu/), assume a background in knowledge engineering and familiarity with ontology languages.

2. (Semi)automatic ontology development using ontology learning tools. These tools aim at reducing the human intervention to supervision of the development process and refinement of the results [13]. Even though the state-of-the-art ontology learning tools, such as Text-2-Onto [14] and OntoLT [15], have a lot of advanced features, they still need a lot of improvements in order to be usable by non-experts.
142
M. Hatala et al.
3. Search and retrieval of ontologies from online ontology libraries. Created through contributions of the Semantic Web community, ontology libraries such as Swoogle (http://swoogle.umbc.edu/) [16], OwlSeek (http://www.owlseek.com/) or OntoSelect (http://olp.dfki.de/ontoselect) [17] offer a constantly increasing number of specific domain ontologies. However, supporting tools are needed to facilitate the searching process and the evaluation of the retrieved ontologies [6]. Even after a comprehensive literature review (including, for example, [13, 18]), we could not identify any research aimed at evaluating the level of adoption of these approaches among end users and identifying the requirements for enabling their widespread use. The work presented in this paper is a first reported attempt to conduct an empirical study that evaluates those approaches and related tools, taking into account both e-learning practitioners' requirements and the constraints imposed on ontologies by advanced e-learning systems.
2 Study Description In this section we describe the main components of the study and the processes used in its preparation and execution. The third section focuses on the analysis of the results and discussion. 2.1 Tool Selection Being aware of the complexity of conventional ontology editors (used for building ontologies from scratch) for non-ontology-savvy users, we focused on exploring the available ontology learning tools and online ontology libraries. Information about the current state-of-the-art tools for the selected approaches was collected by exploring the literature on ontology learning [13, 18, 19]. However, out of almost a dozen tools mentioned in research papers, only four were publicly available on the Internet: OntoGen (http://ontogen.ijs.si/), Text2Onto (http://ontoware.org/projects/text2onto/) and its predecessor TextToOnto (http://sourceforge.net/projects/texttoonto), and OntoLT (http://olp.dfki.de/OntoLT/OntoLT.htm). Those were the tools that we managed to download and install. We decided to use two of them for evaluation purposes (the other two were discarded either for being outdated or for depending on another proprietary tool): Text2Onto [14] and OntoGen. Text2Onto. Text2Onto is an ontology learning framework which supports the automatic or semi-automatic generation of ontologies from textual documents. It combines machine learning approaches with basic linguistic processing for learning atomic classes, class subsumption as well as object properties. The framework provides a graphical user interface from which the user can define the corpus (the collection of text documents) from which the ontology will be created, select the available algorithms to be applied for generating concepts/relations, and review the generated ontology.
The main problem with the tool is that it is not clear how the available algorithms and their combinations will affect the generation of the ontology, forcing users to try all the available options and review the generated ontology for each of them.
Can Educators Develop Ontologies Using Ontology Extraction Tools
143
OntoGen. OntoGen is more oriented towards semi-automatic ontology construction. It is an interactive tool that aids the user during the ontology construction process by suggesting concepts, automatically assigning instances to concepts, and providing a visual representation of both the ontology and the corpus it is built upon. To build an ontology, the user has to supply a set of documents that reflects the domain for which the ontology is to be built. The tool creates the root concept of the ontology and suggests names for it using the extracted keywords. In every step of the hierarchy development, OntoGen suggests subtopics of the currently selected topic, thus helping users to build a hierarchical organization of domain concepts. 2.2 Study Scoping The survey aimed at two well-defined groups of teachers with a low level of variability among their members. The groups belong to two completely opposite domains: Computer Science/Software Engineering/Information Technology and non-Computer Science/Software Engineering. The members of the former group are representative of those who are in general very familiar with complex software tools and may have some notion of ontologies (labeled as the IT group below), whereas the latter group (labeled nonIT) represents educators who are not aware of ontologies and knowledge representation and are less familiar with complex software tools. When designing the evaluation study we carefully tailored the procedure for conducting the survey and for formulating and formalizing the questionnaire used in the study. We conducted a simulation with the goal of estimating the sample size that, given the expected answers and the variability of the population, can maximize the statistical power of the experiments.
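A sample-size simulation of this kind draws Likert responses from a latent normal distribution, compares the groups with ANOVA, and counts how often the test rejects. The following is a minimal Python sketch of such a simulation (the study used SAS; the latent effect size and category cut points below are illustrative assumptions, not values from the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def likert(latent, cuts=(-1.5, -0.5, 0.5, 1.5)):
    """Discretize latent normal scores onto a 1-5 Likert scale."""
    return np.digitize(latent, cuts) + 1

def power(n_per_group=15, effect=1.0, sims=2000, alpha=0.05):
    """Monte Carlo power of a one-way ANOVA on two groups of Likert responses."""
    hits = 0
    for _ in range(sims):
        a = likert(rng.normal(0.0, 1.0, n_per_group))
        b = likert(rng.normal(effect, 1.0, n_per_group))
        _, p = stats.f_oneway(a, b)  # with two groups this equals a t-test
        hits += p < alpha
    return hits / sims

print(round(power(), 2))
```

Repeating this for a grid of group sizes and assumed effect sizes yields the power curve from which a target sample size can be read off.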
The target number of participants was set to 15 people for each group, since this was a reasonable tradeoff between statistical power (generally at least 80% for expected outcomes) and the actual capability of recruiting a large number of participants. The simulations assumed a latent normal distribution beneath the Likert-scale measurement, used Analysis of Variance (ANOVA) for comparing the tools, and were performed using SAS software. 2.3 Participant Selection The participants were required to have preferably a PhD degree, and at least a master's degree in their field of research. They also needed to have at least three years of experience in teaching or in course development, and were required to have substantial course material prepared for the whole duration of a course in the form of text documents (MS Word and PDF), HTML web pages, PowerPoint lecture slides, etc. The participants were recruited from university faculty by distributing an invitation via departmental mailing lists. Departments at the authors' respective universities were targeted, as well as, through the authors' connections, departments of our research partners at different post-secondary teaching and research institutions in Canada. The interested participants were screened by the research team for their background and the completeness of their course material, to guarantee the quantitative homogeneity of the basic input into the ontology building tools. While the study was initially approved in October 2007, after obtaining the ethics approval in May 2008, the
participant recruitment started in June 2008. However, due to the summer season recruitment was very low until September 2008, when we recruited most of the participants and started our experiments. The experiments ended in March 2009. 2.4 Study Procedure Participants submitted their course material to the research team upfront. The research team converted the materials into plain text, the format accepted by both tools. To avoid problems with tool installation on participants' machines, a dedicated remote server was prepared with all the tools installed and the course material ready in text format. The server was accessible via remote desktop, and assistance1 was available via the installed Skype communication software during the whole session. A step-by-step instructional package was made available to the participants a few days ahead of their scheduled session. The first section of this guide provided instructions on how to connect to the remote server and control the test environment. The other two sections included information about and instructions on how to use the selected ontology building tools: Text2Onto and OntoGen. During the session the participants were asked to build an ontology describing their subject of expertise from the course material they had provided, using the tools selected for the survey. No time constraints were imposed on the ontology creation process, with the majority of the sessions lasting between 2 and 3 hours. After the task had been completed, the participants were asked to fill in a three-part questionnaire: the first part related to the evaluation of the experiment itself (not reported in this paper), the second part was a 5-value Likert-scale questionnaire used for each tool, and the third was a qualitative part with six open-ended questions. The results are discussed in the following section.
3 Results and Discussion A total of 28 participants were recruited and retained for the study, 18 with an information technology background and 10 with a non-IT background. None had prior experience in ontology development. We originally aimed at a 50-50 split, but some participants from departments that indicated a nonIT background were categorized as IT due to their prior education in computing science. 3.1 Course Material and Ontologies Produced Before the beginning of the session the research team produced plain text documents from the course material provided by the participants. Using the OntoGen and Text2Onto tools the participants produced the ontologies. Basic descriptive statistics are provided in Table 1. Evaluation of the quality of the produced ontologies is beyond the scope of this paper and is reported elsewhere [20]. 1
A research assistant with experience in using the tools selected for the study, along with strong expertise to tackle any technical challenges that might arise during the study (e.g., problems with the connection, or differences between the types of computers [e.g., PCs and Macs] used by the participants).
Table 1. Descriptive statistics of course material and produced ontologies
                       N    Min     Max      Mean     SD
Number of words        25   4,345   293,555  59,372   78,000
Concepts (OntoGen)     22   3       26       8.6      5.9
Relations (OntoGen)    22   0       13       3.2      4.0
Concepts (Text2Onto)   22   228     4,602    1,494.0  1,015.0
Relations (Text2Onto)  19   0       155      53.4     44.4
3.2 Tool Evaluation Both tools were used by each participant to accomplish the same task: to build an ontology from the course material the participant provided for the study. No training on the tools was provided. After completing the task with the first tool (Text2Onto), the participants were asked to fill in a questionnaire that consisted of one section of questions about the tool itself and another section about the developed ontologies. Both sections used a 5-value Likert scale. Next, having used the second tool (OntoGen) for ontology development, the participants evaluated it as well. Finally, they filled in a questionnaire with qualitative questions for both tools. In Table 2 we report the user evaluation of both tools and their comparison from the perspective of tool usability and the support they provided during the ontology building process. The majority of the findings are negative. Question A.1 for both tools demonstrates that the participants felt that they would like to have more input into the process of generating ontologies. This was more pronounced in the case of Text2Onto, where user input is limited to removing discovered concepts and relationships from the proposed ontology, while in OntoGen users directly control which concepts in the ontology will be further expanded. In question A.2, in the case of OntoGen the participants felt that the tool was in control of the process, while in the case of Text2Onto they were neutral; there was a statistically significant difference between the two tools. Importantly, in neither case did the users feel that they were in control of the tool and the process, which negatively affects their attitude towards the tools. As indicated by question A.3, participants were neutral with respect to the tools' outcome. A more detailed discussion of ontology quality is in the next section. Similarly, participants were neutral with respect to how easy the process of obtaining the ontology was (question A.4).
Results for question A.5 indicate that the ability to visualize the ontology is important. In the case of Text2Onto, which provides a long list of proposed concepts and relationships with their weights but without a structural representation, the mean was significantly higher than in the case of OntoGen. In question A.6 participants expressed that the ability to manipulate the generated ontology is a desirable characteristic of any tool for this purpose. They felt more strongly about this in the case of Text2Onto than in the case of OntoGen.
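The fractional degrees of freedom reported in Table 2 (e.g. t(52.9)) indicate the unequal-variance (Welch) form of the 2-sample t-test. The A.2 comparison can be reproduced from the table's summary statistics alone, for example with SciPy (an illustrative check, not the authors' SAS analysis):

```python
from scipy.stats import ttest_ind_from_stats

# Question A.2 summary statistics from Table 2: (mean, SD, N).
t, p = ttest_ind_from_stats(mean1=3.21, std1=1.06, nobs1=28,
                            mean2=2.33, std2=1.03, nobs2=27,
                            equal_var=False)  # Welch's t-test
print(t, p)  # close to the reported t(52.9)=3.10, p=0.003
             # (small differences come from the rounded summary stats)
```

The same call with the A.5 statistics reproduces the reported t(49.2)=2.61, p=0.011 up to rounding.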
Table 2. Text2Onto and OntoGen comparison based on participants’ answers to the questionnaire. Both IT and nonIT participants are included. Values reported are from 5-value Likert scale (1-Completely Disagree, 2-Disagree, 3-Neutral, 4-Agree, 5-Completely Agree). (M, SD, N) values represent mean, standard deviation, and sample count. 2-sample t-test results are shown only for significant differences between Text2Onto and OntoGen.
Question                                                Text2Onto (M, SD, N)  OntoGen (M, SD, N)  2-sample t-test
A.1 I prefer to participate in the process of
    creating an ontology                                4.11, 1.01, 27        3.76, 1.05, 25      -
A.2 I felt in control of the process while
    obtaining the ontology                              3.21, 1.06, 28        2.33, 1.03, 27      t(52.9)=3.10, p=0.003
A.3 I am happy with the resulting ontology              2.96, 1.03, 28        2.44, 0.93, 27      -
A.4 I found the process of obtaining the ontology
    easy to accomplish                                  3.14, 1.07, 28        3.00, 1.51, 27      -
A.5 Visual representation of ontology helps the
    creation process.                                   4.17, 0.90, 28        3.44, 1.15, 27      t(49.2)=2.61, p=0.011
A.6 Being able to manipulate the generated ontology
    (e.g., add new elements, exclude those that I
    find unimportant, etc.) would improve my work.      4.25, 0.96, 28        3.88, 1.25, 27      -
A.7 It would have been good to have a sort of
    guidance during the creation process in order
    to know how the choices the tool provides
    would have affected the final result.               4.17, 0.98, 28        4.55, 0.84, 27      -
Finally, having guidance during the process is important (Question A.7). Especially in the case of OntoGen, with a mean of 4.55, participants were very uncertain how to proceed. 3.3 Influence of Participants' Background on Tool Evaluation We were also interested in whether participants' background has an influence on the perception of tool effectiveness. We processed the data independently for each tool, calculating and comparing the means of the two groups (IT and nonIT) using the 2-sample t-test. Although none of the differences between the two groups was found significant, there are several cases where participants' background caused a shift in the response mean. Table 3 shows the results of this comparison. In the case of Text2Onto the only difference worth commenting on is question A.5, where the IT group felt a stronger need for a visual representation (M=4.33) than the nonIT group (M=3.90). This preference was not visible in the case of OntoGen. The nonIT group found the process of obtaining the ontology easier (M=3.40) in the case of OntoGen than the IT group did (M=2.76). However, both means are in the middle of the scale, indicating a neutral position on this question.
Table 3. Comparison of tools’ evaluation based on the participants’ background. Answers were compared separately for Text2Onto and OntoGen. 2-sample t-test did not show any statistically significant differences between IT and nonIT groups. For question text and Likert scale values refer to Table 2.
Question  Text2Onto-IT     Text2Onto-nonIT  OntoGen-IT       OntoGen-nonIT
          (M, SD, N)       (M, SD, N)       (M, SD, N)       (M, SD, N)
A.1       4.17, 1.07, 17   4.00, 0.94, 10   3.80, 1.14, 15   3.70, 0.94, 10
A.2       3.27, 1.07, 18   3.10, 1.10, 10   2.23, 0.97, 17   2.50, 1.17, 10
A.3       2.88, 1.02, 18   3.10, 1.10, 10   2.29, 0.84, 17   2.70, 1.05, 10
A.4       3.11, 1.07, 18   3.20, 1.13, 10   2.76, 1.34, 17   3.40, 1.77, 10
A.5       4.33, 0.76, 18   3.90, 1.10, 10   3.47, 1.00, 17   3.40, 1.42, 10
A.6       4.33, 0.97, 18   4.10, 0.99, 10   3.76, 1.39, 17   4.10, 0.99, 10
A.7       4.22, 1.00, 18   4.10, 0.99, 10   4.41, 1.00, 17   4.80, 0.42, 10
The nonIT group also preferred more guidance (M=4.80) for OntoGen than the IT group (M=4.41). As noted above, both values indicate a serious need for guidance during the process. 3.4 Evaluation of the Produced Ontology The two evaluated tools produced ontologies of different sizes and complexity. In the second part of the questionnaire the participants were asked to evaluate the quality of the ontologies produced (see Table 4). All values were at the lower end of the scale. The participants perceived that OntoGen produced a significantly worse ontology than Text2Onto from the perspective of how effectively it describes the domain (Question B.1). However, OntoGen outperformed Text2Onto in having an appropriate number of concepts, describing the domain fairly rather than roughly as in the case of Text2Onto (Question B.3). Both differences were statistically significant. The participants would also like to have more relationships in the generated ontologies (Question B.2), and the quality of the generated concepts was considered fair (Question B.4). The results indicate that the current tools produce rather poor ontologies that are not very usable for this type of semantic web deployment in e-learning.

Table 4. Comparison of resulting ontologies as built using Text2Onto and OntoGen. Both IT and nonIT participants are included. Values reported are from a 5-value Likert scale: for questions B.1 and B.2 the scale is 1-Completely Disagree, 2-Disagree, 3-Neutral, 4-Agree, 5-Completely Agree. For question B.3 the scale is 1-Not enough even for a rough description of the domain, 2-Enough for a rough description of the domain, 3-Enough for a fair description of the domain, 4-Enough for a detailed description of the domain, 5-Too many even for a detailed description of the domain. For question B.4 the scale is 1-Poor, 2-Fair, 3-Good, 4-Very Good, 5-Excellent. 2-sample t-test results are shown only for significant differences between Text2Onto and OntoGen.

Question                                                Text2Onto (M, SD, N)  OntoGen (M, SD, N)  2-sample t-test
B.1 The ontology describes effectively the domain
    it is built for                                     3.10, 1.06, 28        2.48, 0.93, 27      t(52.5)=2.31, p=0.024
B.2 I would like to have some additional relations
    in the generated ontology.                          3.59, 0.97, 27        3.48, 0.89, 27      -
B.3 The number of concepts in the ontology are          2.29, 0.99, 27        3.15, 1.46, 26      t(43.8)=-2.48, p=0.016
B.4 The quality of concepts in the ontology are
    (i.e. the most important concepts are included)     2.50, 0.92, 28        2.15, 1.00, 26      -

3.5 Influence of Participants' Background on Produced Ontology We also analyzed the results for the ontologies produced by each tool separately, to find out whether participants' opinions are influenced by their background. Although the average opinions of the IT and nonIT groups differ in some cases, none of these differences proved statistically significant using the 2-sample t-test (see Table 5). Table 5. Evaluation of produced ontologies based on participants' background. Answers were compared separately for Text2Onto and OntoGen. The 2-sample t-test did not show any statistically significant differences between the IT and nonIT groups. For the question text and Likert scale values refer to the caption of Table 4.
Question  Text2Onto-IT     Text2Onto-nonIT  OntoGen-IT       OntoGen-nonIT
          (M, SD, N)       (M, SD, N)       (M, SD, N)       (M, SD, N)
B.1       3.22, 0.94, 18   2.90, 1.28, 10   2.29, 0.91, 17   2.80, 0.91, 10
B.2       3.77, 1.00, 18   3.22, 0.83, 9    3.70, 0.77, 17   3.10, 0.99, 10
B.3       2.27, 1.07, 18   2.33, 0.86, 9    2.87, 1.45, 16   3.60, 1.42, 10
B.4       2.38, 0.97, 18   2.70, 0.82, 10   2.06, 0.85, 16   2.30, 1.25, 10
For both tools (Text2Onto and OntoGen) the IT group would prefer to have more relationships identified in the ontology. In the case of the ontology produced by OntoGen, the nonIT group considered the number of concepts generated to provide a more detailed description than the IT group did. 3.6 Qualitative Evaluation of Tools After completing the tasks with both Text2Onto and OntoGen, the participants were asked to answer open-ended questions and judge tool intuitiveness, ease of interaction, the pros and cons of each tool, and whether the tools met their expectations. Based on the answers we developed a separate coding scheme for each of the questions. Three raters tested the scheme by applying it to five randomly selected questions and fine-tuned the coding manual. In the next step, the three raters applied the scheme independently to rate the answers. In the final step all differences were resolved through discussion in a meeting of the three raters2.
We wanted to report the inter-rater reliability. However, Cohen's kappa is applicable only to two raters with a single category assigned per answer, and we did not find a way to compute a comparable measure for the situation with multiple raters and multiple categories per answer.
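For the restricted case where each answer receives exactly one code, Fleiss' kappa does generalize agreement measurement to multiple raters (though, as the footnote notes, not to multiple codes per answer). A small numpy sketch with purely hypothetical codings (5 answers, 3 raters, 3 codes; all counts invented for illustration):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa from an items x categories matrix of rating counts."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                 # raters per item (assumed constant)
    p_j = counts.sum(axis=0) / counts.sum()   # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Each row is one answer; each cell counts how many of the 3 raters
# assigned that code to the answer (hypothetical data).
table = [[3, 0, 0],
         [0, 2, 1],
         [0, 0, 3],
         [2, 1, 0],
         [0, 3, 0]]
print(round(fleiss_kappa(table), 3))  # → 0.595
```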
The results were evaluated using cross tables, identifying differences between the tools and between the IT and nonIT groups. The results for each question are presented below. In the tables presenting these results, the percentage of answers is given for each code for the IT group, the nonIT group, and the total occurrence of the code for the tool. More than one code could be assigned to a single answer. We tested the significance of the differences between the groups using chi-square statistics. If there is a significant difference in code occurrence between the IT and nonIT groups, it is explicitly indicated in the table and the table caption, and we address the issue in the discussion text. Intuitiveness of the ontology building approach. In Table 6 four codes address intuitiveness explicitly: INT, LI, NVI, and GI. Although 25.9% of users found OntoGen to be intuitive and 37% found Text2Onto intuitive, the scores for negative comments are even higher. Text2Onto is much less intuitive, with over 48% of participants describing it as lacking intuitiveness or not very intuitive, compared to 26% for OntoGen. Moreover, for 14.8% of participants OntoGen became gradually more intuitive with use, as opposed to 7.4% for Text2Onto. Interestingly, both tools became gradually more intuitive for a larger number of nonIT participants. Overall, we consider these numbers to be very high for both tools. This indicates that in their current versions the tools are not suitable for direct use by educators without training. Additionally, participants commented on visualization, with 11.9% of participants considering OntoGen to provide a good visualization. A small number of participants reported that a visualization was missing. Finally, some participants explicitly compared one tool to the other, with 14.8% considering OntoGen a better tool than Text2Onto. Table 6. Intuitiveness of the ontology building approach.
The codes have the following meaning: INT-intuitive, LI- lack of intuitiveness, NVI-not very intuitive, GI-gradual intuitiveness with the use, MV-missing visualization, GV-good visualization, BOT-better than the other tool.
                 INT     LI      NVI     GI      GV      MV     BOT
OntoGen   IT     29.4%   -       23.5%   11.8%   5.9%    5.9%   11.8%
          nonIT  20%     10%     20%     20%     20%     -      20%
          Total  25.9%   3.7%    22.2%   14.8%   11.9%   3.7%   14.8%
Text2Onto IT     35%     35%     17.6%   -       -       5.9%   -
          nonIT  40%     30%     10%     20%     -       -      10%
          Total  37%     33.3%   14.8%   7.4%    -       3.7%   3.7%
The ease of interacting with and manipulating the tool. As can be seen in Table 7, the results were split down the middle for OntoGen, where 40.7% considered it easy to use and 37% considered it not very easy to use. More nonIT participants considered OntoGen not very easy. In the case of Text2Onto, 66.7% of all participants considered it easy to use, including a whopping 90% of the nonIT participants but only 52.9% of the IT participants. This difference was statistically significant with χ2 (1, N=27) = 3.89, p=0.049.
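This χ² value can be reproduced from a 2×2 contingency table. The sketch below assumes cell counts implied by the reported percentages (9 of 17 IT and 9 of 10 nonIT participants coded "easy"; these reconstructed counts are an assumption) and uses SciPy without Yates' continuity correction:

```python
from scipy.stats import chi2_contingency

# Rows: IT, nonIT; columns: "easy to use", "not easy".
# Cell counts reconstructed from the reported percentages (an assumption).
table = [[9, 8],   # IT:    9/17 ≈ 52.9% easy
         [9, 1]]   # nonIT: 9/10 = 90% easy
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(chi2, p)  # matches the reported chi2(1, N=27)=3.89, p=0.049
```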
Table 7. The ease of interacting with and manipulating the tool. The codes have the following meaning: Easy-easy to use, NVE-not very easy to use, VP-hard to manipulate the visualization, LF-lack of feedback, NC-user has no control over the process. The values marked with a star show a statistically significant difference between the IT and nonIT groups.
                 Easy    NVE     VP      LF      NC
OntoGen   IT     41.2%   29.4%   29.4%   -       11.8%
          nonIT  40%     50%     10%     20%     -
          Total  40.7%   37%     22.2%   7.4%    7.4%
Text2Onto IT     *52.9%  29.4%   -       -       5.9%
          nonIT  *90%    10%     20%     10%     -
          Total  66.7%   22.2%   7.4%    3.7%    3.7%
In 22% of cases participants described OntoGen's visualization as hard to manipulate, against 7.4% in the case of Text2Onto. A small number of nonIT participants found both tools lacking in feedback, while some IT participants felt they had no control over the process (11.8% for OntoGen). Overall, opinions on this question are split, and the usability of ontology building tools should be studied more carefully. An interesting pattern can be observed from this and the previous question for the Text2Onto tool. With its simpler interface that hides the structural aspects of the ontology, the nonIT group found it easy to use (90%), although 40% reported that it was not intuitive, or that it became intuitive only gradually (20%). Positive aspects of the tools. The results are presented in Table 8. Over 55% of participants thought that the biggest strength of Text2Onto is its ease of use, mainly because of the automatic generation of a large number of concepts and relationships. Again, a significantly larger number of the nonIT participants valued this feature as positive. Only 26% of the participants described ease of use as a positive characteristic of OntoGen. However, both groups valued the visualization aspects of OntoGen, with over 74% explicitly identifying visualization as a positive characteristic, while none did so for Text2Onto.

Table 8. Pros of the tools. The codes have the following meaning: Ease-ease of use/automatic generation of the ontology, Viz-visualization, UC-user control, Nthng-nothing good, Rnk-ranking of concepts. The values marked with a star show a statistically significant difference between the IT and nonIT groups.

                 Ease    Viz     UC      Nthng   Rnk
OntoGen   IT     23.5%   70.6%   17.6%   -       -
          nonIT  30%     80%     30%     -       -
          Total  25.9%   74.1%   22.2%   -       -
Text2Onto IT     *41.2%  -       11.8%   23.5%   -
          nonIT  *80%    -       -       10%     20%
          Total  55.6%   -       7.4%    18.5%   7.4%

Two observations can be made. First, the strategy for ontology building used by Text2Onto, of generating a large number of candidate concepts and relationships and letting the user deselect the unsuitable ones, is more appealing than the elaborate incremental development approach implemented by OntoGen. Second, visualization is an extremely highly valued characteristic and should be provided in any useful tool for ontology building. Other reported positives were user control (22.2% for OntoGen) and the ranking of concepts in Text2Onto (20% of nonIT participants). Interestingly, 18.5% of the participants (23.5% of IT) explicitly stated that there is nothing good about the tool. Negative aspects of the tools. The results are presented in Table 9. Two groups of aspects were identified. First, with respect to the process of ontology building, the users identified as problems missing elements (29.6% for OntoGen and 25.9% for Text2Onto) and too many generated elements (mainly Text2Onto, 18.5% of the participants). Interestingly, too many generated elements were a concern of the IT group only. The remaining negatives were related to the usability and robustness of the tools, with a non-user-friendly GUI being the most prominent (33.3% for OntoGen and 18.5% for Text2Onto). The remaining negative characteristics are shown in Table 9. Table 9. Cons of the tools. The codes have the following meaning: nGUI-not friendly GUI, Miss-missing concepts and relationships, TME-too many generated concepts, LF-lack of feedback, LC-lack of user control, Mac-Mac incompatibility, and Crash-crashes.
                 Miss    TME     nGUI    LF      LC      Mac     Crash
OntoGen   IT     29.4%   5.9%    29.4%   11.8%   17.6%   11.8%   11.8%
          nonIT  30%     -       40%     30%     -       -       10%
          Total  29.6%   3.7%    33.3%   18.5%   11.1%   7.4%    11.1%
Text2Onto IT     23.5%   29.4%   23.5%   -       5.9%    17.6%   -
          nonIT  30%     -       10%     20%     20%     -       20%
          Total  25.9%   18.5%   18.5%   7.4%    11.1%   11.1%   7.4%
Meeting expectations. The results are shown in Table 10. For OntoGen the opinions of the participants were split into three approximately equal-sized groups: OntoGen met expectations for 29.6% of the participants, met them partly for 25.9%, and did not meet them for 29.6%. No major differences were noticed, except a slightly higher proportion of dissatisfied IT participants than nonIT participants. For Text2Onto the opinions were much more negative, especially among the IT participants. For 82.4% of the IT participants Text2Onto did not meet their expectations, as opposed to 40% of the nonIT participants. This difference was statistically significant with χ2 (1, N=27) = 5.08, p=0.024. The difference for the answer “Yes” was also statistically significant: Text2Onto met expectations for 30% of the nonIT participants and for no IT participant (χ2(1, N=27)=5.74, p=0.017).
Table 10. Meeting Expectations. The codes have self-evident meaning. The values with star show statistically significant difference between IT and nonIT groups.
                 Yes     No      Partly
OntoGen   IT     29.4%   35.3%   23.5%
          nonIT  30%     20%     30%
          Total  29.6%   29.6%   25.9%
Text2Onto IT     -       *82.4%  5.9%
          nonIT  30%     *40%    20%
          Total  11.1%   66.7%   11.1%
4 Conclusions This paper presented an empirical study of educators using two ontology building tools: Text2Onto and OntoGen. The educators used the tools to build an ontology from the course material they provided for the study. Twenty-eight educators participated in the study between September 2008 and March 2009. The educators came from a Computer Science/Software Engineering/Information Technology background (18 participants) and a non-Computer Science/Software Engineering background (10 participants). The results show that the current state of the tools for developing domain ontologies by educators is unsatisfactory. However, several conclusions can be made with respect to the approaches and desirable features of the tools, as well as with respect to the educators' background. There is an appeal to the approach which generates a large number of suggestions for ontology concepts and relationships that are then 'weeded out' by the user. This approach, applied by the Text2Onto tool, was especially favored by the nonIT group. However, when the produced ontologies are examined from the perspective of the requirements of advanced eLearning technologies, their utility is rather minimal, as users tend to keep an extremely large number of concepts. On the other hand, the interactive approach used by OntoGen produces rather small ontologies, as users stop the building process too soon. Second, there is a clear need for a good ontology visualization capability that can be easily manipulated by the users. Finally, although some differences between the two groups became visible in the survey data, the results demonstrate that both groups were equally dissatisfied with both tools. The group with an IT background was more critical of aspects where they perceived that the tool did not apply strong enough methods, such as eliminating unimportant results.
Acknowledgment We would like to thank Prof. Thomas M. Loughin, who helped us define the procedure for conducting the experiment and properly formulate and formalize the questionnaire for the study. This study is funded in part by Athabasca University's Mission Critical Fund, Athabasca University's Associate Vice President Research's special project, and NSERC.
References
1. Devedžić, V.: Key issues in next-generation Web-based education. IEEE Transactions on Systems, Man, and Cybernetics, Part C 33(3), 339–349 (2003)
2. Dicheva, D., Aroyo, L. (eds.): Special Issue on Application of Semantic Web Technologies in E-learning. International Journal of Continuing Engineering Education and Life-Long Learning 16(1/2) (2006)
3. Naeve, A., Lytras, M.D., Nejdl, W., Balacheff, N., Hardin, J. (eds.): Special issue on Advances of the Semantic Web for e-learning: expanding learning frontiers. British Journal of Educational Technology 37(3) (2006)
4. Henze, N., Dolog, P., Nejdl, W.: Reasoning and Ontologies for Personalized E-Learning in the Semantic Web. Educational Technology & Society 7(4), 82–97 (2004)
5. Dolog, P., Nejdl, W.: Semantic Web Technologies for the Adaptive Web. The Adaptive Web, 697–719 (2007)
6. Gašević, D., Jovanović, J., Devedžić, V.: Ontology-based Annotation of Learning Object Content. Interactive Learning Environments 15(1), 1–26 (2007)
7. Jovanović, J., Gašević, D., Devedžić, V.: Ontology-based Automatic Annotation of Learning Content. International Journal on Semantic Web and Information Systems 2(2), 91–119 (2006)
8. Jovanović, J., Knight, C., Gašević, D., Richards, G.: Ontologies for Effective Use of Context in e-Learning Settings. Educational Technology & Society 10(3), 47–59 (2007)
9. Jovanović, J., Gašević, D., Brooks, C., Devedžić, V., Hatala, M., Eap, T., Richards, G.: Using Semantic Web Technologies for the Analysis of Learning Content. IEEE Internet Computing 11(5), 45–53 (2007)
10. Stojanović, L., Staab, S., Studer, R.: eLearning in the Semantic Web. In: Proc. of the World Conference on the WWW and the Internet (WebNet 2001), pp. 325–334 (2001)
11. Mohan, P., Brooks, C.: Engineering a Future for Web-based Learning Objects. In: Proc. of the International Conference on Web Engineering, pp. 120–123 (2003)
12. Brase, J., Nejdl, W.: Ontologies and Metadata for eLearning. In: Staab, S., Studer, R. (eds.) Handbook on Ontologies, pp. 555–574. Springer, Heidelberg (2004)
13. Cimiano, P., Völker, J., Studer, R.: Ontologies on Demand? – A Description of the State-of-the-Art, Applications, Challenges and Trends for Ontology Learning from Text. Information, Wissenschaft und Praxis 57(6-7), 315–320 (2006)
14. Cimiano, P., Völker, J.: Text2Onto – a framework for ontology learning and data-driven change discovery. In: Montoyo, A., Muñoz, R., Métais, E. (eds.) NLDB 2005. LNCS, vol. 3513, pp. 227–238. Springer, Heidelberg (2005)
15. Buitelaar, P., Olejnik, D., Sintek, M.: A Protégé Plug-In for Ontology Extraction from Text Based on Linguistic Analysis. In: Bussler, C.J., Davies, J., Fensel, D., Studer, R. (eds.) ESWS 2004. LNCS, vol. 3053, pp. 31–44. Springer, Heidelberg (2004)
16. Ding, L., Finin, T., Joshi, A., Peng, Y., Pan, R., Reddivari, P.: Search on the Semantic Web. IEEE Computer, 62–69 (October 2005)
17. Buitelaar, P.: OntoSelect: Towards the Integration of an Ontology Library, Ontology Selection and Knowledge Markup. In: Proc. of the Workshop on Knowledge Markup and Semantic Annotation (Semannot 2004) at the International Semantic Web Conference (2004)
18. Gomez-Perez, A., Manzano-Macho, D.: An overview of methods and tools for ontology learning from texts. The Knowledge Engineering Review 19, 187–212 (2004)
19. Shamsfard, M., Barforoush, A.A.: The state of the art in ontology learning: a framework for comparison. The Knowledge Engineering Review 18(4), 293–316 (2003)
20. Hatala, M., Gašević, D., Siadaty, M., Jovanović, J., Torniai, C.: Utility of Ontology Extraction Tools in the Hands of Educators. In: Proc. of the Third IEEE International Conference on Semantic Computing (2009, to appear)
Sharing Distributed Resources in LearnWeb2.0

Fabian Abel, Ivana Marenzi, Wolfgang Nejdl, and Sergej Zerr

L3S Research Center, Leibniz University Hannover, Germany
{abel,marenzi,nejdl,zerr}@L3S.de
Abstract. The success of recent Web 2.0 platforms shows that people share information and resources within their social community and beyond. The use of these platforms in an e-learning context has been limited, though. One reason is that most of these platforms only support specific media types, so teachers trying to assemble learning resources have to log in to and use several Web 2.0 tools to access all relevant resources. In this paper, we present LearnWeb2.0, an integrated environment we implemented for sharing Web 2.0 resources, which improves support for learners and educators in sharing, discovering, and managing learning resources distributed across different platforms. LearnWeb2.0 integrates ten popular resource sharing and social networking systems, and provides advanced features for organizing and sharing distributed learning resources in a collaborative environment.

Keywords: resource sharing, distributed learning resources, knowledge management, LearnWeb2.0.
1 Introduction

The success of Web 2.0 and specific platforms such as YouTube, Flickr, and Delicious demonstrates that people are willing to share knowledge and resources with other people. Popular resource sharing systems allow users to upload and share content, but do not focus on educational resources. In [1], Petrides et al. point out that there is a need for platforms that allow us to share open educational resources and that inspire a culture of continuous improvement of these resources. In addition, sharing educational material requires an environment that permits the storage of resources in different formats. Typically, Web 2.0 infrastructures focus only on particular media types: videos in YouTube, pictures in Flickr, or bookmarks in Delicious, even if these resources belong to one and the same context [2]. Thus, despite the variety of available resource sharing systems, linking distributed educational resources related to the same context is still difficult. In this paper, we present LearnWeb2.0, an environment for sharing educational resources by integrating existing Web 2.0 systems. In line with the findings presented in [3], which discusses how the integration of popular Web 2.0 services into learning processes can foster active participation, LearnWeb2.0 integrates popular resource sharing and social networking systems to provide an environment that improves support for learners and educators in sharing, discovering, and managing learning resources. Our main contribution is the LearnWeb2.0 platform and system,
which integrates ten popular Web 2.0 services to enable sharing, discovery, and management of distributed learning resources. LearnWeb2.0 provides various innovative features: 1) a personal learning space offering a seamless overview of the entire set of learning resources distributed across various Web 2.0 repositories, 2) sharing through standing queries, where users are notified whenever a new learning resource matches the query, 3) collaborative aggregation of different learning resources via an intuitive drag-and-drop interface, 4) integration of the user's social networks from different Web 2.0 services, 5) a browser plug-in that enables users to easily share learning resources via drag-and-drop operations.
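Features 1) and 2) above hinge on merging per-service search results and tagging them with the query terms, so that the combined result set can be re-served as a feed. The following sketch illustrates the idea in Python; LearnWeb2.0 itself is implemented in PHP, and all names here (Resource, federated_search, the per-service search functions) are hypothetical, not the real API.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """One search hit from some Web 2.0 service (illustrative model)."""
    title: str
    service: str
    url: str
    tags: set = field(default_factory=set)

def federated_search(query, service_searchers):
    """Run the query against every integrated service, merge the results
    into one list, and auto-tag each hit with the query terms so other
    users can later rediscover it."""
    results = []
    for search in service_searchers:
        for res in search(query):
            res.tags.update(query.lower().split())
            results.append(res)
    return results

# Hypothetical per-service search functions standing in for the real adapters.
def youtube_search(query):
    return [Resource("Intro video", "youtube", "http://example.org/v1")]

def flickr_search(query):
    return [Resource("Course photo", "flickr", "http://example.org/p1")]

hits = federated_search("semantic web", [youtube_search, flickr_search])
```

A real implementation would additionally serialize `hits` as an RSS/Atom feed so the query can live on as a standing query in any feed reader.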
2 LearnWeb2.0: Architecture and Functionalities

2.1 Architectural Design

LearnWeb2.0 consists of a Web application, the LearnWeb2.0 platform, and the LearnWeb2.0 browser plug-in (Fig. 1). The Web application provides a Personal Learning Space which supports users in discovering, sharing, and organizing distributed resources. LearnWeb2.0 uses existing Web 2.0 services as storage infrastructure, which means that the core functionalities implemented by the following modules are mapped to the services preferred by the individual user. The Search and Exploration module provides a uniform interface that enables the user to search for and discover resources (videos, slides, textual documents, etc.), which
Fig. 1. (a) LearnWeb2.0 Web platform and (b) the LearnWeb2.0 browser plug-in
are distributed across the integrated Web 2.0 services. The Annotation and Aggregation module enables users to organize learning resources, for example by tagging or grouping them. The Upload and Sharing module enables users to store and share resources from their local desktop or from the Web. Resources are stored in the Web 2.0 system that is appropriate for the given media type and favored by the user. These modules are not only used by the LearnWeb2.0 platform, but are also made available as Web services so that other authorized applications, for example our browser plug-in, can use this functionality. The browser plug-in facilitates uploading and sharing of new resources: users can simply drag a resource from their desktop or from the Web onto the plug-in icon to initiate the upload.

The functionality provided by the modules described above is based on three further modules. The Web 2.0 Service Adapter module maps LearnWeb2.0 functionalities to specific Web 2.0 services; currently, the system provides adapters for all ten integrated services, and further service adapters can easily be added. LearnWeb2.0 users do not have to log in to each Web 2.0 system separately, but can authorize LearnWeb2.0 to access their preferred services (single sign-on). The Authorization component is also used when third-party applications access LearnWeb2.0 Web services, and adheres to the OAuth protocol and REST principles. By integrating different Web 2.0 services, LearnWeb2.0 overcomes shortcomings of existing resource sharing systems. One important aspect is access control: when a user bookmarks a Web page at Delicious, she can only mark the bookmark as private or public, whereas LearnWeb2.0 lets users share their bookmarks with selected LearnWeb2.0 users fulfilling a flexible set of constraints.
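The role of the Web 2.0 Service Adapter module, routing each generic operation to the service that suits a resource's media type, might be sketched as follows. This is an illustrative Python sketch, not the actual PHP code; ServiceAdapter, VideoAdapter and adapter_for are invented names.

```python
from abc import ABC, abstractmethod

class ServiceAdapter(ABC):
    """Maps generic LearnWeb2.0 operations onto one concrete Web 2.0
    service; each adapter declares the media types it can handle."""
    media_types = ()

    @abstractmethod
    def upload(self, resource): ...

    @abstractmethod
    def search(self, query): ...

class VideoAdapter(ServiceAdapter):
    """Stand-in for an adapter wrapping a video-sharing service."""
    media_types = ("video",)

    def upload(self, resource):
        return ("video-service", resource)  # pretend the service stored it

    def search(self, query):
        return [f"video result for '{query}'"]

def adapter_for(media_type, adapters):
    """Route a resource to the first adapter that handles its media type."""
    for adapter in adapters:
        if media_type in adapter.media_types:
            return adapter
    raise LookupError(f"no adapter for media type {media_type!r}")

adapters = [VideoAdapter()]
stored = adapter_for("video", adapters).upload("lecture.mp4")
```

New services would be integrated by adding another subclass, without touching the rest of the platform.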
2.2 LearnWeb2.0 Functionalities

LearnWeb2.0 is implemented using the PHP programming language and the CakePHP framework, which supports Web application development according to the MVC paradigm. PHP wrappers exist for many Web 2.0 system APIs, but not for all: we had to implement our own wrappers for the GroupMe! and Slideshare APIs. Another complex issue is authentication. Not all tools provide token-based authentication mechanisms; Slideshare and Delicious, for example, directly require user credentials such as login and password to access their API functionality. To enable single sign-on for those tools as well, we save these credentials in encrypted form on our server and are thus able to provide a uniform authentication interface to all Web 2.0 tools through the LearnWeb2.0 GUI and Web services. To make the system scalable, we implemented a caching mechanism which dramatically reduces the number of API calls per user. This was also necessary to work around the usage constraints of some tools, e.g. Delicious, which allow only a limited number of calls to API methods. Together, these measures made it possible to provide a seamless integration of the Web 2.0 applications, using the Web 2.0 Service Adapter as integration driver (authorization, search, upload and social network services for the integrated platforms) for the different Web 2.0 tools shown in Table 1.

Search and Exploration of Learning Resources. LearnWeb2.0 provides users with a generic search interface for resource discovery across various Web 2.0 services, including LearnWeb2.0 itself. The uniform authorization provided by LearnWeb2.0 enables learners to pose queries to the resources distributed in Web 2.0 platforms in a similar way as with a desktop search engine on a local machine.
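A caching layer of the kind described above can be as simple as a time-to-live lookup placed in front of the real API call. The Python sketch below is illustrative only (the actual system is written in PHP, and ApiCache is a hypothetical name); it shows how repeated identical requests reach the rate-limited backend only once.

```python
import time

class ApiCache:
    """Time-based cache: identical API requests within `ttl` seconds are
    served locally, which keeps the number of backend calls per user low
    (sketch only, not the real LearnWeb2.0 code)."""
    def __init__(self, ttl=300.0):
        self.ttl = ttl
        self._store = {}   # request key -> (timestamp, cached value)
        self.calls = 0     # number of real backend calls, for illustration

    def fetch(self, key, backend_call):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]          # fresh enough: no API call needed
        self.calls += 1
        value = backend_call(key)  # the one real (rate-limited) call
        self._store[key] = (now, value)
        return value

cache = ApiCache(ttl=300)
for _ in range(5):                 # five identical requests ...
    result = cache.fetch(("delicious", "search", "e-learning"),
                         lambda key: ["bookmark-1", "bookmark-2"])
# ... but only one of them reaches the (stand-in) backend
```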
Table 1. Core features of LearnWeb2.0 and Web 2.0 services used to realize the features
The LearnWeb2.0 search provides an integrated view of the results obtained from all integrated Web 2.0 services. Using the advanced search functionality, the learner can select a set of resources based on a common property such as a tag, a file type, a timestamp, or combinations thereof. On the server side, search requests for the integrated services are generated from the user's query so that they correspond to the search functionalities supported by the particular service (see Table 1). Responses from the different services are combined into a single RSS feed which is made available to the user along with the list of search results. LearnWeb2.0 preserves the presentation of search results as provided by the original services; typically, a search result contains a title, an image, and optionally a more detailed description. The feed can be used as a standing query in any RSS reader so as to monitor the appearance of new resources matching the query. These capabilities provide a seamless view of all resources stored in the various Web 2.0 accounts of users, thereby creating their Personal Web 2.0 Learning Space.

To support collaborative searching, LearnWeb2.0 provides automatic resource annotation. Once a search result is displayed in LearnWeb2.0, it is automatically tagged with the corresponding query terms. These tags can later be used by other users to search and explore the learning-resource space available in the LearnWeb2.0 environment. LearnWeb2.0 can also access a user's social network information from the integrated Web 2.0 applications. A LearnWeb2.0 user's social network is thus made up of all his or her connections specified in the different Web 2.0 systems integrated into LearnWeb2.0. This enables users to explore and gather additional resources by browsing the corresponding profile pages of their friends.

Annotation and Aggregation of Learning Resources. References to selected resources are stored in the LearnWeb2.0 repository.
In this repository, the user can annotate a resource with additional metadata in Dublin Core format. A reference to one and the same resource can be added to the repository in different learning contexts. To support resource aggregation, LearnWeb2.0 relies on GroupMe! functionality. Users can
create groups of learning resources to bundle resources that belong to the same learning context. To support collaborative aggregation, several users can share a group and contribute resources to it. These groups are fully visualized: images include previews, and videos can be watched directly within the system. Resources, and groups of resources, in LearnWeb2.0 can be bookmarked, tagged, rated and discussed by other users who are allowed to access them. Hence, the LearnWeb2.0 community can collaboratively identify the best learning resources for specific learning domains, and comments on learning resources can be used by the authors to improve them.

Upload and Sharing of Learning Resources. The Web is not the only place where users find relevant materials. Useful resources may also be located on the user's computer or be acquired through other devices such as a camera. Using LearnWeb2.0, users can directly upload a resource from their computer or an external source to a suitable Web 2.0 tool and enrich it with useful annotations. Users can upload resources either through the LearnWeb2.0 GUI or via the LearnWeb2.0 browser plug-in (currently available for Firefox), both of which support upload as a drag-and-drop operation of desktop or Web resources onto the LearnWeb2.0 plug-in icon in the browser.
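The group-based aggregation and rating described above can be sketched as a small data model. This is an illustrative Python sketch only (the real system builds on GroupMe! and PHP; Group, contribute and best_resources are invented names).

```python
from collections import defaultdict

class Group:
    """A shared bundle of learning resources for one learning context;
    several users contribute, and resources collect ratings
    (illustrative model, not the actual GroupMe!/LearnWeb2.0 code)."""
    def __init__(self, name):
        self.name = name
        self.members = set()
        self.resources = []
        self.ratings = defaultdict(list)   # resource -> list of scores

    def contribute(self, user, resource):
        self.members.add(user)
        self.resources.append(resource)

    def rate(self, resource, score):
        self.ratings[resource].append(score)

    def best_resources(self, top=3):
        """Rank resources by average rating, so the community can surface
        the best material for a learning domain."""
        ranked = sorted(self.ratings,
                        key=lambda r: sum(self.ratings[r]) / len(self.ratings[r]),
                        reverse=True)
        return ranked[:top]

g = Group("Semantic Web course")
g.contribute("alice", "video:intro")
g.contribute("bob", "slides:rdf")
g.rate("video:intro", 5)
g.rate("slides:rdf", 3)
```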
3 Related Work

Web 2.0 and Education. The authors of [3] discuss how the integration of popular Web 2.0 services into education can stimulate active participation of learners in the learning process. In line with these findings, and based on requirements identified in [4], LearnWeb2.0 integrates various Web 2.0 services to provide an environment for sharing educational resources. The demand for platforms that allow users to share open educational resources is discussed in [1]. LearnWeb2.0 provides such a platform but goes one step further, as it supports (i) both kinds of users, educators and learners, and (ii) both traditional educational resources and resources originally not intended for education. As resources come in different formats, LearnWeb2.0 supports all appropriate media types, and does so in an integrated way, in contrast to mashups such as Netvibes or iGoogle, which provide access to different Web 2.0 services in a single environment but keep these services separated [5]. The need for multiple, flexible filing and searching facilities with enhanced attributes on users' desktops was identified in [6]. LearnWeb2.0 expands this concept into a virtual desktop, spread over a number of Web 2.0 services that manage these resources.

Search and Sharing. Recent studies have shown that social search techniques can improve the effectiveness of Web search. SearchTogether [7] is one such interface for collaborative search. In LearnWeb2.0, we enable users to store, rate, comment on, tag and reuse the most successful queries. By representing the search result page as an RSS feed, LearnWeb2.0 implements a standing query mechanism, useful for collaboration on common tasks. LearnWeb2.0 also enables users to share queries and other resources, and to collaboratively organize and use them in groups [8].
4 Conclusions and Future Work

In this paper we presented the LearnWeb2.0 environment, which supports learners and educators in sharing, discovering, and managing learning resources that are spread across different Web 2.0 platforms. LearnWeb2.0 aggregates resources from ten different Web 2.0 services, uses their functionalities, and propagates LearnWeb2.0 actions back to these services. LearnWeb2.0 thus provides a personal learning environment offering a seamless overview of the entire set of relevant resources distributed over the different platforms. Collaborative aggregation of learning resources into groups is supported via simple drag-and-drop operations. Next steps include the evaluation of LearnWeb2.0 at our university, and the implementation of new features focusing on access control and on notifying users about interesting resources, both of which will improve awareness during collaborative search.

Acknowledgments. The work on this paper has been partially sponsored by the TENCompetence Integrated Project, contract 027087.
References
1. Petrides, L., Nguyen, L., Kargliani, A., Jimes, C.: Open Educational Resources: Inquiring into Author Reuse Behaviors. In: Dillenbourg, P., Specht, M. (eds.) EC-TEL 2008. LNCS, vol. 5192, pp. 344–353. Springer, Heidelberg (2008)
2. Demidova, E., Kärger, P., Olmedilla, D., Ternier, S., Duval, E., Dicerto, M., Mendez, C., Stefanov, K.: Services for knowledge resource sharing & management in an open source infrastructure for lifelong competence development. In: Intl. Conference on Advanced Learning Technologies (ICALT), Niigata, Japan (2007)
3. Ullrich, C., Borau, K., Luo, H., Tan, X., Shen, L., Shen, R.: Why Web 2.0 is good for learning and for research: principles and prototypes. In: Proc. of 17th Intl. World Wide Web Conference (WWW), Beijing, China, pp. 705–714 (2008)
4. Marenzi, I., Demidova, E., Nejdl, W., Zerr, S.: Social software for lifelong competence development: Challenges and infrastructure. International Journal of Emerging Technologies in Learning (iJET), 18–23 (2008)
5. Thang, M., Dimitrova, V., Djemame, K.: Personalised Mashups: Opportunities and challenges for user modeling. In: Conati, C., McCoy, K., Paliouras, G. (eds.) UM 2007. LNCS (LNAI), vol. 4511, pp. 415–419. Springer, Heidelberg (2007)
6. Bischoff, K., Herder, E., Nejdl, W.: Workplace learning: How we keep track of relevant information. In: Duval, E., Klamma, R., Wolpers, M. (eds.) EC-TEL 2007. LNCS, vol. 4753, pp. 438–443. Springer, Heidelberg (2007)
7. Morris, M., Horvitz, E.: SearchTogether: An interface for collaborative web search. In: Proceedings of 20th ACM Symposium on User Interface Software and Technology (UIST), Newport, USA (2007)
8. Abel, F., Frank, M., Henze, N., Krause, D., Plappert, D., Siehndel, P.: GroupMe! Where Semantic Web meets Web 2.0. In: Aberer, K., Choi, K.-S., Noy, N., Allemang, D., Lee, K.-I., Nixon, L.J.B., Golbeck, J., Mika, P., Maynard, D., Mizoguchi, R., Schreiber, G., Cudré-Mauroux, P. (eds.) ASWC 2007 and ISWC 2007. LNCS, vol. 4825, pp. 871–878. Springer, Heidelberg (2007)
SWeMoF: A Semantic Framework to Discover Patterns in Learning Networks

Marco Kalz1, Niels Beekman2, Anton Karsten2, Diederik Oudshoorn2, Peter Van Rosmalen1, Jan Van Bruggen1, and Rob Koper1

1 Open University of the Netherlands, Center for Learning Sciences and Technologies, PO Box 2960, 6401 DL Heerlen, The Netherlands
2 Open University of the Netherlands, Faculty of Informatics, PO Box 2960, 6401 DL Heerlen, The Netherlands
{marco.kalz,peter.vanrosmalen,jan.vanbruggen,rob.koper}@ou.nl,
{cg.beekman,as.karsten,dj.oudshoorn}@studie.ou.nl
Abstract. In this contribution we introduce SWeMoF, a semantic framework to discover patterns in learning networks and the blogosphere. Based on a description of the state of the art in data mining, text mining and blog mining we discuss the architecture of the Semantic Weblog Monitoring Framework (SWeMoF) and provide an outlook and an evaluation perspective for future research and development. Keywords: weblogs, social software, text mining, data mining, RSS, clustering, classification, Latent Semantic Analysis.
1 Introduction

In the past we have concentrated on the evaluation of Latent Semantic Analysis (LSA) to approximate the prior knowledge of learners in learning networks, and we could show that LSA is a promising method to support this process [1]. Several other examples show that semantic services and language technology have the potential to reduce tutor load and to increase efficiency in technology-enhanced learning [2]. We expect that the application of such approaches can help in personalization processes, in the automatic generation of metadata, and in the discovery of structural patterns in learning networks. On the other hand, we found that developing and evaluating learner support services based on text- and data-mining methods is very challenging, since many different tools and sources are involved and manual processing of data is needed. Based on this observation, and on the need to extend our research to other methods and approaches, we have developed a prototypical solution that can help to find semantic patterns in learning networks. In this contribution we present the Semantic Weblog Monitoring Framework (SWeMoF). The prototypical framework that we discuss in this article employs feed parsing techniques and data- and text-mining algorithms for several types of experiments and prototyping scenarios. Similar frameworks have been described by Joshi and Belsare (BlogHarvest) [3] and by Chau et al. [4]. The BlogHarvest framework is a conceptual
framework for opinion and sentiment analysis that employs part-of-speech tagging, association rules and several miners for clustering and classification. The second proposal, by Chau et al., consists of a blog spider to collect content, a blog parser to extract information, a blog analyzer and a blog visualizer. However, it is a very general framework without a prototype or a detailed architecture. The Semantic Weblog Monitoring Framework (SWeMoF) enables researchers, course designers and learning technology developers to conduct several kinds of semantic experiments using different algorithms from natural language processing and data mining. We expect this framework to support the development of semantic technologies and Web services that address some basic problems in educational technology, as formulated by Koper [5]. Applying common data and text mining techniques for discovery, recommendation and similarity classification can help to improve the efficiency and effectiveness of the learning process and to reduce the workload of tutors. In the next part of the paper we describe the state of the art in data mining, text mining and blog mining. Afterwards we introduce the architecture of SWeMoF and provide an evaluation outlook.
2 Data Mining, Blog Mining and Text Mining

Data mining is a process to find patterns in large amounts of data [6]. While data mining is most often applied to numerical data in large databases, the application of data mining techniques to textual data is called text mining, and within text mining research the application to weblogs is called blog mining. The target of data mining is to discover meaning in a vast amount of data and to find patterns that are not recognizable by traditional statistical measurement and direct visual inspection. Witten and Frank refer to an increasing gap in today's society between the generation of data and the understanding of it [6]. In this sense, data mining does not aim to generate new data but to use existing data and to find structures that have not been explored before. Fayyad et al. describe the data mining process as an interactive and iterative process which involves several steps with different tasks [7]. In recent years, a special focus on using data mining in educational settings and with educational data has developed and been applied to different educational problems [8].

The same procedure can also be applied to non-numeric data. If data mining is applied to text, the process is called text data mining or text mining. Hearst defines text mining as a means of exploratory data analysis, and he stresses the distinction between text mining and information retrieval [9]. While information retrieval and information access are about finding information that is hard to find among a lot of similar information, text mining in his view is a process whose target is to discover information that has never been encountered before. Several disciplines contribute to text mining research, the most important being computational linguistics/natural language processing (NLP) [10].
In addition, several disciplines from literary studies to genetics and bioinformatics have applied text mining to solve basic problems in their domains of research. The application of data mining and text mining techniques to weblogs is referred to as blog mining. Blog mining is a very recent research direction; Barone provides a good overview of
research done in this area until 2007 [11]. The framework proposed in this contribution will allow blog mining experiments with a special focus on discovery, classification and clustering. In the next part we describe the architecture of the framework.
3 Architecture of the Semantic Weblog Monitoring Framework

SWeMoF is an object-oriented, Web-based application designed for semantic experiments on the basis of content produced by weblogs and other text-based applications that offer an RSS feed. Within this framework, several data mining/natural language processing experiments are possible. Every experiment takes the content of one or more weblogs as input, applies one or more algorithms/miners to the content, and produces an output that can be downloaded. The input can be the whole content of a weblog (set level), content from a dedicated category in a weblog (category level), or only dedicated postings (document level). The prototype implements five example algorithms/miners for three types of experiments: semantic similarity, classification and clustering. The prototype is written in Java and makes use of an integrated database and the Echo framework for the interface. The example algorithms are implemented using the Weka framework, but the SWeMoF framework does not depend on it: both filters and text mining algorithms can be written from scratch or by using any available components and libraries. For the design of the system the following use cases have been defined:

• Corpus Creation. A corpus has to be defined before an experiment can be created. This corpus can be constructed from several RSS feeds and/or OPML files. In addition, the domain corpus can be combined with a general language corpus, which has been discussed as an important option in several information retrieval scenarios. For classification experiments, several examples need to be classified manually before an experiment can be executed; these 'gold standard' examples are needed to allow a semantic comparison between the classified and the unclassified documents. This step can be done by inspecting the corpus directly or during the creation of an experiment.
• Experiment Creation. In the experiment creation phase, the parameters of a text mining experiment are configured. These parameters consist of a corpus, an optional general language corpus, filters and a text mining algorithm. Further, the level on which the experiment is conducted (set, category or document) must be configured, and it is possible to disable a part of the corpus on any level. After an experiment has been created it can be executed. This separation of creation and execution allows experiments to be repeated and results to be compared under different settings.

• Result Presentation and Download. After the execution of an experiment, the results are presented to the user and can be downloaded.
• Adding Additional Miners. The current prototype implements the following miners: a Naive Bayes classifier, an IB1 classifier, an EM clusterer, a simple k-means clusterer and a similarity rater using LSA. In addition, LSA can be combined with the implemented miners, and it is easy to add further miners to the system.

The SWeMoF framework allows the user either to create new experiments or to retrieve and execute previously stored experiments. The parameters of an experiment (corpus, general language corpus, filters, miner, mining level) are saved in an experiment configuration. The following figure shows how a text mining experiment is conducted with SWeMoF.
Fig. 1. Overview of the components of the SWeMoF system
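The workflow shown above, a stored experiment configuration kept separate from its execution, might be sketched as follows. This is an illustrative Python sketch only (SWeMoF itself is written in Java, and ExperimentConfig, execute and the trivial stand-in miner are invented names).

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class ExperimentConfig:
    """Stored parameters of an experiment, kept separate from execution
    so the experiment can be re-run and compared under other settings."""
    corpus: List[str]
    general_corpus: List[str]
    filters: List[Callable[[str], str]]
    miner: Callable[[List[str]], object]
    level: str = "document"     # "set" | "category" | "document"

def execute(config):
    """Apply the configured filters to the corpus, then run the miner."""
    docs = list(config.corpus)
    for filt in config.filters:
        docs = [filt(d) for d in docs]
    return config.miner(docs)

# A trivial stand-in miner: counts tokens per document (a real miner
# would be e.g. a clusterer or classifier from Weka).
cfg = ExperimentConfig(
    corpus=["Data Mining finds PATTERNS", "Text mining mines text"],
    general_corpus=[],
    filters=[str.lower],
    miner=lambda docs: [len(d.split()) for d in docs],
)
result = execute(cfg)
```

Because the configuration object is immutable and stored, running `execute(cfg)` again, or a variant with different filters, yields directly comparable results.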
The Input module is responsible for the import of text. Single texts (e.g. single blog posts) are organized in groups to create a hierarchy: single text documents must be grouped in a document category, and document categories must be grouped in document sets. Since SWeMoF's main focus is on Web feeds, this design has been chosen to reflect the structure of these feeds. Even when only a single text document is imported, it has to be placed inside a document category, and the document category inside a document set. Note that the corpus is not created by the Input module but by the Corpus module. For feed parsing we have used the ROME library, a set of open source Java tools for parsing, generating and publishing RSS and Atom feeds. The Corpus module is responsible for the aggregation of documents generated from the input text by the Input module. A corpus contains a collection of document sets, structured as described above. After execution of an experiment the results are generated, and the View module can display these results in different ways; in the prototype the View module is not yet implemented, and textual output is instead generated directly from the Result object. Three types of information can be stored through the DAO module: the corpus, the configuration parameters of an experiment, and the results of an experiment. Finally, a GUI takes care of the interaction with the user, enabling them to create new experiments, retrieve old experiments, retrieve results of experiments, and set the parameters of an experiment. It is designed to let the user select text to convert (single documents or Web feeds), select filters (preprocessors) to generate the appropriate text corpus, select an experiment, and choose a way to study the outcomes of the experiment. The GUI has been implemented with the Echo Web framework.
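The set/category/document hierarchy handled by the Input and Corpus modules can be sketched as below. This is an illustrative Python sketch with hypothetical class names; the real implementation is in Java.

```python
class Document:
    """A single text, e.g. one blog post."""
    def __init__(self, text):
        self.text = text

class DocumentCategory:
    """Mirrors one feed category; holds single documents."""
    def __init__(self, name):
        self.name = name
        self.documents = []

class DocumentSet:
    """Mirrors one whole feed; every document lives inside a category,
    and every category inside a set, as described above."""
    def __init__(self, name):
        self.name = name
        self.categories = []

    def all_documents(self):
        """Flatten the hierarchy, e.g. to feed a miner at set level."""
        return [doc for cat in self.categories for doc in cat.documents]

blog = DocumentSet("my-weblog")
cat = DocumentCategory("e-learning")
cat.documents.append(Document("First post on LSA"))
blog.categories.append(cat)
```

Even a single imported document is wrapped in a category and a set, which keeps the three experiment levels (set, category, document) uniform.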
The SWeMoF framework can be extended in several areas. The framework focuses on weblog monitoring, and thus the focus for the prototype has been on implementing RSS and OPML as document sources. The Input module, however, is designed in such a way that it can easily be extended with other input sources by implementing the appropriate interfaces. The second, more important area where SWeMoF can be extended is in the filters and miners. To add a new filter or text mining algorithm, all that needs to be done is to implement the Filter or Miner interface and create a descriptor. The descriptor tells the GUI what the filter or miner does and which options can be set. Once the descriptor has been added to the registry, SWeMoF automatically makes the filter or miner available to the end user.
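The descriptor-and-registry extension mechanism just described might look roughly like this. Again, this is an illustrative Python sketch of the idea, not the Java code; REGISTRY, register, available and LowercaseFilter are invented names.

```python
# Global registry: descriptor name -> descriptor record.
REGISTRY = {}

def register(name, kind, factory, options=()):
    """Add a filter or miner descriptor; the GUI could list `options`
    to the end user so the component is configurable without code changes."""
    REGISTRY[name] = {"kind": kind, "factory": factory,
                      "options": tuple(options)}

class LowercaseFilter:
    """A trivial preprocessing filter implementing the (assumed)
    Filter interface: one apply() method over a text."""
    def apply(self, text):
        return text.lower()

register("lowercase", "filter", LowercaseFilter)

def available(kind):
    """What the GUI would show: all registered components of one kind."""
    return sorted(n for n, d in REGISTRY.items() if d["kind"] == kind)

# Instantiate a registered component via its descriptor's factory.
filt = REGISTRY["lowercase"]["factory"]()
```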
4 Discussion, Outlook and Future Work

At the current stage of development we have conducted several tests of code functionality and result quality. After the components were tested individually, the integrated system was tested to verify that it supports the use cases for which it was designed. In addition, we have compared the system's results with the results of using Weka directly. The integration testing confirmed that the system supports the use cases, and the comparison with Weka was successful as well. Real end-user and usability testing could not be conducted yet, but we are planning to present the system to researchers and learning technology developers with different levels of prior knowledge about data and text mining. For this purpose we plan to combine traditional usability testing with the hedonic and pragmatic approach developed by Hassenzahl [13], in which the "hedonic quality" aspect covers non-task-oriented quality aspects such as innovativeness or originality and also takes the appealingness of a software system into account. As a next step we will conduct end-user testing with colleagues in the field and improve the system based on their feedback. The full code of the framework has been released under a GPL license [14] and a demonstration of the framework is available [15]. Depending on the reactions of end-users we might improve the storage and presentation of the results. In addition, we are going to extend the system with more miners from Weka and use it as an evaluation instrument for the development of several semantic Web services in the future.
Acknowledgements

The work presented was partially carried out in the TENCompetence Integrated Project, funded by the European Commission's 6th Framework Programme, priority IST/Technology Enhanced Learning, contract 027087 (www.tencompetence.org), and partially carried out as part of the LTfLL project, which is funded by the European Commission (IST-2007-212578) (http://www.ltfll-project.org).
SWeMoF: A Semantic Framework to Discover Patterns in Learning Networks
References [1] Kalz, M., Van Bruggen, J., Giesbers, B., Waterink, W., Eshuis, J., Koper, R.: Where am I? – An Empirical Study about Learner Placement based on Semantic Similarity (manuscript submitted for publication, 2009) [2] Van Rosmalen, P.: Supporting the tutor in the design and support of adaptive e-learning. Doctoral Dissertation. SIKS Dissertation Series 2008-07. Open University of the Netherlands, Heerlen (2008) [3] Joshi, M., Belsare, N.: BlogHarvest: Blog mining and search framework. In: Lakshmanan, L.V., Roy, P., Tung, A.K. (eds.) Proceedings of the 13th International Conference on Management of Data (COMAD), Delhi, India. Computer Society of India (2006) [4] Chau, M., Xu, J., Cao, J., Lam, P., Shiu, B.: A Blog Mining Framework. IT Professional 11, 36–41 (2009) [5] Koper, R.: Use of the semantic web to solve some basic problems in education: Increase Flexible, Distributed Lifelong Learning, Decrease Teacher’s Workload. Journal of Interactive Media in Education 6, 1–23 (2004) [6] Witten, I., Frank, E.: Data Mining. Practical Machine Learning Tools and Techniques. Morgan Kaufmann Series in Data Management Systems. Morgan Kaufmann, San Francisco (2000) [7] Fayyad, U., Piatetsky-Shapiro, G., Smyth, P., Uthurusamy, R.: Advances in knowledge discovery and data mining. AAAI Press, Menlo Park (1996) [8] Romero, C., Ventura, S.: Educational data mining: A survey from 1995 to 2005. Expert Systems with Applications 33, 135–146 (2007) [9] Hearst, M.: Untangling text data mining. In: Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, Morristown, NJ, USA, pp. 3–10. Association for Computational Linguistics (1999) [10] Manning, C.D., Schutze, H.: Foundations of Statistical Natural Language Processing. MIT Press, Cambridge (2003) [11] Barone, F.: Current Approaches to Data Mining Blogs. 
University of Kent, Kent (2007) [12] Brooks, C.H., Montanez, N.: Improved annotation of the blogosphere via autotagging and hierarchical clustering. In: Proceedings of the 15th international Conference on World Wide Web, Edinburgh, Scotland, pp. 625–632. ACM Press, New York (2006) [13] Hassenzahl, M.: The Effect of Perceived Hedonic Quality on Product Appealingness. International Journal of Human-Computer Interaction 13, 481–499 (2001) [14] Semantic Weblog Monitoring Framework project page, http://swemof.sf.net [15] Semantic Weblog Monitoring Framework demonstration page, http://www.swemof.org
Social Network Analysis of 45,000 Schools: A Case Study of Technology Enhanced Learning in Europe

Ruth Breuer1, Ralf Klamma1, Yiwei Cao1, and Riina Vuorikari2

1 RWTH Aachen University, Informatik 5 (Information Systems), Ahornstr. 55, D-52056 Aachen, Germany
{breuer,klamma,cao}@i5.informatik.rwth-aachen.de
2 European Schoolnet, eTwinning, Rue de Trèves 61, 1040 Brussels, Belgium
[email protected]
Abstract. Social networks make an essential contribution to knowledge sharing in our fast-moving and changing world. However, it is difficult to apply new techniques to the complex, rigid educational systems that differ across Europe, and this process of technology enhanced learning is still evolving and challenging. This paper presents the results of applying social network analysis methods to a real and active social network which aims to enhance cooperation and knowledge sharing among over 45,000 European schools: the eTwinning network. As proof of concept, we developed a web-based tool for network analysis and for the visualization of various network views and data mining results. This prototype is evaluated on the educational social network eTwinning, coordinated by the European Schoolnet, with special regard to its network structure and collaboration activity.

Keywords: Social Networks, Knowledge Sharing, Social Network Analysis, Network Visualization, Data Mining, Information and Communication Technologies, Technology Enhanced Learning.
1
Introduction
Unknowingly internalizing fundamental knowledge in infancy, we never stop learning throughout our lives. This learning process is mandatory to keep pace in a constantly and ever more rapidly changing world. Nowadays it is supported by numerous organizations. In 2000 the European Council postulated policies and actions for a European Lifelong Learning as part of the Lisbon Strategy [Com01]. Essential components are, firstly, skills in and usage of Information and Communication Technologies (ICT) and, secondly, access to learning opportunities for people of every age group.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 166–180, 2009. © Springer-Verlag Berlin Heidelberg 2009

Europe consists of numerous countries with diverse educational systems, long histories and old traditions. The different environments complicate cooperation between European teachers as well as evaluating their competences in
comparison to schools in other countries. Moreover, teachers should be assisted by ICT in their lifelong learning process. Since 2005, the European Schoolnet (EUN) has coordinated the eTwinning initiative1, which is part of the European Commission's Lifelong Learning Programme and aims to advance cooperation between European schools through the use of ICT. eTwinning is a web-based social network of thousands of European schools, which can form partnerships to work on projects integrated into the curriculum of their pupils. Although creating, managing and handling school networks pose problems of their own [BGJ+07], such networks are indispensable for integrating both ICT and European culture into the educational systems and curricula. Improving the daily routine in schools in this way is an increasingly important challenge in a Europe growing together. Educational networks are social networks consisting of different actors and their relationships. Besides the opinions of the actors, various traditions and values as well as procedures and educational directives may influence their behavior. Thus, it is not easy to reconcile all these factors in one direction for Europe. In addition, the structure of a social network is crucial for its functioning, because separated actors are not able to learn from each other. Hence, structural knowledge may increase efficiency by selectively enhancing the connectivity of rather isolated actors. However, the density of interdependencies is one reason for the complexity of social network analysis (SNA). Another is the network dimension, which often reaches an unmanageable size. eTwinning is a very large and complex network with over 45,000 actors, and its structure was completely unknown prior to this work. But in social networks, the rather technical knowledge of the structure is not the only difficulty.
Further problems arise from untrained users, who may not be able to get an overview of the network and therefore understand networking principles [Nik07], or to use it easily. Thus, aside from the analysis itself, a transparent and comprehensible representation is required. Certainly, because of the large scale, it is impossible to realize these tasks by hand without adequate software [BW04a]. Our research aims at exploring teachers' competences, finding channels for knowledge sharing and assisting school cooperation through appropriate social network analysis methods and data mining techniques. Network characteristics, conditions and relations shall be revealed and presented in a prototype. This paper is structured as follows. The next section reviews the state of the art. Section 3 describes our concept in principle. Section 4 contains the realization and Section 5 the evaluation results of our system. Section 6 gives a conclusion and an outlook.
2
State of the Art
The goal of a social network analysis is to discover information about social relationships [KCD+07]. In order to obtain this information about individual characteristics or current conditions of a network, we have to analyze it systematically.
http://www.etwinning.net/
The relationships between the nodes contain statements about connections, exchange of resources or even acquaintances, influence and trustfulness in a social network.

2.1
Social Network Analysis
Before an extensive network like eTwinning can be analysed, we need the methodological instruments to describe and explore it. In the 1980s, Bruno Latour and Michel Callon introduced the Actor Network Theory (ANT) [Lat96, SS00]. A network starts with isolated, hardly comparable nodes, which join by and by through their own actions as well as by being receivers of external actions. Besides human beings, abstract actors are considered as nodes, and the network develops through changes in the characteristics and relationships of these actors. Thus, the state of the constantly changing network is only a snapshot of the current result of actions. The Actor Network Theory will help us to describe the eTwinning network and to figure out its different network views, characteristics and relationships. Although eTwinning consists of over 45,000 schools and teachers, graph theory is well suited to support the network analysis. It is also the groundwork for visualizations, because the intuitive recognition of graphical structures is much better than in adjacency matrices, for example. Hence, basic concepts of graph theory are an essential precondition for our approach. We are interested in exploring a social network whose structure provides significant insight into its activity and efficiency. Therefore, the structural qualities must be inspected. Besides measurements of network size, measurements for computing connectivity and distances are required. Social network analysis goes far beyond the scope of graph theory; a detailed introduction to SNA concepts can be found in [WF94]. The analysis of the eTwinning network concentrates on the behaviour and influence of its actors. Thus, clusterings [LN05, BE05] and centralities [Fre79, BKW03] are of special interest. Besides these rather mathematical aspects, there are some phenomena observable in social networks.
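As an illustration of the simplest of these measures, degree centrality counts the edges incident to each actor. A minimal sketch (our own illustration, not code from any of the cited toolkits):

```java
import java.util.HashMap;
import java.util.Map;

// Degree centrality: the number of edges incident to each node.
// Nodes with a high degree are the most directly connected actors.
class Degree {
    static Map<Integer, Integer> centrality(int[][] edges) {
        Map<Integer, Integer> degree = new HashMap<>();
        for (int[] e : edges) {
            degree.merge(e[0], 1, Integer::sum); // count edge for both endpoints
            degree.merge(e[1], 1, Integer::sum);
        }
        return degree;
    }

    public static void main(String[] args) {
        // A small "star" network: node 0 cooperates with nodes 1, 2 and 3.
        int[][] edges = {{0, 1}, {0, 2}, {0, 3}};
        System.out.println(centrality(edges)); // node 0 has degree 3
    }
}
```

Closeness and betweenness centrality refine this idea by taking shortest paths between all pairs of nodes into account rather than only direct neighbours.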
A typical network characteristic is the small world phenomenon ("Six Degrees of Separation") discovered by Stanley Milgram [Mil67] in the 1960s. Most pairs of nodes are linked by a path with only a few nodes in between, which has a direct impact on network efficiency and is a good criterion for structural quality. With this fundamental methodology, a meaningful and detailed analysis of a social network comes into reach. But the analysis is not the only task. The visualization makes its own demands, which go far beyond the formal sciences. Barely sufficient for the structure, graph theory does not provide means of communicating further details concerning element attributes. This topic is challenging, and there is no general solution. The positioning of elements within the graph structure is a main problem: the size of a large network considerably reduces visual clarity if the nodes and edges are not well arranged. On the other hand, a visual representation provides a very intuitive understanding of networks and their characteristics, even if they are both large and complex [Kre05]. However,
Table 1. A comparison of common social networks

Network  | Purpose                             | Existing SNA approaches
Bahu     | presentation, friends and interests | degree centrality, activity
Facebook | presentation, friends and interests | degree centrality, activity; Nexus
mySpace  | presentation, friends and interests | degree centrality, activity, connectivity
StudiVZ  | presentation, friends and interests | degree centrality, connectivity; StudiAnalyse
Xing     | vocational networking               | degree centrality, connectivity
applying new properties to the elements creates further structures in the network; hence, the balance between clear arrangement and the amount of additional information must be carefully deliberated. In general, three visual concepts exist to present complex data in a way that structures can be detected intuitively: sizes, colors and shapes. Even relying on color shades alone is challenging: although the human eye can distinguish about seven million distinct shades under certain conditions [WM97], it is difficult to distinguish hundreds of colors in a large network, where each node takes up an area of just a few millimeters.

2.2
Social Networks
We now consider common social networks of today and how they realize analysis. Today social networks are used to find old friends from school, to exchange resources of every description, and so on. Networks like Xing serve to establish professional contacts. Others like Facebook allow users to present themselves and exchange resources with friends. The efficiency of these networks arises from the network structure and the number of involved persons. But without any methodical analysis, the coordinators can hardly get an idea of the network; for the users, who just receive information directly related to them, it is almost impossible. So analysis would be reasonable, but the realized aspects are rather disappointing. Some networks display the network dimension, or inform registered users of their direct neighbours, in few cases also including the connection chain to other members. For StudiVZ and Facebook, web-based tools have been developed. But overall, a published analysis like the one we intend for eTwinning does not exist (cf. Table 1). Our analysis and visualization tool for eTwinning shall include much more functionality to fulfill our requirements. Today a multitude of tools exist, many of which are quite new or have been developed by known experts in social network analysis or visualization. Various powerful tools are in principle appropriate; here we shortly introduce two convenient toolkits. One application for biomedical, physical and social research is the Network Workbench, which contains, for example, detailed network analysis functions, some layout algorithms and methods for data preprocessing. Although it is a powerful tool which integrates a lot of approaches and techniques, the handling is
Fig. 1. Toolkits
very complex and far from intuitive. In contrast, Visone is very easy to handle [BW04b]. This tool could rather fit our demands: for example, new attributes can be applied to the network and represented by customizable visual aspects. Furthermore, common social network analysis functions can be used and displayed. Unfortunately, Visone cannot handle very large networks, and its layout algorithms leave a lot to be desired. There are other tools providing good analysis or visualization functionalities, for example Pajek or even yEd, a tool based on the same yFiles library as our prototype. But as with Visone and the Network Workbench, they hardly fit our requirements, which demand easy handling and interactive exploration based on non-editable network data. Beside these toolkits created for every kind of network, two applications that explicitly inspect particular networks are of interest. The major task of Nexus is visualizing the individual network of a Facebook member, not including the member himself (Figure 1 shows screenshots of all four tools). All direct friends (connections) are shown, including the relationships in between. It yields no information about the Facebook network in general; thus, it is impossible to conclude anything about the overall network from the small extract for one single member. Moreover, accurate or detailed analysis factors like centralities are missing. Although StudiAnalyse provides more features for analysis and visualization, it lacks general
information as well. Again, the visualization only contains the direct neighbours of the registered member and the connections between them, this time including the member himself. Even though StudiAnalyse supports generating the corresponding network of a chosen node, the whole network is not transparent. Including the attributes university, sex and relationship status is a good beginning for a detailed network visualization, but an extension to further data would be desirable.
3
Analysis of 45,000 Schools
In our prior research we have applied SNA to different social networks. We analysed conference participation networks to recommend interesting academic events to computer science researchers [KPC09]. We studied the dynamic evolution of wiki networks [KH08] and identified patterns in digital social networks [KSD06]. All this work helps us analyse the eTwinning network in an appropriate and systematic way.

3.1
Requirement Analysis
Our approach addresses several user groups with different motivations. The coordinators of eTwinning are interested in exploring the network structure. They want to find out what kind of network results from the projects, and how the participants behave after registration. The visualization shall give the required substantial overview. Especially participating teachers who do not have any experience in online collaboration, or who concern themselves with eTwinning for the first time, can be motivated to join. Additionally, finding themselves at an isolated location in the network may inspire participants to improve their connectivity. Finally, advancing the knowledge about social networks is commendable altogether, not only for eTwinning users. Table 2 contains a comparison of realized functionalities. We concretize the aims for the prototype "eVA" (eTwinning Network Visualization and Analysis) listed beside Table 2. Beside those functions, there are other necessary aspects to be considered. First, because eTwinning is realized as a web portal, the prototype should also be web-based. Whereas the users' lack of experience places some constraints on handling and help functionalities, it also requires easy access to social network analysis; therefore, there should be predefined, already interpreted analysis questions. The large extent of the network necessitates the possibility of regarding the network graph as a whole as well as in detail. Furthermore, dealing with networks always necessitates an explicit model in which all individual network elements are well-defined in meaning and coverage. The requirements of our prototype eVA are summarized in Table 3. Overall, this approach works in the following steps. The exported data sets from the original eTwinning database must pass through extensive preprocessing and be stored in a new database. The network models are the basis for the visualization and analysis functionalities.
These must include the possibility of changing element properties, searching for nodes, and laying out the network with different algorithms. Furthermore, the predefined questions, the expert social network analysis and the statistics must be integrated.
1. The tool must represent the eTwinning network appropriately in the visualization.
2. Relevant characteristics must be displayed (for example using colors).
3. The analysis must contain common SNA aspects.
4. The application aims not at creating an editor for the network graph, but at giving a detailed overview of eTwinning. Hence, customization should only be possible up to a certain degree. Experience and expertise of the prospective users are quite unknown, so user interface and functionalities should be simple.
Table 2. Functionalities in Nexus, StudiAnalyse and eVA (compared per tool)

Functionality: Visualize the whole network | Filter nodes/edges | Change layout | Show node labels | Show edge labels | Cluster nodes | Cluster edges | Legend for clustering | SNA concepts | Statistics | Zooming and moving | Search nodes
Table 3. Requirements

Modelling      M1  Represent eTwinning and fit visualization and analysis demands
Database       D1  Avoid redundant and noisy data, optimized for our tasks
General        G1  Client-Server architecture
               G2  Intuitive and simple user interfaces
               G3  Search function for nodes
               G4  Help function
Visualization  V1  Different layout algorithms, optimized for the created networks
               V2  Zoom, navigation and overview function
               V3  Cluster nodes and edges by size and color to display further data
               V4  Browsing function to show evolution of eTwinning
Analysis       A1  General statistics about eTwinning
               A2  SNA statistics and predefined questions

3.2

The Data Models for eTwinning
We want to gain an overview of a network which consists of thousands of varying nodes and diverse types of nodes of great complexity. Furthermore, we intend to discover characteristics, similarities, coherences and differences of the network elements, in behaviour or attributes. eTwinning consists of the three obvious entities schools, teachers and projects, and moreover, of the entity country. The latter is not integrated in the original database, but is of great importance for the
goal of eTwinning. To make comparisons between the participating countries possible, this entity has been added to the model. It is easy to see that modeling one network with all entities enlarges the already challenging network size extremely. Visual clarity would be lost completely, because different types of nodes would have to be integrated alongside different characteristics. Therefore, the network is split into four parts. The main model for the overview consists of schools as nodes; two schools are connected if they have worked on a project together, and the teacher network is quite similar. In the country network, two country nodes are connected if at least two of their schools worked together, and in the project network, two project nodes are connected if they have been submitted by the same teacher.

3.3
Data Preprocessing and Data Cleaning
At the moment there are over 45,000 registered schools, even more teachers, and over 8,000 projects. For every entity, numerous attributes hold much more information, for example the number of pupils of a school or the available technologies. But not all attributes are of the same relevance; thus, only a selection is imported into a new database. As the data might be inconsistent and has missing values, a detailed data cleaning is necessary. The resulting database consists of six tables: one table each for schools, teachers and the assignment of teachers to schools, a table for projects, a fifth table covering the participation of teachers and their schools in projects, and a last table for the countries. Additionally, several indexes have been added during the implementation to answer the frequent requests efficiently. After reducing the number of attributes and dealing with noisy data, the challenge of numerous distinct values for some attributes still remains. Although this fact is not critical on its own, the visualization is impaired, because every value requires its own color, for example. Thus, the data must be aggregated, e.g. using concept hierarchies.

3.4
System Architecture of eVA
The system is based on a common three-tier architecture (cf. Figure 2). A database server contains the new DB2 database with all the required data. The application server includes the logic behind eVA and communicates with the database via a JDBC interface to generate statistics and graph elements. The application logic is composed of two servlets for the main and the legend graph, the Java classes for all the computations, and the Java Server Pages for the interfaces. The client tier communicates with the application server asynchronously, using Ajax, namely the Dojo framework and JavaScript. The client handles the users' input and displays the resulting graphs and analysis tables. The database server contains the new database for the prototype, including the cleaned and refined information about schools, teachers, countries and projects. The business logic on the application server is integrated in a web server and supports three services for visualization, statistics computation, and the SNA methods. Whereas the statistics can be deduced from the database,
Fig. 2. System architecture
the other two services need more complex operations to turn the data into network graph elements. Both servers communicate via a JDBC interface. The application server communicates with the web client through synchronous and asynchronous protocols. The client holds the graphical user interface (GUI) only, and this comprises all visual representations of the diverse functionalities.
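How participation records are turned into aggregated graph edges (e.g. for the school network, where one edge stands for all shared projects of two schools) can be sketched as follows. The in-memory data shapes here are assumptions standing in for the actual JDBC result sets:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Build school-network edges from project participation records:
// two schools are connected if they worked on a project together,
// and one edge aggregates all of their shared projects as its weight.
class SchoolEdges {
    // participation: project id -> set of participating school ids
    static Map<String, Integer> edges(Map<Integer, Set<Integer>> participation) {
        Map<String, Integer> weight = new HashMap<>(); // "a-b" -> #shared projects
        for (Set<Integer> schools : participation.values()) {
            List<Integer> list = new ArrayList<>(schools);
            Collections.sort(list); // canonical edge key regardless of order
            for (int i = 0; i < list.size(); i++)
                for (int j = i + 1; j < list.size(); j++)
                    weight.merge(list.get(i) + "-" + list.get(j), 1, Integer::sum);
        }
        return weight;
    }
}
```

The teacher, project and country networks are derived analogously from the corresponding participation tables.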
4
Implementation of eVA
Handling large networks, modifying them and, not least, visualizing them requires complex methods. The yFiles package has been employed, which has been developed for graph visualization and adaptation and includes a huge number of functions for generating graph structures, transforming them into displayable elements and editing them. But because this library is not intended to be web-based, a second package, the yFiles Ajax library, must be used. A great advantage is the asynchronous communication with the application server, because the returned images of the graphs are very large and the layout computation is very time-consuming. However, the data in the database is not sufficient to generate the networks without further processing. Consider for example the edges in the school network: a project between two schools is represented by an edge, but an edge represents all corresponding projects. Beginning with the underlying data, a graph must pass through several construction steps. First, the empty graph and the realizers for nodes and edges must be initialized; these are essential for rendering the visual representation of the elements. Then the data must be requested, and the corresponding nodes and edges must be created in the graph, including a unique identifier. Figure 3 shows a visualization of all four networks. In the school and teacher networks the unconnected nodes are hidden from the graph for reasons of visual clarity. In the country network the edges are weighted by the number of cooperations between the corresponding nodes. When properties are added to the current network graph, the needed data is requested from the database and the corresponding realizer properties are
Fig. 3. The four eTwinning networks
modified for each element. To ensure distinguishable properties and to enhance reusability, particular color and line sets have been created. If labels have been applied, the graph is re-layouted. The parameters for the layout algorithms have been tuned empirically to generate arrangements as good as possible. The circular layout can be divided into two algorithms by setting the parameters: the first locates all connected nodes on a single circle, a positioning useful for dense and symmetric networks; the second results more in a disc, where the circles are adapted according to the connections of the nodes. The third algorithm is the smart organic layout. It takes the connections of nodes as well as overlappings of elements into consideration and is therefore well suited for complex network structures. Figure 4 shows the different layout algorithms applied to the same structure in the network.
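The principle of the single-circle variant can be sketched in a few lines; this illustrates only the layout idea, not the considerably more elaborate yFiles implementation:

```java
// Place n nodes evenly on a circle of given radius around (cx, cy) --
// the core idea behind the single-circle layout for dense, symmetric
// networks. Returns an array of (x, y) coordinates per node.
class CircularLayout {
    static double[][] positions(int n, double cx, double cy, double radius) {
        double[][] pos = new double[n][2];
        for (int i = 0; i < n; i++) {
            double angle = 2 * Math.PI * i / n; // equal angular spacing
            pos[i][0] = cx + radius * Math.cos(angle);
            pos[i][1] = cy + radius * Math.sin(angle);
        }
        return pos;
    }
}
```

The disc-like variant and the smart organic layout additionally take node connectivity and element overlap into account when assigning positions.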
Fig. 4. Three Layout algorithms
In general, two different approaches to analysis are realized in the eVA prototype. The first is of course the social network analysis; the second yields basic statistics about the eTwinning network, partially adapted to the modeling. These statistics can be deduced completely from the database and are shown as a table in a new window. The statistics function computes, amongst others, frequently used items like project themes or languages, taught subjects or school types. By applying the corresponding characteristic to the network, the distribution of the values within the network can be seen. Other interesting aspects are activities summarized by country and cooperation aspects like the year of the last collaboration. The social network analysis is divided into two aspects owing to the varying knowledge and experience of users. As mentioned above, besides the usual SNA methods, already interpreted questions give an easier introduction. A predefined question replaces the current network with one answering the question, e.g. "Which schools are the most active?" or "Which have done the most projects?"; Figure 5 shows an example. The other social network analysis results are computed on the currently presented network. They include the degree, closeness and betweenness centralities. As mentioned in Section 2, some methods can only be applied to completely connected networks. As we do not expect this to hold within eTwinning, we use the other concept for exploring its structure and evolution, the bottom-up approach. With this we can examine increasing structures, beginning with the smallest one composed of two connected nodes. A priori unknown, but of exceptional interest, is the distribution of the various structures and what size the largest may reach. Hence, the network is tested for connectivity, and the distribution of nodes over the components of different sizes is computed.
Note that the closeness centrality also cannot be computed if the network is not completely connected. Like the statistics, the outcomes are shown as a table in a new window. The reason for keeping the results for each single node and edge out of the graph is again visual clarity.
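The connectivity test and component-size distribution described above can be sketched with a standard union-find; this is our illustration of the technique, not the eVA source:

```java
import java.util.HashMap;
import java.util.Map;

// Component-size distribution via union-find: merge the endpoints of
// every edge, then count how many nodes end up in each component.
// A network is connected exactly when one component covers all nodes.
class Components {
    static Map<Integer, Integer> sizes(int n, int[][] edges) {
        int[] parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        for (int[] e : edges) parent[find(parent, e[0])] = find(parent, e[1]);
        Map<Integer, Integer> size = new HashMap<>(); // root -> component size
        for (int i = 0; i < n; i++) size.merge(find(parent, i), 1, Integer::sum);
        return size;
    }

    private static int find(int[] p, int x) {
        while (p[x] != x) { p[x] = p[p[x]]; x = p[x]; } // path halving
        return x;
    }
}
```

Sorting the resulting component sizes immediately yields the bottom-up view, from pairs of connected nodes up to the largest component.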
5
Analysis Results and Evaluation
Exploring a huge and manifold network like eTwinning opens up numerous applicable analysis aspects. A portion of these is already realized in eVA, but there are further potential methods as well. Hence, we describe some analysis results by way of example.
The connectivity of the teacher network is very low, and the school network looks similar: over 75% of nodes are not connected, even in the networks for 2005–2008. Note that the networks for the single years consist of all actors registered in that year, but the edges resulting from projects are independent of the date of the cooperation. The low values for 2008 arise from the fact that the export was made in July. The results show clearly that the number of active schools and teachers is very small in comparison to the number of registered ones. In the complete school network the largest component has a diameter of 20, with 2783 nodes in all. The complete teacher network has a diameter of 19 in its largest component of 4965 nodes. Otherwise, the networks grow constantly (cf. Figure 6), but while the number of new teachers is increasing, in 2007 only one fifth as many schools joined as in the year before. Looking at the project network, where projects are connected if they have been submitted by the same teacher, the connectivity is surprising (cf. Figure 3 (b)). One would think that the initiators of projects are distributed over the whole number of participants, but half of all projects share the initiating teacher with at least one other project. Thus, the initiative of the teachers differs a lot. The country network, however, is very close to the best case: only four edges are missing for the largest component to be a complete graph.
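Diameters like those reported here can be obtained by running a breadth-first search from every node of a component and taking the largest distance found. A sketch, under the assumption of a connected adjacency-list graph (the eVA implementation itself is not published in this detail):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

// Diameter of an unweighted, connected graph: the largest shortest-path
// distance between any pair of nodes, found by BFS from every node.
class Diameter {
    static int diameter(List<List<Integer>> adj) {
        int best = 0;
        for (int s = 0; s < adj.size(); s++) {
            int[] dist = new int[adj.size()];
            Arrays.fill(dist, -1);
            Deque<Integer> queue = new ArrayDeque<>();
            dist[s] = 0;
            queue.add(s);
            while (!queue.isEmpty()) {
                int u = queue.poll();
                best = Math.max(best, dist[u]);
                for (int v : adj.get(u))
                    if (dist[v] == -1) { dist[v] = dist[u] + 1; queue.add(v); }
            }
        }
        return best;
    }
}
```

For a path of four nodes 0-1-2-3, the diameter is 3; for a component with diameter 20, the two most distant schools are separated by 20 cooperation steps.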
Fig. 5. Predefined question result
Fig. 6. Evolution of the teacher network
In addition to the implemented SNA methods, we computed further results on a particularly chosen substructure (cf. Figure 7): the largest component of the 2008 teacher network consists of 65 nodes and 213 edges and exhibits several interesting characteristics. First, there are 22 complete substructures; a complete substructure means that all included teachers have worked together on the same project. Other interesting aspects can be found in Figure 8. Besides the analysis itself, the prototype had to be evaluated as well. To this end, we distributed an evaluation form to a group of students of the lecture “Web Science” at RWTH Aachen University and to a group of German teachers participating in eTwinning. The results were quite interesting, even though only few teachers participated.
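A complete substructure in this sense is a clique of the teacher network, so finding them amounts to enumerating maximal cliques. The classic Bron-Kerbosch algorithm does exactly this; the sketch below (the simple variant without pivoting, adequate for small components; the example graph is invented) illustrates the idea:

```python
def bron_kerbosch(adj, r=frozenset(), p=None, x=frozenset()):
    """Yield all maximal cliques of an undirected graph {node: set(neighbours)}
    (classic Bron-Kerbosch without pivoting). r = current clique,
    p = candidates that extend it, x = nodes already processed."""
    if p is None:
        p = frozenset(adj)
    if not p and not x:
        yield r                        # r cannot be extended: maximal clique
        return
    for v in list(p):
        yield from bron_kerbosch(adj, r | {v}, p & adj[v], x & adj[v])
        p = p - {v}
        x = x | {v}

# Two project teams sharing teacher 'c': two maximal cliques expected.
adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd', 'e'},
       'd': {'c', 'e'}, 'e': {'c', 'd'}}
cliques = [set(c) for c in bron_kerbosch(adj)]
print(sorted(sorted(c) for c in cliques))
# [['a', 'b', 'c'], ['c', 'd', 'e']]
```

For a 65-node component this naive variant suffices; larger graphs would call for the pivoting variant to prune the search.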
178
R. Breuer et al.
Fig. 7. 65-Tuple in teacher network 2008
Fig. 8. SNA results
Fig. 9. Evaluation result: Handling Complexity
Fig. 10. Evaluation result: Behaviour
As expected, the teachers’ prior knowledge about social networks and graphs, and the students’ knowledge about eTwinning, was very low. The estimated handling complexity was surprisingly good, though the teachers hardly dared to test the more difficult functionalities (cf. Figure 9). Again, this finding argues for the intuitive communication of knowledge via visual representations. One of the most interesting outcomes of the evaluation arises from the questionnaires: what would people do if they found themselves at a rather isolated position in the network, far from the core? Here, the difference in knowledge is reflected very strongly. Except for one, none of the teachers understood the consequences of this situation. Being well connected means having better opportunities to exchange knowledge or resources, and the contrary is good neither for oneself nor for the whole network (cf. Figure 10).
6 Conclusion and Outlook
In this paper, we have presented the concept and realization of a prototype for the analysis and visualization of a particular educational network: eTwinning. Some remarkable outcomes of the analysis concern connectivity: the only network that is well connected is the country network. All others consist of about 70% unconnected nodes. This means that most of the registered teachers or schools are inactive in terms of being involved in projects. On the other hand, several projects are carried out by many teachers in large cooperations, partially across more than one school type. eTwinning has existed for four years now, and every year the number of involved teachers grows rapidly, whereas the number of new schools decreases. This can be regarded as a positive sign of growth in which the expansion of eTwinning happens at the local level, involving more teachers within a given school, an outcome of a “back to school” campaign.

There were many more aspects and results of particular interest, but of course there are still opportunities for improvement. The finding that most participants are inactive was reflected in the new release of the eTwinning portal in 2008, whose new focus is to involve participants in a plethora of activities (e.g. online communities, learning labs) that will eventually lead to project involvement. The eTwinning model was split into four appropriate network models, but to highlight all interrelations between the entities, a mixed model could be integrated into eVA. Furthermore, additional interesting methods could be added, beyond the many statistical, analytical and graphical aspects already covered. Especially the detection of complete substructures would be significant for indicating partnerships in single projects. The most interesting point for eTwinning arises from the evaluation results: a rather critical issue is the knowledge about social networking, which strongly influences cooperation behaviour.
Thus, the network would benefit from training the teachers even more in the idea of social networking. eVA can also be rolled out to the whole eTwinning community. More teachers should test this tool and get to know their roles in the overall eTwinning network.
Analysis of Weblog-Based Facilitation of a Fully Online Cross-Cultural Collaborative Learning Course Anh Vu Nguyen-Ngoc and Effie Lai-Chong Law Department of Computer Science University of Leicester Leicester LE1 7RH, United Kingdom {anhvu,elaw}@mcs.le.ac.uk
Abstract. Online facilitation in a cross-cultural collaborative learning context is increasingly prevalent, but limited research has been conducted on the related issues, especially on facilitation via weblogs, which are increasingly used as an educational tool and becoming a significant component of many web-based learning environments. In this paper we address two issues: (i) how facilitators use weblogs as an educational instrument to support the students’ activities in a fully online multi-national collaborative learning course; (ii) whether facilitation style plays a role in influencing the learners’ working styles and performance in a setting where weblogs have been heavily used. The analysis results indicate the practicality of deploying weblogs as an effective online facilitation and learning tool, the rather intricate relationships between facilitation and learning styles mediated by some socio-cultural factors, and the usefulness of our proposed weblog analysis scheme. Keywords: Weblog, Facilitation, Content analysis, Social network analysis, Collaborative learning.
1 Introduction
Facilitation, with the aim of making an action or a process easier and helping to bring about desirable outcomes, is an integral part of, as well as a critical success factor for, most computer-supported collaborative learning (CSCL) settings. In the recent decade a cluster of similar terms, such as e-facilitator [1], e-moderator [2], e-tutor and e-mentor, have emerged and been used interchangeably to refer to a person who enables or “guides on a side” online group interactions and communications. On the other hand, there is an ever increasing trend of cross-cultural CSCL, which requires teachers with different academic and cultural backgrounds to orchestrate their experience and expertise and to crisscross cultural boundaries adeptly in order to facilitate online learning activities. Furthermore, there seems to be no standardised facilitation approach; facilitation varies with the nature of people, tasks and technologies and involves managing complex tripartite relationships among these three elements. Roughly speaking, there are two major facilitation approaches: a proactive facilitator sets the context for the group, installs group norms and manages the group’s progress [3], whereas a reactive facilitator launches an intervention strategy when problems arise.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 181–195, 2009. © Springer-Verlag Berlin Heidelberg 2009
No consistent
findings indicate which facilitation approach is most effective, nor are there any consensual methods to gauge the effectiveness of online facilitation. The facilitation style interacts intricately with the type and purpose of an online group. Social software tools are indispensable for supporting online facilitation, and among such tools, weblogs (or blogs) play an increasingly important role. A weblog is a type of website, usually maintained by an individual, with regular entries commonly displayed in reverse chronological order. The weblog has emerged as a popular social software tool in recent years, and blogging as a web-based form of communication is becoming mainstream [4]. In Higher Education Institutions (HEIs) worldwide, we can witness the broad dissemination and adoption of this tool. This increasing trend suggests that learners tend to circumvent the constraints of centralised authorship [5]. Furthermore, the weblog meets the need for instant communications in a knowledge-building community [6] because it enables self-reflection as well as student-facilitator and peer communications. Indeed, a salient quality of the weblog is its effectiveness and ease of use for publishing one’s thoughts, inviting further intellectual as well as social dialogue. Weblogs provide a space where decentralised authorship can be realised and create a flexible environment in which students can be motivated to reflect and discuss [7]. Being able to engage in sustained constructive discourse is imperative for motivating students to participate continuously in a web-based learning course. With appropriate design and guiding strategies, weblogs have great potential to become a powerful tool that supports web-based learning in academic institutions and workplaces [8, 9].
Given the foregoing observations, in our project we are motivated to develop a better understanding of how facilitators use weblogs as an educational instrument to support students’ activities in a complex multi-national collaborative learning setting, and of whether facilitation style plays a role in influencing learners’ working styles and performance in a setting where weblogs have been heavily used. We are also interested in identifying viable approaches to analysing weblog activities for the evaluation of online collaborative learning and facilitation. This issue is important because a theoretically sound and practically usable content analysis scheme for making sense of usually voluminous weblog contents is still lacking. Our project aimed to create an open virtual learning space for HEIs in Europe and to advance learners’ self-directed learning competence [10]. The validation of these goals was realised through Trials (or field studies). We adopt a mixed-methods approach to capture data from different sources, actors and perspectives and to triangulate the findings. In this paper, we report the design and implementation of our most recent validation Trial (hereafter ‘Trial’) in Section 3. Empirical results and discussions are presented in Sections 4 and 5, respectively. Implications for future work are described in the last section.
2 Related Literature on Online Facilitation
In online education, facilitation is an integral part of, as well as a critical success factor for, group interactions and communications, which take place mostly in some communication medium. The correlation between tutors’ experience and qualifications and students’ performance in traditional education has been found in many
studies (e.g., [11], [12]). However, to the best of our knowledge, no existing papers examine in detail online facilitation or the impact of facilitation styles on students’ learning styles and performance in a cross-cultural collaborative learning environment. Such a learning context has its own features that might influence the students’ performance: the context of learning (online learning mediated by technology, group-mates coming from different backgrounds and cultures, and learning activities facilitated by facilitators who might also come from different cultures), learning methods (different from those used in classrooms), support (hardware, software and the like), and other matters raised by cultural differences or time management (different time zones). Furthermore, facilitators in the online environment are required to be active and experienced in the subjects to be taught [13]. An online facilitator should also be able to adapt to the lack of physical presence in an online environment by creating a supportive environment where students feel comfortable and where they know that their facilitators are accessible. Facilitators should also be well trained in the online learning context. Knowledge of the tools used in facilitating online learning and collaboration activities, knowledge of the appropriate methods for communicating with online students and colleagues using the provided tools, the ability to monitor and assess the students’ learning and collaboration, and knowing how to prepare effectively for a course in an online environment are just a few of the expectations placed on a facilitator. Using weblogs in online courses is becoming popular in many HEIs. This communication instrument is becoming a significant component of many web-based learning environments.
Consequently, online facilitators are required to be knowledgeable about this communication tool in order to facilitate their students in an effective and appropriate way. In reviewing the existing CSCL literature, three points are noteworthy: firstly, target groups are mostly (if not only) students; in other words, there exist only a limited number of studies investigating how facilitators and students interact over time. Secondly, studies on deploying weblogs for facilitation and on the impact of such weblog-based facilitation are not yet available. Thirdly, systematic analysis methods for evaluating different dimensions of online collaborative learning and facilitation are still lacking.
3 Trial: Settings, Participants and Feeding Mechanism
The Trial was implemented as an international, fully online master-level course in the spring semester 2008 (March–June). It was an introductory course on “e-learning course design”. Pedagogically, it was grounded in social constructivist learning theories, with the aim of advancing the learners’ competence in self-directed learning, social networking, and cross-cultural collaboration through individual as well as groupwork activities. The course consisted of a series of practical hands-on activities and reflective discussions in international groups. Ten facilitators, designated as fa1, fa2, fa3 and so on, from seven European countries and with different levels of online facilitation experience were involved in the Trial. Two so-called internal facilitators (i.e. fa1 and fa2) were research team members of the project and co-designers of the pedagogical intervention strategies. 76 students
from eight European countries registered for the course. The eight external facilitators were compensated with some monetary rewards. A face-to-face meeting was held at which all facilitators were introduced to the related pedagogical concepts, the Trial scenarios and the tools to be deployed. They also discussed various issues such as student recruitment, course outline, and assessment schemes. This bootstrapping event enabled the facilitators to get to know each other, thereby laying the groundwork for subsequent collaborations. The physical meeting was followed by a series of regular videoconferences in which the facilitators could share practical experiences and strategies for resolving issues such as passive students. The participating students were undergraduates and postgraduates majoring in different fields of information science and the social sciences. They were divided into 10 groups, designated as Group1, Group2 and so forth. Each group was supervised by a facilitator. The course lasted 14 weeks. To facilitate the establishment of collaborative relationships, all students and facilitators were required to create their own personal weblogs and introduce themselves there. Each group was required to develop an online course on a topic of their choice. For each week there was a specific e-learning topic, and the students were required to read a list of related learning materials and to write reflections on their individual as well as group learning activities in their weblogs, using a pre-defined template. Besides the weblog, which was the main tool of the Trial, the students could also use other recommended open-source social software to support different activities. Furthermore, our project aimed to form a “distributed collaborative learning space” [10], in which the facilitators and students could easily share, exchange, search and browse weblog data without moving back and forth among several weblogs.
A feed mechanism was the technical solution. It enables the creation of a mashed team feed to monitor weblog postings and comments, i.e. it makes team members aware of each other’s postings. This mechanism allows a user to reply to other users’ postings not in their weblogs but in his/her own. A user can also keep aggregated contents of all “subscribed” weblogs on his/her own weblog. The technical team of the project developed a WordPress Feedback plugin to support the feed management, enabling the user to activate and deploy the feed mechanism easily [14]. The proposed feedback mechanism complements existing feed standards such as RSS 2.0 and Atom. Furthermore, the students were asked to categorise their weblog messages with pre-defined tags (e.g. groupXXreflection). The tagging scheme enables efficient search for specific messages in a weblog.
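Such a tagging scheme turns search into a simple lookup rather than a scan over all weblog contents. The sketch below builds an inverted index from tags to post IDs; the record fields and tag values are illustrative and do not reflect the actual WordPress data model:

```python
from collections import defaultdict

# Hypothetical post records; in the Trial, real posts lived in WordPress
# weblogs and carried pre-defined tags such as "group01reflection".
posts = [
    {"id": 1, "author": "student_a", "tags": ["group01reflection"]},
    {"id": 2, "author": "student_b", "tags": ["group02reflection"]},
    {"id": 3, "author": "student_a", "tags": ["group01reflection", "tools"]},
]

# Inverted index (tag -> list of post ids), built in one pass over the posts.
index = defaultdict(list)
for post in posts:
    for tag in post["tags"]:
        index[tag].append(post["id"])

print(index["group01reflection"])  # [1, 3]
```

Finding all reflection messages of a group is then a single dictionary lookup instead of a search through every weblog.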
4 Data Analysis and Results
4.1 Blog Analysis Scheme
To analyse weblogs effectively, we have developed a weblog analysis scheme originally inspired by France Henri’s [15]. In her scheme, transcripts are analysed along five dimensions: participative, interactive, social, cognitive and metacognitive. Her approach is grounded in a cognitive view of learning, focuses on the level of knowledge and skills evident in the learners’ communications, and has been applied widely for evaluating (mostly) forum discussions in many online
learning courses, e.g. [16, 17, 18]. Henri’s approach, however, focuses largely on forum-based communications, where everybody uses the same instrument. It also lacks a detailed classification of electronic messages [17]. The cognitive and metacognitive dimensions defined in Henri’s scheme are very hard to measure, and the inter-rater reliability [19] tends to be very low. We have substantially extended and adapted Henri’s scheme to the context of using weblogs in an online cross-cultural collaborative learning environment. We have combined content analysis and social network analysis (SNA) techniques [20] to analyse and visualise the collaborative learning and interaction patterns derived from the weblog messages. The proposed analysis scheme has been deployed to evaluate the Trial, a very complex web-based multi-national collaborative learning context. In the Trial, we focus on evaluating the learning and interaction activities that took place in a collection of individual and group weblogs. Our combined analysis scheme helps evaluate different dimensions of the collaborative learning activities, including the facilitating activities, based on the weblogs. Basically, the weblog posting messages (or entries) and comments are broken down into small units, i.e. separate meaningful ideas [19]. Each unit is coded and classified according to its type, its content, its “sender” or “receiver” attribute, and its connectivity. Unique identifiers (IDs) are assigned to posting entries and their corresponding comments. Details of the scheme can be found in [21]. In this paper, we present the results of applying our proposed weblog analysis scheme to the Trial. However, due to the space limit, some aspects of the scheme can only be briefly described when the analysis results are discussed.
4.2 Analysis Results
To facilitate the evaluation, the whole course of 14 weeks was divided into three phases according to the course schedule.
• Phase 1: Week 1 and Week 2, in which one of the assigned tasks was to form groups.
• Phase 2: From Week 3 to Week 10; the major tasks assigned to the students were to explore the tools, to investigate appropriate learning resources, and to design and develop an online course.
• Phase 3: From Week 11 to Week 14, the students were required to (peer-)evaluate the other groups’ courses and to improve their own course based on the feedback received from the other groups.
Descriptive statistics. Some useful information could be extracted from the facilitators’ self-introduction messages in their weblogs. All the facilitators are academic staff at different HEIs. Five of them work in educational technology, two in distance education/e-education, one in organisation and informatics, and one in information system management. Seven of them have had experience with social software tools. Almost all of them have had experience in online education. However, only two confirmed experience in facilitating international groups. In the period of 14 weeks, 159 messages (or entries) and 136 comments were published in the facilitators’ weblogs (compared to 749 messages and 766 comments
published in the students’ weblogs). fa1 and fa2, who co-designed the course, were among the most active facilitators. fa1 posted 39 messages in her own weblog and also many comments in the weblogs of the students in her own Group1 as well as in the other groups. Similarly, fa2 was extremely active in posting comments and initiating discussions in many different facilitators’ and students’ weblogs; she had the highest number of posted comments. fa3 and fa5 were also very active in posting messages. In contrast, fa4, fa7 and fa10 were more passive: they posted only seven, seven and twelve messages in their own weblogs, respectively. The idea of the “distributed collaborative space” was not implemented effectively, with only a relatively small number of facilitators and students having tried out this possibility. Eight facilitators activated the feedback plugin, but only six of them, namely fa1, fa4, fa5, fa7, fa8 and fa10, actually received feeds from their students in their own weblogs. The others just added hyperlinks referring to their students’ weblogs. Four facilitators (fa1, fa4, fa5 and fa6) published a message instructing their students how to tag their weblog messages, but only two facilitators (fa1 and fa5) applied the pre-defined tagging scheme themselves. Hence, it was not surprising that very few students (actually only two of them) tagged their messages. While many of the students used their own meaningful tags, the others just did not bother with tagging. Whilst 51 students activated the feedback plugin, only 28 of them received feeds from their facilitators and/or group members, and they mostly viewed the new messages in the weblogs of the facilitators and/or their group members. Reply messages in their own weblogs were rarely found.
Only in Group1, in which the facilitator fa1 strongly encouraged her students to use the proposed feeding/tagging mechanism (by setting an example and by posting messages explaining its benefits), were the feed functions more or less used as expected. Weblog content analysis: classified message units. The message units are “typed”. This typing scheme helps reveal the features and patterns of the students’ activities on their weblogs. Four types are defined: TA (if the unit content is task-related), CO (if the unit content is about coordination, such as organising a group meeting), SO (social-related units) and TE (anything concerning technical issues). Note that these types can be refined depending on the context and the evaluation objectives. The number of SO units was highest in Phase 1 and Phase 3. In Phase 2, the number of TA units was slightly higher than that of SO units. This shows that the weblogs were heavily used by the facilitators for social exchange. In Phase 2, which was supposed to be the main working phase, the number of TA units increased significantly. The number of TE units was low in all phases. Weblog content analysis: message categorisation. Individual weblog messages were classified, based on their meaning, into pre-defined sub-categories belonging to one of two core categories, namely “Course Design” and “Groupwork”. Within the Course Design category, concerning the expectations when joining the Trial, around 71% of the facilitators’ statements were about learning new things. They were eager to learn new social software tools, e.g. they posted in their weblogs “Just think about […] tools as part of the course - it is not only novel, but on my opinion it is also very inviting environment for new type of learning”. They were also eager to face a new experience, e.g. “what is the international experience of doing courses in
web 2.0 environment”. Other expectations included meeting new people, sharing ideas and establishing good cooperation. Within the Groupwork category, most of the SO (social-related) message units fell into the sub-category “Encouragement”, whereas the other SO messages were about health problems or were simply welcome and greeting messages. Very often, the facilitators posted messages to encourage their students when they faced difficulties (e.g. “don’t worry, you can do it”), to ask them to keep on working (e.g. “Please, continue working, don’t leave your task… we can do it!!!”), or to recognise their progress as well as their contributions to the group activities. The messages categorised into the “Facilitation” and “Groupwork strategies” sub-categories are discussed in the following sub-sections. Groupwork strategies. Although collaboration (in which everyone works on the same issue, such as solving a problem together) among the students was encouraged, cooperation (in which each one works separately) was the basic working style observed. In the Trial, each group was required to design an online course, and this main group task could be divided into different sub-tasks. Usually, each student worked separately on the assigned tasks. However, they could also switch to the collaboration mode, in which they discussed the group tasks directly using a synchronous communication tool. Asynchronous communication tools such as Google Groups and weblogs could also be used to support group collaboration. The selection and usage of group tools was much influenced by the facilitators’ recommendations and guidance. Most facilitators posted messages such as “As a starting point, I would like to try out this feedback plugin which is integrated to our weblogs”, or “Now, please subscribe your blogs to the other groupmates or put the RSS into the feed on feeds tool to be in contact, and start…”.
Facilitation.
Different facilitators had different facilitation styles. fa1 and fa2 were very actively involved in the Trial and in the activities of most of the groups. Both reflected thoroughly on the students’ work as well as on the course design and implementation. They also raised questions for their own research, e.g. fa2 posted “I started to think about the “netiquette” - how to behave in virtual spaces… I started to read about transactional distance theory … as it might explain our learning situation and its context”. fa1 mostly supported her own group actively, though she also posted some messages in other groups’ weblogs. She stated in her weblog that “the facilitator is part of the group”. She actively guided her group in selecting an appropriate working style, e.g. “What needs to be clarified in your team is will you favor cooperative or collaborative workstyle”. She proposed some interesting tools, provided feedback on the group discussions, and contributed new ideas. fa1 also applied the feeding/tagging mechanism in her weblog and encouraged her students to do the same, e.g. “it is better not to comment students’ postings but rather write replies to students in my own weblog … and the student will get aware of such replies through pingbacks....”. Some facilitators, including fa1, fa2, fa3 and fa5, considered the Trial an excellent chance to learn and to work together with their students. For example, fa5 posted “I am not a teacher, trainer or a lecturer in this group … I would do my best to help if necessary (and it is my real pleasure) but this is not me who makes decisions…”. Some facilitators, e.g. fa1 or fa8, were quite formal in their postings, though many social-related messages (mostly for encouragement) were also found. In
188
A.V. Nguyen-Ngoc and E.L.-C. Law
contrast, some others, such as fa2, fa3, or fa5, were more informal. The active facilitators fa1, fa2, fa3, fa5 and fa8 followed the progress of their groups quite closely. They regularly provided feedback and guidance to their students and tried to help them when they encountered problems with the collaborative learning activities. Furthermore, fa1, fa2 and fa5 actively raised questions for their students for further discussion or research, e.g. “What do you think about our group work according to the principles […]...”. In contrast, fa4, fa7 and fa10 were quite passive in posting in their weblogs. Most of their messages were generic instructions related to the Trial. The number of messages posted in fa9’s weblog was quite high; however, most of them were generic instructions or encouragement. This facilitator was not actively involved in her students’ activities. Passive members were one of the biggest problems of the Trial. Although the number of registered students was very high, only a small number of students in each group were really active, contributed to their group activities and benefited from the Trial. However, there seemed to be no optimal solution to motivate the passive members and to encourage them to keep on working. Basically, to deal with the passive members, the facilitators sent individual and group emails and posted messages and comments in the passive students’ weblogs. Some facilitators also organised group chat meetings to find out what problems their passive students might have. fa1 posted “...We can’t stop to wait the others to catch up because we’re having a rather tight schedule with our other studies as it is....” fa1 also proposed that “...This week I learned that … local facilitators and local peer help was helpful to figure out why people are missing…”. This was actually a very good approach, as the local facilitators could have a stronger influence on the local students. 
In fact, many facilitators, including fa5, fa6 and fa8, organised regular face-to-face meetings with the local students, not only from their own group but also from the other groups, to discuss the Trial and the groupwork. Social network. Social network analysis of weblogs (e.g. [22]) plays a very important role in our analysis scheme. Typically, a weblog is written by a single user (the blogger) and is closely identified with that person [23]. A weblog interaction may emerge when a user explicitly posts a comment on an original message, or when a new message implies or refers to an idea raised by a previous message posted in the same or in another weblog. The latter, however, is very difficult to locate. In our analysis, we address both interaction cases. First, the interactions amongst the users who post messages in the weblogs are constructed. Both intra-group and inter-group interactions are taken into account. The social networks constructed help reveal (i) the learning community structure, (ii) the activeness and contributions of the participants, including both facilitators and students, to their own groups as well as to the whole community, and (iii) the interactions and the relationships developed and maintained as the course progressed. Second, the connectivity between the message units is analysed and visualised. The questions investigated include “From each unit, were there any references to other units from the same or from other weblogs?” and “How complex were the connected messages, e.g. how many messages were found in a chain of connected messages?” The message connectivity analysis allows us to evaluate the interaction process in online asynchronous discussions, e.g. how well and how often the students used weblogs for their collaborative learning activities.
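The construction of such an interaction network (directed links between participants, Freeman's OutDegree/InDegree, and cliques of three or more members) can be illustrated in a few lines of Python. The participant names and comment links below are invented toy data, not the actual Trial dataset.

```python
# A minimal, self-contained sketch of the social-network construction
# described above. All names and links are invented toy data.
from collections import Counter
from itertools import combinations

# Each directed link means "commenter -> weblog author" for one interaction.
links = {
    ("fa1", "g1st1"), ("fa1", "g1st2"), ("fa1", "g1st3"),
    ("g1st1", "fa1"), ("g1st2", "fa1"), ("g1st3", "fa1"),
    ("g1st1", "g1st2"),
}

# Freeman's degree centrality splits into OutDegree (links sent)
# and InDegree (links received).
out_deg = Counter(src for src, _ in links)
in_deg = Counter(dst for _, dst in links)
print("fa1:", out_deg["fa1"], in_deg["fa1"])  # OutDegree, InDegree

# A clique is a maximal complete sub-network of >= 3 nodes in the
# undirected interaction graph; for a toy graph this small we can
# simply test every triad for completeness.
undirected = {frozenset(e) for e in links}
nodes = {n for e in links for n in e}
triads = [t for t in combinations(sorted(nodes), 3)
          if all(frozenset(p) in undirected for p in combinations(t, 2))]
print("complete triads:", triads)
```

In a real analysis, a graph library would replace the triad enumeration, but the computed quantities are the same ones reported in the results below.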
Analysis of Weblog-Based Facilitation
189
Participant interactions. As expected, during Phase1 and Phase2 of the course, there was a very high number of connected participants. During Phase1, the facilitators and students exchanged several social and welcome messages/comments in their weblogs. Some also discussed their experiences from previous courses in which they had participated. Phase2 could be considered the working phase, in which the students were required to work individually as well as collaboratively with their facilitators and peer students to design an online course. Hence, the number of connected participants increased correspondingly. However, in some groups, such as Group7, which was a passive group, the number of connected participants decreased as the course progressed. Phase3 was the last phase of the course, and only active members kept on working until this phase. In addition, the tasks during this phase did not require many interactions from the participants. That is why the number of connected participants decreased significantly in Phase3. The learning community of the whole course was also investigated. We defined the learning community, in the context of our Trial, as the combination of all ten groups, including their facilitators. Fig. 1 shows the sociogram [20] (i.e. the social structure) of all the participants’ interactions in Phase2 of the Trial. The facilitators are represented by bigger (blue) nodes while the students are represented by smaller (red) ones. The members belonging to the same group are positioned relatively close together (with Group1 at the top of Fig. 1). All groups had “isolated” nodes, which represent those students who did not have any interaction at all. Interactions were clearly visible in the intra-group mode. One can see that interactions were quite dense in Groups 1, 3, 5 and 8. In contrast, there were very few intra-group interactions in Groups 4, 6, 7 and 10. 
Fig. 1. The whole Trial learning community structure

Taking a closer look at Group1 in Phase2, the main working phase, we found seven cliques, defined as maximal complete sub-networks containing three nodes or more [20]; each was composed of three participants. fa1 was found in six cliques, which implied the activeness, or the centrality, of this facilitator in the group activities. The calculated Freeman centrality degree [20], defined as the number of links incident upon a node, confirmed this: fa1 had a very high centrality degree, with an OutDegree and InDegree of 39 and 13, respectively. In contrast, as an example of a passive group in Phase2, Group7 had only one clique, which did not contain the group facilitator fa7. g7st2 was the most active student of Group7, with both Freeman OutDegree and InDegree being only 5. Several interactions were also found in the inter-group mode, many of which originated from fa2, or between facilitators and students from the same country. Clearly, fa2 played a central role in the whole community’s interactions. She interacted with many students from all ten groups, though the student interactions in her own group (Group2) were not particularly strong. Another interesting case was fa8. This Finnish facilitator had very strong interactions with her local Finnish students; all of her inter-group interactions were with Finnish students. Interestingly, the Finnish students were mostly active. They had very high technical skills and communicated very well with each other. Message connectivity. The message connectivity analysis and visualisation is important to see how the students used weblogs for their learning and collaboration. During Week1/Phase1, several star-pattern links were found in almost all the weblogs. This could be attributed to the fact that the facilitators and students posted comments on the self-introduction messages in the weblogs. A few links were found in the inter-group mode. We noticed that the chosen group working approach depended heavily on the suggestions from the facilitators and/or from some active group members. 
For example, in Group1, the message interactions on the facilitator’s weblog and on her active students’ weblogs were very strong; many links were found in those weblogs. Group5 started using a Google group for their discussions in Week4, following some discussions involving fa5, and from then on most of the groupwork discussions took place in their Google group. In Group9, there were only two active students. They preferred to work directly using their individual weblogs, as their facilitator did not make any recommendations on collaboration tools. In Fig. 2, which shows this group’s interactions in Week3, we use different colours to refer to the different weblogs to which the messages were posted. All red nodes (i.e. the upper left and right clusters) are the messages posted on g9st6’s weblog; the green ones (i.e. the lower right cluster) are those posted on g9st4’s. Many links, even some long chains of messages, were found between these two weblogs. In Group10, fa10 proposed that the group use his weblog for the group discussions. However, fa10 was a very passive facilitator, and the Group10 message interactions were actually very weak in both the group space (fa10’s weblog) and all the students’ individual weblogs. In other “weak” groups, e.g. Group7, there were very few posted messages and, of course, very few connected messages during the working phase (Phase2). The differences in the number of posted messages and in the message connectivity between a strong and a weak group are clear. Interestingly, considering the messages posted on weblogs, the facilitators of those “weak” groups were more passive than their colleagues from other (stronger) groups.
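The chain-of-connected-messages measure used in this connectivity analysis can be sketched as follows; the reply graph below is invented toy data, not messages from the Trial.

```python
# A small sketch of the message-connectivity measure discussed above:
# the length of the longest chain of connected messages, computed on an
# invented reply graph (message -> message it comments on or refers to).
from functools import lru_cache

# Toy data: each message id maps to the message it refers to (None = root).
refers_to = {
    "m1": None, "m2": "m1", "m3": "m2",   # a chain of three messages
    "m4": "m1", "m5": None, "m6": "m5",
}

@lru_cache(maxsize=None)
def chain_length(msg):
    """Number of messages in the chain ending at msg."""
    parent = refers_to[msg]
    return 1 if parent is None else 1 + chain_length(parent)

longest = max(chain_length(m) for m in refers_to)
print(longest)  # -> 3 (m1 <- m2 <- m3)
```

The hard part in practice, as noted above, is building the `refers_to` mapping itself, since implicit references between messages must be located by hand.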
Fig. 2. Group9 message connectivity in Week3
5 Discussion

How facilitators use weblogs as an educational instrument. The requirements for selecting facilitators to join the Trial, which was a very complex cross-national collaborative learning setting, were very demanding. In addition to the qualities specified in [13], the facilitators had to be knowledgeable not only about the subject matter but also about the pedagogical concepts underlying the Trial; competent in deploying the enabling technologies for online collaborative learning activities (most of them were up-to-date with information and communication technologies); able to monitor and assess international students’ learning and collaboration; confident in conversing in English, which was their second or even third language; and motivated as well as accessible almost round-the-clock to give timely feedback and support to students, who came from different countries and time zones. Consequently, although most of the facilitators involved in the Trial were experts in education-related fields, their abilities, skills and experience still varied considerably. Some of them were very motivated and active and were genuinely involved in the students’ collaborative learning activities; others were very sluggish and passive. Their weblog-based facilitation approaches were also very different. Some facilitators actively used their weblogs as an important communication channel to share and exchange ideas as well as to guide their students. The passive facilitators’ weblogs were used only for generic instructions and did not contain timely feedback on the students’ questions and problems.

The influence of facilitation styles on the students. It appeared that the facilitator’s way of organising and using his/her weblog had a strong impact on the way the students used theirs. The combination of the feeding mechanism, which was a practical means to collect data from different weblogs in one
place, and a pre-defined tagging scheme, which made browsing/searching in weblogs easier, could greatly support the formation of a “distributed collaborative learning space”. However, only in Group1, where facilitator fa1 strongly encouraged her students to use the proposed feeding/tagging mechanism (by setting an example and by posting messages explaining its benefits), were the feed functions used more or less as expected. The fact that only half of the students who had activated the feedback plugin exploited the feeding features (cf. Section 4.2) implies that some students may have had misconceptions about feeding, as some feeding concepts were not intuitive. We believe that the facilitators needed to use this mechanism effectively to demonstrate it and to encourage their students to use it. They should have strictly used the feeding mechanism as instructed (and expected) by the Trial designers and should have followed the tagging scheme to set an example for their students. They should also have provided clear instructions about the basic ideas behind those mechanisms, so that the students could have avoided misunderstandings about the use of ‘standardised’ tags (i.e. searching for certain blog types). The analysis results also show that the facilitators’ activeness in posting messages and comments may have had some impact on the way their students used their weblogs. It seemed that the more active the facilitators were in posting messages, the more active the students were in using their weblogs. In the groups where the facilitators actively raised discussion or research questions, the number of students’ postings was high and, especially, the students’ interactions via their weblogs were dense. Conversely, when the facilitators were passive, interactions in their students’ weblogs were rarely found. 
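The feed-aggregation/tagging idea described above — collecting posts from several individual weblogs into one place and filtering them by a shared tag scheme — can be illustrated with a toy sketch. The feed contents and tag names below are invented for this illustration, not the Trial's actual tagging scheme.

```python
# A toy illustration of aggregating distributed weblog posts by a
# standardised tag. Feed contents and tag names are invented data.
feeds = {
    "fa1-weblog": [
        {"title": "Task instructions", "tags": ["task", "week2"]},
        {"title": "Feedback on drafts", "tags": ["feedback"]},
    ],
    "g1st1-weblog": [
        {"title": "My course outline", "tags": ["task", "draft"]},
        {"title": "Hello everyone", "tags": ["social"]},
    ],
}

def aggregate(feeds, tag):
    """Collect all posts carrying a given standardised tag, with their source."""
    return [(source, post["title"])
            for source, posts in feeds.items()
            for post in posts if tag in post["tags"]]

print(aggregate(feeds, "task"))
```

The sketch makes the point in the text concrete: the aggregation only works if everyone applies the shared tags consistently, which is exactly what the passive groups failed to do.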
We did not find any significant correlation between the students’ grades, which were awarded based on structured evaluations by the facilitators and peers, and the facilitators’ facilitation styles. In other words, it seems that the facilitator’s style had no influence on the students’ performance. Actually, in all the groups there were some active students and many passive students. The active students managed to finish the Trial even though the intervention from the facilitator was very limited in some groups. Even in groups where the facilitators were very active, many students dropped out very quickly. Motivation, self-directed learning competence and cultural features appear to be more important factors influencing the effectiveness and efficiency of weblog usage in web-based learning. The motivated students were very active in doing research that supported their assigned tasks and in getting involved in the group activities; some even found joy in the work. Interestingly, there are apparently cultural differences in the students’ readiness for self-directed learning. Specifically, the Finnish students consistently outperformed their counterparts from the southern European countries in terms of activeness, leadership, level of motivation, autonomy, and critical thinking. Presumably, students from countries where autonomy is not encouraged may tend to be less self-directed. The facilitators admitted in their interviews that “there was some cultural problem between different people in understanding their roles and the way they have to perform their joint projects … most of the students were from […] countries in which the teaching system … self-directed learning is not a reality, but students have the habit to be directed by their teacher”. 
Such socio-cultural factors can have a strong impact on the way the students work collaboratively using online tools, including weblogs, especially when the intervention from the facilitators is kept to a minimum.
Analysis of weblog activities to evaluate the learning/facilitation activities. The analysis results show that weblogs can be seen as a very rich evaluation source for studying group interactions and collaborative learning activities in a web-based learning environment. Our proposed weblog analysis scheme has proved useful in analysing a fairly large number of weblogs within a fairly short time. It enables us to gain a good understanding of the complex qualitative data of our Trial. The combination of qualitative, quantitative and SNA approaches allows one to dig into the richness of weblog data and to construct the whole picture from different perspectives. The visualisation of weblog interactions at both the people and message-unit levels helps reveal the learning community structure, as well as the weblogs’ usage and connectivity, in a very intuitive way. In addition, the visualisation of weblog-based interaction patterns, fed back to users when necessary, can help them regulate their behaviours [24, 25]. Breaking weblog messages down into small units allows the evaluators to easily apply different coding and/or classification schemes if necessary. Finally, the dimensions mentioned are quite flexible, allowing the sub-dimensions and categories, or even some dimensions, to be refined depending on the evaluation objectives and on the emerging results of the analysis. However, there are still drawbacks to our weblog analysis approach. Classifying and categorising the messages is rather subjective. In addition, while it seems straightforward to analyse and construct the participant interactions, it is very difficult to determine the message connectivity, i.e. to trace the references from one message unit to others, which may be posted in the same or in a different weblog. Furthermore, it is time-consuming to extract data from weblogs, to analyse the data, to create the analysed data matrices and then to generate the sociograms. 
The cognitive process dimension in our scheme also remains a future task, as such a dimension would help evaluate the information processing and cognitive depth of postings.
6 Concluding Remarks

As revealed by our analysis results, weblogs can serve as a powerful tool in web-based learning. They are useful for enabling students to self-reflect on their interactions and learning activities, to express their thoughts and ideas, and to share/exchange them with their peers when working together. In particular, weblogs can serve as a communication and information channel for the facilitators to exchange ideas as well as to guide their students in online collaborative learning activities. However, much research effort is still required to exploit the potential of weblogs as an educational technology. Specifically, several open research questions are worth exploring: • First, the feasibility of fully automating the tedious, manual process of weblog content analysis. A range of qualitative data analysis software packages are available; the popular ones include Atlas.ti, NUD.IST, and NVivo. Some formal and informal reviews of these products have been conducted (e.g. [26]). In our future work, we will experiment with NVivo to analyse our weblog data, to assess the relative effectiveness and efficiency of the manual and automated approaches. • Second, there are certainly viable approaches to weblog content analysis other than the one we proposed (e.g. [22, 27]). Amongst others, pattern analysis approaches seem promising. Indeed, the research question of whether e-learning patterns can be
as successful as their counterpart in software development has recently been addressed (http://www.iwm-kmrc.de/workshops/e-learning-patterns/index.html). Our future work may contribute to this question. • Third, differentiating active from inactive weblogs relies on arbitrary, non-validated metrics and thresholds [28]. A one-message weblog is clearly a passive one. However, given the wide variety of posting behaviours, it is very hard and impractical to apply a consistent threshold to all users. Weblog content analysis and the construction of social networks would help reveal the quality of the postings as well as the activeness and participation of the weblog user in the group and community activities, but the processes involved are extremely time-consuming. We plan to develop and validate a mechanism that streamlines this process.
References
1. Howell-Richarson, C., Preston, T.: Introduction. Reflecting Education 1, 1–2 (2005)
2. Salmon, G.: E-moderating: the Key to Teaching and Learning Online. Kogan Page (2000)
3. Friedman, P.G.: Upstream facilitation: A proactive approach to managing problem-solving groups. Management Communication Quarterly 3, 33–50 (1998)
4. Nardi, B., Schiano, D.J., Gumbrecht, M.: Blogging as social activity, or, would you let 900 million people read your diary? In: CSCW Conference, Chicago (2004)
5. Karger, D., Quan, D.: What would it mean to blog on the Semantic Web? In: McIlraith, S.A., Plexousakis, D., van Harmelen, F. (eds.) ISWC 2004. LNCS, vol. 3298, pp. 214–228. Springer, Heidelberg (2004)
6. Divitini, M., et al.: Blog to support learning in the field: lessons learned from a fiasco. In: IEEE Conference on Advanced Learning Technologies, Taiwan (2005)
7. Lin, W.-J., et al.: Blog as a tool to develop e-learning experience in an international distance course. In: 6th International Conference on Advanced Learning Technologies, The Netherlands (2006)
8. Alexander, B.: Web 2.0: A New Wave of Innovation for Teaching. Educause Review 41 (2006)
9. Jackson, A., Yates, J., Orlikowski, W.: Corporate blogging: Building community through persistent digital talk. In: 40th Annual Hawaii International Conference on System Sciences (2007)
10. Fiedler, S., Kieslinger, B., Ehms, K., Pata, K.: D1.3: iCamp educational intervention model. Technical report (2009)
11. Darling-Hammond, L.: Teacher quality and student achievement. Education Policy Analysis Archives 8(1) (2000), http://epaa.asu.edu/epaa/v8n1.html
12. Foster, G.: Teacher influence on student performance and selection in broad-spectrum tertiary education. Working paper, University of South Australia (2007)
13. Illinois Online Network: Educational Resources Web, http://www.ion.illinois.edu/resources/tutorials/pedagogy/instructorProfile.asp
14. Wild, F. (ed.): An interoperability infrastructure for distributed feed networks. Technical report (2007)
15. Henri, F.: Computer conferencing and content analysis. In: Kaye, A.R. (ed.) Collaborative learning through computer conferencing. Springer, Berlin (1992)
16. Hara, N., Bonk, C.J., Angeli, C.: Content analysis of online discussion in an applied educational psychology course. Center for Research on Learning and Technology, Indiana University, Bloomington (1998)
17. McKenzie, W., Murphy, D.: I hope this goes somewhere: Evaluation of an online discussion group. Australian Journal of Educational Technology 16(3), 239–257 (2000)
18. Ng, K.C., Murphy, D.: Evaluating interactivity and learning in computer conferencing using content analysis techniques. Distance Education 26(1), 89–109 (2005)
19. Strijbos, J.-W., et al.: Content analysis: What are they talking about? Computers and Education Journal, 29–48 (2005)
20. Wasserman, S., Faust, K.: Social Network Analysis: Methods and Applications. Cambridge University Press, Cambridge (1994)
21. Nguyen-Ngoc, A.V., Law, E.L.-C.: Challenges for blog analysis and possible solutions. In: International Conference on Web-based Learning, Aachen, Germany (2009)
22. Chang, Y.-J., et al.: Social network analysis to blog-based online community. In: IEEE International Conference on Convergence Information Technology, Korea (2007)
23. Chin, A., Chignell, M.: A social hypertext model for finding community in blogs. In: Hypertext Conference, Denmark (2006)
24. Indratmo, V.J., Gutwin, C.: Exploring blog archives with interactive visualization. In: International Conference on Advanced Visual Interfaces, Italy (2008)
25. Takama, Y., Matsumura, A., Kajinami, T.: Visualization of news distribution in blog space. In: IEEE/ACM Conference on Web Intelligence and Intelligent Agent Technology, Hong Kong (2006)
26. Lewis, B.R.: NVivo 2.0 and ATLAS.ti 5.0: A comparative review of two popular qualitative data-analysis programs. Field Methods 16(4), 439–469 (2004)
27. Herring, S.C., et al.: Bridging the gap: A genre analysis of weblogs. In: 37th Annual Hawaii International Conference on System Science, pp. 101–111 (2004)
28. Kramer, A.D.I., Rodden, K.: Applying a user-centered metric to identify active blogs. In: CHI Conference, San Jose (2007)
Sharing Corpora and Tools to Improve Interaction Analysis Christophe Reffay and Marie-Laure Betbeder LIFC: Computer Science laboratory of the University of Franche-Comté 16 Route de Gray 25030 Besançon cedex, France {Christophe.Reffay,Marie-Laure.Betbeder}@univ-fcomte.fr
Abstract. With a very wide range of online interaction analyses staying in the hands of researchers, and tools implemented as research prototypes that are often used only in non-replicated experiments, we point out the need for the TEL research community to reach large-scale validation of its results. This paper is a concrete step in this direction. To foster deeper collaboration in our community, we suggest sharing structured data collections. The Mulce project proposes a structure for teaching and learning corpora (including the pedagogical and research context), and especially for interaction tracks. Two main corpora have been built according to this structure. This paper defines a teaching and learning corpus, shows its main structure and browses some parts of the structured interaction data. We also describe the platform that enables the community to browse and analyze a shared corpus. Keywords: e-research, corpora sharing, interaction analysis.
1 Motivation In the last twenty years, we have seen the emergence of an incredible number of tools, services and platforms. Each technology quickly replaced the previous one, offering more and more potential for interaction analysis. Some voices in our communities are pointing out that our research outcomes have little impact on real learning situations: our very intelligent tools and services often stay in the researchers’ hands and rarely go beyond the prototype stage. The pace of technological innovation is too high compared to the time needed by social science to validate some of our prototypes. In this paper, in order to propose to the community a way to access, share, analyze and visualize learning and teaching corpora, we propose a new formalism [1] which defines, describes and structures data provided by online training. Before presenting our proposal, in this section we come back to the validation of indicators and tools for Technology Enhanced Learning and present other work related to this contribution. The study of collaborative online learning, whether aimed at understanding this form of situated human learning, at evaluating relevant pedagogical scenarios and settings, or at improving technological environments, requires the availability of interaction data from all actors involved in such learning situations, including learners and teachers. U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 196–210, 2009. © Springer-Verlag Berlin Heidelberg 2009
Over the last decade, many technical proposals for indicators for monitoring social or cognitive processes can be found, especially in the TEL, Intelligent Tutoring System and Computer Supported Collaborative Learning communities. While some of these indicators are very specialized, i.e. strongly tied to a given tool or activity, we also find very general-purpose indicators taking their raw data from widely used communication tools such as text chat, text conferencing or e-mail. These technical implementations of indicators provide a large range of possibilities and make this research area very creative. The vast majority of these indicators (including ours) are designed in a given context, where they show some interesting properties and even promise usefulness for the various actors involved in real situations. Unfortunately, these indicators often stay in the researchers’ hands and are rarely used by the real actors of the situation. As far as we know, none of them has been validated, or at least evaluated, by real actors. The need to validate these indicators, at least in a given context, becomes crucial if we want this domain to contribute to real-world distance learning. These indicators are also rarely reused in other situations or contexts. We argue in this paper that our research community should be able to widen the validity of an indicator by testing it in different situations. In their work [2] on the coding-and-counting analysis methodology, the authors already pointed out the weakness of our research domain: replicability, reliability and objectivity need to be improved in our work. The main idea of research collaboration is already well expressed in [3] in the following terms: “There is urgent need of putting together complementary strengths and contexts and combining our insights as rapidly as possible to make a greater impact and further elevate our research quality at the same time. 
Research generally has had a small voice in national educational outcomes; we can speak louder if we speak together.” We know how hard it is to build natural classroom situations, called here authentic learning situations. This is one of the reasons why we launched the Mulce [4] project. Instead of having hundreds of unclassified learning situations, where the data of each are available only to the researchers who built it, we argue that our communities would gain maturity and deepen their understanding by sharing some representative situations. Such data could be used as a test-bed for the variety of indicators or methods for analyzing various facets of collaboration. In the Intelligent Tutoring System (ITS) field, the PSLC DataShop [5], presented in [6], provides a data repository including data sets and a set of associated visualization and analysis tools. These data can be uploaded as well-formed XML documents that conform to the Tutor_message schema. The goal is to improve the ITSs the data are logged from. The data sets are fine-grained, mainly generated automatically by the ITSs themselves, and focus on action/feedback interaction between learners and (virtual) tutor tools. In the CSCL community, a very interesting framework, DELFOS [7, 8], makes proposals similar to those of the Mulce project. It defined an XML-based data structure [9] for collaborative actions in order to promote interoperability (between analysis tools), readability (both for human analysts and automated tools) and adaptability to different analysis perspectives. Some of these authors joined the European research project
(JEIRP–IA) on Interaction Analysis and reported in [10] a template describing IA tools and a common format. This common format would be obtained automatically from Learning Support Environments (by an XSL transformation) and either directly processed by new versions of Interaction Analysis tools, or automatically transformed back into their original data source format to be processed by previous versions of these tools. The resulting common format focused more on technical interoperability than on learning context or human readability. The context is given for fine-grained interaction. A very interesting experience in the CSCL community was initiated by the Virtual Math Team [11]. Multimodal chat sessions (namely teams B and C of the 2006 Spring Fest) in the Virtual Math Forum were collected and delivered to numerous (28) external collaborators coming from 11 countries, 18 institutions and 8 different research fields. Every collaborator applied his/her own analysis methods and tools to these interaction data in order to see what came up. The results are reported in [12]. The same data set was also used for a pre-workshop of the last CSCL conference in Rhodes. In this context, we showed how this data set can be structured in a Mulce structure, and a new collaboration is currently building this data set into a fully documented corpus to be made available in the Mulce repository. In the Mulce structure, the learning situation and the research context are described as wholes, possibly in different formats (IMS-LD, LDL, MotPlus, simple text document, etc.). If they conform to IMS-LD, their identified included objects can be referred to by the workspace elements structuring the acts’ lists that are recorded in the instantiation part. The nature of the sharing perspectives is very different: in the JEIRP, the goal is to share a schema structure, whereas the Mulce platform’s main objective is to share the data collections. 
On this last issue, impressive work has been done in the Dataverse Network project [13], described in [14]. We agree with this project's members that data sets have to be made available, or at least identified and recorded in a fixed state, in order to make sure that the data used for a given publication are the same as those identified and (hopefully) made available to other researchers. We also consider that such a data publication, when connected to a traditional paper published in a journal or conference, would increase the value of the article and of the related journal (or conference proceedings). In the Mulce project, we provide a technical framework to describe an authentic situation, specified by a formal or informal learning design or by detailed guidelines, with a representative number of actual participants, according to a research protocol. We also define a "Learning and Teaching Corpus", provide a technical XML format for making such a corpus sharable, and are currently developing a technical platform for researchers to save, browse, search, extract and analyze online interactions in their context. The main idea of the Mulce project is to provide contextualized interaction data connected to published results. Considering today's available technology, Lina Markauskaite and Peter Reimann sketched in [15] an ideal research world where grid computing, middleware services, tools managing remote resources, open access to publications and data repositories, and open and interactive forms of peer review constitute great potential for e-research. We globally share this vision of the future of research. Even if the way to this ideal is rather long, the main contribution of this paper can be considered a modest but concrete step in that direction, by giving a
Sharing Corpora and Tools to Improve Interaction Analysis
definition and data structure of a teaching and learning corpus, as well as the associated platform to share such corpora. The availability of data should enable deeper scientific discussion of previously published results. Other researchers may be able to verify or replicate the proposed methods. It becomes possible to compare methods on the same data and then discuss their results or their efficiency. This way, different analyses can be performed on the same set of interaction data. The Mulce platform is designed to connect these analyses to the set of data they are based on. Such a set can be part of one or more corpora available on the Mulce platform. Even if sharing research data on collaborative learning has wider implications, the discussion in this paper, introduced in the next section, focuses on the validity of the indicators given in the TEL literature. The main contribution, presented later on, is the open Mulce XML structure for interaction data and the related Mulce platform allowing our communities to share such data collections. We first define a teaching and learning corpus, then describe its components and detail the structure of the interaction data. The design of the Mulce platform is then presented in the section entitled "Proposal", before a brief conclusion.
2 How to Validate Our Tools or Indicators?

In the review [16] of the "state of the art technology for supporting collaborative learning", we find 23 referenced systems allocated to 3 main categories: mirroring tools, metacognitive tools and guiding systems. Systems are assigned to these categories according to the locus of processing (i.e., where diagnosis and remediation are synthesized). As the authors note in their conclusion, "We have not yet seen full-scale evaluations of the types of systems we have covered here" and "If our objective is to assist students and teachers during real, curriculum-based learning activities, we must also understand how well our laboratory findings apply to natural classroom situations." In this very nice classification, and particularly for the last two categories, the definition of a "desired" or "ideal" state of interaction is crucial. When should the supervisor (in the second category) or the system (in the third) consider that the current state is too far from the desired state? This decision can be made by a simple comparison with a threshold, such as a desired number of messages within a certain period of time, or the number of group members a learner has interacted with. Sometimes, as in [17], this threshold can be extracted directly from the learning design, when the guidelines for learners explicitly indicate a list of precise tasks involving a countable number of interaction messages. For sessions (replicated over the years) applying the same learning design to similar cohorts of learners, a first session can be used to calibrate the "ideal state". For example, in an English as a second language acquisition module, the online Copeas experiment was used to measure the time each of the actors (tutor and learners) talked during the online audio-graphic synchronous sessions [18].
These measures are related to the learners' different profiles: English level, age, favorite modality of interaction [19], etc.
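As an illustration of the threshold-based diagnosis discussed above, the following sketch compares a simple participation indicator against a "desired state". All names, record layouts and threshold values are invented for illustration; they are not part of the Mulce format or of any of the cited systems.

```python
# Hypothetical sketch: a message-count indicator compared with a threshold
# taken, e.g., from the learning design or from a calibration session.

def message_count_indicator(acts, author_id):
    """Count the acts (messages) produced by one actor."""
    return sum(1 for act in acts if act["author"] == author_id)

def at_desired_state(acts, author_id, desired_minimum):
    """True when the actor's activity reaches the desired state."""
    return message_count_indicator(acts, author_id) >= desired_minimum

# Toy interaction data: three acts logged during one activity.
acts = [
    {"author": "learner_1", "type": "chat_message"},
    {"author": "learner_1", "type": "forum_message"},
    {"author": "learner_2", "type": "chat_message"},
]

# With an illustrative threshold of 2 messages per learner:
print(at_desired_state(acts, "learner_1", desired_minimum=2))  # True
print(at_desired_state(acts, "learner_2", desired_minimum=2))  # False
```

Calibrating `desired_minimum` on a first session, as suggested above, would simply mean replacing the constant with a value computed from a reference data collection.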
For metacognitive or guiding tools currently designed and implemented for future learning sessions, a set of representative interaction data collections would be very useful (if available) as a calibration step. These tools could then be tested during the design process against available (shared) data collections, and be applied and evaluated directly by real actors during the first experiment. We can quote [20] as a good example of an experiment where mirroring tools are actually tested by learners to get a bird's-eye view of their ongoing collaboration in a long-term project using a wiki. In their paper, the authors conclude that this first step of tool evaluation showed its usefulness, especially for group leaders, and had a positive effect on collaboration management. A better understanding of the representation seems to be needed by learners to improve their interpretation. The authors plan to give users more control over what information is visualized and how, in order to foster a better appropriation of the tool. As wikis have become popular in our research experiments, we can imagine that other researchers have similar tools or analysis methods that run on such data. Making the data available could help to compare these tools when they share the same goal, or to enrich the analysis when they offer complementary points of view. For computer scientists, it might be enough to put the raw wiki logs and contents on a shared repository; but for the larger part of our communities, who would analyze the content and draw interpretations from these analyses, the format of the data should be understandable and the context of the situation readable. One option is to keep developing more and more prototypes giving intelligent feedback to their (hypothetical) users. This path implies that a great part of our effort is dedicated to constructing new experiments for most of our new prototypes.
We can try to reuse a very interesting analysis tool in a slightly different context, but, in the worst case, with very different data formats. Or we can try to share some representative authentic learning sessions for wider use in a test-bed platform, involving researchers from a wider range of sciences sharing their complementary analyses. Even if some innovative experiments will remain necessary, much of our time could then be dedicated to deepening understanding, to comparing and validating threshold values, analysis methods and tools, and to building large-scale validations of them. In other words, the underlying questions are: Which is more efficient: sharing data, or sharing formats and tools (without data)? And for whom? The rest of this article is a proposal for how to share such a data collection. The next section defines the learning and teaching corpus and describes the structure of its main components.
3 Proposal

Our proposal consists of (1) a formalism to describe learning and teaching corpora and (2) a platform to share these corpora [1]. The formalism defines the information that can be contained in a corpus and the structure of the data. Through the platform, researchers can share their corpora with the community and access the data shared by other members of the community.
To share a corpus, a researcher has to provide metadata describing the corpus' components and upload a file describing each component. When accessing a corpus, an identified researcher is provided with a variety of tools to browse the corpus components, to navigate through the contextualized interaction data, and to visualize and analyze them.

3.1 Proposal 1: Learning and Teaching Corpus Formalism

In the many fields involved in computer-mediated interaction analysis, we find different research methodologies that result in different needs, and especially in different ways to save and describe the data. Although the definition of a "learning and teaching corpus" necessarily depends on the way research experiments are conducted, we claim that our definition is general enough to fit this variety; a crucial point for the concrete structure is to make the methodological choices of a given experiment explicit. In this section, we first present the main phases involved in this methodological process. Then, we give the derived definition of a "learning and teaching corpus" and explore the structure of its main components.

Building and recording interaction in an online training
The general organization of an online experiment is illustrated in Fig. 1.
Fig. 1. Building a research experiment for an online training: chronology. The figure shows four successive stages: a Designing Stage (producing the Learning Design and the Research Protocol), a Learning Stage (Instantiation), an Analyzing Stage (Analyses) and a Publishing Stage (Results).
In the first stage, the educational scenario is described at an abstract level, by defining the educational prerequisites and objectives, the abstract roles (learner, tutor, etc.), and the learning and support activities with their respective environments (abstract tools, e.g. chat, forum, etc.). When the training is to be observed for a research study, the researchers define, on the one hand, the research questions and objectives and, on the other hand, the list of observable events to be logged. This documentation makes the research protocol, or context of the experiment, explicit: what will be evaluated? Are there pre- or post-tests, or training interviews? In the second stage, the training actually takes place. The abstract roles (designed in both parts of the first stage) are taken on by real actors, and the abstract environments are implemented on particular platforms with identified tools. This is the instantiation phase, where actual learners and tutors run the activities and identified researchers collect their observable actions (interactions and productions). Specific activities designed in the research protocol may also take place during this period, e.g. pre- or post-tests, interviews, etc. At the end of the training, i.e. when learners and tutors are gone, the collected data can be structured and analyzed by the researchers. These analyses
hopefully lead to research publications that summarize the context and the methodology and try to explain the results. The data collection itself is not disseminated. The two documents from the design stage describe the context of the experimentation. This information often stays only in the heads of the researchers involved in the experimentation. The instantiation phase produces the core data collection that is analyzed in the third stage. Having the context in mind, these researchers can properly interpret their results during the analysis phase. Consequently, in order to make this data collection sharable with external researchers, we show how the various phases presented above become the main components of the corpus defined in the next section.

Learning and Teaching Corpus: Definition
We define a Learning & Teaching Corpus as a structured entity containing all the elements resulting from an on-line learning situation, whose context is described by an educational scenario and a research protocol. The core data collection includes all the interaction data, the productions of the training's actors, and the tracks resulting from the actors' actions in the learning environment, stored according to the research protocol. In order to be sharable, and to respect the actors' privacy, these data should be anonymized, and a license for their use should be provided in the corpus. A derived analysis can be linked to the set of data actually considered, used or processed for this analysis. An analysis consisting of a data annotation, transcription or transformation, properly connected to its original data, can be merged into the corpus itself, so that other researchers can compare their own results with a concurrent analysis or build a complementary analysis upon these previously shared results. The definition of a Learning & Teaching Corpus as a whole entity comes from the need for explicit links between interaction data, context and analyses.
This explicit context is crucial for an external researcher to interpret the data and to perform his or her own analyses. The general idea of this definition is to capture the context of the data stemming from the training, so that a researcher can find, understand and connect this information even without having attended the training course.

Corpus composition and structure
The main components of a learning & teaching corpus (see Fig. 2) are:
- The Instantiation component, the heart of the corpus, which includes all the interaction data and the productions of the on-line training's actors, complemented by system logs as well as information characterizing the actors' profiles.
- The Context component, which covers the educational scenario and the optional research protocol.
- The License component, which specifies both the corpus publisher's (editor's) and the users' rights, and the ethical commitments toward the actors of the training. A part of the license component is private, held only by the person in charge of the corpus. Only this private part may contain personal information regarding the actors of the training.
- The Analysis component, which contains global or partial analyses of the corpus as well as possible transcriptions.
Fig. 2. Teaching and learning corpus: the main components in a Content Package
The Mulce structure aims at organizing the components of the corpus in a way that links them together. For example, a researcher reading a chat session (which belongs to the instantiation component) must be able to read the objectives of the activity (which belong to the pedagogical context). Moreover, the Mulce format must allow digging into the component data on the platform (see, for both points, the section entitled "Browsing and analyzing corpora"). A standard exchange format is also required to download the whole corpus. Considering these constraints, we chose the IMS-CP formalism [21] as the global container. This XML formalism fits these constraints by expressing metadata, different levels of description, and an index pointing to the set of heterogeneous resources. Each corpus is thus archived as a Content Package [21], including the metadata, descriptions and related resources used in each of the components.

Instantiation component: Actors and environment description
This component describes (1) the actors, (2) the technological environments, (3) the tools used during the learning activity and (4) the groups and their members. The pedagogical scenario can describe the generic activity of a group by specifying the roles without assigning them to actors, and by declaring only the types of the involved tools. For example, in the abstract pedagogical scenario, one can define a negotiation activity for the production of a collective document, to be performed by each group using a chat and a forum. In the instantiation part of the corpus, we have to define all the actors involved and the concrete environment used. For example, in the activity described previously, the main environment used was the WebCT platform, whose chat and forum tools have specific features (speech acts, attached files, etc.).
This description results in the definition of the environment's features, together with the specification of the structure of the tracks collected during the learning activity. The actors' general description covers their age, gender,
institution and, if needed, some cultural or cognitive profile attributes (country, mother tongue, etc.). When more specific information is required, the structure may be extended by a specific XML schema.

Instantiation component: Workspace concept
The hierarchical structure of the learning stage is captured in the workspaces element, i.e. a sequence of workspace elements (see Fig. 3).
Fig. 3. Extract of the XML Schema
A workspace is generally linked to a learning activity (of the pedagogical scenario). It encompasses all the events observed during this activity, in the tool spaces provided for it, for a given (instantiated) group of actors. As shown in Fig. 3, a workspace description includes its members (references to the actors registered in the learning activity), starting and ending dates, the provided tools, and the tracks of the interaction that occurred in these tools. In order to fit the hierarchical structure of learning and support activities, a workspace can recursively contain one or more workspace elements. The lists of places, sessions, descriptors, contributors and sources defined in the workspaces element can be referenced by workspace, contribution or act elements. For example, the descriptors may list identified categories so that each act in the acts list can refer to one or more of these categories. This principle makes it possible to browse the interaction data in many different ways, independently of the concrete storage organization in the XML document. Our specification describes communication tools and their features with a great level of precision. The corpus builder can specialize/particularize the schema (i.e.,
restrict it) to fit the specific tools and features proposed to the learners in a specific learning environment. Conversely, if a tool cannot be described with the specification, one can augment the schema by adding new elements in order to take the tool's specificities into account. These two mechanisms work in opposite directions: the specification can be restricted or extended to fit the tools' specificities or the analysis needs. Moreover, the recursive workspace description lets the corpus author choose the grain at which the environment is described. Thus, a workspace can be used to describe a complete curriculum, a semester, a module, a single activity or a work session (a concept generally related to synchronous learning activities). The workspace concept represents the space and time location where interaction with identified tools can be found. This notion has the same modularity as the EML learning units [22], [23]. The devices and tools within which interaction occurs can be as different as a forum, a chat or collaborative production tools (e.g., a conceptual map editor, a collaborative word processor, a collaborative drawing tool).
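The recursive workspace structure described above can be illustrated with a small, hypothetical XML fragment. The element and attribute names below only loosely follow the description (members, tools, acts, nested workspaces); they are illustrative assumptions, not the exact element names of the published mce_sid schema [24].

```python
# Hypothetical Mulce-like workspace fragment: a module-level workspace
# containing its members, a tool, one act and a nested session workspace.
import xml.etree.ElementTree as ET

fragment = """
<workspace id="ws_group_A" begin="2006-03-01" end="2006-03-15">
  <members>
    <member ref="learner_1"/>
    <member ref="tutor_1"/>
  </members>
  <tools>
    <tool id="chat_1" type="chat"/>
  </tools>
  <acts>
    <act author="learner_1" tool="chat_1" begin="2006-03-02T10:05:00"/>
  </acts>
  <workspace id="ws_session_1" begin="2006-03-02" end="2006-03-02"/>
</workspace>
"""

root = ET.fromstring(fragment)
# The recursion lets one workspace (here, a two-week module) contain
# another (a single synchronous session): the corpus builder chooses the grain.
members = [m.get("ref") for m in root.findall("./members/member")]
nested = [w.get("id") for w in root.findall("./workspace")]
print(members)  # ['learner_1', 'tutor_1']
print(nested)   # ['ws_session_1']
```

The same layout applies at any grain: a curriculum-level workspace would simply nest module workspaces in the same way.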
Fig. 4. Extract from the XML Schema – the act concept
Interaction tracks are stored according to the act structure presented in Fig. 4. All actions, wherever they come from, are described by an act element. An act necessarily refers to its author's identifier (defined in the members list, Fig. 3) and has a beginning_date. Depending on the nature of the act (act_type), an optional ending_date can be specified. The act_type element is a selector: the actual content (or value) of the act, which depends on its type, is stored in the appropriate structure. For example, a chat act (see Fig. 5) can have the type in/out (actor entering/leaving), may contain a message, and can be addressed to all the workspace members or to a specific one (e.g., if it is a private message). A chat act can contain an attached document (file), which in turn is described by a name, a type and a date. The optional comment element contains a sequence of typed text of any type and can be used to store researchers' annotations. The last optional element of the act structure (any) allows any extension not provided in our schema.
Fig. 5. Extract from the XML Schema – the chat act concept
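A hypothetical chat act, following the description above (author reference, beginning_date, act_type selector, type-specific content, optional comment), might be assembled as follows. The exact element layout is an assumption for illustration, not a verbatim extract of the schema.

```python
# Hypothetical construction of a chat act: an author reference, a beginning
# date, an act_type selector, and a type-specific body holding a private
# message with an attached file. Element names are illustrative.
import xml.etree.ElementTree as ET

act = ET.Element("act")
ET.SubElement(act, "author").text = "learner_1"            # ref to members list
ET.SubElement(act, "beginning_date").text = "2006-03-02T10:05:00"
ET.SubElement(act, "act_type").text = "chat"               # selector

chat = ET.SubElement(act, "chat_act")
msg = ET.SubElement(chat, "message", addressee="learner_2")  # private message
msg.text = "Here is our draft."
ET.SubElement(chat, "file", name="draft.doc",
              type="application/msword", date="2006-03-02T10:04:30")
ET.SubElement(act, "comment").text = "researcher annotation goes here"

print(ET.tostring(act, encoding="unicode"))
```

An in/out chat act would carry no message body, and a public message would simply omit the addressee, in line with the optional elements described above.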
This XML Schema defines the storage structure for many act types, e.g. forum message, chat act, transcribed voice act, and more. For lack of space, this paper gives only some of the main ideas of this schema, but the complete schema for structured information data is available online [24]. The definition, composition and structure of a Learning & Teaching Corpus have been presented in the sections above. The next one explains how these data structures can be shared and processed on the Mulce platform.

3.2 Proposal 2: A Platform for Corpus Sharing

Sharing corpora
Once data have been collected, structured and described by metadata, we are ready to share them on the Mulce platform. Being connected with other Open Archives Initiative repositories [25], [26], the Mulce platform shares its metadata, and our corpus objects become visible to the whole community. Two main corpora (Simuligne and Copeas) have already been uploaded. About twenty related corpora containing analyses are also in our repository. This paper is also an invitation to all researchers to prepare their corpora in order to share them on the Mulce platform, keeping them readable. Depositing a corpus consists in declaring it, describing it by means of general metadata, and uploading its components (described previously). Each component has a specific formalism. These can be either standard formalisms, such as Learning Design [27] (used for the context components: educational scenario and research protocol), or the specific formalism described above for structured interaction data. If these recommended formalisms are used to describe the various components of the uploaded corpus, researchers will fully benefit from all the tools provided on the Mulce platform to navigate and analyze the entire corpus. Otherwise, the corpus will simply be downloadable as is by other researchers. Each component is described by its specific metadata.
On the Mulce platform, these metadata can be used by a researcher to find corpora that fit particular constraints. For example, the researcher can select the corpora pertaining to his or her own research interests, whether in terms of the tools used, the targeted audience or the logged tracks.
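A minimal sketch of this metadata-driven selection might look as follows. The metadata fields and their values are invented for illustration; only the corpus names Simuligne and Copeas come from the text above.

```python
# Hypothetical corpus metadata records and a constraint-based selection,
# mimicking the kind of query the Mulce platform is described as supporting.

corpora = [
    {"name": "Simuligne", "tools": ["forum", "chat"],
     "learning_field": "French as foreign language"},
    {"name": "Copeas", "tools": ["audio-graphic conference", "chat"],
     "learning_field": "English for ICT"},
]

def select(corpora, tool=None, learning_field=None):
    """Return the names of corpora whose metadata match every constraint."""
    hits = []
    for c in corpora:
        if tool is not None and tool not in c["tools"]:
            continue
        if learning_field is not None and c["learning_field"] != learning_field:
            continue
        hits.append(c["name"])
    return hits

print(select(corpora, tool="chat"))                       # ['Simuligne', 'Copeas']
print(select(corpora, learning_field="English for ICT"))  # ['Copeas']
```

In practice the platform would evaluate such constraints against each component's metadata rather than against in-memory records, but the selection logic is the same.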
Browsing and analyzing corpora
The second part of the platform offers visualization, navigation and analysis of the structured interaction data. Corpora, or selected parts of corpora, can be downloaded by identified researchers. Here, two distinct aspects are considered: the navigation/visualization aspect and the analysis aspect. The interest of the navigation/visualization aspect is twofold. Firstly, the corpus becomes independent of the (evolving) software where the interaction originally took place. This is a major benefit for data longevity and reusability. Secondly, because of the attention paid to the context of interaction in the Mulce project, the interaction navigator makes the links between interactions and their surrounding context explicit. Finally, the researcher can select a part of a corpus by means of requests. He or she can, for example, select all the interactions of an actor using a specific communication tool, and for each of these interactions access the prescribed educational activity. We are currently developing a user interface enabling navigation through different corpora. A first form provides a selection of corpora according to the following criteria: participants (students, tutors, native speakers), technologies (asynchronous LMS, audio-graphic conference, discussion forum, chat, ...), pedagogical dimensions (global simulation, intercultural scenario, English and ICT, ...), learning fields (French as a foreign language, English for ICT, ...), analysis tools (forum analysis, synchronized multimodal layouts, social network analysis, ...), language used, and interactions and modalities (spatial, spoken, textual, iconic or multimodal scaffolding language, ...). The result of this request is a list of corpora matching the criteria, with synthetic information. Once selected, a corpus can be described (metadata), browsed (each component with its specific interface), or scanned in order to select or highlight particular acts.
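The request mechanism described above ("all the interactions of an actor using a specific communication tool") can be sketched as a simple filter over the structured interaction data. The XML layout below is a hypothetical stand-in for the Mulce act structure, not its official element names.

```python
# Sketch of an act-level request: select every act of one actor that was
# performed with a specific communication tool.
import xml.etree.ElementTree as ET

corpus = ET.fromstring("""
<workspace id="ws_group_A">
  <acts>
    <act author="learner_1" tool="chat_1"/>
    <act author="learner_1" tool="forum_1"/>
    <act author="tutor_1" tool="chat_1"/>
  </acts>
</workspace>
""")

# All acts of learner_1 performed in the chat tool:
selected = [a for a in corpus.iter("act")
            if a.get("author") == "learner_1" and a.get("tool") == "chat_1"]
print(len(selected))  # 1
```

Because each act sits inside a workspace that references the learning activity, such a selection can then be followed back to the prescribed educational activity, as described above.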
The analysis aspect concerns the use of tools based on the instantiation component formalism. As an example, patterns of interactions can be detected by a pattern discovery tool [28]. Now that the XML format is defined, we hope that different analysis tools (including indicator synthesis), coming from various teams, will have a version that can operate on the Mulce structure. Tools can be either integrated into the platform for online use or downloaded from the platform for offline use. The tools proposed on the platform will originate from our research team or from partnerships. For example, we have two running collaborations: the Calico project and Tatiana. The Calico project ([29], [30]) aims at proposing different visualization and analysis tools [31] specialized in discussion forums. Tatiana ([32], [33], [34]) includes a navigator, a replayer and an annotator. The replayer functionality synchronizes the various data sources and "aims at bridging the gap between having the data of an experiment and being in the flesh of the observer" [32]. We are currently adapting its XML schema to fit ours and extending its visualization functionalities to other communication tools. We are interested in other collaborations aiming at providing further analysis tools.

Technology
The Mulce platform is developed on a Java/JEE application stack running on the Tomcat servlet server [35]. The implementation conforms to the MVC2 design pattern, using the Struts framework [36]. This application provides a single point of control and thus facilitates the handling of security concerns. In order to achieve independence between our
core computing process and the database, we use an object-relational mapping provided by the Hibernate framework [37]. Finally, the graphical user interface takes advantage of the SiteMesh framework [38]. Because our field (linguistics) already has its own Open Archives implementation (OLAC), we chose to connect our server (as a data repository) to the OLAC harvesting network (compliant with the OAI-PMH protocol).
4 Conclusion

Joining our voice to that of other researchers, this paper addresses the problem of the impact of TEL research and focuses on the methodology for validating the indicators and analysis tools provided by our communities. Because research (not only learning) can also benefit from collaboration tools on the Internet, we think that more collaborative research could have a greater impact on indicators, and thus on real-world online learning. Because experiments in online learning involve human beings, who bring their own specificities and cultural context, replication is very hard to achieve. This problem prevents two essential validation processes. Two concurrent indicators, used in two different contexts, cannot be compared. And, because the original interaction data are not available to other researchers, no indicator can be tested on external experiments. This leads to a lack of large-scale evaluation for each indicator or tool. As a concrete first step towards e-research, the Mulce project aims at sharing contextualized interaction data in a Learning & Teaching Corpus. Sharing corpora means building a test-bed on which to compare our indicators and analysis tools on fixed data. This paper has proposed a definition, a composition and a structure for such a corpus. A related platform is currently being implemented to share, browse and analyze such corpora.
References

1. Reffay, C., Chanier, T., Noras, M., Betbeder, M.-L.: Contribution à la structuration de corpus d'apprentissage pour un meilleur partage en recherche. STICEF Journal (Sciences et Technologies de l'Information et de la Communication pour l'Éducation et la Formation) 15, 25 p. (2008)
2. Rourke, L., Anderson, T., Garrison, D.R., Archer, W.: Methodological Issues in the Content Analysis of Computer Conference Transcripts. IJAIEd 12, 8–22 (2001)
3. Chan, T., Roschelle, J., Hsi, S., Kinshuk, Sharples, M., Brown, T., Patton, C., Cherniavsky, J., Pea, R., Norris, C., Soloway, E., Balacheff, N., Scardamalia, M., Dillenbourg, P., Looi, C., Milrad, M., Hoppe, U.: One-to-one technology-enhanced learning: An opportunity for global research collaboration. Research and Practice in Technology Enhanced Learning 1(1), 3–29 (2006)
4. Mulce: French national research project 2006–2010 (ANR-06-CORP-006), coordinated by Chanier, T., http://mulce.univ-fcomte.fr/axescient.htm#eng
5. The Pittsburgh Science of Learning Center (PSLC) DataShop, https://pslcdatashop.web.cmu.edu/
6. Koedinger, K.R., Cunningham, K., Skogsholm, A.: An open repository and analysis tools for fine-grained, longitudinal learner data. In: Proceedings of the First International Conference on Educational Data Mining, pp. 157–166 (2008), http://www.educationaldatamining.org/EDM2008/uploads/proc/16_Koedinger_45.pdf
7. Osuna, C.: DELFOS: A Telematic and Educational Framework based on Layers oriented to Learning Situations. PhD Dissertation, Universidad de Valladolid, Valladolid, Spain (2000)
8. Osuna, C., Dimitriadis, Y., Martínez, A.: Using a Theoretical Framework for the Evaluation of Sequentiability, Reusability and Complexity of Development in CSCL Applications. In: Proceedings of the European Computer Supported Collaborative Learning Conference, Maastricht, NL (March 2001)
9. Martínez, A., De la Fuente, P., Dimitriadis, Y.: Towards an XML-based representation of collaborative action. In: Proceedings of the Computer Supported Collaborative Learning conference (CSCL), Bergen (2003), http://hal.archives-ouvertes.fr/hal00190429/fr/
10. Martínez, A., Harrer, A., Barros, B.: Library of Interaction Analysis Tools. Deliverable D.31.2 of the JEIRP IA (Jointly Executed Integrated Research Project on Interaction Analysis Supporting Teachers & Students' Self-regulation). Kaleidoscope (2005), http://www.rhodes.aegean.gr/ltee/kaleidoscope-ia/Publications/D31-02-01-F%20Lirbary%20of%20IA%20tools%20.pdf
11. Virtual Math Team, http://www.ischool.drexel.edu/faculty/gerry/vmt/index.html
12. Stahl, G.: Studying Virtual Math Teams. Springer, New York (2009)
13. The Dataverse Network Project, http://thedata.org/
14. King, G.: An Introduction to the Dataverse Network as an Infrastructure for Data Sharing. Sociological Methods & Research 36(2), 173–199 (2007), http://gking.harvard.edu/files/dvn.pdf
15. Markauskaite, L., Reimann, P.: Enhancing and Scaling-up Design-based Research: The potential of E-Research. In: Int. Conference for the Learning Sciences, ICLS 2008, Utrecht, NL, June 2008, 8 p. (2008)
16. Soller, A., Martínez, A., Jermann, P., Muehlenbrock, M.: From Mirroring to Guiding: A Review of State of the Art Technology for Supporting Collaborative Learning. IJAIED 15, 261–290 (2005)
17. Reffay, C., Chanier, T.: How social network analysis can help to measure cohesion in collaborative distance-learning. In: Proceedings of the Computer Supported Collaborative Learning conference (CSCL 2003), Bergen, pp. 343–352 (2003)
18. Vetter, A., Chanier, T.: Supporting oral production for professional purposes, in synchronous communication with heterogeneous learners. ReCALL 18(1), 5–23 (2006)
19. Ciekanski, M., Chanier, T.: Developing online multimodal verbal communication to enhance the writing process in an audio-graphic conferencing environment. ReCALL 20(2), 162–182 (2008)
20. Kay, J., Reimann, P., Yacef, K.: Mirroring of group activity to support learning as participation. In: Luckin, R., Koedinger, K.R., Greer, J. (eds.) Proceedings of AIED 2007, 13th Int. Conf. on Artificial Intelligence in Education, vol. 158, pp. 584–586. IOS Press, Amsterdam (2007)
21. IMS-CP: Content Packaging Specification (IMS consortium) (2004), http://www.imsglobal.org/content/packaging/
22. EML: Educational Modelling Language, Open University of the Netherlands (OUNL) (2000), http://www.learningnetworks.org/?q=EML
23. Koper, R.: Modelling Units of Study from a pedagogical perspective: The pedagogical metamodel behind EML. Technical Report, OUNL (June 2001)
24. Mce_sid: Full schema for the structured information data (instantiation component) of a Mulce corpus (2008), http://mulce.univ-fcomte.fr/metadata/mce-schemas/mce_sid.xsd
25. Nelson, M., Warner, S.: The Open Archives Initiative Protocol for Metadata Harvesting. In: Lagoze, C., Van de Sompel, H. (eds.) Version 2.0 (2002), http://www.openarchives.org/OAI/2.0/openarchivesprotocol.htm
26. Simons, G., Bird, S.: OLAC: Open Language Archives Community (2007), http://www.language-archives.org/, http://www.language-archives.org/OLAC/metadata.html
27. IMS-LD: Learning Design Specification of the IMS consortium, version 1.0 (January 2003), http://www.imsglobal.org/learningdesign/ldv1p0/imsld_infov1p0.html
28. Betbeder, M.-L., Tissot, R., Reffay, C.: Recherche de patterns dans un corpus d'actions multimodales. In: Nodenot, T., Wallet, J., Fernandes, E. (eds.) EIAH 2007 Conference: Environnements Informatiques pour l'Apprentissage Humain, Switzerland, June 2007, pp. 533–544 (2007)
29. Calico: French national research project coordinated by E. Bruillard (ERTÉ: Technical Research Team in Education) (French homepage) (2008), http://calico.inrp.fr/
30. Bruillard, E.: Teacher development, discussion lists and forums: issues and results. In: McFerrin, K., Weber, R., Carlsen, R., Willis, D.A. (eds.) Proceedings of Society for Information Technology and Teacher Education International Conference, SITE 2008, pp. 2950–2955. AACE, Chesapeake (2008)
31. Giguet, E., Lucas, N.: Creating discussion threads graphs with Anagora. In: Proceedings of the 9th Computer Supported Collaborative Learning conference (CSCL 2009), Rhodes, Greece, pp. 616–620 (2009)
32. Corbel, A., Girardot, J.-J., Lund, K.: A method for capitalizing upon and synthesizing analyses of human interactions. In: van Diggelen, W., Scarano, V. (eds.) Workshop proceedings: Exploring the potentials of networked-computing support for face-to-face collaborative learning, EC-TEL 2006, Crete, October 2006, pp. 38–47 (2006)
33. Dyke, G., Lund, K., Girardot, J.-J.: Tatiana: an environment to support the CSCL analysis process. In: Proceedings of the 9th Computer Supported Collaborative Learning conference (CSCL 2009), Rhodes, Greece, pp. 58–67 (2009)
34. Dyke, G.: Tatiana: Trace Analysis tool for interaction ANAlysts, European LEAD project outcome (2008), http://www.lead2learning.org/projectsite/pagina.asp?pagkey=76663
35. Apache Tomcat: implementation of Java Servlet, http://tomcat.apache.org/
36. Apache Struts: a free open-source framework for web applications, http://struts.apache.org/
37. Hibernate: object/relational persistence and query service, https://www.hibernate.org/
38. SiteMesh: a web-page layout system, https://sitemesh.dev.java.net/
Distributed Awareness for Class Orchestration Hamed S. Alavi, Pierre Dillenbourg, and Frederic Kaplan Swiss Federal Institute of Technology, CRAFT, Station 1, 1015 Lausanne, Switzerland {Hamed.Alavi,Pierre.Dillenbourg,Frederic.Kaplan}@epfl.ch
Abstract. The orchestration process consists of managing classroom interactions at multiple levels: individual activities, teamwork and class-wide sessions. We study the process of orchestration in recitation sections, i.e. when students work on their assignments individually or in small groups in the presence of teaching assistants who give help on demand. Our empirical study revealed that recitation sections suffer from inefficient orchestration. Too much attention is devoted to managing the relationship between students and teaching assistants, which prevents both sides from concentrating on their main task. We present a model of students' activities during recitation sections that emphasizes the issue of mutual awareness, i.e. monitoring help needs and the TA's availability. To tackle these difficulties, we developed two awareness tools. Both tools convey the same information: which exercise each group is working on, whether it has asked for help and for how long. In the centralized version, named Shelf, students provide information with a personal response system and the status of each team is juxtaposed on a central display. In the distributed version, named Lantern, each team provides information by interacting with a lamp placed on its table. The display is distributed over the classroom, the information being spatially associated with each group. We are now comparing these two versions in an empirical study with two first-year undergraduate classes in Physics. Preliminary results show that both versions increase the efficiency of interaction between students and teaching assistants. This contribution focuses on the distributed version. Keywords: Orchestration, Collaborative Problem Solving, Recitation Section, Distributed Awareness Tool.
1 Introduction
This paper concerns the process of orchestration during collocated recitation sections, i.e. when students work on their assignments individually or in small groups in the presence of teaching assistants. Recitation sections play an important role in university teaching, namely as a complement to traditional lectures. However, their effectiveness is somewhat questionable for several reasons. For instance, teachers complain that students tend to come to get the solution instead of elaborating solutions themselves. Understanding the solution gives them the illusion of mastering skills, but they discover at the exam how difficult it is to build a solution themselves. Teachers also complain that students ask for help without trying hard enough to solve the problem. U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 211–225, 2009. © Springer-Verlag Berlin Heidelberg 2009
Conversely, students complain that they have to wait a long time to receive help. Moreover, students often complete only the first exercises of the series, while exam items have a difficulty level closer to the last exercises of the series. These management problems, which make some recitation sections less than optimal, will be quantified in our study. During its first two decades, research on computer-supported collaborative learning (CSCL) focused on the interactions within a team. For a few years, scholars have started to pay more attention to the integration of teamwork [1] within broader scenarios or scripts that also include individual activities and class-wide activities (lectures, debriefing, etc.). The notion of "orchestration" refers to [2] the teacher's activity in managing the flow of activities across different social planes (solo, group, class). In CSCL scripts, the orchestration is partly offloaded to 'macro-scripts' [3] which manage this flow of activities. In recitation sections, orchestration is more complicated since there is no predefined flow (but the exercise series). Students work individually or in teams; they move along the series at different speeds and have different needs. Teaching assistants (TAs) have to decide who should receive help or, in some cases, whether a short collective explanation would be more efficient. In other words, the orchestration of recitation sections is a challenging topic of research. In this contribution, we model the interactions between students and TAs based on observations we made in classrooms. Using this model, we analyze the shortcomings of recitation sections, namely how the teaching assistant distributes her time among the different groups of students. In order to address these shortcomings, we designed two awareness tools. We have experimented with them in recitation sections to see to what extent they change the dynamics of recitation sections. The rest of this paper is structured as follows.
Section 2 gives a review of related work. Section 3 describes the empirical study we conducted on several recitation sections, including the observations as well as the qualitative and quantitative analyses which led us to a model of orchestration in this context. In Section 4, we propose two awareness tools designed to resolve the shortcomings. Section 5 describes our second empirical study, in which we use these tools in other recitation sections.
2 Relevant Research
Our work has been influenced by contributions from three different fields: (1) CSCL research on tools for regulating teams' interactions, (2) Computer Supported Cooperative Work (CSCW) research on awareness tools and (3) work on ambient interfaces in human-computer interaction (HCI). In CSCL, Jermann et al. [4] provided a framework that categorizes systems supporting collaborative learning into three classes: (1) mirroring systems, which display raw indicators to collaborators; (2) metacognitive tools, which monitor the interactions, process the collected data and represent the state of interaction via a set of high-level indicators; (3) coaching systems, which offer advice based on an interpretation of those indicators. We use this framework to compare our work with others. The tools we propose fit in the first category, as they mirror the state of student groups to the groups themselves and to the TAs without any pre-processing. In
contrast, Chen [5] designed a tool, called Assistant, which falls in the third category (coaching systems). Assistant monitors the collaboration, visualizes the processed data and provides advice to the teacher. It can also learn from the teacher's feedback to improve its performance. However, Assistant is basically tailored to the context of distance collaborative learning, while our tools are designed for co-present settings. The difference between our two tools is precisely in how they exploit the physical layout of the classroom. In the middle category (metacognitive tools), Avouris et al. [6] developed a collaboration environment called Synergo, for collocated and distance learning. Synergo monitors the activity, performs analyses and visualizes quantitative parameters like density of interaction, symmetry of partners' activity, etc. While orchestration is not the primary goal of Synergo, it provides teachers with useful information to manage the interactions that occur in the classroom. Our work also differs from Chen's and Avouris' in terms of the level of interaction it considers. While Assistant and Synergo are mostly centered on interactions within one group, we look at a higher level, i.e. the interaction between several groups and TAs as well as the interactions among groups: the information we capture treats the group as a unit and does not provide information about interactions within the group. Supporting orchestration is less about an analytic account of team interactions and more about providing a global picture that can support on-the-fly decision making within large classes. In CSCW, there have been many efforts aimed at providing awareness information, that is, information about the presence, activities, and availability of participants in a collaborative activity.
They principally vary in their temporal nature (synchronous [7, 8, 9, 10], asynchronous [11, 12]), the type of information they provide (workspace [13, 14, 15, 16, 9, 11, 17, 18], availability [19, 20, 21, 8, 22], activity [19, 20], etc.), and the task they are tailored for (conferencing [8], distance learning [23], etc.). The awareness tools we propose in this paper give real-time information on students' activity in a collocated collaborative learning context (recitation sections). Finally, in HCI, the seminal idea of ambient interfaces is to extend classical user interfaces (display, keyboard, mouse) to the whole environment. In contrast to the works described above, the primary concern of ambient display applications is the subtle embedding of information in our surroundings, while capturing and processing information is of minor concern. While the effectiveness of ambient interfaces for providing awareness information has been shown in many cases [24, 25, 26, 27], a comprehensive study on the assessable advantages of going beyond classical interfaces is still missing. We have implemented two ways of presenting awareness information: a traditional approach with a central display and an ambient presentation with a cloud of table lamps. Our study compares the effects of these two tools.
3 Empirical Study in Recitation Sections
Class orchestration is a complex process that takes different forms in different contexts. We present a model specific to the context of recitation sections. We made it deliberately simple. In this section, we describe (1) the observations we made in actual recitation sections, (2) the qualitative and quantitative analyses of the collected data
which led us to a conceptual model of interaction in recitation sections as they are held in universities, and (3) the shortcomings of the existing process.
3.1 Initial Observations
We observed and recorded 12 recitation sections at our university. They involved three first-year calculus courses given by three different lecturers and groups of teaching assistants. Each course was dedicated to students from Chemistry, Electrical Engineering or Materials Sciences. Each course encompassed a series of weekly lectures as well as recitation sections. We watched and videotaped the recitation classes for four consecutive weeks, each lasting 90 to 120 minutes. Observations were done silently, that is, we tried to keep the classes intact and to observe the dynamics of recitation sections as they normally take place. An analysis of the videos shows that students and TAs did not pay attention to us after a few minutes of the first session. Table 1 shows the basic parameters of the sections we observed in four consecutive weeks. (The second week of the Materials class was a holiday.) In all classes, grouping was free, i.e. students formed groups ranging from 1 to 6 students.
Table 1. Observed Recitation Sections
                 Materials             Chemistry             Electrical Eng.
                 W1   W2   W3   W4     W1   W2   W3   W4     W1   W2   W3   W4
# of students    15   -    7    7      21   21   22   23     22   34   26   28
# of TAs         1    -    1    1      1    1    1    1      1    1    1    1
Duration (min)   100  -    90   100    90   100  108  90     100  90   105  105
3.2 Qualitative Analysis
For the rest of this paper, we refer to a group of students working collaboratively as a team. A team may consist of only one student. The interactions between teams and the teaching assistant seem to simply follow four steps:
– If a team needs help, it raises a hand.
– If the TA is free, she comes to the team and answers the question.
– If the TA is busy, the team waits until she becomes free.
– When the TA finishes answering a question, she becomes free for other questions.
However, a deeper look at the process of questioning and answering shows that many subtle but important points are not captured by the above sequence:
– The TA does not come to all the raised hands, but only to those she notices.
– The order of answering does not follow the order of help requests in a fair way.
– The teams do not raise their hand as soon as they need help, but wait for the moment they can get the attention of the TA. They devote quite a lot of attention to monitoring the TA's availability.
– Conversely, even when the TA is answering a question, she continuously monitors the room to check for newly raised hands, which also takes some attention.
Here we try to give a more precise model of team-TA interaction. According to our observations, we separate teams' activities into two categories:
1. Problem solving: the effort that a team puts into solving the exercises. It includes individual and group work, exploration, and thinking.
2. Self regulation: while involved in problem solving, each team builds a dynamic understanding of (1) how much it needs the TA's help and (2) the possibility of catching the TA's attention to ask for help. We argue that these two questions are highly interrelated. For example, when a TA was passing by, several teams took advantage of this situation and asked a question they would probably not have asked if the TA had not been easy to access.
Figure 1 depicts a team's focus of attention during a period that includes normal work progress, then facing a problem, trying to call the assistant for help, and finally receiving help. This period usually repeats 5-10 times during a recitation section. In the following, we justify the dynamics qualitatively shown in Figure 1. The next section adds a quantitative analysis of some of the interesting parts.
Fig. 1. Problem Solving vs. Self Regulation
– Before time t1: in normal situations, most cognitive effort is devoted to problem solving, while self regulation is in a stand-by mode.
– From t1 to t3: at time t1, the team starts facing a difficulty and hence has to put extra effort into problem solving. After a while (t2), since task effort increases, the team starts to wonder if it needs to ask for help.
– From t3 to t4: at a time like t3, when the need for help becomes obvious, the team verifies the TA's availability. Its attention partly diverges from problem solving and is devoted to self regulation.
– From t4 to t5: since time t1, the team has kept increasing its effort to solve the problem, while simultaneously increasing its effort on self regulation. At some point, represented here by t4, this double increase of effort is not manageable anymore. What typically happens is that the team gives up on problem solving and starts to put a lot of effort into chasing the TA in order to catch her attention.
– From t5 to t6: t5 is the time when the team begins to wait for the TA and t6 is the time when the TA decides to help this team. Depending on the availability of the TA, this waiting time can be considerably long. (We report the average and worst-case waiting times in the next section.) Our observations show that teams stop investing much effort in the task during 62% of this waiting period. This fact is shown, in the figure, by a low problem-solving level during the waiting time.
– From t6 to t7: it usually takes a short while from when the TA decides to help a team (t6) until she starts helping (t7).
– After t7: at t7, the TA starts giving help to the team, which is supposed to pay attention and contribute while receiving help (this requires a certain level of problem solving). At this time, self regulation goes back to the stand-by mode.
3.3 Quantitative Analysis
In this section, we quantitatively report the following parameters captured from the observed recitation sections: the waiting time, the while-waiting productivity (the fraction of waiting time used for problem solving), the number of occasions in which the TAs poorly schedule their time in terms of fairness (question n+i is answered before question n) and never-noticed questions. Unfair answering and unnoticed questions are a sign of poor monitoring, but sometimes also a sign of adaptive rescheduling (e.g. giving priority to students who are late). Let us formally define some concepts which we use in our quantitative analysis: a demand d identifies a help request from a team. The function T_q(d) returns the time when the team raises a hand to show the demand d, and the function T_a(d) returns the time when the TA starts to answer the demand d. A set D = {d_1 .. d_n} includes all the demands that occur in a certain recitation section, sorted in ascending order with respect to T_a (i.e. d_{i+1} is the demand that gets answered right after d_i).
3.4 Waiting Time
Considering the fact that teams do not raise a hand as soon as they need help, hand-raising is not an accurate sign for the beginning of the waiting period. In the following we show (1) how significant this fact is, and (2) how we compute the beginning of the waiting period. Figure 2 splits a recitation section into the periods in which the TA is continuously busy (answering a question for another team) or continuously free. Two consecutive Busy and Free periods form a BF iteration.
Fig. 2. Busy-Free Iterations
Figure 3 shows the cumulative distribution of hand-raisings within a single BF episode. For example, a point at (0.5, 0.1) indicates that 10% of the teams raise a hand during the first half of the BF. This curve is obtained by normalizing the length of all the BF iterations of the observed sections to the same unit of time. The fast-growing slope of the curve at the end of the BF illustrates the fact that, in many cases, teams prefer to raise a hand at the end of the BF, when the TA is free or looks about to become free shortly. Figure 3 reveals that teams self-regulate: they refrain from asking questions when there is a low probability of receiving help. This self-regulation implies that teams devote significant attention to monitoring the TA's availability.
Fig. 3. Most questions are asked at the end of BF episodes (right before the TA is available)
Let us suppose that, within a BF, the occurrence of questions is uniformly distributed in time, i.e. for any team, the probability of facing a difficulty at any time of a BF is uniform. Based on this assumption, we compute the beginning of each waiting period, and consequently the waiting time, as:

WaitingBegin(d_i) = [T_a(d_j) + T_a(d_{j+1})] / 2    (1)

AvgWaitingTime = (1/n) Σ_{i=1..n} [T_a(d_i) − WaitingBegin(d_i)]    (2)

MaxWaitingTime = MAX{ T_a(d_i) − WaitingBegin(d_i) | d_i ∈ D }    (3)

in which D = {d_1 .. d_n} and, for each d_i, the index j satisfies T_a(d_j) < T_q(d_i) < T_a(d_{j+1}), 1 < j < n.
As Formula 1 shows, in the averaging process, we suppose that every question must be posed right at the middle of its BF period. The values of these parameters have been computed from the video records and are given in Table 2.
Table 2. Waiting Times (in seconds)

                   Materials              Chemistry              Electrical Eng.
                   W1   W2   W3   W4     W1   W2   W3   W4     W1   W2   W3    W4
Avg Waiting Time   107  -    242  191    55   59   111  74     139  88   197   298
Max Waiting Time   260  -    680  532    90   300  168  348    673  637  1270  960
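The estimation procedure behind Formulas 1-3 can be sketched in code. This is an illustrative reconstruction, not the authors' implementation; the timestamp lists and the search for the bracketing BF iteration are our assumptions.

```python
def waiting_times(t_q, t_a):
    """Estimate the waiting time of each demand d_i (Formulas 1-3).

    t_q[i]: hand-raising time of demand i; t_a[i]: answering time.
    Demands are assumed sorted in ascending order of answering time.
    The need for help of d_i is assumed to arise at the middle of the
    busy-free (BF) iteration [t_a[j], t_a[j+1]] containing t_q[i].
    """
    waits = []
    for q, a in zip(t_q, t_a):
        for j in range(len(t_a) - 1):
            if t_a[j] < q < t_a[j + 1]:
                begin = (t_a[j] + t_a[j + 1]) / 2  # Formula 1: BF midpoint
                waits.append(a - begin)
                break
        else:
            waits.append(a - q)  # boundary demand: fall back to hand-raising

    return waits

# Hypothetical timestamps (seconds from the start of the section)
w = waiting_times([0, 100, 340, 520], [60, 180, 400, 560])
avg_wait = sum(w) / len(w)  # Formula 2
max_wait = max(w)           # Formula 3
```

The fallback for boundary demands (those not bracketed by two answering times) is a design choice of this sketch, not something the paper specifies.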
3.5 While-Waiting Productivity
According to our observations, when teams have to wait for the TA they decide between (1) keeping a hand up and still doing some problem solving or (2) chasing the TA to capture her attention and minimize the waiting time. We define While-Waiting Productivity as the fraction of the waiting time that is not spent on chasing the TA. We use this parameter as an indicator of efficiency during the time teams have to wait for the TA. On the 235 questions we observed, on average, 62% of the waiting time is spent on chasing (38% while-waiting productivity). Section 5 shows how our awareness tools improve while-waiting productivity to 94%. We estimate the chasing time in the following way. For each demand, the difference between the time at which the hand-raising happens and the time we consider as the beginning of the waiting period
gives the fraction of the waiting time that has been spent on chasing the TA. The following formula gives the average while-waiting productivity:

AvgProductivity = (1/n) Σ_{i=1..n} [1 − (T_q(d_i) − WaitingBegin(d_i)) / (T_a(d_i) − WaitingBegin(d_i))]    (4)
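A minimal sketch of this computation (our illustration, not the authors' code; the argument names are assumptions), which skips immediately-answered questions as described in the text:

```python
def avg_while_waiting_productivity(begin, t_q, t_a):
    """Average while-waiting productivity (Formula 4).

    begin[i]: estimated start of the waiting period of demand i,
    t_q[i]: hand-raising time, t_a[i]: answering time.
    The time from `begin` to the hand-raising is counted as chasing;
    questions answered immediately are eliminated, as the productivity
    of a near-zero waiting period is meaningless.
    """
    ratios = []
    for b, q, a in zip(begin, t_q, t_a):
        waiting = a - b
        if waiting <= 0:
            continue  # answered immediately: skip
        chasing = q - b
        ratios.append(1 - chasing / waiting)
    return sum(ratios) / len(ratios)

# Hypothetical demands: waiting 60 s with 40 s of chasing,
# and waiting 110 s with 10 s of chasing
p = avg_while_waiting_productivity([120, 290], [160, 300], [180, 400])
```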
We eliminate the questions which get answered immediately, as the productivity of a very short waiting period is almost zero. Averaging over the rest of the questions gives 38% while-waiting productivity.
3.6 Scheduling
Table 3 shows the number of occasions when the TA answers a demand d_i earlier than another demand d_j, while d_j was posed before d_i. Formally, we count all demands d_i for which:

∃ d_j : T_q(d_j) < T_q(d_i) and T_a(d_i) < T_a(d_j)    (5)

Table 3 also shows the number of demands never answered by the TAs.
Table 3. Unfairness, Non-answered

                   Materials           Chemistry           Electrical Eng.
                   W1  W2  W3  W4      W1  W2  W3  W4      W1  W2  W3  W4
Unfairness cases   0   -   0   0       0   1   1   0       6   0   3   7
Non-answered       0   -   0   0       0   0   1   0       8   3   5   2
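The two scheduling indicators can be sketched as follows (illustrative only; the data layout, with `None` marking a never-answered demand, is our assumption):

```python
def unfairness_cases(t_q, t_a):
    """Count demands answered out of order (Formula 5): demand d_i is
    counted when some d_j was posed before d_i but answered after it."""
    answered = [(q, a) for q, a in zip(t_q, t_a) if a is not None]
    return sum(
        1
        for qi, ai in answered
        if any(qj < qi and ai < aj for qj, aj in answered)
    )

def non_answered(t_a):
    """Count demands the TA never answered."""
    return sum(1 for a in t_a if a is None)

# Hypothetical section: the second demand is answered before the first,
# and the third is never answered.
unfair = unfairness_cases([0, 10, 20], [50, 40, None])  # -> 1
missed = non_answered([50, 40, None])                   # -> 1
```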
In summary, our analyses confirm our initial hypotheses about the quality of orchestration during recitation sections. The main problems are:
– On average, 62% of the waiting time is spent on chasing the TA and trying to get her attention. Meanwhile, according to Table 2, in many sessions the average and especially the worst-case waiting times are considerably long.
– According to Table 3, TAs never notice some of the raised hands.
– According to Table 3, in many cases TAs answer the demands in the wrong order.
The importance of these problems led us to develop tools that could potentially smooth the orchestration of recitation sections.
4 Technological Solutions
We designed two tools to address the shortcomings analyzed in the previous section. Both provide information on teams, one on a centralized display, called Shelf, the other using lamps distributed in the classroom, called Lantern. Both solutions make use of the same visual grammar:
1. Color: each color corresponds to one exercise in the series.
2. Intensity of color: it indicates the time that has been spent on the current exercise, starting with the lowest intensity and then gradually increasing with time.
3. Blinking: it indicates a call for help.
4. Frequency of blinking: the faster the rate of blinking, the longer the time since the help request.
4.1 Lantern
Lantern (Figure 4) is a small portable device (the size of a 0.5L drink bottle) which consists of five LEDs installed on a stub-shaped PCB and covered by a translucent plastic cylinder, plus one microprocessor to control the LEDs (see Figure 4). Users can:
– Turn: by turning the Lantern, a team chooses the exercise it is working on.
– Press: a team presses the Lantern when it needs to call the TA for help.
Fig. 4. Lantern
Each Lantern records all user interactions and implements the visual grammar mentioned above:
– Color: turning the Lantern forward or backward changes the color, i.e. the selected exercise.
– Intensity of color: five floors of LEDs distinguish levels of intensity (Figure 5).
– Blinking: when a team presses the Lantern to call for help, it starts blinking until the TA comes and presses it again.
– Frequency: the frequency of blinking gradually increases (over 3 minutes).
Fig. 5. Lantern; intensity of color increases with time
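The visual grammar can be summarized as a small state-to-output mapping. This is our sketch only; the color palette, the ten-minute step between intensity floors and the blink-frequency range are illustrative assumptions, not values given in the paper.

```python
# Assumed palette: one color per exercise in the series
EXERCISE_COLORS = ["red", "green", "blue", "yellow", "purple"]

def display_state(exercise, minutes_on_exercise, minutes_since_call=None):
    """Map a team's state to the Lantern output described in Section 4.1:
    color encodes the exercise, the intensity floor (0-4) grows with time
    spent on it, and blinking ramps up over 3 minutes after a help call."""
    color = EXERCISE_COLORS[exercise]
    floor = min(4, minutes_on_exercise // 10)  # assumed: one floor per 10 min
    if minutes_since_call is None:
        blink_hz = 0.0  # not asking for help
    else:
        # assumed: blink rate ramps from 0.5 Hz to 2 Hz during 3 minutes
        blink_hz = 0.5 + 1.5 * min(minutes_since_call, 3) / 3
    return color, floor, blink_hz
```

For instance, a team 25 minutes into the first exercise that called for help 3 minutes ago would show a fairly bright red lamp blinking at the maximum rate.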
4.2 Shelf
Shelf (Figure 6) uses a wide screen as output and infrared remote controls as input. Each team has a personal response system in hand. On the display, a progress bar is labeled with the letter referring to the team. Teams use the numbers on the remote control to indicate which exercise they are working on and push zero to call the TA. In Figure 6, teams B, E, A and F are working on
exercise 1 (color red) and, among them, A has been working for a longer time. Teams H, J and D are working on exercise 2, 4, and 3 respectively. (Blinking cannot be shown on paper.) Shelf implements the visual grammar as follows:
– Color: when a team starts to work on exercise N, it presses the button N on its remote control and the progress bar changes to the corresponding color.
– Intensity of color: the rising progress bar gives the same impression as the increasing intensity of light in Lantern.
– Blinking: when the team presses zero, its progress bar starts blinking until the TA comes and presses a special button on the same remote control.
– Frequency of blinking: it is set the same way as in Lantern.
Fig. 6. Shelf (snapshot)
Here we give some examples in which we expect Lantern to support orchestration better than Shelf. A simple and frequent task for the teaching assistant is to see who needs help. While Lantern allows the teaching assistant to immediately spot the demands just by glancing at the classroom, Shelf forces her to find the team that corresponds to the blinking bar. Another example is semi-public explanations: quite often, the TA gives an explanation to a few teams who face the same difficulty with a certain exercise and sit close to each other. While the need for a semi-public explanation is easy to detect using Lanterns, it is more difficult using Shelf, as the TA has to check, on the display, all pairs of teams who have difficulty with the same exercise, and then see whether they are located close to each other or not.
5 Performance Evaluation
We provided our awareness tools to two courses of Physics II, given by two different teachers and groups of TAs, at our university. The experiment classes are very similar to the classes in which we made our initial observations (reported in Section 3), in terms of the task the teams are engaged in, as well as the type of interaction between teams and TAs. In both classes, students and TAs used Shelf for
three weeks, after which they switched to Lantern for four weeks. Each week, for each class, one recitation section was planned. Each section took around two hours. In total, Shelf was used for around 12 hours and Lantern for 14 hours (one session was not held). What we report here is the result of the analysis we made of these 26 hours of observation. It is worth mentioning that this observation is not a lab experiment where we could control all the variables, and so not every statistical comparison between different sections would be valid. We report a quantitative comparison on the while-waiting productivity (defined in Section 3) in three different conditions: no awareness, using Shelf, and using Lantern. We focus on this parameter because it is influenced by the orchestration process and is independent of properties of recitation sections such as the difficulty of exercises or the number of students (uncontrolled variables). Then we describe, qualitatively, the other effects of our awareness tools on recitation sections, based on our observations and the questionnaires.
5.1 While-Waiting Productivity (Quantitative Comparison)
Using Formula 4, we computed the average while-waiting productivity when teams use Shelf and Lantern. The results show that when students use Shelf, the average while-waiting productivity is around 84%. This number is around 94% for the sections in which Lantern is used. Table 4 compares the average while-waiting productivity in three different situations: (1) without awareness tool, (2) using Shelf and (3) using Lantern. The productivity increase related to the awareness tools results from the fact that they off-load the concern of capturing the TA's attention. The outperformance of Lantern could be explained by its very high visibility to its owner: when it blinks, it announces that there is no need to worry about getting the TA's attention.
Figure 7 shows the conceptual model of teams' activities when they use no awareness tool (Section 3), and compares it to the situations in which Lantern or Shelf is used. The high level of problem-solving attention between times t5 and t6 (the waiting period)
Table 4. While-waiting productivity improves with awareness tools

                                 No Awareness   Shelf   Lantern
Avg while-waiting productivity       38%         84%      94%

Fig. 7. Teams' activity in three modes
shows the high while-waiting productivity that our awareness tools brought about. The questionnaire we have collected from students validates the above result. Many of the students mentioned that Lantern and Shelf make it possible for them to work while waiting for the TA. Qualitative Comparison – Fairness: According to the TAs and our observations, Lantern and Shelf offer information that helps TAs to answer questions in a proper ordering, not only by seeing who has called for help before the others but also by realizing who needs help most urgently (for example the team who is stuck in the first question for a long time probably needs help more urgently).. – Unanswered questions: There are always some non-answered questions. The difference is in the reason why teams give up calling for help. According to our observations, when students use Lantern or Shelf, they often find the solution while waiting for the TA which we believe is due to a high while-waiting productivity that these awareness tools bring about. - No late/never demanding: Some teams hesitate to ask for help even when they have spent a long time on one exercise. The TA notices such cases when she sees a very bright lamp (or progress bar) which is not blinking, and reacts accordingly. – Progression Awareness: Lantern and Shelf inform students about their position/progression compared to the other teams, as they can see who is working on what exercise.(This fact is more serious in Shelf than Lantern according to the students) – Similar questions: Lantern and Shelf can notify the TA about the situation in which all the teams face difficulty with one certain exercise. In such cases TAs usually gives explanation publicly on the board. 
– Overview of the section: A quick look at Lantern or Shelf gives a visitor (for example, the teacher of the course) an overview of how the section is going: whether more TAs are needed, whether the exercises are too difficult for the students, or whether the students are progressing at uneven speeds.
6 Conclusions

Our first study revealed problems that occur in recitation sections. Clearly, there is room for improving the way these classes are orchestrated by TAs. Hence, we developed tools to help TAs orchestrate unscripted teamwork. Since the effectiveness of such a tool depends on several design choices, we compared two versions of the tool, one centralized and one decentralized. Although the decentralized version seems more effective, our preliminary findings do not reveal major differences between the two. The main result so far is that these tools enable students to concentrate on their exercises instead of chasing TAs. This basic feature changes the dynamics of help seeking, for instance letting students cancel a help request because they continued searching for a solution while waiting for the TA. Altogether, this study paints a different picture of orchestration, one in which students themselves play an active role, i.e. a distributed version of orchestration.
Distributed Awareness for Class Orchestration
Acknowledgments

We would like to thank Prof. Ambrogio Fasoli, Prof. Jean-Philippe Ansermet, and the teaching assistants and students who helped us conduct this study. In addition to the authors, Olivier Guédat made a remarkable contribution, especially to the hardware development. This project is funded by NSF grant PDFMI-118708.
Remote Hands-On Experience: Distributed Collaboration with Augmented Reality

Matthias Krauß1, Kai Riege1, Marcus Winter2, and Lyn Pemberton2

1 Fraunhofer IAIS, Schloss Birlinghoven, 53754 Sankt Augustin, Germany
{matthias.krauss,kai.riege}@iais.fraunhofer.de
2 University of Brighton, School of Computing, Mathematical and Information Sciences, Lewes Rd, Brighton BN2 4GJ, East Sussex, UK
{Lyn.Pemberton,Marcus.Winter}@brighton.ac.uk
Abstract. One claim of Technology-Enhanced Learning (TEL) is to support and exploit the benefits of distance learning and remote collaboration. On the other hand, several approaches to learning emphasize the importance of hands-on experience. Unfortunately, these two goals do not go well together with traditional learning techniques. Even though TEL technologies can alleviate this problem, it is not yet sufficiently solved: remote collaboration usually comes at the cost of losing direct hands-on access. The ARiSE project aimed at bringing Augmented Reality (AR) to School Environments, a technology that can potentially bridge the gap between these two goals. The project designed, implemented and evaluated a pedagogical reference scenario in which students worked hands-on together over large distances. This paper describes the AR learning approach we followed and discusses its implementation and future potential. It shows a simple and successful distributed AR learning approach and suggests features for improvement.

Keywords: Augmented Reality, Collaboration, Remote Presence, Virtual Reality, Technology-Enhanced Learning, Human Computer Interaction.
1 Introduction

One major claim of the Technology-Enhanced Learning (TEL) domain is to foster collaborative learning processes. Thanks to electronically conveyed media and the Internet, collaboration is supposedly no longer limited to co-located work, but can be extended over long distances. Remote collaboration has been a vivid and fruitful topic of the Computer-Supported Collaborative Learning (CSCL) research community. Several different types of communication and collaboration have been developed, evaluated and implemented in everyday learning practice. In most of these settings, collaboration is centered around the concept of shared spaces, i.e. places to work together. In traditional co-located collaboration, multiple contributors can work jointly on one physical object and talk directly to each other. For remote collaboration, these means of communication are partially replicated: virtual shared spaces allow collaboration through multiple, technologically synchronized views on a common object.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 226–239, 2009. © Springer-Verlag Berlin Heidelberg 2009

The choice of communication channels and
synchronized aspects opens up new types of cooperation, but interaction limitations impose new problems and challenges.

So far, the vast majority of remote collaboration tools for learning have been limited to desktop PC settings using the WIMP (Windows, Icons, Menus, Pointing) interaction paradigm, for example web-based learning applications. This type of interaction is simple to implement and its requirements are easy to meet. Due to its abstraction, tools can be designed to be generic and application-independent. However, due to its generic and limited interaction paradigm, WIMP interaction comes at the cost of limited graspability, with the learner's experience remaining indirect.

Recently, the TEL domain has increased its efforts to extend the e-learning experience from PCs to other platforms, allowing perceptually richer experiences and more direct interaction. Augmented Reality (AR) is one possible alternative. In Augmented Reality applications, a certain part of the real world is combined with a virtual one. User interaction within such an environment is usually characterized by a very direct means of manipulating references, i.e. physical objects with optically tracked markers, whose manipulation results in the adaptation of the corresponding virtual parts. This supports building scenarios in which a user can interact directly with his or her hands, which resembles learning-by-doing far more than moving a pointer on a screen by moving a computer mouse on a table. Because of this directness of experience, AR is claimed to open up new ways of learning.

AR comes in several technological varieties, ranging from severely limited implementations on mobile phones through head-mounted displays (HMDs) to fully featured stationary AR workspaces. Our work, which we conducted within the EC research project ARiSE – Augmented Reality in School Environments [1], is based on the Spinnstube®, a fully featured, low-cost AR workspace specifically developed for educational applications [2].
Due to its mixing of physical and virtual aspects, collaboration in AR imposes a new, specific challenge: physical objects cannot easily be shared over distance. Local AR collaboration can be accomplished by multiple virtual augmentations on shared physical objects, but remote collaboration requires either a solution for physically sharing objects or a way to circumvent this problem altogether.

During the ARiSE project, we developed an AR learning platform and a remote collaboration application. The result has been evaluated with respect to several different aspects (see [3]). This paper focuses on a qualitative evaluation of social interaction using the prototype. The main research questions of this evaluation were: Which communication problems does the chosen technology introduce? Which shortcomings of communication and usability can be identified? How do learners deal with these shortcomings?

After a discussion of related approaches to distributed AR collaboration, the paper describes the pedagogical reference scenario we used to design the AR application. The following section illustrates its technical implementation. We then introduce our evaluation method and present evaluation results. The paper concludes with a summary of results, a discussion of potential enhancements of the system, and future work.
2 Related Work

Collaboration is a central aspect of the TEL and CSCL domains. As a consequence, various remote collaboration scenarios, most based on traditional PC interfaces, have been developed and evaluated. Mixed and Augmented Reality technologies have recently entered the e-learning research domain. Earlier research in AR collaboration mostly focused on co-located collaboration, i.e. multiple users sharing the same physical space. Reitmayr and Schmalstieg [3], Ohshima et al. [5] as well as Regenbrecht and Wagner [6] describe typical set-ups. More recently, different approaches have been taken to solve or circumvent the problem of distributing partially physical realities. Müller and Erbe [7] discuss various challenges of distant collaboration within mixed physical-virtual labs. Bruns' Hyper-bonds approach [8] proposes synchronisation of physical objects through remote force-feedback, using networked sensors and actuators. This approach can provide physical remote synchronisation, but it is inherently limited to specific physical set-ups and, due to its implementation requirements, only feasible for a small number of synchronized aspects. A different technique, which resembles the approach taken in our studies, is to share only virtual aspects while maintaining independent physical half-worlds for each participant, resulting in a rather virtual experience that retains the directness of augmented-reality interaction. Among others, Chastine et al. [9] used this approach in their studies and highlight the problem of referencing in AR collaboration.
3 Pedagogical Reference Scenario

AR has a range of affordances that support learning, including the ability to present objects in 3D, which helps in the development of spatial abilities [10], and the ability to combine real and virtual objects in tangible user interfaces, which may be more suitable for certain kinds of learning activities [11]. In addition, AR can offer different views on the same object or situation, which aids cognitive development [12], promotes knowledge transfer [13], and facilitates extrapolation by helping learners to go beyond the information given [14]. While these learning affordances have been exploited in previous prototypes [15, 16], the pedagogical reference scenario described here focuses on the Spinnstube's support for remote collaboration in a shared workspace. Collaborative learning is based on social constructivist ideas of learning that emphasise learning through active knowledge construction [12, 14], communication and social interaction [17, 18]. The exchange of ideas amongst peers engaged in the same activity helps learners to develop a deeper understanding of the subject [19] and to reflect upon and conceptualise their experiences as they explain findings, e.g. the meaning of words [20]. In addition, collaborative settings often lead to situations involving peer tutoring [21], which according to Pask's Conversation Theory [22] is a critical method of learning. Based on these ideas, the reference scenario involved students selecting suitable topics from their local history and culture, preparing 3D digital artefacts, and then using these artefacts in a summer school project to anchor and illustrate one-to-one
remote discussions with a peer from another country. The preparation phase involved local collaboration between students to select and discuss suitable topics and artefacts, and it gave them an opportunity to familiarize themselves with the AR learning platform by creating their own demonstration models. The AR application allows learners to sculpt 3D models with simple operations in free space using a light pen.

At the summer school, pairs of students first communicated via a video link to get to know each other, and then they started their collaborative AR session. Besides a shared, interactive 3D workspace, the remote collaboration application provides an audio link for verbal communication. Students used this set-up to discuss their mutual local cultures, taking turns to explain customs and traditions and scaffolding their presentation with the prepared artefacts. After their presentation, students asked their counterpart questions about the presented content, both to test the partner's understanding and to enquire about similarities or equivalents in their own local culture. This part also included an exercise where the presenting student erased part of a prepared artefact, using the aforementioned light pen, and asked their counterpart to reconstruct it in the shared workspace. Both students were able to observe the reconstruction process and could comment on the progress and result.

Collaborative sessions took approximately one hour, with students switching roles at half time so that each side could present their content. The summer school was followed by whole-class discussions where students consolidated and conceptualised what they had learnt.
4 Technical Implementation

As a base for our distributed Augmented Reality set-up we used the Spinnstube® display system [2]. Simply attached to a desk, this projection-based display can be used as an extension of a conventional desktop work environment. The system consists of a stereo-capable video projector that projects an image onto a projection screen placed above the desktop. Through a half-silvered mirror, a learner can see the desktop, his or her hands and physical objects spatially augmented with virtual 3D content. The Spinnstube® hardware was designed to keep the workspace free from any technical parts in order to avoid obstacles for direct-hands interaction. Figure 1 (left) shows a sketch of the display.

The Spinnstube® is equipped with two kinds of tracking systems to gather information about a user's current viewpoint, as well as the interaction taking place on the desktop. To this end, two infra-red (IR) cameras are mounted on the mirror, looking towards the user to track his or her head as well as the movement of the mirror itself. Two conventional FireWire cameras are used to observe the interaction area on and above the desktop.

For the remote AR collaboration application, we developed a light pen consisting of an LED tip that can change its colour when a button is pressed (Figure 2). The two FireWire cameras track the colour and position of this pen, so the light pen can be used as a 3D cursor. The different colours are associated with functions such as adding or removing 3D material, or colouring the surface. Figure 1 (right) shows a learner reviewing a 3D model created with the light pen.
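The colour-to-function dispatch described above could be sketched as follows. The colour names, operation names and data structures are hypothetical illustrations; the paper does not specify the actual mapping used in the Spinnstube® software.

```python
# Hypothetical sketch of mapping the light pen's LED colour to a sculpting
# operation applied at the pen's tracked 3D position.

ACTIONS = {
    "red":   "add_material",     # illustrative colour assignments,
    "green": "remove_material",  # not taken from the actual system
    "blue":  "paint_surface",
}

def handle_pen(colour, position, model):
    """Apply the operation associated with the pen's current LED colour."""
    action = ACTIONS.get(colour)
    if action == "add_material":
        model.setdefault("voxels", set()).add(position)
    elif action == "remove_material":
        model.get("voxels", set()).discard(position)
    elif action == "paint_surface":
        model.setdefault("colours", {})[position] = colour
    return model
```

The pen's button press (which changes the LED colour) thus doubles as a mode switch, so no on-screen tool palette is needed while sculpting.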
Fig. 1. Sketch of the Spinnstube® display system (left) and a learner working inside (right)
Fig. 2. A handheld light pen as sculpting and interaction device
The software driving the Spinnstube® is based on the Open Source VR/AR framework Avango® [24]. We enhanced this software with the modules necessary to communicate with the Spinnstube® hardware. Avango® provides a module for group-based communication between multiple Avango® applications on different machines via the Internet, using the Ensemble distributed communication system [25]. Each Avango® application can decide which parts of its content should be local and which distributed. In the remote collaboration application discussed here, the 3D workspace and the position of each user's cursor are synchronized, whereas the menu interaction, status information and each user's perspective onto the scene are local. As long as no connection to another machine is established, the user is provided with a fully functional standalone application.

The distribution is implemented via group management. A gossip server manages the state of the group, i.e. its current participants [25], and serves as the central instance with which members register. After joining the group, members communicate directly with each other via UDP, minimising latency. In the remote collaboration application, groups consist of two members.
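The group-management scheme just described, a central gossip server that only tracks membership while content flows peer-to-peer over UDP, could be sketched as follows. This is a conceptual illustration, not the Ensemble [25] API; the class and method names are hypothetical.

```python
# Conceptual sketch: the gossip server holds only the membership list.
# A joining member receives the current peers so it can subsequently talk
# to them directly (e.g. over UDP), keeping the server off the data path.

class GossipServer:
    def __init__(self):
        self.groups = {}  # group name -> list of member addresses

    def join(self, group, addr):
        members = self.groups.setdefault(group, [])
        members.append(addr)
        return list(members)  # snapshot returned to the newcomer

    def leave(self, group, addr):
        self.groups.get(group, []).remove(addr)
```

Because the server never relays content, its load stays constant regardless of how much the peers exchange, which matches the latency-minimising design described above.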
Shared content does not belong to a specific user. The group itself owns the content. If a member leaves the group, the member’s contribution remains. To support remote discussions, each Spinnstube® is equipped with headphones. When entering a collaboration session, a Skype [26] connection is established to the remote partner. The presence of a collaboration partner is indicated by their actions, the visibility of their 3D cursor, and their voice.
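The split between synchronized and local state described in this section (shared 3D workspace and cursors versus local menus, status and viewpoint) could be sketched as follows. Field names and the update format are illustrative, not taken from Avango®.

```python
# Conceptual sketch (not the Avango(R)/Ensemble API): updates to shared
# fields are broadcast to the group, local fields stay private, and received
# content is owned by the group, so it persists if the sender leaves.

class CollaborativeScene:
    SHARED_FIELDS = {"workspace", "cursor"}       # synchronized across members
    LOCAL_FIELDS = {"menu", "status", "camera"}   # each user's own view

    def __init__(self, send):
        self.state = {}
        self.send = send  # callback that would transmit an update via UDP

    def set(self, field, value):
        self.state[field] = value
        if field in self.SHARED_FIELDS:
            self.send({"field": field, "value": value})

    def receive(self, update):
        # Apply a remote member's update to the group-owned content.
        self.state[update["field"]] = update["value"]
```

Keeping camera and menu state local is what gives each learner an independent perspective on the shared workspace, a property the evaluation below returns to.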
5 Evaluation Design

The remote collaboration application was evaluated in a distributed summer school project involving 13-14 year old students in Siauliai, Lithuania and St. Augustin, Germany. Three independent evaluations were carried out, including summative pedagogical and usability evaluations involving questionnaires and interviews, and a formative evaluation involving synchronised video observation in the two locations. This discussion focuses on the formative evaluation, with a view to informing the future development of this and other similar collaborative learning platforms.

No established methods or techniques are described in the literature for evaluating AR display systems in a remote collaboration scenario. Gutwin and Greenberg [23] distinguish between taskwork and teamwork: taskwork is no different for a group than it is for an individual, while teamwork is essentially the added effort of working together in a team. This distinction enables us to break down the evaluation into AR-related aspects on the one hand and collaboration-related aspects on the other. However, even for these partial aspects the literature offers no established evaluation approaches. An analysis of 266 AR-related publications [27] found that only 8% addressed some aspect of HCI and involved formal user-based experiments. On a similar note, a review of 45 groupware evaluations [28] found that only one quarter involved a real-world setting. Reasons for this "evaluation crisis facing distributed system development" [29] include logistical difficulties in collecting data and the number and complexity of the variables to consider, both counted among the main barriers to groupware evaluation. The formative evaluation described in this paper draws on the idea that traditional methods developed for the evaluation of single-user systems do not properly address collaborative aspects and are therefore inappropriate for producing design solutions for collaborative systems [29, 30].
With respect to quantitative versus qualitative methods, it has been argued that quantitative metrics have not only proved elusive but are also rarely good indicators on their own for improving collaborative systems [29]. By contrast, naturalistic user-based methods are seen as the most promising for formative evaluations [31], and are acknowledged to significantly improve evaluation results [32]. The formative evaluation described here was therefore based on a naturalistic approach involving in-depth observation of the remote collaboration between students by a panel of usability experts.

5.1 Data Collection

Two researcher teams were deployed to video record the remote collaboration between students in the two summer school locations. In each location, a front camera recorded participants' gestures and facial expressions, and a rear camera recorded the
projection surface of the AR display to capture what participants could see at any given time (Figure 3). To facilitate editing and analysing the resulting material, the cameras were synchronised via a common time server on the Internet and replicated each other's viewpoints and zoom settings.
Fig. 3. Synchronised video observation in two locations recording collaborative sessions
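The clock alignment that a common time server provides can be illustrated with the standard symmetric-exchange offset estimate used by NTP-style protocols. This is a sketch of the underlying arithmetic only; the paper does not detail the exact synchronisation software used.

```python
# NTP-style clock offset estimate from one request/response exchange,
# assuming roughly symmetric network delay in both directions.

def clock_offset(t0, t1, t2, t3):
    """t0: client sends request; t1: server receives it; t2: server replies;
    t3: client receives the reply (t0, t3 in client time; t1, t2 in server
    time). Returns the estimated offset of the server clock relative to
    the client clock."""
    return ((t1 - t0) + (t2 - t3)) / 2.0
```

Applying the resulting offset to each camera's timestamps puts both locations' recordings on one timeline, which is what makes a frame-accurate side-by-side analysis possible.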
Recording the remote collaboration sessions from both ends posed a number of challenges. The AR display is stereoscopic: with the help of shutter glasses, the AR display system produces separate images for each eye, which are then merged by the user's brain into one 3D image. The video cameras only picked up the double image on the projection surface, not the 3D image seen by a user, but the resulting material still gave a sufficiently accurate idea of what users actually saw. Another challenge was the requirement for a semi-dark environment for the AR display, due to specific requirements of the sensing mechanism. While this was not a problem for the rear view recording the projection surface, the front view recording participants' gestures and facial expressions had to employ a special night-view mode in order to record sufficient detail. A total of eight collaborative sessions were recorded over two days, involving 16 students in each summer school location. Collaborative sessions were between 22 and 58 minutes in length.

5.2 Data Analysis

To prepare the video material for analysis, a combined view was produced for each collaborative session, merging the front and rear views from both locations into a single screen. The resulting 4-in-1 overviews (Figure 4) comprehensively document each collaborative session from both ends, offering more detail about communication and collaboration issues than traditional observation techniques, which document only one side, leaving evaluators to speculate about what happens at the remote end.
Fig. 4. The 4-in-1 overview created from the video material for each collaborative session
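A composite like the one in Figure 4 could today be produced with a standard tool such as ffmpeg and its xstack filter. The following sketch builds the corresponding command line; the file names are hypothetical, and the authors' actual editing tool is not specified in the paper.

```python
# Build an ffmpeg invocation that tiles four recordings into a 2x2 grid:
# location 1 front/rear on the top row, location 2 front/rear below.

def four_in_one_cmd(front1, rear1, front2, rear2, out):
    layout = "0_0|w0_0|0_h0|w0_h0"  # x_y offset of each tile in the grid
    return [
        "ffmpeg",
        "-i", front1, "-i", rear1, "-i", front2, "-i", rear2,
        "-filter_complex",
        f"[0:v][1:v][2:v][3:v]xstack=inputs=4:layout={layout}[v]",
        "-map", "[v]",
        out,
    ]
```

With time-server-aligned clocks, no per-clip trimming offsets are needed before tiling; otherwise an `-ss` seek per input would have to compensate for start-time differences.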
With a view to the hardware ergonomics of the AR display, two additional videos were produced that showed, for each location, how students prepared for their collaborative session: taking their seat in the AR display, putting on headphones and shutter glasses, adjusting the equipment and checking the interaction devices.

The data analysis involved a panel of four usability experts from the Interactive Technologies Research Group at the University of Brighton watching the edited video material for each collaborative session. Critical scenes were reviewed and watched again as required to better understand the usability problems at hand. Notes were taken during the screening and then compared and discussed after each session. The analysis and discussion were guided by three sets of usability heuristics, which the evaluators had discussed beforehand and had available in printed form, in order to provide a common reference frame. Each set of heuristics covered different aspects of the formative evaluation: remote collaboration in a shared workspace, augmented-reality-specific aspects, and general usability heuristics to complement the first two more specialised sets.

The heuristics relating to remote collaboration in a shared workspace were based on the assumption that small groups need to perform certain low-level actions and interactions in order to collaborate effectively: these include communication, planning, monitoring, assistance, coordination, and protection [23]. As insufficient support for these mechanics of collaboration causes usability problems, groupware usability can be defined as "the degree to which a groupware system supports the mechanics of collaboration for a particular set of users and a particular set of tasks" [23]. A detailed description can be found in [33].
Guidelines relating to AR-related aspects draw on the idea that the specific hardware and software required by AR displays present usability issues relating to hardware ergonomics, software robustness, display, and interaction quality. As currently no set of common design guidelines exists for the development of AR systems [34], the guidelines used in the evaluation draw on a range of sources
234
M. Krauß et al.
including VR usability heuristics [35], a taxonomy of mixed reality visual displays [36], a previous usability evaluation of the Studierstube AR system [37], and previous project experience [15, 16]. The resulting five heuristics are shown in Table 1.

Table 1. Five design guidelines for AR displays, synthesised from [34, 35, 36, 37, 15, 16]

1. Reproduction quality - a user should be unaware that overlaid objects are virtual. Reproduction of virtual objects should be in real-time, high-fidelity, 3D animation.
2. Registration - users should not perceive gaps or discrepancies between real objects and augmented content; virtual objects should be fully aligned with the real world.
3. Realistic feedback - the effect of users' actions on virtual objects should be instantly visible and conform to the laws of physics and the user's perceptual expectations.
4. Technical robustness - systems should be reliable and consistent, i.e. avoid freezes, crashes, and the frequent need to re-calibrate.
5. Hardware ergonomics - display components should not create physical discomfort for users, e.g. accommodation problems, badly fitting helmets or headphones, eyestrain, cyber sickness.
The third set of heuristics relating to general issues is Nielsen's [38] well-known list of ten usability heuristics. These guidelines complement the heuristics focusing on collaboration and the design guidelines for AR systems. They cover usability aspects of taskwork [23], equally applicable to group and individual work, together with more traditional concepts and GUI components used in the AR application. A detailed description can be found in [38]. The evaluation took place over two days and was based on six videos of scheduled collaborations plus two videos of students getting ready for their collaborative session in the two locations.
6 Evaluation Results

Analogous to the three sets of heuristics informing the expert evaluation of the video material, usability problems are presented in three sections relating to remote collaboration issues, AR-related issues, and general usability issues. The presentation of results is rounded off by a general discussion covering all three aspects.

6.1 Remote Collaboration

The AR remote collaboration prototype provides an audio channel but no video channel for explicit communication, which aggravated language problems between collaborating students from different countries. It also led to monitoring and awareness problems: at the start of collaborative sessions, for example, students had to repeatedly ask (without getting a response) whether their partner was present in the remote AR display. The additional provision of a video channel would enable audiovisual communication and thereby improve support for explicit communication, monitoring and awareness.

The prototype offered no functionality to synchronise perspectives between collaborating students, which led to a whole range of problems relating to consequential communication, coordination of action, monitoring and assistance. The video evidence suggests that some students were not aware of the independent
Remote Hands-On Experience: Distributed Collaboration with Augmented Reality
perspectives, which further exacerbated the problem. Others, however, seemed aware of their decoupled perspectives, and one pair of students even managed to work around the problem by synchronising their views manually, using the audio channel and their mutually visible pointing devices as a common reference. An operation mode allowing synchronisation of perspectives and, in addition, resetting the perspectives to a common default when objects are loaded, would significantly improve support for remote collaboration.

Students using the prototype were not aware of each other's control and menu actions, resulting in problems regarding coordination of action and monitoring, e.g. student A inspects an object in the shared workspace while student B loads a new object into the shared workspace. Implementing functionality to make remote control and menu actions visible to collaborators, and to enable a veto on certain operations (e.g. loading a new object into the shared workspace), would improve the support for coordination of action, monitoring and awareness.

6.2 AR Display

The video material did not allow a direct evaluation of the criteria of reproduction quality, 3D registration and realistic feedback, as it showed only a 2D representation of the 3D image seen by the user. The complete absence of participants' comments relating to these issues suggests, however, that these aspects are implemented satisfactorily. Similarly, there was no evidence of any system freezes or crashes, and neither system had to be re-calibrated during operation, suggesting an overall high technical robustness of the prototype. A wide range of issues relating to hardware ergonomics was observed in the video material. Some of these relate to specific products used in the AR display (e.g. accommodation problems with shutter glasses, headphones), while others relate more generally to the design and technology of the AR display (e.g. the semi-transparent mirror perceived as obstructing the line of view).

6.3 General Issues

The AR remote collaboration prototype offers no undo/redo functionality, which reduced user control and freedom, and led some students to accept unsatisfactory sculpting results rather than undoing their actions and trying an alternative approach. Another issue is that the prototype has no preview functionality for the currently selected object when loading 3D objects into the workspace, reducing the visibility of system status and leaving students in some cases unable to find previously saved 3D objects. Finally, it was observed that students preferred to request assistance from each other, from their supervising teacher or from researchers present in the room, suggesting that the inbuilt help screen in the prototype does not fulfil its purpose.

6.4 Discussion

The description of usability problems in the previous sections aims to inform the future development of the ARiSE platform and emphasises aspects that could be improved. However, it must not distract from the fact that the overall impression of the prototype was positive: students showed high acceptance of the technology and
engaged in vivid discussions. Over large sections of the collaboration process, the video observation showed that learners were fully immersed in discussions, without significant distractions caused by the technology surrounding them. Overall, the prototype seems well suited for remote collaboration, with some weaknesses being balanced by strong points of the platform. The combination of a 3D sculpting tool, a shared interactive workspace, and an additional audio channel supports most of the mechanics of collaboration [23], with particularly strong support for intentional communication (verbal, remote cursor gestures) and consequential communication based on the manipulation of shared artefacts (artefact feed-through [33]). The video analysis consistently showed that collaborative sessions became more animated, communicative and interactive when students used the sculpting tool to explain issues and complete collaborative tasks.

AR-specific usability problems overwhelmingly concern hardware ergonomics. These problems suggest that the system would benefit from more user involvement in the design process, and from exploring emerging lightweight technologies as alternatives to the current display design.

There is a substantial overlap between the general usability heuristics [38] used in the evaluation and the more specialised guidelines for remote collaboration and AR-specific aspects. The general usability issues identified relate mainly to control and support aspects of the prototype that interface with the underlying operating system and are therefore based on standard GUI concepts. It can be expected that in the future these metaphors will be replaced by concepts more appropriate to the AR context.

While many of the described problems were observed consistently across all sessions during the video analysis, it was also evident that ultimately their impact on the collaboration was limited, as participants naturally worked around these issues in order to get on with their session. This confirms similar observations in the literature [23] about the resilience of users in adapting their interactions to overcome usability issues and succeed with their task.
7 Conclusions and Further Work

We have developed an Augmented Reality system that supports remote collaboration of learners through a shared virtual space. Based on a pedagogically driven reference scenario of a learning unit, we have implemented a simple prototypical AR application for using AR in schools and evaluated it in a field test under classroom-like conditions. While summative evaluations [3] found a high acceptance rate among students and teachers and confirmed the pedagogical effectiveness of the prototype AR application, the formative evaluation resulted in a number of recommendations informing future development: sharing viewpoints could simplify referencing problems in communication (see also [9]), video-conferencing features could avoid uncertainties related to remote presence, and traditional human-computer interaction features such as undo could improve ease of use. Overall, however, the formative evaluation found that the prototype is well suited for hands-on remote collaboration, and that minor implementation issues are more than compensated for by students' resilience and motivation to complete their collaborative tasks in the
shared AR space. As additional features increase system complexity, reducing the charm of directness and simplicity seen in the current implementation, additional research is needed to resolve the trade-off between the usefulness of features and the directness of AR interaction.

The Spinnstube® AR system has been shown to be a useful tool and test-bed for AR applications in school environments. However, our evaluation has revealed a number of usability issues of the workplace setup. These shortcomings will be addressed in an upcoming design revision of the hardware infrastructure.

The evaluation shows that AR technology can be a beneficial and learner-motivating addition to classroom learning. The results also give examples of how learners can develop their own problem-solving strategies to work around existing communication shortcomings with conceptually simple basic tools at hand.
Acknowledgements

We thank the other members of the ARiSE project. Design, development and evaluation of the work described within this paper were a joint effort of all project partners. Furthermore, we wish to thank the participating students of the Rabanus-Maurus-Gymnasium Mainz, Germany, and the Juventa Basic School, Siauliai, Lithuania, for their enthusiastic participation and willingness to do extra work in their free time. The ARiSE project was co-funded by the European Commission within the Sixth Framework Programme (contract number IST-027039). Last but not least, we want to thank Jürgen Wind, formerly with the Fraunhofer Gesellschaft, who managed the ARiSE project between 2006 and 2008.
References

1. ARiSE Project home page, http://www.arise-project.org
2. Wind, J., Riege, K., Bogen, M.: Spinnstube: A seated augmented reality display system. In: Proceedings of the 13th Eurographics Symposium on Virtual Environments, 10th Immersive Projection Technology Workshop, Weimar, Germany, July 15-18. Eurographics Association, Aire-la-Ville (2007)
3. Lamanauskas, V., Pribeanu, C., Pemberton, L.:
4. Reitmayr, G., Schmalstieg, D.: Mobile collaborative augmented reality. In: Proceedings of the IEEE and ACM International Symposium on Augmented Reality (ISAR 2001), New York, NY, USA, October 29-30 (2001)
5. Ohshima, T., Satoh, K., Yamamoto, H., Tamura, H.: AR2Hockey: a case study of collaborative augmented reality. In: Proceedings of the IEEE Virtual Reality Annual International Symposium, Atlanta, Georgia, USA, March 14-18 (1998)
6. Regenbrecht, H.T., Wagner, M.T.: Interaction in a collaborative augmented reality environment. In: CHI 2002 Extended Abstracts on Human Factors in Computing Systems, Minneapolis, Minnesota, USA, April 20-25 (2002)
7. Müller, D., Erbe, H.-H.: Collaborative Remote Laboratories in Engineering Education: Challenges and Visions. In: Gomes, L., Garcia-Zubia, J. (eds.) Advances on Remote Laboratories and E-learning Experiences. University of Deusto, Bilbao (2007)
8. Bruns, W.: Hyper-bonds – distributed collaboration in mixed reality. Annual Reviews in Control. Elsevier, Oxford (2005)
9. Chastine, J.W., Nagel, K., Zhu, Y., Yearsovich, L.: Understanding the design space of referencing in collaborative augmented reality environments. In: Proceedings of Graphics Interface 2007, Montreal, Canada, May 28-30. ACM, New York (2007)
10. Seichter, H.: Augmented Reality and Tangible Interfaces in Collaborative Urban Design. In: Proceedings of the 12th International CAAD Futures Conference: Integrating Technologies for Computer-Aided Design, July 11-13. University of Sydney, Sydney, Australia (2007)
11. Billinghurst, M.: Augmented Reality in Education. New Horizons for Learning (2002), http://it.civil.aau.dk/it/education/reports/ar_edu.pdf
12. Piaget, J.: The Science of Education and the Psychology of the Child. Grossman, New York (1970)
13. Spiro, R.J., Coulson, R.L., Feltovich, P.J., Anderson, D.K.: Cognitive flexibility theory: Advanced knowledge acquisition in ill-structured domains. In: Patel, V. (ed.) Proceedings of the 10th Annual Conference of the Cognitive Science Society. Erlbaum, Hillsdale (1988)
14. Bruner, J.: Going Beyond the Information Given. Norton, New York (1973)
15. Lamanauskas, V., Vilkonis, R., Bilbokaite, R.: Pedagogical Evaluation of the Augmented Reality Platform. Appendices P1-P8 of [3] (2009), http://www.arise-project.org – downloads section
16. Pribeanu, C., Balog, A., Iordache, D.: Usability Evaluation Summer School 2007. Appendix U2 of [3] (2009), http://www.arise-project.org – downloads section
17. Bandura, A.: Social Learning Theory. General Learning Press, New York (1977)
18. Vygotsky, L.S.: Mind in Society. Harvard University Press, Cambridge (1978)
19. Salomon, G. (ed.): Distributed Cognitions: Psychological and Educational Considerations. Cambridge University Press, Cambridge (1993)
20. Roschelle, J., Rosas, R., Nussbaum, M.: Towards a Design Framework for Mobile Computer-Supported Collaborative Learning. In: Proceedings of the 2005 Conference on Computer Supported Collaborative Learning, Taipei, Taiwan, pp. 520–524 (2005)
21. Ryokai, K., Vaucelle, C., Cassell, J.: Virtual Peers as Partners in Storytelling and Literacy Learning. Journal of Computer Assisted Learning 19(2), 195–208 (2003)
22. Pask, G.: Conversation, Cognition, and Learning. Elsevier, New York (1975)
23. Gutwin, C., Greenberg, S.: The Mechanics of Collaboration: Developing Low Cost Usability Evaluation Methods for Shared Workspaces. In: Proceedings of the 9th International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, WET ICE 2000 (2000)
24. Kuck, R., Wind, J., Riege, K., Bogen, M.: Improving the AVANGO VR/AR Framework: Lessons Learned. In: Schumann, M., et al. (eds.) Virtuelle und Erweiterte Realität: 5. Workshop der GI-Fachgruppe VR/AR. Berichte aus der Informatik, pp. 209–220. Shaker, Aachen (2008)
25. The Ensemble Distributed Communication System – a group communication toolkit developed at Cornell University and the Hebrew University of Jerusalem, http://www.cs.technion.ac.il/dsl/projects/Ensemble/
26. Skype home page, http://www.skype.com
27. Swan, J.E., Gabbard, J.L.: Survey of User-Based Experimentation in Augmented Reality. In: Proceedings of the 1st International Conference on Virtual Reality, Las Vegas, Nevada, July 22-27 (2005)
28. Pinelle, D., Gutwin, C.: A Review of Groupware Evaluations. In: Proceedings of WETICE 2000, Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, pp. 86–91. IEEE Computer Society, Los Alamitos (2000)
29. Neale, D.C., Carroll, J.M., Rosson, M.B.: Evaluating Computer-Supported Cooperative Work: Models and Frameworks. In: Proceedings of CSCW 2004: Conference on Computer-Supported Cooperative Work, pp. 368–377. ACM Press, New York (2004)
30. Baker, K., Greenberg, S., Gutwin, C.: Empirical development of a heuristic evaluation methodology for shared workspace groupware. In: Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work, New Orleans, pp. 96–105. ACM Press, New York (2002)
31. Steves, M., Morse, E., Gutwin, C., Greenberg, S.: A comparison of usage evaluation and inspection methods for assessing groupware usability. In: Proceedings of the 2001 International ACM SIGGROUP Conference on Supporting Group Work, pp. 125–134. ACM Press, New York (2001)
32. Pinelle, D., Gutwin, C.: Groupware walkthrough: Adding context to groupware usability evaluation. In: Proceedings of the 2002 SIGCHI Conference on Human Factors in Computing Systems, pp. 455–462. ACM Press, New York (2002)
33. Baker, K., Greenberg, S., Gutwin, C.: Heuristic evaluation of groupware based on the mechanics of collaboration. In: Nigay, L., Little, M.R. (eds.) EHCI 2001. LNCS, vol. 2254, pp. 123–139. Springer, Heidelberg (2001)
34. Dünser, A., Grasset, R., Seichter, H., Billinghurst, M.: Applying HCI principles to AR systems design. In: MRUI 2007: Second International Workshop at the IEEE Virtual Reality Conference, Charlotte, North Carolina, USA (2007)
35. Sutcliffe, A., Gault, B.: Heuristic evaluation of virtual reality applications. Interacting with Computers 16, 831–849 (2004)
36. Milgram, P., Kishino, F.: A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality) E77-D(12), 1321–1329 (1994)
37. Kaufmann, H., Dünser, A.: Summary of Usability Evaluations of an Educational Augmented Reality Application. In: Shumaker, R. (ed.) HCI International Conference, Beijing, China, pp. 660–669 (2007)
38. Nielsen, J.: Heuristic evaluation. In: Nielsen, J., Mack, R.L. (eds.) Usability Inspection Methods. John Wiley & Sons, New York (1994)
A Comparison of Paper-Based and Online Annotations in the Workplace Ricardo Kawase, Eelco Herder, and Wolfgang Nejdl L3S Research Center, Leibniz Universität Hannover Appelstr. 4, 30167 Hannover, Germany {Kawase,Herder,Nejdl}@L3S.de
Abstract. While reading documents, people commonly make annotations: they underline or highlight text and write comments in the margin. Making annotations during reading activities has been shown to be an efficient method for aiding understanding and interpretation. In this paper we present a comparison of paper-based and online annotations in the workplace. Online annotations were collected in a laboratory study, making use of the Web-based annotation tool SpreadCrumbs. A field study was carried out to gather paper-based annotations. The results validate the benefits of Web annotations. A comparison of the online annotations with the paper-based annotations provides several insights into user needs for enhanced online annotation tools, from which design guidelines can be drawn.

Keywords: Web Annotation, Online Collaboration, e-Learning, User's Behavior, SpreadCrumbs.
1 Introduction

Learning has become an integral part of many people's everyday working life. Due to an increasingly knowledge-based society and rapid changes in technology, one often has to search for and read information in order to keep up to date. Each individual employs a distinct set of cognitive strategies in the learning process: each person learns in her own way, style and pace. At the same time, the character of learning at the workplace has shifted from a solitary, paper-based activity to a Web-based activity, making use of various resources, including discussion forums and social networking sites [1]. As a result, one ends up with a large collection of scattered digital resources; due to limitations of the Web, annotations – if any – are typically made separately (in a word processor or on a paper sheet).

By contrast, annotating paper documents is a natural activity that involves direct interaction with the document and that is known to support understanding and memorization [2]. The term annotation comprises several methods, including underlining and highlighting text and writing additional comments in the margin. These activities have been shown to stimulate critical thinking in a process that can be called active reading [3]. All additional writing done by the reader can be considered a form of annotation, irrespective of its form – formal or informal, implicit or explicit, permanent or transient – or its function – signaling for future attention, memory aiding, interpretation or even reflections beyond the subject. U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 240–253, 2009. © Springer-Verlag Berlin Heidelberg 2009
In order to understand how to better support active reading and annotations in the digital context, we carried out a study comparing how people annotate online with how people create paper-based annotations. Specific attention is given to the type of annotations, their function, and perceived difficulties in creating and using these annotations. Before presenting the comparative study, we present some theoretical underpinnings. In section 2 we describe background research on annotations in the learning process – including a categorization of annotation types and a comparison of screen-based reading with paper-based reading. Specifics of annotation in the e-learning context are discussed in section 3. We continue with our comparative study, which consisted of a laboratory study making use of an online annotation tool – SpreadCrumbs – and a field study in which we investigated common annotation habits in the paper-based context. We end this paper with a discussion of the results and their implications.
2 Annotations in Learning

In this section we provide an overview of the role of annotations in learning. First we discuss a classification of different forms of annotation. We continue with a categorization of reasons why people annotate while learning. At the end of this section we explore various impediments to the take-up of annotation in the online context.

Based on extensive field research on textbooks, Marshall [4] categorized the different kinds of annotations by their forms and functions. Below, we discuss the forms of annotation that are relevant for learning purposes and their functions during the learning process:

- underlining or highlighting titles and section headings: this kind of annotation serves as signaling for future attention. Drawing an asterisk near a heading or highlighting it will remind the reader that there is something special about that topic, something to be considered or explored in more detail.
- highlighting and marking words or phrases and within-text markings: similar to the above, the main goal is signaling for future attention – to oneself or to collaborators. The annotated pieces of text typically carry important and valuable observations. The act of highlighting text also helps in memorizing it.
- notation in margins or near figures: any kind of diagrams, formulas and calculations that structure and elaborate the document contents. This type of annotation is specifically meant to serve comprehension. An example is a calculation near an equation or theorem presented in a text, to quickly check its meaning and correctness.
- notes in the margins or between lines of text: these descriptive annotations are usually interpretations of the document's contents. These can be phrases in the margin that summarize or comment upon a section or a page. Single words are typically general terms, keywords and classifications of a section. Such annotations aid the interpretation of the whole text: the reader better establishes the topic of each part of the text, creating her own mental structure and decreasing the overall cognitive load.
In all of these cases the value of annotations accrues to both annotators and future readers. Memory-aiding, attention-signaling, problem-working and interpretation annotations certainly benefit the annotator, but may also benefit other readers – provided that the annotations are explicit, readable and understandable.

In collaborative group work, students typically work on the same content, but this content is extracted from different resources: for example, they all have their own copies of the obligatory textbook. This is a limitation inherent to paper-based annotations. Even though the annotations are still useful for personal use, they fail to play a role in the communicative and collaborative learning processes, which is a barrier to leveraging learning through social constructivism [5]. Web 2.0 technologies explicitly facilitate these processes, and their benefits for knowledge gathering and construction have lately been discussed [6]. Moreover, the exchange of documents, including annotations, remarks and insights, not only serves the direct, content-related goals, but also contributes to motivation and enjoyable professional relationships [7].

Despite the many potential benefits of online collaborative environments in comparison with traditional paper-based annotation, there are a number of issues related to migrating reading and annotation to the computer. There is a vast body of research [8, 9, 10, 11] discussing the many issues involved in moving from paper-based reading to screen-based reading:

- tangibility: in contrast to a text displayed on a computer screen, paper offers physical tangibility. Readers can hold the paper as they like, and they can move it around to adjust their perspective and distance [9] – in order to improve legibility [8] and even to facilitate handwriting [12]. Paper is also superior to electronic devices in terms of legibility. Further, while reading one page, readers can use another page for writing notes.
- orientation: paper documents give readers a better sense of location within the text, through physical cues such as the thickness of the sides of a book or different paper materials in a magazine [10]. These cues support text skimming and cross-reading, and they are instrumental when trying to relocate some text [13, 14]. Digital documents do not hold these characteristics [8, 10], an issue that needs to be overcome by increased attention to usability in device design and interface design.
- multiple displays: paper provides a single canvas for each page of text [15]. Each page holds unique properties of physical tangibility, text content, and modifications and additions from the readers. Virtual pages simulate this on a single device screen, but in some cases supporting concurrent reading from several documents turns out to be an unwieldy task [10].
- cooperative interaction: by circulating a piece of paper, more than one person can interact with the content and build upon each other's annotations [11]. Whereas groupware facilitates simultaneous revisions, versioning and collaboration, it does not yet reach the intuitive interaction provided by circulating paper-based documents [16].
In addition to these usability issues, there are several technical issues that have been examined [14] to understand the challenge of digital reading. In the context of this paper, we are mainly concerned with the implications for annotations. A major
question is whether – given the required progress in terms of technology and interface design – electronic annotations will be used in the same manner as traditional paper-based annotations. From the above there is evidence that, due to inherent differences between the paper-based world and electronic devices, the character of annotations will necessarily change. Paper-based annotations have been used for centuries and can therefore be considered a highly developed activity, one that represents an important part of reading, writing, and scholarship. Annotation occurs in a wide variety of forms and is applied for many different purposes. Annotations not only add substance to the text but may also implicitly reveal the reader's engagement with the material [4]. Previous research has verified that no matter the form or purpose of the annotations, the benefits are immediately clear to the future reader [17]. Further, some researchers state that people's needs for making annotations in the Web environment do not differ significantly from their needs in the paper environment [18]. In section 4 we shed some more light on this discrepancy by empirically comparing these situations. Before we continue to that section, we briefly discuss the role of annotations in Web-based interaction and e-learning.
3 Web Annotations in e-Learning

The benefits and opportunities of electronic and automatic annotations, elaborating on their paper-based counterparts, were envisioned long ago by Vannevar Bush in the Memex [19]. Bush envisaged that by relating all documents that users have read and attaching their annotations to these documents, individuals could organize and re-find information resources in an associative manner, together with any earlier annotations. Whereas the rich forms of annotation in early hypertext systems – with different categories, directions and even multi-links – allowed for these associative trails, in the Web as it is today this functionality is not fully realised, as readers have limited possibilities for sharing comments or questions by writing back to the pages. As a result, users spend a lot of effort trying to comprehend the different ways in which people comment on online resources, using coping strategies such as sending comments via e-mail [20].

Recent Web 2.0 technologies provide an open resource environment where individuals can freely collaborate. Nevertheless, these technologies typically cover only a small portion of the Web or one specific kind of annotation. They are typically implemented as Web servers or browser enhancements. The basic idea of a Web annotation system is that the user has the ability to change, add or attach any type of content to any online resource, much as she would with a paper document. An application (usually a browser plug-in) enables the user to modify Web pages, highlight parts of them and add tags or comments, while the back-end of the system only needs to store these annotations and associate them with the specific user and the specific URL. As discussed in the previous section, by actively engaging with the text, users can better memorize and understand it.
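The back-end association just described — each annotation attached to a specific user and a specific URL — can be illustrated with a minimal in-memory store. This is a hypothetical sketch for illustration only; the function and field names are our own and do not correspond to the implementation of any system cited above.

```python
# A hypothetical minimal annotation back-end: annotations are stored per URL
# and associated with the user who created them.

def add_annotation(store, user, url, topic, comment=""):
    """Attach an annotation to an online resource, for a specific user."""
    store.setdefault(url, []).append(
        {"user": user, "topic": topic, "comment": comment})

def annotations_for(store, url, user):
    """Retrieve the annotations a given user has attached to a given page."""
    return [a for a in store.get(url, []) if a["user"] == user]
```

When the browser plug-in renders a page, it would query `annotations_for` with the current URL and user, and overlay the returned annotations in-context.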
By contrast, annotating on a computer screen is an activity that competes with the reading itself, due to the lack of direct manipulation. However, users will do so when the benefits are higher than the costs in terms of effort. These benefits may include the saving of time needed for re-finding,
summarizing, organizing, sharing and contributing online annotations. A rather economical view on the balance between the drawbacks and benefits is given by information foraging theory [21], which describes the above activities as information enrichment.

Today, both companies and academic institutions train learners to complete tasks and solve problems through project-centered learning. Since it may not be feasible for all participants involved in the projects to meet on a regular basis, they must be assisted by information and communication technology. To support this collaboration there are specific methods for Computer Supported Collaborative Learning (CSCL) provided by learning environments, and other platforms can be adapted to fit this need. For the best results in the learning process, the methods should help each learner to act individually to reach her own goals, and to cooperate by sharing and discussing ideas to accomplish an assignment.

As discussed in the previous section, in the same way that annotations contribute to memory aiding, text interpretation and information re-finding, Web annotations provide the same functionality in the online environment. Web annotations are accessible anytime and anywhere, with diverse sharing possibilities, clearly enhancing workgroup collaboration [22] for cooperative tasks and learning processes. However, it is important to remark that the full richness of paper annotations will only be achieved if the digital annotations hold the same beneficial feature of being 'in-context'. 'In-context' annotations are visible within the original resource, enhancing it with the observations and remarks of the annotator, which are likely to help in individual tasks in similar ways as with paper documents [10]. Despite the limitations in terms of usability and tangibility, the advantages of Web annotation tools go far beyond those of regular paper annotations.
In addition to the sharing capabilities within online communities, digital annotations can be indexed, ordered, rated and searched. These benefits are confirmed by several studies on annotation tools [e.g. 18], in which participants remarked that searching the annotations is a very desirable feature. Even though there are currently systems that support annotations, studies have shown that users often resort to different strategies for simulating annotation tools, making use of e-mails, messages to self and separate text documents. The main reason for this phenomenon lies in the effort required for creating and organizing annotations: "If it takes three clicks to get it down, it's easier to email" [2]. As users will inevitably resort to other strategies if annotation tools require too much effort, it is necessary to have a lightweight capture tool with flexible organizational capacity, visibility and practical reminding. In particular, if one takes into account that many annotations are primarily meant as temporary storage, as a means of cognitive support or as reminders, it becomes clear that these factors need to be better taken into account in annotation tools for personal information management and learning systems.
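The indexing and search capability mentioned above can be sketched as a simple inverted index over annotation text. This is an illustrative sketch under our own assumptions (tuple-based annotations, whitespace tokenization); production tools would rely on a proper search engine.

```python
from collections import defaultdict

def build_index(annotations):
    """Map every lowercased word in an annotation's topic or comment
    to the set of annotation indices containing it."""
    index = defaultdict(set)
    for i, (topic, comment) in enumerate(annotations):
        for word in (topic + " " + comment).lower().split():
            index[word].add(i)
    return index

def search(index, annotations, query):
    """Return the annotations whose text contains every word of the query."""
    words = query.lower().split()
    if not words:
        return []
    hits = set.intersection(*(index.get(w, set()) for w in words))
    return [annotations[i] for i in sorted(hits)]
```

Such an index is precisely what paper-based annotations cannot offer: a reader searching her margin notes must leaf through the whole document.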
4 A Comparative Study on Paper-Based and Online Annotations

In order to better understand the real use of annotations and Web annotations, we have implemented a straightforward online annotation system, SpreadCrumbs. SpreadCrumbs provides a minimalistic interface for adding post-it notes, called crumbs, to any point within a Web page. Crumbs are used as personal reminders, for information
A Comparison of Paper-Based and Online Annotations in the Workplace
245
re-finding, and for collaboration and social navigation support [26]. With SpreadCrumbs, users can add annotations to any Web resource (creating a collection of bookmarks), add comments to the resources, visualize the annotations on the page (in-context) and share these annotations. The post-it note contains the author, the other users who can see the annotation, the topic and the comments, as shown in Fig. 1. To add an annotation, the user just has to select the option "Add Crumb" from the right-click context menu. This action pops up a window where the user only needs to fill in the topic and comments. Further, the user can choose friends from her social network to share the annotation with.
Fig. 1. Annotation with SpreadCrumbs on EC-TEL 2009 Webpage
Using SpreadCrumbs, we have conducted a number of experiments. In the next section we report a selection of the results, which provide insight into how users create annotations for their personal use and for sharing. This laboratory study is complemented by a field study in which we investigated in which situations users choose to print documents, how they annotate them, and whether and how they share these annotated documents. The main goal of this research is to characterize the types of annotations encountered online and on paper, and to find differences between these two situations. The results are expected to provide design guidelines for annotation tools and the way they are used.

4.1 First Study: Annotation on the Web

The experiments with our annotation tool were conducted with 18 participants, who all stated to be very proficient with computer and Internet technology. Of those, 16 work in the field of computer science.
246
R. Kawase, E. Herder, and W. Nejdl
At the beginning of each session, in which only the participant and the experimenter were present, the tool was introduced to the participant with a brief overview of its usage. Following the introduction, we asked the participants to answer a set of 10 questions by writing down the answer and annotating the resource. These questions were specific information-finding tasks that could be solved by a brief Internet search with any popular search engine. We ensured that most of the questions were very domain-specific or numerical in nature, to reduce the chance that participants already knew the answers; an example: "What is the estimated percentage of Chinese among the population of Brunei?". The experimental setup encouraged the participants to annotate useful but hard-to-memorize information for future reference; in fact, in a second round, we will ask the same participants to actually re-find the information by making use of the annotations provided in the first round.

During the experiment, the participants created a total of 207 annotations, covering 81 different Web resources. The average number of words per annotation was 4.1. An important observation was that the participants in general carefully positioned the annotations in the context of the Web page: of the 18 participants using SpreadCrumbs, 16 placed the annotations for each question near the text, table or paragraph where they found the answer. This type of behavior is not supported by the simple bookmarking functionality of regular browsers. We noticed that of the 18 participants, only six included the answers in the annotations, while the majority opted for using keywords from the respective question. Just one participant typed explicit full sentences when annotating the pages: "There seem to be different walks - I'm not sure whether the 9.4km walk brings us to the top, but I think so."; ".. made 35 homeruns in 2005.
Yes, I think this should be the right answer."

Although the participants were very proficient with computers, all of them stated that they regularly print digital documents for reading, even when these documents are relatively short (up to 8 pages). All of them confirmed that they usually annotate those printed documents in one way or another, by highlighting text and adding their own comments or insights in the margin. This somewhat contradicts an interesting observation during the experiment. One of the answers consisted of a short passage from a book (2 sentences with fewer than 40 words). All of the participants were reluctant to write down the quote on paper, and all asked the same question: "Do I have to write the whole sentence?". We allowed them to write down only the reference for the passage (page and paragraph), a suggestion that was followed by all participants. The contradiction is that participants do not want to write by hand when they have the option of typing (or copying and pasting), yet they keep annotating with the pen even though several means of digital annotation exist.

None of the users had problems using the tool. After the short introduction, all of them performed the tasks of annotating and consulting annotated resources without any effort or mistakes. The participants visibly enjoyed the tool's interface and functionality. The direct manipulation and the 'in-context'
features were the most appreciated. After having conducted the tasks, the participants were handed a questionnaire in which they had to choose terms from a list of adjectives, which gave us a dataset of the users' perspective on the tool. This questionnaire1 measures usability and satisfaction with a list of 118 adjectives, both positive and negative. This methodology gives the participants more confidence to be critical of the system by choosing negative terms. The top 10 terms chosen were: Easy to use, Usable, Useful, Collaborative, Helpful, Convenient, Connected, Friendly, Innovative, Straightforward. These results indicate that the participants would be willing to use such a tool on a more regular basis.

Regular Use of SpreadCrumbs. In addition to the laboratory study, we collected and analyzed log files from users who were not involved in the experiments. The results show some interesting differences that distinguish two behaviors when annotating. Examining 177 shared annotations, we identified an average length of 10.35 words per annotation, whereas 371 personal annotations had an average of 4.56 words per annotation. With the permission of the users, we extracted some examples of annotations that illustrate these numbers and the difference between the linguistic structures of the notes – see Table 1.

The examples of personal notes show that these private annotations in many cases contain a rather short, cryptic message. They typically consist of keywords or reminders whose purpose is often understandable only by the authors themselves. It should be noted that these keywords should not be mistaken for tags: while tags have a descriptive nature, these keyword-based annotations carry additional (sometimes implicit) information. By contrast, shared annotations are very explicit and well described, with full meaningful sentences, in a form similar to chat or text messages.

Table 1.
Examples of personal and shared Web annotations

Personal:
  "Conference Deadline: October 29"
  "Flat 64m 2 rooms windthorststr. 8"
  "TO DO!"

Shared:
  "All artists are from Sweden, I think, and do Jazz music (quite soft) but nice..."
  "Let me know if there's anything else to be done."
4.2 Second Study: How People Annotate on Paper

To compare annotations in the online context with paper-based annotations, we visited the workplaces of 22 PhD students and post-docs. We asked each of them to take a look at the last 3 research papers or articles that they had printed and read. In total we collected 66 articles, covering 591 pages of text. We found 1778 annotations, an average of 3.08 annotations per page. Table 2 shows the number and share of each annotation type.
1 http://www.userfocus.co.uk/articles/satisfaction.html
Table 2. Annotations found by type

Annotation type                          Count   Share
Highlighting/marking section headings      153    8.6%
Highlighting/marking text                 1297   73.0%
Problem solving                              2    0.1%
General notes (notes in the margins)       326   18.3%
The vast majority of the annotations (73%) involved highlighting and marking of text. Some participants tended to highlight only the main words within a sentence or paragraph; in these cases we counted the collection of highlighted words belonging to a continuous block of text as one annotation. 9% of the documents discussed with the participants turned out to be part of collaborative work in which two or more people were involved. All except two participants reported that they shared their comments via e-mail or some online communication tool; only two participants shared the same sheet of paper, which contained annotations from both parties. Another valuable observation is that all of the participants who share annotations said that they annotate in a different (more careful) way when they annotate for another reader.

To examine the annotation strategies in more detail, we asked our participants to classify the goal of reading each paper. We distinguished between the following categories: reading for writing, reading for learning, reviewing, and other. Reading for writing is the common activity of reading related articles to extract ideas and references specifically for the purpose of writing. Reading for learning includes the act of getting updated in a particular field, reading about new publications, or learning new approaches to apply in some other activity, such as solving math problems or implementing algorithms. Reviewing consists exclusively of reading papers to give feedback to the author. Finally, any other type of reading was categorized as other. Table 3 breaks down the results of the field research by type of reading activity.

Table 3. Results by reading goal
                                       Writing   Learning   Review   Other
Articles                                   31        23         9       3
Articles annotated                         28        16         7       3
Annotations/Page                          2.36       4.7      1.11     6.3

Annotation type (share per reading goal):
Highlighting/marking section headings    10.5%     7.5%      9.4%    4.8%
Highlighting/marking text                66.0%    82.9%     40.6%   72.2%
Problem solving                           0.1%      –         0.9%     –
General notes (notes in the margins)     23.3%     9.6%     49.1%   23.0%
In addition to comments put directly on paper, three participants also attached post-it notes with annotations to the printed documents. Of the 66 articles analyzed, 10 (15%) did not contain any annotation.

Fig. 2. Examples of annotated papers examined during the field research

One participant who did not have any annotations in her printed papers said that she keeps her annotations in a separate file on her computer for each digital article. Two other participants said that they first do a very quick reading on the
computer to check the relevance of the text, and print it only if it is relevant. In their own words: "First I read on the computer to see if I really need to print". We noticed that in many cases participants also used different marker colors for highlighting, with the purpose of attributing different levels of importance. Among the annotations we identified many different ways of signaling important parts of the text. As an example, one participant created her own symbology for annotating: squares around terms mean new terminology, underlining means definitions, and circles mean open questions or issues about some topic. These annotation symbols were combined with highlighting (indicating importance) and often even overlapped. One last interesting observation was the behavior of one participant who keeps two printed versions of every paper: one with annotations and one clean print. As she stated, the clean print is for future reading, when she may want to get the idea without the influence of her previous readings. Despite the vast number of highlighting annotations on paper, none of the participants used mechanisms that allow persistent highlighting on digital documents or Web resources.

In summary, we identified two main clusters of annotations: relevance-adjustment annotations, where implicit highlighting and signaling indicate different levels of importance in the text, and contributive annotations, where explicit readable remarks are attached to the text. As a last part of our interviews, we asked the subjects to describe how they arrange the papers that lie on their desks. The relevant categories described were topic, quality, importance, date of reading and task. This simple observation may guide us in designing better metaphors for the possible dimensions when trailing online resources.
5 Discussion

From the results presented above we can sketch some impressions of users' behaviors. Apparently, the high amount of highlighting/marking signifies "laziness" on the part of the annotators. This laziness is in fact a way to reduce cognitive overload (caused by switching between tasks) and to keep focused on the main task (the reading itself) while still providing meaningful cues. The higher number of annotations per page for the "learning" papers shows that these annotations have a clear function for memorizing certain parts of the text (by actively doing something with it). The category of "review" papers shows a higher frequency of notes in the margin compared to the other categories; these are almost certainly comments to be included in the review. Additionally, the low number of highlights shows that these readers are not concerned with signaling for future attention.

From this we draw the conclusion that there is indeed a significant difference between the goals and behaviors of paper-based and digital online annotations. The papers with a higher number of notes and a lower number of highlights (as explained before, an action that means signaling for future attention) indicate that the reader is not concerned about a future reading. On the other hand, online annotations (notes in the margin, as used in the experiment) are mostly used on resources that are meant to be reused and re-found in a future work session. We conclude that, although online annotations are similar in their
structure to margin notes, their scope is more comparable to highlighting, where the main goal is signaling for future attention and facilitating re-finding. Within the collected data on online annotations, the average number of words (4.56) in private annotations does not reach the average length of short sentences, while shared annotations (an average of 10.35 words per annotation) fit the average of short and medium sentences statistically measured in plain-text documents [27]. We deduce that private annotations in general do not contain full sentences; as in the paper-based texts, they are just a perspective on the topic context, or keywords and a classification of a section (or resource) – in the digital environment mostly used for re-finding. The shared online annotations clearly hold more explicit meaning: the authors tend to be clearer when sharing their thoughts. This shows the different behavior and concerns of the individual when writing personal versus shared annotations.

Although differences have been found between paper and digital annotations, if we apply the same reading-goal classification to online reading and translate the meanings of the annotations, we find that in-context note annotations are the optimal form for attention signaling, summarization, interpretation and improved bookmark search, in both personal and shared environments. Taken together, our two studies suggest some design implications for annotation systems. First of all, the annotation action must be effortless in all senses: easy to access and visualize, with as few interactions as possible, and with in-context interactions to minimize the loss of focus. Online resources can be used for all sorts of reading tasks; thus annotation systems must supply all forms of annotation, not by mimicking their paper representations but by providing the means to achieve the same goals.
The necessary effort will still require some engagement from the user; however, the benefits discussed should outweigh it and pay off for the user: re-finding tools, easy manipulation and organization of annotations and resources, and sharing capabilities.
6 Conclusions

In this paper we discussed the role of annotation in learning in general and in e-learning in particular. From the background research it has become clear that the act of annotating supports the learning process in paper-based situations. However, when it comes to online learning, annotation becomes an additional cognitive burden, due to the lack of suitable tools and intrinsic problems related to reading from a screen and interacting via keyboard and mouse. From the comparison of online annotation with paper-based annotation it becomes clear that there is a difference between the two types. Online annotations were typically short and had a specific purpose in terms of re-finding, sharing or commenting, whereas the high amount of highlighting in paper-based annotations has an intrinsic value of its own. Based on the results, we conclude that the development of annotation tools should emphasize added value by better exploiting the annotations (for example, for enhanced re-finding tools, visual overviews, grouping, sharing and collaborating) rather than trying to mimic 'old-fashioned' paper-based annotation. At the same time, writing an annotation should cost as little effort as possible, as otherwise people will inevitably resort to other ways of getting things done [2].
This poses a design challenge for the development of annotation systems and provides an explanation why these kinds of systems have not found an audience yet. Furthermore, we think that the development of added value for annotations will provide many more opportunities for personalizing the learning environment and for facilitating communication and collaboration between learners. Acknowledgments. The authors' efforts were partly funded by the European Commission in the EU FP7 Network of Excellence Stellar.
References

1. Chatti, M.A., Jarke, M.: The Future of E-Learning: A Shift to Knowledge Networking and Social Software. Int. J. Knowledge and Learning 3(4/5) (2007)
2. van Kleek, M., Karger, D.: Information Scraps: How and Why Information Eludes Our Personal Information Management Tools. ACM Trans. Information Systems 26(4) (2008)
3. Adler, M.J., van Doren, C.: How to Read a Book. Simon and Schuster, New York (1972)
4. Marshall, C.: Annotation: From Paper Books to the Digital Library. In: Proceedings of the 1997 ACM International Conference on Digital Libraries, DL 1997 (1997)
5. Vygotsky, L.S.: Mind in Society. Harvard University Press, Cambridge (1978)
6. Ullrich, C., Borau, K., Luo, H., Tan, X., Shen, L., Shen, R.: Why Web 2.0 is Good for Learning and for Research: Principles and Prototypes. In: Proceedings of the 17th International Conference on World Wide Web, Beijing, China, April 21-25 (2008)
7. Leland, M.D.P., Fish, R.S., Kraut, R.E.: Collaborative Document Production Using Quilt. In: Proceedings of the 1988 ACM Conference on Computer-Supported Cooperative Work, Portland, Oregon, United States, September 26-28, pp. 206–215 (1988)
8. Dillon, A.: Reading from Paper Versus Screens: A Critical Review of the Empirical Literature. Ergonomics 35(10), 1297–1326 (1992)
9. Haas, C.: Writing Technology: Studies on the Materiality of Literacy. Lawrence Erlbaum Associates, Mahwah (1996)
10. O'Hara, K., Sellen, A.: A Comparison of Reading Paper and On-Line Documents. In: Proceedings of CHI 1997, Atlanta, GA, pp. 335–342. ACM Press, New York (1997)
11. Sellen, A., Harper, R.: Paper as an Analytic Resource in the Design of New Technologies. In: Proceedings of CHI 1997, Atlanta, GA, pp. 319–326. ACM Press, New York (1997)
12. Guiard, Y.: Asymmetric Division of Labor in Human Skilled Bimanual Action: The Kinematic Chain as a Model. Journal of Motor Behavior 19(4), 486–517 (1987)
13. Dillon, A.: Designing Usable Electronic Text. Taylor and Francis, London (1994)
14. Mills, C.B., Weldon, L.J.: Reading Text from Computer Screens. ACM Computing Surveys (CSUR) 19(4), 329–357 (1987)
15. Adler, A., Gujar, A., Harrison, B.L., O'Hara, K., Sellen, A.: A Diary Study of Work-Related Reading: Design Implications for Digital Reading Devices. In: Proceedings of CHI 1998, Los Angeles, CA. ACM Press, New York (1998)
16. Gutwin, C., Greenberg, S.: The Effects of Workspace Awareness Support on the Usability of Real-Time Distributed Groupware. ACM Transactions on Computer-Human Interaction (TOCHI) 6(3), 243–281 (1999)
17. Ovsiannikov, I.A., Arbib, M.A., McNeill, T.H.: Annotation Technology. International Journal of Human-Computer Studies 50(4), 329–362 (1999)
18. Fu, X., Ciszek, T., Marchionini, G., Solomon, P.: Annotating the Web: An Exploratory Study of Web Users' Needs for Personal Annotation Tools. In: Grove, A. (ed.) (2005)
19. Pirolli, P.: Information Foraging Theory: Adaptive Interaction with Information. Oxford University Press, Oxford (2007)
20. Farzan, R., Brusilovsky, P.: AnnotatEd: A Social Navigation and Annotation Service for Web-based Educational Resources. In: Reeves, T.C., Yamashita, S.F. (eds.) Proceedings of World Conference on E-Learning, E-Learn 2006, Honolulu, HI, USA, October 13-17, pp. 2794–2802. AACE (2006)
21. Kraut, R., Galegher, J., Egido, C.: Relationships and Tasks in Scientific Research Collaborations. In: Proceedings of the 1986 ACM Conference on Computer-Supported Cooperative Work, Austin, Texas, December 3-5 (1986)
22. Fish, R.: Comparison of Remote and Standard Collaborations. In: Proceedings of the Conference on Technology and Cooperative Work, Tucson, Arizona (February 1988)
23. Marlow, C., Naaman, M., Boyd, D., Davis, M.: HT06, Tagging Paper, Taxonomy, Flickr, Academic Article, To Read. In: Proc. Hypertext 2006. ACM Press, New York (2006)
24. Kawase, R., Nejdl, W.: A Straightforward Approach for Online Annotations: SpreadCrumbs. In: Proceedings of the 5th International Conference on Web Information Systems and Technologies, WEBIST 2009 (2009)
25. Altmann, G.: Verteilungen der Satzlängen (Distribution of Sentence Lengths). In: Schulz, K.-P. (ed.) Glottometrika 9. Brockmeyer (1988)
Learning by Foraging: The Impact of Social Tags on Knowledge Acquisition Christoph Held and Ulrike Cress Knowledge Media Research Center Konrad-Adenauer-Straße 40, 72072 Tübingen, Germany
[email protected],
[email protected]
Abstract. In the last few years, social tagging systems have become a standard application of the World Wide Web. These systems can be considered as shared external knowledge structures of users on the Internet. In this paper, we describe how social tagging systems relate to individual semantic memory structures and how social tags affect individual processes of learning and information foraging. Furthermore, we present an experimental online study aimed at evaluating this interaction of external and internal structures of spreading activation. We report on effects of social tagging systems as visualized collective knowledge representations on individual processes of information search and learning. Keywords: Tagging, tag cloud, learning, information foraging, spreading activation.
1 Introduction

In only a few years, the Internet has changed fundamentally: it has developed from a platform where people simply retrieved information from a few providers into an active network of Web users who frequently contribute and exchange content [1]. Millions of people use social software applications such as wikis, blogs and social tagging systems and participate in the Web 2.0, creating and providing vast amounts of information every day. This leads to the question of how people can benefit from all this user-generated information on the Web and, furthermore, how external knowledge on the Internet and individual knowledge may cross-fertilize. Regarding this specific question, only a few theoretical frameworks have been developed [2,3] and, as stated by Fu in 2008 [3], surprisingly little is known about how Web 2.0 technologies may directly interact with individuals at the knowledge and cognitive level. In this paper, we address this question by investigating the technology of social tagging. We focus on the question of how social tags may influence individual processes of information seeking and knowledge acquisition. To this end, we provide a cognitive perspective on social tagging and present first results of an experimental study which investigates processes of navigation and learning.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 254–266, 2009. © Springer-Verlag Berlin Heidelberg 2009
2 Theoretical Background

2.1 Characteristics of Social Tagging Systems

Social tagging is the activity of annotating and classifying digital resources with keywords (tags as metadata). Digital resources may include bookmarks (e.g., delicious.com), pictures (e.g., flickr.com), or products such as books (e.g., librarything.com, amazon.com) or wine (e.g., vinorati.com, snooth.com). Each user can choose individual tags for stored resources. Such tags reflect personal associations, categories and concepts and are individual representations, based on meaning and relevance to that individual. On the one hand, tags help users to structure, organize and re-find vast amounts of their own stored digital resources and information on the Web. On the other hand, tags offer a social aspect: other users' tags can be used as navigation links for exploratory search processes. Browsing through social tagging systems may lead to finding new resources which were stored by other users.

Moreover, social tagging systems provide the feature of aggregating tags from different users. This way, digital resources can be described by the annotated tags of all users. These descriptions develop in a bottom-up process of collective tagging activities and represent the "folksonomy" of a specific tagging community. They may essentially differ from classifications or metadata created by single experts in top-down processes. The strength of the connection between annotated tags and a resource can vary: tags which are used more frequently for a resource by the tagging community are associated more strongly with it. In addition, tags themselves are connected to each other. Tags which are annotated to the same resource are related by their co-occurrence. The aggregation of all tags of a social tagging community thus leads to a representation of the connections among related tags and the strength of their association.
The more often two tags co-occur, the stronger is the relation between them. Tag clouds typically visualize these connections and their strength: The font size of a tag illustrates the strength of association (see Figure 1).
Fig. 1. Related tags to “red wine” (from vinorati.com)
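As an illustration of the mechanism described above (not taken from any actual tagging system), the co-occurrence strengths behind such a tag cloud can be computed from a list of tagged resources. The resource names, tags and the linear font-size mapping below are invented for this sketch:

```python
from collections import Counter
from itertools import combinations

# Hypothetical tagging data: each resource maps to the tags annotated to it.
tagged_resources = {
    "wine-42": ["red wine", "bordeaux", "france"],
    "wine-17": ["red wine", "merlot", "france"],
    "wine-80": ["red wine", "bordeaux"],
}

# Count how often each pair of tags co-occurs on the same resource.
cooccurrence = Counter()
for tags in tagged_resources.values():
    for a, b in combinations(sorted(set(tags)), 2):
        cooccurrence[(a, b)] += 1

def related_tags(tag):
    """Tags related to `tag`, strongest co-occurrence first."""
    related = {(a if b == tag else b): n
               for (a, b), n in cooccurrence.items() if tag in (a, b)}
    return sorted(related.items(), key=lambda kv: -kv[1])

def font_size(count, min_px=10, max_px=32, max_count=None):
    """Map a co-occurrence count linearly onto a font-size range."""
    max_count = max_count or max(cooccurrence.values())
    return min_px + (max_px - min_px) * count / max_count

for tag, n in related_tags("red wine"):
    print(f"{tag}: co-occurrence={n}, font={font_size(n):.0f}px")
```

For the toy data, "bordeaux" and "france" co-occur with "red wine" twice each and would be rendered largest, while "merlot" gets a smaller font, mirroring how tag clouds visualize association strength.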
Social tagging systems can be considered shared external knowledge structures of communities [3]. Furthermore, these structures can be regarded as a kind of external and transparent spreading activation network of tagging communities. Tags and resources represent the nodes in this network and are connected to each other by associations of varying strength. When users select, or activate, a tag or a resource, related tags and their strength of association or activation can be visualized
Fig. 2. Illustration of a spreading activation network of related tags
in tag clouds (Figure 2 illustrates such a network of related tags). These processes within social tagging systems are similar to the spreading activation processes of individual semantic memory, which are explained in the next section.

2.2 Models of Semantic Memory

A problem quite analogous to storing and retrieving vast amounts of information on the Internet and in social tagging systems is the internal representation of knowledge in the semantic memory system of individuals. Although long-term memory is a repository of thousands of different facts and concepts, we are still able to retrieve single items of information quickly and almost effortlessly. In the following, we take a closer look at cognitive models of storage and retrieval in human memory.

The most elaborated cognitive models of semantic memory are based on the idea that separate units of knowledge, like facts, are connected to each other in the human brain as a vast network of associations (e.g., [4,5,6]). These cognitive structures, or chunks, are considered nodes of a large network. Each chunk has certain associations to other chunks. An important assumption of these models is that associations may differ in strength. The strength of associations derives from past experiences and mainly reflects the co-occurrence of two chunks in a meaningful context. For instance, the date "1492" will for most people be associated with "the discovery of America". This represents a strong association, because these two pieces of information co-occur quite frequently and have repeatedly been placed in a meaningful context.

The retrieval of chunks is performed by a process of spreading activation from one chunk to another. A chunk must be activated by connected chunks to receive a certain level of activation. This process is determined by the strength of the associations among chunks.
It is more probable that a chunk is activated by another chunk when the two are strongly connected. An example of such a process would be the activation of a chunk associated with "red wine" and "France": most people would activate the word "Bordeaux" more easily than the word "Loire". Several experiments support the spreading activation model of semantic memory. For instance, the phenomenon of associative priming demonstrates that the retrieval of chunks is facilitated by a previous activation of strongly associated chunks. In this way, a chunk which is meant to be retrieved can be pre-activated by other associated
concepts, thus allowing faster access and retrieval. Several experiments show this effect of associative priming and spreading activation. For example, in an experiment by Meyer and Schvaneveldt [7] subjects had to decide if pairs of words were lexically correct. It was demonstrated that identification of the word pairs was significantly faster when a strong semantic association existed (e.g., “bread” and “butter”) than in those cases where such an association was weak (e.g., “nurse” and “butter”). The priming effect shows that retrieval of information is facilitated by the activation strength of associated nodes and that the association of chunks is decisive for knowledge representation in semantic memory structures. 2.3 Spreading Activation and Information Foraging on the Web In the last chapter we described internal processes in semantic memory systems. How does this relate to actual behavior of navigation and information retrieval on the Web? One of the most influential theories of navigation on the Web is the Information Foraging Theory by Pirolli and Card [8,9]. This theory describes processes of link selection and navigation paths on the Internet, the so-called information foraging. One of the core concepts of this theory is the information scent. This concept is based on Brunswik’s lens model [10], which was extended to deal with Web foraging (see Figure 3). Web users have to decide which links or tags may lead to a desired - and not directly accessible - resource (a distal object). Web users have to make a judgement based on these proximal cues (e.g., links or tags) which of them may lead to a desired resource and whether the navigation path is successful. Proximal cues with a high probability of leading to a desired distal resource have a high information scent, and therefore, a high probability of being selected as link. Proximal Cues (Links)
Fig. 3. Brunswik’s Lens Model (adapted from Pirolli [9])
The individual evaluation of information scent is based on the internal spreading activation structures of each Web user’s semantic memory. A search goal – the desired distal resource – activates cognitive structures in the semantic memory of a user. Based on these activations, a user decides which link to select, which navigation path to pursue and which to skip. A simplified example may make this clearer: Imagine a Web user searching for a typical red wine from France. In her semantic memory the chunks “red wine” and
C. Held and U. Cress
“France” are strongly connected to “Bordeaux”. Associations to other chunks, like “Loire”, “Merlot” or “elegant”, also exist, but have a much weaker spreading activation. On a Web site of an Internet wine dealer she has to choose a link in order to see a selection of available wines. These links may represent regions or characteristics of French wine. The link selection of the user is based on the strength of spreading activation for these available links, given the desired distal resource. The highest spreading activation will lead to a high information scent and, accordingly, to a high probability of link selection (see Figure 4).
Fig. 4. Illustration of information scent (freely adapted from Pirolli [9])
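The link between spreading activation and information scent can be illustrated with a toy computation. The following is our own minimal sketch, not Pirolli’s actual model; the association strengths, labels, and the softmax choice rule are all invented for the wine example above.

```python
# Toy illustration of spreading activation and information scent:
# activation spreads from goal chunks to candidate links, and the most
# strongly associated link receives the highest selection probability.
import math

# Hypothetical association strengths between goal chunks and link labels.
associations = {
    ("red wine", "Bordeaux"): 0.9, ("France", "Bordeaux"): 0.8,
    ("red wine", "Loire"): 0.2,    ("France", "Loire"): 0.5,
    ("red wine", "Merlot"): 0.4,   ("France", "Merlot"): 0.3,
}

def scent(goal_chunks, link):
    # Activation spreading to a link = sum of its associations with the goal.
    return sum(associations.get((g, link), 0.0) for g in goal_chunks)

def choice_probabilities(goal_chunks, links):
    # Softmax over scent values: stronger scent -> higher selection probability.
    scores = [math.exp(scent(goal_chunks, link)) for link in links]
    total = sum(scores)
    return {link: s / total for link, s in zip(links, scores)}

probs = choice_probabilities(["red wine", "France"], ["Bordeaux", "Loire", "Merlot"])
# "Bordeaux" receives the highest probability of being selected.
```

With the assumed strengths, “Bordeaux” accumulates the most activation from the goal chunks and therefore dominates the link choice, mirroring the example in the text.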
2.4 Interaction of Social Tagging Systems, Individual Learning and Foraging

In this section we address the research issue of how the external knowledge of tagging communities and individual knowledge representations may interact. More specifically, we focus on the question of whether the spreading activation structures of tagging communities may have an impact on the individual spreading activation structures of Web users. This process of individual learning and change of semantic memory structures would amount to a transfer of knowledge from communities to individuals and would consequently modify the information foraging behavior of users. Many search tasks on the Internet require browsing activities and cannot be accomplished with the help of short queries typed into search boxes [11]. In exploratory search tasks, users must choose among several links and navigation paths in order to find a desired resource. Even if users have a more or less specific idea of the resource they want to find, the navigation path and the eventual result may still be influenced by the available links. Individual spreading activation and, consequently, link selection may change during information foraging and navigation [3,12]. These learning processes can be considered a byproduct of information foraging behavior and are mostly non-intentional and incidental [13]. Social tagging systems may provide a browsing environment that facilitates informal learning and may lead to an improved process of information foraging. The framework of Cress and Kimmerle [2] describes a theoretical model of the interplay between collective knowledge structures and processes of individual learning. In this case, the user-generated social artifact of tags represents the externalized knowledge structure of a community, which emerges from the collective process of annotating resources with tags. This social tagging system interacts with an individual
user in a search task. The framework is based on both a systemic and a cognitive approach. From a systems-theoretical point of view [14], people’s cognitive systems are different from a social system, which is represented by an artifact. Cognitive systems and social systems have different kinds of operations, and because of their different modes of operation the two systems cannot simply merge. But one system can affect another in its development by irritating it. Irritation is interpreted here in the sense of Piaget’s cognitive conflicts [15], and it is assumed that cognitive systems develop when people solve cognitive conflicts. A cognitive conflict exists when people’s prior knowledge is incongruent with the information of the external artifact. From an individual perspective, a cognitive conflict can be solved by processes of equilibration and by internalizing information from the environment. People either add new information to their prior knowledge or restructure their present knowledge and adapt to the new information. In this paper we focus on processes of internalization, and based on the model of Cress and Kimmerle [2], we assume that cognitive conflicts result from differing associations and spreading activation networks of a social tagging system and an individual system of semantic memory. This irritation may lead to a process of cognitive equilibration, an adaptation to the community’s knowledge and, consequently, a change of individual spreading activation structures.

2.5 Related Work

Research on social tagging systems and tag clouds has mainly focused on the description of regularities in social tagging systems, like frequency-rank distributions [16,17], and on the development of new tagging tools and their technical aspects [18,19]. Research on cognitive aspects of social tagging systems and the interplay between external and internal knowledge structures is reported by Fu [3].
Within the framework of Distributed Cognition [20], he presents a rational model of social tagging and provides evidence for the interaction of social and cognitive systems. The study shows the impact of externalized knowledge structures on processes of internalization, especially the formation of mental categories. Research on tag clouds shows that the size of tags influences processes of visual attention, recognition and tag selection [21,22]. These results are based on visual features of tag clouds and do not focus on aspects of externalized knowledge within social tagging systems. The semantic meaning of tags is taken into account in the study by Rivadeneira et al. [22], which reports effects of font size on processes of learning, in particular impression formation.
3 Hypotheses

Based on these theoretical considerations, we interpret the use of social tags as a situation where two systems with different kinds of knowledge come together: a user with her specific semantic memory (prior knowledge) is confronted with tag clouds. These can comprise and visualize the externalized knowledge of a community with its specific structure of spreading activation. This externalized knowledge may interact with the internal knowledge of users in the process of information seeking with tag clouds.
When the knowledge of an individual and that of a community are incongruent, a user may internalize the externalized structure of tags, which represents a kind of externalized spreading activation of the community, and may change her individual structure of spreading activation accordingly. This may also lead to a corresponding change of information foraging behavior. Because the spreading activation of the community is represented by the size of the tags, we expect that presenting tag clouds with weighted tags has a higher potential for changing individual spreading activation than tag clouds with non-weighted tags, where the specific strengths of external spreading activation are not visible. In the following, we address the scenario of a so-called negative bias: subjects’ prior knowledge and spreading activation contradict the information represented by the community’s tag clouds.

3.1 Navigation

Prior knowledge about a domain and the corresponding internal spreading activation network strongly influence Web navigation and information foraging behavior. Web users follow those links which seem most appropriate for finding a desired goal. This assessment of links is based on their prior associations and spreading activation related to the desired goal. A user who thinks that a link is closely related to a desired goal will select it. When users have deficient knowledge about a domain, the process of information foraging will lead to a deficient or suboptimal search result as well. In a browsing environment where the externalized knowledge of a community and the strength of its associations are not visible, the navigation of Web users is primarily based on their prior knowledge. In contrast to such an environment, social tagging systems provide the opportunity to visualize the knowledge of a community and, in particular, the strength of specific associations with the help of weighted tags.
We assume that this visualization affects the process of information foraging: when internal and external associations contradict each other, we expect that users will consider the knowledge of the community, change their information seeking behavior and adapt to the information given by the tag cloud. Based on this, we state the following two hypotheses for navigation:

H1: With a tag cloud of non-weighted tags, people will primarily follow links which are closely related to their prior knowledge. A weighted tag cloud will reduce the focus on those links, and users will follow them less frequently.

H2: With weighted tag clouds, users will select those links which are suggested by the community more frequently (compared to an environment of non-weighted tags).

3.2 Change of Individual Spreading Activation

Browsing and navigation may lead to a process of incidental learning and knowledge acquisition. Hence, in a scenario of incongruent internal and external spreading activation, users may adapt to the knowledge of a community. The users’ prior associations and semantic connections may change and attenuate, whereas associations of the community may be internalized. This may lead to a change of the subjects’ association strengths and a modification of the individual structure of spreading activation. For this process of incidental learning, we assume the following specific hypothesis:
H3: Users with weighted tag clouds show a higher degree of modification of association strengths in the process of navigation. This means that, compared to users without weighted tags, those associations of the subjects which are based on the “wrong” negative bias (the prior knowledge) become weaker, and associations with tags suggested by the community become stronger. These users adapt to the knowledge of the community more strongly and reduce the strength of their negative bias to a greater extent.
4 Method

To test these three hypotheses, we ran an experiment in an online setting. In the following, we present the sample, materials, procedure, design and dependent measures of the study.

4.1 Participants

We tested 115 participants, who were recruited on Amazon Mechanical Turk (mturk.com), an Internet marketplace for engaging users in online micro-tasks [23]. Of the 115 subjects who were considered for the experiment, 57 were female and 58 male. The average age was 29.61 years (SD = 10.22). Subjects came from 19 different countries, most of them from the United States (53.0%), followed by India (16.5%) and Canada (6.0%). Of these subjects, 93.0% spent at least 11 hours per week on the Internet, and 97.4% rated their own knowledge of new Internet technologies as good, very good or excellent.

4.2 Materials and Procedure

We chose the topic domain of wine, in particular the largely unknown domain of wine from the country of Georgia. Participants were presented with tag clouds containing tags that represented typical wine regions, grape varieties and wine aromas of Georgian wine. All tags were created by the authors and were similar to real characteristics of Georgian wines. Subjects were only presented with tag clouds; no corresponding resources were shown. The participants’ task was to search for wines in order to build up a typical wine cellar of Georgian wine. For each tag cloud, subjects were asked to click on the tag which seemed most appropriate for finding typical wines of Georgia. Subjects had to select and click on one tag of each presented tag cloud (as a navigation link) for finding wines from a typical region, grape variety or wine aroma (see Figure 5). For each tag cloud, a specific description of the presented tags was given. After a tag had been clicked, it was color-marked, and 2 seconds later the next tag cloud appeared. The next tag cloud was independent of the previous selection, and a new tag selection had to be made.
Overall, each participant was presented with 9 tag clouds, 3 for each of the wine characteristics region, grape variety and aroma. The tag clouds represented tags related to features like Central Georgia, red wine etc. The 3 tag clouds for one characteristic were very similar or identical to each other.
Fig. 5. Screenshot of a short task description and a related tag cloud
Before the navigation task, subjects completed a short survey on demographics and background, followed by basic information on social tagging systems, related tags and tag clouds. After that, the subjects received the information that the tag clouds originated from a social tagging system dealing with wines (especially Georgian wines). The community members of this social tagging system were introduced as wine lovers and experts. The navigation task was preceded by a detailed task description and the specifics of the task procedure. When subjects had finished the navigation task, they had to complete a knowledge test. No information about the knowledge test was given before or during the navigation task, so there was no indication for the subjects that this was a learning experiment. There were no time limits during any part of the experiment.

4.3 Design and Dependent Measures

As independent variable, the visualization of tag clouds was varied. In one condition all tags had the same size (non-weighted tags). In the other condition (weighted tags), the size of the tags varied, and the corresponding association strengths of the community were visible (see Figure 6). This independent variable was tested in a between-subjects design. The overall design also included varied prior knowledge of Georgian wine characteristics, like typical Georgian wine regions, grape varieties and wine aromas. The information about the Georgian wine characteristics, which represented the manipulated prior knowledge of the subjects, was given to the subjects before the navigation task started. In this paper we report on the condition of a negative bias, in which subjects received information contrary to the community’s knowledge. An example of a negative bias is the prior knowledge that “Tsageri” is the most typical wine region of Georgia, although the tag cloud of the community suggests that “Kakheti” is more typical (see Figure 6).
This information (e.g., that “Tsageri” is the most typical wine region of Georgia) was presented as a “comment from an anonymous blogger” and was provided before the navigation task started. With this information, a prior knowledge was induced which created a discrepancy between the internal memory structure of a user and the external knowledge of the community represented in the tag clouds.
Fig. 6. Screenshots with weighted tags (left) and non-weighted tags (right)
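The two visualization conditions can be sketched as a simple mapping from community association strengths to font sizes. This is a hypothetical illustration of how such weighted tag clouds are commonly rendered, not the actual code used in the experiment; the tag names, weights, and pixel bounds are assumptions.

```python
def tag_font_sizes(weights, min_px=12, max_px=36, weighted=True):
    # Map each tag's community weight to a font size; in the non-weighted
    # condition every tag gets the same (mid-range) size.
    if not weighted:
        return {tag: (min_px + max_px) // 2 for tag in weights}
    lo, hi = min(weights.values()), max(weights.values())
    span = (hi - lo) or 1  # avoid division by zero if all weights are equal
    return {tag: round(min_px + (w - lo) / span * (max_px - min_px))
            for tag, w in weights.items()}

# Hypothetical tagging frequencies: the community tags "Kakheti" far more often.
sizes = tag_font_sizes({"Kakheti": 95, "Tsageri": 20, "Imereti": 40})
# "Kakheti" is rendered largest, "Tsageri" smallest.
```

In the non-weighted condition (`weighted=False`) the same data yields uniform sizes, so the community’s association strengths remain invisible to the user.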
The subjects were randomly assigned to one of the conditions of tag visualization. Each participant randomly received a negative bias for either Georgian wine regions, grape varieties or wine aromas. The order of presentation of the tag clouds for each wine characteristic (region, grape variety and aroma) was varied within subjects according to a Latin square. Subjects were randomly assigned to one of these orders. We determined the navigation behavior of subjects by analyzing log files and calculated the average percentage of how often a subject selected a specific tag when presented with the 3 tag clouds of a wine characteristic. As a measure of association strength, subjects had to complete a test at the end of the experiment. They had to indicate the strength of their association for each wine characteristic (e.g., “How typical is the wine region Kakheti for Georgia?”) by a rating. For analyzing the dependent variables, we combined the results across all subjects of those conditions in which the subject had received a negative bias as prior knowledge (i.e., for each subject either region, grape variety or aroma).
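The Latin-square counterbalancing described above can be sketched with a cyclic construction, which guarantees that each wine characteristic appears exactly once in each presentation position across the set of orders. This is our illustration of the general technique, not the authors’ implementation.

```python
def latin_square_orders(conditions):
    # Cyclic Latin square: row i starts at condition i, so every condition
    # occurs exactly once in every position across the set of orders.
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

orders = latin_square_orders(["region", "grape variety", "aroma"])
# Subjects are then randomly assigned to one of these presentation orders.
```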
5 Results

5.1 Navigation

In a first step, we analyzed subjects’ navigation behavior by t-tests (see Figure 7). Two different kinds of tags were relevant for testing H1 and H2. For H1, those tags were of interest which represented the subjects’ prior knowledge (e.g., the prior knowledge for wine region was a high spreading activation of “Tsageri”, see Figure 6). This prior knowledge reflects the negative (or “wrong”) bias which was presented by the blogger before the navigation task started. On average, subjects in the environment of non-weighted tags selected these tags significantly more often (M = 71.19, SE = 4.22) than subjects with weighted tags (M = 39.29, SE = 4.58; t(113) = -5.13, p < .001). The tags of interest for H2 were those with the highest spreading activation of the wine community (e.g., “Kakheti” as wine region, see Figure 6). In the weighted tag clouds these tags were most prominent and reflected the externalized knowledge of the community. This externalized knowledge of the community was contrary to the prior knowledge of the subjects (e.g., “Tsageri”), which represented the individual negative bias. On average, subjects in the environment of weighted tags selected those tags which were suggested by the community significantly more often (M = 36.90, SE = 4.22) than subjects with non-weighted tags (M = 4.52, SE = 1.50; t(68.71) = 7.23, p < .001).
Fig. 7. Selection of tags representing the negative bias (H1, left) and representing the knowledge of the community (H2, right)
5.2 Change of Individual Spreading Activation
As a next step, we report on the strength of the negative bias after the navigation process, i.e., how much stronger the individual spreading activation of the negative bias still is than the association with tags suggested by the community (H3). To this end, we computed the difference between the subjects’ ratings of tags representing the individual prior knowledge (e.g., “Tsageri”) and those representing the contradicting externalized knowledge of the community (e.g., “Kakheti”) and analyzed this difference - the strength of the negative bias - by a t-test (see Figure 8). In line with our expectation, the results show that, on average, subjects with weighted tags have a significantly lower negative bias (M = .57, SE = .30) than subjects with non-weighted tags (M = 2.20, SE = .25; t(113) = -4.17, p < .001).
Fig. 8. Strength of negative bias (difference of ratings)
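The dependent measure for H3 is a simple difference score. The following minimal sketch assumes the bias is computed as the prior-knowledge rating minus the community-tag rating (positive values meaning the bias persists); the function names and example ratings are our invention.

```python
def negative_bias_strength(rating_prior, rating_community):
    # Positive values mean the "wrong" prior-knowledge tag (e.g., "Tsageri")
    # is still rated as more typical than the community's tag (e.g., "Kakheti").
    return rating_prior - rating_community

def mean_bias(rating_pairs):
    # Average bias across subjects, as plotted in Figure 8.
    return sum(negative_bias_strength(p, c) for p, c in rating_pairs) / len(rating_pairs)

m = mean_bias([(5, 2), (4, 3), (3, 3)])  # three hypothetical subjects' ratings
# m = (3 + 1 + 0) / 3
```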
6 Discussion and Future Research

The results presented in this study support the hypothesis that social tagging systems can trigger learning and knowledge acquisition during processes of information seeking. Our results show that individual spreading activation can be changed by the visualized knowledge representation of a community. The externalized knowledge structure of many Web users affects the cognitive system of individuals and leads to a process of learning. Individual users may benefit from this interaction of collective and individual knowledge when foraging the Web. When people search for information, they follow their prior knowledge. Deficient prior knowledge may lead to a
deficient and suboptimal outcome. Our results suggest that social tagging systems may help to improve the process of information foraging by changing the internal spreading activation of users. With the technology of tagging, users can take advantage of the wisdom of crowds [24] and use information provided by thousands of other users. To find good information and resources on the Internet, users need to know which navigation path to select. Social tags may help them choose a successful path and acquire knowledge while following it. Our experiment provides first results which show the potential of social tagging systems for learning processes and for improving individual search processes. It represents a first step and starting point for further research. There are some limitations of our experiment which we have to consider and which we hope to address in our next studies. In our scenario, the presented tag clouds were independent of previous tag selections of subjects, and no feedback on the specific steps of navigation was provided. Future experiments could be based on a more complex, dynamic and interrelated social tagging system. Additionally, digital resources could be implemented in the experimental setting as well. Further research could also address the credibility and expertise of social tagging communities, as well as the strength of users’ prior knowledge or biases, and their influence on processes of learning and information foraging. The identification of important factors that affect knowledge acquisition and information retrieval in social tagging systems may lead to a better understanding of the potential of social tagging and the challenge of how to benefit from the wisdom of crowds on the World Wide Web.
References

1. O’Reilly, T.: What is Web 2.0?, http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html
2. Cress, U., Kimmerle, J.: A systemic and cognitive view on collaborative knowledge building with wikis. International J. of Computer-Supported Collaborative Learning 3, 105–122 (2008)
3. Fu, W.: The microstructures of social tagging: a rational model. In: Proceedings of the ACM 2008 Conference on Computer Supported Cooperative Work, pp. 229–238. ACM, New York (2008)
4. Anderson, J.R.: Language, memory, and thought. Lawrence Erlbaum, Hillsdale (1976)
5. Anderson, J.R.: Cognitive psychology and its implications. Freeman, San Francisco (1980)
6. Collins, A., Loftus, E.: A spreading activation theory of semantic processing. Psychological Review 82, 407–428 (1975)
7. Meyer, D.E., Schvaneveldt, R.W.: Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. J. of Experimental Psychology 90, 227–234 (1971)
8. Pirolli, P., Card, S.K.: Information foraging. Psychological Review 106(4), 643–675 (1999)
9. Pirolli, P.: Information foraging theory: Adaptive interaction with information. Oxford University Press, New York (2007)
10. Brunswik, E.: Perception and the representative design of psychological experiments. University of California Press, Berkeley (1956)
11. Marchionini, G.: Exploratory search: From finding to understanding. Comm. of the ACM 49, 41–46 (2006)
12. Qu, Y., Furnas, G.W.: Sources of structure in sensemaking. In: CHI 2005 Extended Abstracts on Human Factors in Computing Systems, pp. 1989–1992. ACM, New York (2005)
13. Cress, U., Knabel, O.B.: Previews in hypertexts: Effects on navigation and knowledge acquisition. J. of Computer Assisted Learning 19(4), 517–527 (2003)
14. Luhmann, N.: Social systems. Stanford University Press, Stanford (1995)
15. Piaget, J.: The development of thought: Equilibration of cognitive structures. The Viking Press, New York (1977)
16. Cattuto, C., Loreto, V., Pietronero, L.: Semiotic dynamics and collaborative tagging. Proceedings of the National Academy of Sciences of the United States of America 104(5), 1461–1464 (2007)
17. Golder, S., Huberman, B.A.: Usage patterns of collaborative tagging systems. J. of Information Science 32(2), 198–208 (2006)
18. Hassan-Montero, Y., Herrero-Solana, V.: Improving tag-clouds as visual information retrieval interfaces. Paper presented at the International Conference on Multidisciplinary Information Sciences and Technologies, Merida, Spain (October 2006)
19. Hong, L., Chi, E.H., Budiu, R., Pirolli, P.: SparTag.us: a low cost tagging system for foraging of web content. In: Proceedings of the Working Conference on Advanced Visual Interfaces 2008, pp. 65–72. ACM Press, New York (2008)
20. Hutchins, E.: How a cockpit remembers its speed. Cognitive Science 19, 265–288 (1995)
21. Bateman, S., Gutwin, C., Nacenta, M.: Seeing things in the clouds: the effect of visual features on tag cloud selections. In: Proceedings of the 19th ACM Conference on Hypertext and Hypermedia 2008, pp. 193–202. ACM Press, New York (2008)
22. Rivadeneira, A.W., Gruen, D.M., Muller, M.J., Millen, D.R.: Getting our head in the clouds: toward evaluation studies of tagclouds. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2007, pp. 995–998. ACM Press, New York (2007)
23. Kittur, A., Chi, E.H., Suh, B.: Crowdsourcing user studies with Mechanical Turk. In: Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems 2008, pp. 453–456. ACM, New York (2008)
24. Surowiecki, J.: The wisdom of crowds: why the many are smarter than the few and how collective wisdom shapes business, economies, societies and nations. Doubleday, New York (2004)
Assessing Collaboration Quality in Synchronous CSCL Problem-Solving Activities: Adaptation and Empirical Evaluation of a Rating Scheme

Georgios Kahrimanis¹, Anne Meier², Irene-Angelica Chounta¹, Eleni Voyiatzaki¹, Hans Spada², Nikol Rummel², and Nikolaos Avouris¹

¹ Human-Computer Interaction Group, University of Patras, GR-26500 Rio Patras, Greece
{kahrimanis@ece,houren@ece,evoyiatz@ece,avouris@}upatras.gr
² Institute of Psychology, University of Freiburg, Engelbergerstr. 41, D-79085 Freiburg, Germany
{anne.meier,nikol.rummel,hans.spada}@psychologie.uni-freiburg.de
Abstract. The work described is part of an ongoing interdisciplinary collaboration between two research teams of the University of Patras, Greece, and the University of Freiburg, Germany, which aims at the exchange of analysis tools and data sets in order to broaden the scope of analysis methods and tools available for Computer-Supported Collaborative Learning (CSCL) support. This article describes the adaptation, generalization and application of a rating scheme which had been developed by the Freiburg team for assessing collaboration quality on several dimensions [1]. The scheme was successfully adapted to suit data gathered by the Patras team in a different CSCL scenario. Collaboration quality is assessed by quantitative ratings of seven qualitatively defined rating dimensions. An empirical evaluation based on a dataset of 101 collaborative sessions showed high inter-rater agreement for all dimensions.

Keywords: Computer-Supported Collaborative Learning (CSCL), rating scheme, interaction analysis, collaboration quality.
1 Introduction

Research on technology-enhanced collaborative learning has become more and more interested in studying not only the conditions and outcomes, but also the interactions and processes involved in collaborative knowledge building. As a consequence, analysis tools have been developed for studying interaction processes in a wide variety of technology-enhanced learning settings, using a variety of methodologies [2, 3, 4]. Recently, however, efforts are being made to achieve greater convergence regarding both theoretical models and analysis tools [5]. One way towards achieving convergence is to adapt existing analysis methods, which have been developed and tested in one learning setting, to a novel learning setting. Tool adaptation does not only save the time and effort that would be necessary for developing a new analysis tool from scratch; it is also a test of whether the theoretical model underlying the tool is capable of capturing important aspects of technology-enhanced collaborative learning across different settings. In this paper, we report on the successful adaptation of an established rating scheme for assessing collaboration quality to data from a novel learning setting, demonstrating that the rating scheme’s dimensions, and thus its underlying theoretical model, are capable of capturing the main aspects of collaboration quality across different technology-enhanced learning settings. The work described is part of an ongoing interdisciplinary collaboration between our two research teams at the University of Patras, Greece, and the University of Freiburg, Germany.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 267–272, 2009. © Springer-Verlag Berlin Heidelberg 2009
2 Tool Adaptation

Originally, the rating scheme had been developed by the Freiburg team for the purpose of analyzing collaboration quality in the context of interdisciplinary problem-solving between medical students and students of psychology who communicated over a desktop video-conferencing system [6]. The scheme employed a multi-dimensional model of collaboration covering aspects of communication, joint information processing, coordination, relationship management, and motivation [1]. The scheme was adapted to suit data gathered by the Patras team in a very different CSCL scenario: dyads of first-year computer science students interacted through Synergo [7], a network-based synchronous collaborative drawing tool that includes a shared whiteboard and a textual communication facility. Students collaborated in 45–75 minute laboratory sessions without having face-to-face contact. The learning domain was algorithm building in computer science. Each dyad was asked to solve an elementary algorithm exercise by developing a flow-chart representation of the described algorithm in Synergo’s shared whiteboard. The rating scheme was adapted to this specific task and setting by adjusting the number and definitions of the dimensions of the original scheme. Two main phases of adaptation were followed; the first resulted in an adapted definition of the rating scheme’s dimensions, and the second served to fine-tune the rating instructions. In the first phase of adaptation, a bottom-up approach, which involved the identification of “best practice” examples in the sample data, was combined with a top-down process, during which the definitions of all original dimensions were reformulated taking into account constraints arising from the specific collaboration setting (e.g., chat communication; design task).
In the second phase of adaptation, the dimensions’ definitions were fine-tuned and illustrated in more detail, grounding each dimension’s theoretical concepts in specific examples of collaboration practice from the data pool of the first round of adaptation.
3 Dimensions of Good Collaboration

In the Synergo algorithm task, good collaboration can be characterized on seven rating dimensions (Table 1), covering the same five aspects of collaboration quality that had been defined in the original tool. The first two dimensions assess the aspect of students’ communication in the Synergo learning environment. First of all, the success students have in achieving seamless and efficient communication is determined by observing the extent to which they maintain collaboration flow, i.e., manage dialogue and actions in a way that facilitates references to earlier utterances and actions and helps
students maintain a joint focus. For example, students must make sure to react to each other’s messages and actions, and must coordinate between the verbal discussion in the chat and the ongoing design of the algorithm in the shared whiteboard. Second, students need to sustain mutual understanding, i.e., work towards “common ground” [8]. For example, students should strive to make their actions and chat messages understandable for their partner, e.g., by telling them which object in the whiteboard they are referring to, by explaining the variables they are using, or by informing their partner about the purpose of their actions in the shared whiteboard. Students should also give each other feedback on their level of understanding, e.g., by sending short affirmative messages, or by asking clarifying questions. Two further dimensions cover the aspect of joint information processing. One dimension, knowledge exchange, assesses how effectively students exchange information and give explanations. Information in this setting refers mainly to elementary knowledge of algorithm concepts and flowchart notation restrictions. For example, students typically develop small parts of the solution in the form of pseudocode notes individually, which they are expected to exchange and explain to each other. Further, a second information processing dimension assesses students’ argumentation quality, e.g., when defending a proposed solution for a part of the algorithm. Negotiating alternatives to the solution and exchanging arguments on the optimal formation of the algorithm also constitute desirable argumentation practices. Another kind of good practice that pertains to the algorithm building domain and relates to this dimension is the “simulation” of the algorithm’s behavior by applying values to the variables. Concerning the aspect of coordination, only one dimension was defined: structuring the problem-solving process.
It assesses the extent to which students follow a coherent and efficient plan for jointly developing the algorithm. For example, students can improve their efficiency by defining subtasks and working on different parts of the algorithm in parallel for some time. Further, they should efficiently distribute their time resources to subtasks of the problem. The aspect of relationship management is covered by a dimension assessing students' cooperative orientation. For example, students are expected to assist each other, and to handle conflicts and disagreements in a constructive fashion. Finally, the motivational aspect of collaboration is assessed by the dimension of individual task orientation, which is rated for each student separately. It assesses the extent to which students are actually committed to solving the task and actively engage in its solution.
4 Empirical Evaluation

The rating scheme was evaluated on a sample of 101 dyads from the Patras data set in order to test whether the instructions provided in the final version of the adapted rating scheme would allow satisfactory inter-rater agreement, and to prepare future activities.

4.1 Rating Procedure

Ratings were made while reviewing a dyad's collaboration based on the data logged by Synergo. Logged activities can be reproduced in a video-like format in Synergo's playback mode, allowing raters to jump back and forth and to replay particularly rich collaboration episodes. Each dimension is rated on a 5-point scale ranging from “very
270
G. Kahrimanis et al.
low” to “very high” collaboration quality on that dimension. A rating handbook stated the scope and purpose of each dimension, gave an operational definition in a short paragraph, and provided raters with illustrative examples. The rating procedure was conducted by two raters, one of whom had already gained experience from the first round of adaptation. One third of the data was rated by both raters jointly as a training phase for the new rater. After finishing this training phase, another 34 dyads were rated by both raters separately in order to establish inter-rater reliability. The rest of the dataset was split into two so that each rater assessed half of the remaining activities.

4.2 Results

In the co-rated sample, absolute agreement of the ratings ranged from 65% (argumentation) to 85% (knowledge exchange); differences of more than one point on the five-point rating scale were very rare. Accordingly, measures of inter-rater reliability (intra-class correlations for absolute values) were high for all dimensions (Table 1). Thus, this part of the empirical evaluation was considered successful. An analysis of the complete sample of 101 dyads further showed that all dimensions (except the individually rated dimension of “individual task orientation”) intercorrelated quite highly (r > .60). On the one hand, this shows that the rating scheme is a useful means of obtaining consistent measures of overall collaboration quality; on the other hand, lower inter-correlations would be desirable for obtaining differential assessments of specific aspects of collaboration.
This is, however, only possible for dyads that show a medium level of collaboration quality overall, while the sample in which the inter-correlation results were obtained also contained many dyads that collaborated either extremely well or extremely poorly and thus obtained extreme ratings on nearly all dimensions (as one would expect, inter-correlations are much lower empirically if only dyads of medium collaboration quality are considered).

Table 1. Rating dimensions and inter-rater agreement for the adapted rating scheme
Rating dimension                             Inter-rater agreement (ICC, co-rated sample, n=34 dyads)
Collaboration Flow                           .88
Sustaining Mutual Understanding              .92
Knowledge Exchange                           .96
Argumentation                                .91
Structuring the Problem Solving Process      .96
Cooperative Orientation                      .96
Individual Task Orientation                  .92
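The intra-class correlations reported in Table 1 can be reproduced from raw ratings. The sketch below implements ICC(2,1) — two-way random effects, absolute agreement, single rater — which matches the paper's description of "intra-class correlations for absolute values"; the rating matrices used for illustration are invented, not the study's actual data.

```python
import numpy as np

def icc_absolute(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: array of shape (n_subjects, k_raters), e.g. one row per dyad
    and one column per rater.
    """
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    row_means = r.mean(axis=1)   # per-subject means
    col_means = r.mean(axis=0)   # per-rater means
    # Two-way ANOVA decomposition of the total sum of squares
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((r - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

For two raters in perfect agreement the coefficient is 1.0; a systematic offset between raters lowers it, which is why the absolute-agreement variant is stricter than a simple correlation.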
5 Conclusions and Future Plans

We have described how a rating scheme that had been developed to assess the quality of students' collaboration in one collaborative learning setting (involving students with complementary knowledge backgrounds engaged in medical decision-making
and collaborating over a desktop-videoconferencing system) was successfully adapted to assess the quality of students' collaboration in a novel collaborative learning setting (involving students with similar knowledge backgrounds engaged in algorithm building and collaborating using the chat and shared whiteboard facilities in the Synergo learning environment). The adaptation and application of the rating scheme were successful in terms of establishing high inter-rater reliability. Although some significant modifications to the rating scheme were made, the resultant version was close enough to the original so as not to violate its core rationale. Thus, the multidimensional model of collaboration underlying the tool has been shown to be applicable across very dissimilar CSCL settings, and to be a useful basis for assessment.

The development of the adapted rating scheme has also paved the way for further research with both practical and methodological implications. Two studies using the rating scheme and the model of collaboration quality underlying it are currently under way. One project studies whether feedback based on the ratings in specific dimensions can lead to improvement of students' subsequent collaboration. A pilot study has already been conducted, in which a human tutor assessed collaboration quality using the rating scheme and then gave feedback assembled according to a corresponding feedback scheme. Feedback that was based on a profile of high and low ratings achieved by a dyad, and thus tailored to students' specific strengths and weaknesses, was effective in improving students' collaboration [9]. As a long-term goal, we further aim to develop technical support for facilitating the rating process so that the ratings may be used more efficiently in real classroom settings.
An interesting new research thread we are following in this context is to use a larger set of activities evaluated with the rating scheme as training data for predicting collaboration quality in unrated activities from the automatic interaction analysis metrics provided by the Synergo tool. Such metrics capture different aspects of collaboration that are reflected in aggregations of events, such as the total number of alterations in students' chat messages or the balance of workspace contributions between the two participants. These automatic measures can then be fed into statistical methods such as regression to predict collaboration quality scores according to the rating scheme; more sophisticated machine learning algorithms can be trained for the same purpose. Success in this study would provide an efficient way to evaluate activities automatically based not only on log-file measures, but also on the in-depth interpretations of researchers. This would open new opportunities for assessing collaboration quality while demanding less work from teachers and researchers. Furthermore, integrating prediction algorithms into Synergo, or into a tool for supervising multiple CSCL activities, could offer students an opportunity for metacognitive elaboration, or scaffold sophisticated and efficient feedback on behalf of teachers.
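The regression idea above can be sketched with ordinary least squares mapping per-dyad automatic metrics to human quality ratings. The metric names and all numbers below are invented placeholders for illustration, not actual Synergo output or study data.

```python
import numpy as np

# Hypothetical per-dyad automatic metrics:
# [chat_messages, whiteboard_actions, contribution_balance]
X = np.array([
    [120, 45, 0.9],
    [ 30, 10, 0.3],
    [ 80, 60, 0.7],
    [ 50, 20, 0.5],
    [140, 70, 0.8],
], dtype=float)
# Corresponding human ratings of overall collaboration quality (1-5 scale)
y = np.array([4.5, 1.5, 4.0, 2.5, 4.5])

# Fit ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(metrics):
    """Predict a collaboration quality score for an unrated activity."""
    return float(coef[0] + coef[1:] @ np.asarray(metrics, dtype=float))
```

With enough rated activities, the fitted model could score unrated sessions automatically; cross-validation against held-out human ratings would be needed before trusting such predictions.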
References
1. Meier, A., Spada, H., Rummel, N.: A rating scheme for assessing the quality of computer-supported collaboration processes. International Journal of Computer-Supported Collaborative Learning 2(1), 63–86 (2007)
2. De Wever, B., Schellens, T., Valcke, M., Van Keer, H.: Content analysis schemes to analyze transcripts of online asynchronous discussion groups: a review. Computers & Education 46, 6–28 (2006)
3. Dillenbourg, P., Baker, M., Blaye, A., O'Malley, C.: The evolution of research on collaborative learning. In: Reimann, P., Spada, H. (eds.) Learning in humans and machines: Towards an interdisciplinary learning science, pp. 189–211. Elsevier/Pergamon, Oxford (1995)
4. Weinberger, A., Fischer, F.: A framework to analyze argumentative knowledge construction in computer-supported collaborative learning. Computers & Education 46, 71–95 (2006)
5. Suthers, D., Law, N., Rose, C., Dwyer, N.: A common framework for CSCL interaction analysis. Pre-conference workshop at the 2008 ICLS conference, Utrecht, The Netherlands (2008)
6. Rummel, N., Spada, H.: Learning to collaborate: An instructional approach to promoting problem-solving in computer-mediated settings. The Journal of the Learning Sciences 14(2), 201–241 (2005)
7. Avouris, N., Margaritis, M., Komis, V.: Modelling interaction during small-group synchronous problem solving activities: the Synergo approach. In: Proceedings of 2nd International Workshop on Designing Computational Models of Collaborative Learning Interaction, ITS 2004, 7th Conference on Intelligent Tutoring Systems, Maceio, Brazil, pp. 13–18 (2004)
8. Clark, H.H., Brennan, S.E.: Grounding in communication. In: Resnick, L.B., Levine, J.M., Teasley, S.D. (eds.) Perspectives on socially shared cognition, pp. 127–148. American Psychological Association, Washington (1991)
9. Meier, A., Voyatzaki, E., Kahrimanis, G., Rummel, N., Spada, H., Avouris, N.: Teaching students how to improve their collaboration: Assessing collaboration quality and providing adaptive feedback in a CSCL setting. Submitted as part of the symposium New Challenges in CSCL: Towards adaptive script support (Nikol Rummel and Armin Weinberger), ICLS 2008, Utrecht (June 2008)
Facilitate On-Line Teacher Know-How Transfer Using Knowledge Capitalization and Case Based Reasoning Celine Quenu-Joiron and Thierry Condamines MIS Laboratory, Université de Picardie Jules Verne, 33 rue Saint-Leu 80000 Amiens, France {celine.quenu,thierry.condamines}@u-picardie.fr
Abstract. Case Based Reasoning (CBR) methods have been used in various domain-specific applications, mainly dedicated to decision making. The aim of this paper is to present how CBR is integrated into an educational project, in order to contribute to lifelong teacher training. In fact, the acquisition of professional know-how, for example in the class management domain, can constitute a major difficulty for novice teachers. Thus, the TETRAKAP project (TEacher TRAining by Knowledge cAPitalization) aims to develop a web community platform dedicated to knowledge capitalization and know-how transfer between experienced teachers and beginners. Keywords: Case-Based Reasoning, teacher training, knowledge capitalization, know-how transfer.
1 Introduction

Case-Based Reasoning (CBR) is a reasoning-by-analogy approach on which part of today's computerized decision support systems have been based for about fifteen years [1] [2]. These systems aim at helping to solve problems using a case base, where a case represents a problem associated with its solution. To solve a new problem, the system extracts from the case base a case representing a similar problem and adapts its solution to the specificities of the new problem. In order to memorize this new experience and be able to re-use it later, each solved problem is then stored in the case base. CBR has thus developed in many fields such as health, law, gastronomy and engineering [3]. The constitution of these case bases has also allowed the construction of training systems based on teaching scenarios involving these cases. The case base is then used as a bank of exercises to propose to learners, or as a means of generating new exercises [4] [5]. The teaching areas are multiple, and the proposed pedagogical approaches vary just as much. For example, the French AMBRE project proposes a training support system to help learners acquire problem-solving methods [6]. In a very different approach, the DIACOM project [7] proposes to use a case base in the medical diagnosis field and to associate it with a discussion forum as a mediator for peer-to-peer training.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 273–282, 2009. © Springer-Verlag Berlin Heidelberg 2009
274
C. Quenu-Joiron and T. Condamines
In spite of this diversity, few works in CBR address the teacher training area, and particularly nursery or primary school1 teachers. However, teaching requires the implementation of complex know-how coming from experience. The problems encountered by young teachers are concrete and numerous, and the progressive acquisition of this know-how must help them face new situations. An online platform helping to reinforce the acquisition of this know-how therefore answers a real need. Moreover, these problems have generally already been encountered and solved by other, more experienced teachers. It thus seems relevant to us to base the pedagogical approach of this platform on know-how capitalization, reasoning by analogy and, finally, know-how transfer between experienced teachers and novices. The design and development of this platform are the subject of the TETRAKAP project (TEacher TRAining by Knowledge cAPitalization). The objective of this paper is to present how CBR is implemented within this project. Thus, in the first section we present the context of our research, the TETRAKAP project, its origin and the problems it aims to answer. In the second part we show how CBR is integrated within the TETRAKAP system. Finally, in the last section, we conclude with some research perspectives.
2 The TETRAKAP Project

The TETRAKAP project aims to design, develop and experiment with a computerized knowledge capitalization platform that preserves knowledge coming from experienced teachers and makes it "easily" accessible to other teachers as a training solution. It is interested in particular in the population of primary and nursery teachers and their accompaniment during the first years of their career. In this section we present the project context as well as the main functionalities aimed at by the platform.

2.1 The Context

Several observations justified the start of this project. Indeed, in the French context, a reading of current official texts shows that teachers' professionalization is a major concern for education managers. This is because the act of teaching requires thorough initial training, mastery of practical experience, autonomy and individual and collective ethical responsibility [8]. It is not easy to master all these facets, and entry into the job is often very hard because a young teacher must face a double learning process: the pupils' learning and his own [9]. A representative example of these difficulties is undoubtedly class management. Many studies indeed show that it is the major preoccupation of young teachers at the beginning of their career, and one of the main sources of difficulty ([10], [11], [12], [13]). Several systems in higher education have been set up to try to manage these difficulties and help these teachers, in particular by means of communities of practice [14]. In France, few experiments have been carried out for primary teaching, and none has been sufficiently convincing to be extended to the whole of the community.
1
In France, a single diploma allows a professor to teach in both primary and nursery school. A professor can thus teach children from 3 to 11 years old.
Facilitate On-Line Teacher Know-How Transfer
275
We can notice, on the contrary, a multitude of "mini-forums" disseminated on the Web, serving as places of exchange or resource sharing for teachers. In such systems, a certain difficulty in structuring the exchanges and following the different discussion threads has been observed. Nearly identical recurring questions show the difficulty young teachers have in searching for an answer among past discussions; in fact, a forum does not keep a sufficiently structured memory of the exchanges [15]. As already evoked in the introduction, TETRAKAP proposes an answer to these difficulties by offering teachers a platform, accessible on the Web, which facilitates at the same time the help process and know-how transfer, but also information retrieval and reasoning by analogy. The following section specifies the functionalities and objectives of this platform.

2.2 Functionalities and Goals of the Platform

Within the TETRAKAP platform, each teacher has a personalized space. This space includes a detailed description of his activity profile, which he is invited to describe at the first connection and which he can complete later if necessary. This profile includes a structured description of his current job context (school, class, pupils ...) as well as information on his specific competences (specialization, certificates ...) and experience (career description, special experiences ...) [16] [17]. Moreover, during interviews with novice teachers performed in 20072, two main categories of problems emerged. The first concerns the methodology of the course preparation task. These preparation tasks are mainly performed at home, and more experienced teachers could help novices answer questions of methodology in order to improve their practice. The second category of problems deals with unanticipated events that arise with children during class activities, which the teacher does not know how to solve.
Consequently, starting from these two categories, TETRAKAP integrates two modules that aim to help and assist teachers in their problem solving tasks, according to the category. These two modules are called TETRA-KM and TETRA-BC. TETRA-KM (Teacher TRAining by Knowledge Management) collects and restitutes experts' knowledge concerning the main teacher tasks of preparing courses. It draws on knowledge management methods and has to preserve and show novices the diversity of experts' points of view and the diversity of their tasks [16] [17]. A teacher (in particular a novice) has the possibility to navigate in the knowledge base to search for information. A search engine makes it possible to retrieve relevant information by matching it against the request and the teacher's profile. He can then choose and adapt these answers to his needs, to the context of his practice and to his personality. TETRA-BC (Teacher TRAining Based on Cases) capitalizes a memory of problem solving episodes, called "cases". These are related to events that occurred directly during class activities with children: events that were not anticipated by the teacher during his preparation activity, and that he is unable to solve spontaneously because of his lack
2
Ten semi-directive interviews were performed with novice teachers, who were asked to describe their difficulties and questions during the beginning of their teaching career.
of experience. Thus, beyond the possibility of exploring and querying the case base, TETRA-BC makes it possible for teachers to benefit from the know-how of other, "experienced" teachers connected to the platform. Indeed, TETRA-BC proposes to connect, through asynchronous discussions, experienced teachers and the novice who seeks help. When these discussions aim to solve a new problem, the result is capitalized in the system in the form of a new case. The following section presents how TETRA-BC is based on Case Based Reasoning methods.
3 The TETRA-BC Module

This section presents the principles on which the TETRA-BC module is based. The first subsection starts with some generalities on the foundations of CBR, necessary for the continuation of the article. The second presents the structure of a case and the third presents the case base. Finally, the last subsection describes the main CBR scenario used in TETRA-BC.

3.1 Generalities about CBR

In a CBR system, a case represents a problem solving episode. Generally, a case is divided into two parts, one representing a problem (called pb) and the other representing the solution of this problem (called sol(pb)). The problem we seek to solve is named the target case, and the case we take as a starting point to solve it is named the source case. Lastly, a case, its problem and its solution are described by a set of descriptors, themselves composed of (attribute, value) pairs. According to [1], the CBR cycle is composed of 4 phases, as shown in Figure 1. The cycle is organized around a case base and a knowledge base, which are used progressively to help with the various tasks of the system. The first phase of the cycle, called "RETRIEVE", uses a pairing process to extract a source case from the base. This source case refers to a problem close to the target case, and it is taken by the system as a starting point to solve the target problem. The second phase is the "REUSE" phase, in which the system adapts the solution of the source case
Fig. 1. The CBR Cycle
to the context of the target case, yielding an "adapted" target case. Then a phase called "REVISE" consists in checking, evaluating and possibly correcting the adapted case. Lastly, when the case is suitable and conformant, it can be memorized in the case base during the "RETAIN" phase. In more recent French publications, we can find a preliminary phase, called "ELABORATE", which establishes the link between a problem described by a user in a poorly structured way and a structured form of the target case (using a collection of descriptors). We will see later in this document that this phase is very important in our project. The following section describes the case structure within TETRA-BC.

3.2 A Case in TETRA-BC

As evoked previously, a case represents a problem solving episode characterized by a couple (pb, sol(pb)). TETRA-BC focuses on problems encountered in class by a novice teacher; Figure 2 presents an example of such a problem. In a teaching context, and particularly concerning the organization and management of a class, a problem can only be solved by taking into account a whole set of factors related to the context in which the problem occurred. Indeed, a problem occurring in a nursery school in an urban environment will not have the same solution as a similar problem occurring in a primary class in a rural environment. This is why a case in TETRA-BC contains not only information concerning the problem specification itself but also information concerning the context in which the problem occurred3.
Fig. 2. Example of a teacher problem description 3
It should be noted that most of the teaching context information is included in the teacher profile.
Thus, pb is divided into two parts: context_pb, which represents the context of the problem, and specif_pb, which represents its specification. For example, specif_pb will contain the description of a quarrel with a pupil, whereas context_pb will contain a subset of the teacher profile, such as information about the geographical environment of teaching (urban, rural) or the level of the class. With regard to sol(pb), this solution must make it possible to represent how the teacher tried to overcome the problem in his class, characterizing the actions carried out by the teacher to solve it. But, as a case aims at helping other teachers build their own solutions, the solution also integrates the effects of these actions. Lastly, we will see in the CBR scenario of Section 3.4 that the experts may have to provide a qualitative evaluation of each new case, so the solution of a case also integrates a part dedicated to the result of this evaluation. Finally, sol(pb) breaks down into three parts: sol_actions, sol_effects and sol_eval. Figure 3 synthesizes the complete structure of a case in TETRA-BC.
Fig. 3. The five components of a case
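The five-component case structure of Figure 3 can be sketched as plain data types. This is an illustrative Python rendering of the paper's description, not the project's actual implementation; the field contents shown in the usage below are invented examples.

```python
from dataclasses import dataclass, field

@dataclass
class Problem:
    context: dict        # context_pb: subset of the teacher profile (school level, environment, ...)
    specification: dict  # specif_pb: (attribute, value) descriptors of the problem itself

@dataclass
class Solution:
    actions: list                                    # sol_actions: what the teacher did
    effects: list                                    # sol_effects: observed effects of those actions
    evaluation: dict = field(default_factory=dict)   # sol_eval: experts' qualitative assessment

@dataclass
class Case:
    pb: Problem
    sol: Solution
```

A case at the knowledge level is then a `Case` instance, while the narrative level would keep the teacher's free-text description alongside it.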
One part of the work on TETRA-BC consists in specifying the list and nature of the descriptors. In particular, we have to produce a conceptualization of a teaching problem, by means of a taxonomy or an ontology built from a model of the teacher's tasks. This ontology is currently under construction. The following section presents the two-level structure of the TETRA-BC case base.

3.3 A Two Level Case Base

The participation of teachers in the TETRA-BC platform depends on their willingness to share their experience and practice. Thus, the system must be as simple as possible to handle: if contributing on the platform demands too great an effort from the teachers, it will quickly fall into disuse. To avoid that, the manner of describing a problem or a solution must rely as much as possible on natural language. Nevertheless, to be able to make relevant pairings between cases, and in particular to use CBR techniques, it is important for cases to be represented by a collection of significant descriptors. Lastly, when a teacher explores the case base, the system cannot present only this collection of descriptors; it must give him access to a description in natural language. This is why, as shown in Figure 4, TETRA-BC is based on a two-level model of the case base. The first level, called the "narrative level", preserves the description of a case freely written by the teacher in a semi-structured form (called descr(Case i) in Figure 4), strongly based on natural language. The second level, called the "knowledge level", models the representation of a case (called Case i in Figure 4), structured as a set of descriptors. It is this representation of a
Fig. 4. The two levels of the TETRAKAP case base
case that is used in particular for pairing and whose structure was presented in the previous section. Consequently, each case stored at the knowledge level is associated with its description at the narrative level. Each time a case is presented to a teacher at the conclusion of a retrieval in the case base, it is the representation of the case at the narrative level that is shown. Moreover, each time a new case is described at the narrative level, the system builds its representation at the knowledge level. To fill in the specif_pb part, the system extracts the relevant information from the description written by the teacher; the use of the ontology during this step is most important4. Lastly, it is at this point that the information relating to the teaching context is extracted from the teacher profile to populate the context_pb part of the case. The following section presents the CBR scenario used in TETRA-BC.

3.4 The CBR Cycle of TETRA-BC

The CBR cycle of TETRA-BC aims to propose an alternative to direct interactions with peers through the platform. Indeed, the peers cannot be solicited systematically for the resolution of each problem, all the more so for problems previously solved. The idea is to limit the traditional difficulties of discussion forums when a similar problem has already been discussed. Figure 5 below proposes a UML sequence diagram representing the interactions in TETRA-BC between a novice teacher, the system and his peers. Initially, a teacher describes the problem that he wishes to solve; this is the description of the target problem, called descr(target-pb). Then the system deduces a target-pb (by identifying descriptors). The system can then proceed to the pairing between target_pb and the case base. It extracts one or more close cases, called source_case. The descriptions of these cases (descr(source_case)) are then presented to the novice, who decides whether he plans to use them directly or not.
Two possibilities then arise. First, if the teacher is satisfied, he builds his own solution to his problem. Second, if he is not satisfied with the retrieved cases, he can request the help of experts to solve his problem. A maximum of 3 experts are selected by the system, and three simultaneous discussion threads are then opened on the platform. The novice can then
4
In the long term, knowledge acquisition techniques based on texts could be used to optimize this step.
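The ELABORATE step described above — turning a free-text description and the teacher profile into a structured target-pb — could, as a first approximation, rely on a simple keyword-to-descriptor mapping. The taxonomy entries and profile keys below are invented for illustration; the real system is intended to use the ontology under construction.

```python
# Hypothetical mini-taxonomy mapping surface keywords to problem descriptors.
TAXONOMY = {
    "quarrel": ("incident_type", "conflict"),
    "fight":   ("incident_type", "conflict"),
    "noise":   ("incident_type", "disruption"),
    "refuses": ("pupil_behaviour", "refusal"),
}

def elaborate(description, teacher_profile):
    """ELABORATE phase: build a structured target problem (target-pb)
    from a free-text description and the teacher's profile."""
    specif_pb = {}
    for word in description.lower().split():
        if word in TAXONOMY:
            attr, value = TAXONOMY[word]
            specif_pb[attr] = value
    # context_pb is filled from the teacher profile, as described in the paper
    context_pb = {k: teacher_profile[k]
                  for k in ("level", "environment") if k in teacher_profile}
    return {"context_pb": context_pb, "specif_pb": specif_pb}
```

Such keyword matching is crude; an ontology would let the system recognize synonyms and generalize descriptors along its hierarchy.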
Fig. 5. Sequence diagram and CBR cycle of TETRA-BC
discuss with these experts and try to build a solution. These three discussions are private and independent in order to avoid the influence of experts with different opinions. Once a solution is built, the novice is requested to describe it on the platform (sol_actions and sol_effects). At this point, three experts are contacted to give an evaluation of this solution, and the last part of the case is completed (sol_eval). An arbitration algorithm then decides whether to memorize the case; if so, pb, sol(pb), descr(pb) and descr(sol(pb)) are registered simultaneously. This scenario highlights the fact that each stage of the CBR cycle presented at the beginning of this document is implemented. First of all, the ELABORATE stage consists in building target-pb starting from descr(target-pb). The RETRIEVE stage is implemented via the pairing process between target_pb and the source_pb. The REUSE stage, in contrast, is not controlled by the system: it is the teacher who carries out the adaptation, either using the cases extracted from the base or using the discussions held with the experts. The REVISE stage involves, on the one hand, the novice, who must describe his solution (actions and effects), and on the other hand the experts, who evaluate and validate this solution. Finally, the RETAIN stage is implemented since a newly solved case is capitalized in the system according to the arbitration algorithm. Several other alternative scenarios corresponding to typical situations were also modeled, all founded on the CBR cycle. The following section concludes this article with the perspectives of our work.
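The pairing carried out in the RETRIEVE stage can be sketched as a nearest-neighbour search over descriptor sets. The similarity measure used here (Jaccard over (attribute, value) pairs) and the sample cases are our illustrative choices, not necessarily those of TETRA-BC.

```python
def descriptor_similarity(target_pb, source_pb):
    """Jaccard similarity over (attribute, value) descriptor pairs."""
    t, s = set(target_pb.items()), set(source_pb.items())
    return len(t & s) / len(t | s) if (t | s) else 0.0

def retrieve(target_pb, case_base, k=1):
    """RETRIEVE phase: return the k source cases whose problem part
    is closest to the target problem."""
    ranked = sorted(case_base,
                    key=lambda case: descriptor_similarity(target_pb, case["pb"]),
                    reverse=True)
    return ranked[:k]
```

A source case sharing the teaching level and environment of the target problem would thus rank above one that only shares the incident type, reflecting the paper's point that context strongly conditions the solution.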
4 Conclusion and Perspectives

In this paper we have shown how we chose to use CBR techniques in the TETRAKAP platform in order to support know-how transfer between teachers, with a training aim.
Obviously, several stages still remain before an operational tool is available. The current effort around TETRA-BC consists in specifying the descriptors needed to characterize cases precisely, starting from the descriptors of the problem part of a case. To do this, we work on a corpus of textual cases received from teachers, professional documentation dedicated to teacher training, and reports of former students who are now teachers. In parallel, we work on a semantic characterization of teaching problems via an ontology. Lastly, an interface making it possible for the teacher to describe a case at the narrative level must also be realized. It will allow us to collect new cases in order to link the narrative level and the knowledge level of the case base.
References
1. Aamodt, A., Plaza, E.: Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches. AI Communications 7(1), 39–59 (1994)
2. Kolodner, J.: Case-Based Reasoning. Morgan Kaufmann, San Francisco (1993)
3. Watson, I.: Applying Case-Based Reasoning: Techniques for Enterprise Systems. Morgan Kaufmann, San Francisco (1997)
4. Aamodt, A.: Knowledge-Intensive Case-Based Reasoning and Intelligent Tutoring. In: Funk, P., Rognvaldsson, T., Xiong, N. (eds.) SAIS-SSLS Proceedings, Västerås, Sweden, April 20-22, pp. 8–22. Mälardalen Högskola, Västerås (2005)
5. Kolodner, J., Cox, M., Gonzalez-Calero, P.: Case-based reasoning-inspired approaches to education. The Knowledge Engineering Review, 1–4 (2005)
6. Guin-Duclosson, N., Jean-Daubias, S., Nogry, S.: The AMBRE ILE: How to Use Case-Based Reasoning to Teach Methods. In: Cerri, S.A., Gouardères, G., Paraguaçu, F. (eds.) ITS 2002. LNCS, vol. 2363, pp. 782–790. Springer, Heidelberg (2002)
7. Joiron, C., Leclet, D.: A case base model for a case based forum: experimentation on pediatric pain management. In: Artificial Intelligence in Education (AIED 2001), San Antonio, Texas, USA, May 2001, pp. 111–121 (2001)
8. Lang, V.: La professionnalisation des enseignants: sens et enjeux d'une politique institutionnelle. Formation permanente – éducation des adultes, PUF (1999)
9. Saujat, F.: Spécificité de l'activité d'enseignants débutants et genre de l'activité professorale. Polifonia (8), 67–93 (2005)
10. Johnson, V.G.: Student teachers' conceptions of classroom control. In: Annual congress of the American Educational Research Association, New Orleans (1993)
11. Johnson, V.G.: Eyes in the back of your head: student teachers' concepts of management and control. In: Annual congress of the American Educational Research Association, New Orleans (1994)
12. Reynolds, A.: The knowledge base for beginning teachers: educational professionals' expectations versus research findings on learning to teach.
The Elementary School Journal 95(3), 199–221 (1995)
13. Neale, D.C., Johnson, V.G.: Changes in elementary teachers’ theories of classroom management and control after one year: two cases. In: Annual congress of the American Educational Research Association, New Orleans (1994)
282
C. Quenu-Joiron and T. Condamines
14. Charlier, B., Boukottaya, A., Daele, A., Deschryver, N., El Helou, S., Naudet, Y.: Designing services for CoPs: first results of the PALETTE project. In: 2nd International Workshop on Building Technology Enhanced Learning Solutions for Communities of Practice, pp. 76–86 (2007)
15. Depover, C., De Lièvre, B., Temperman, G.: Points de vue sur les échanges électroniques et leurs usages en formation à distance. Sticef, Rubrique 13 (2006)
16. Condamines, T.: How to favor know-how transfer from experienced teachers to novices? A hard challenge for the knowledge society. In: 20th World Computer Congress (WCC 2008), Milan, September 7-10 (2008)
17. Condamines, T.: How can knowledge capitalization techniques help young teachers’ professional insertion? A new approach to teachers’ life-long training. In: IEEE 6th International Conference on Human System Learning (ICHSL.6), Toulouse, May 14-16 (2008)
Edushare, a Step beyond Learning Platforms Romain Sauvain and Nicolas Szilas TECFA, FPSE, University of Geneva, CH 1211 Genève 4, Switzerland
[email protected],
[email protected]
Abstract. This paper presents Edushare, a web-based learning environment designed for cognitive remediation applied to autistic children. While existing learning platforms integrate various services in a web-based environment, they meet limitations when specific software must be integrated: their role is then mostly confined to hosting the external software, without deep integration. Therefore Edushare, a service-rich integration platform, was created. It centralizes within the platform a series of services shared by many educational software applications. These services include data logging, log visualization, media management, and parameterization. As a result, software development benefits from these services and can focus on its core goal, the learning activities. This approach is described with a case study concerning facial emotion recognition in autistic children.
Keywords: learning platforms, service-rich integration platforms, special education, cognitive remediation, educational technologies, learning software.
1 Introduction

1.1 Context

This research project is concerned with helping autistic children through the use of educational software. These children are part of a special education program in Geneva, Switzerland. During the day, they attend a specialized institution with only eight children, where they are followed by psychologists and educators. The children are severely autistic, with an IQ below 80. The goal of the project is to develop training software based on cognitive remediation for the children. Cognitive remediation consists in training a specific basic skill via repeated exposure to stimuli, under the hypothesis that such remediation has a more global impact on the everyday behavior of the subject. Cognitive remediation has been used successfully with people with conditions including ADHD [1][2], schizophrenia [3] and age-related cognitive impairments [4]. Cognitive remediation is advantageously administered via computerized exercises, which allow both a precise timing of activities and an automatic reporting of the patient's activity (as far as the keyboard/mouse activity is concerned). The use of virtual environments for learning and practicing living skills has been the central point of several projects [5][6]. Furthermore, many other benefits have been observed when
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 283–297, 2009. © Springer-Verlag Berlin Heidelberg 2009
284
R. Sauvain and N. Szilas
autistic people get trained with computers, such as the safety and predictability of the environment or the control of the interactions (see [7], pp. 91-101). Specific software for psychological experimentation, built for example on top of E-Prime [8], is used in laboratory settings. But in a real context, the deployment of such software is not easy. Therefore, the first requirement was to implement the cognitive remediation activities via the Internet. In this way, there is no need to install software at each institution interested in the cognitive remediation activities. Since several institutions in Geneva are potentially interested in implementing the activities, it is particularly relevant to allow psychologists and researchers involved in the program, who are not based in the institutions, to follow the progression of the children via the Internet. Communication among all the actors of the project is also needed in order to better accompany each child following the program. Furthermore, it is desirable that some parameters and data in the activities, such as the stimuli/media (e.g. images, sounds) used in the experiments, be modifiable by non-programmers (researchers, educators, teachers, etc.). These initial requirements have naturally led us to examine learning platforms, which gather several functionalities for online teaching and learning.

1.2 What Platforms Do, What They Do Not Do

Existing learning platforms, such as Moodle [9], Dokeos [10] or Claroline [11], provide various services for distance learning with the following functions [12][13][14]:
− Information exchange: page display, download and upload of any file (usually a document)
− Communication (synchronous and asynchronous): forums, chat
− Collaboration: wiki
− Management: learner management, activity management, usage logging, time management (deadlines, calendar)
Given the above initial requirements, learning platforms and content management systems (CMS) could be used in two ways:
− Activities fit into an existing authoring tool. For example, if the activities only contain simple quizzes to answer, the eXe editor [15] could be used in conjunction with Moodle, because eXe products can be integrated into Moodle as learning objects. However, content management (media modification) is restricted to people who know how to use the eXe tool. Furthermore, the logs retrieved from the activities would be limited to the SCORM [16] standard. Another option consists in developing a plugin for a platform such as LAMS [14], but this requires a high degree of programming expertise.
− Activities constitute a separate executable file, which is downloaded from the platform, as on the abuledu website [17]. In that case, the activity developers have total freedom in their design, but the traces of the activity cannot easily be shared with the other actors: the log file must be manually sent back to the platform for further processing. Furthermore, content management tools must be developed specifically for each piece of software, and the content management itself cannot be performed on the
Edushare, a Step beyond Learning Platforms
285
platform. A variant of this solution consists in using a web-based development tool, such as Adobe Flash, in which case the activity is still separate but integrated within the web browser (see, as an example, the paraschool website [18]). It results from the above analysis that existing platforms appear insufficient to both:
− host activities that go beyond simple e-learning functions, that is, beyond material presentation and quizzes,
− allow various actors to “get inside” these activities, either in terms of content management or data log processing.
Therefore, we chose to overcome these limitations by developing our own learning environment, which we named Edushare.

1.3 The Concept of a “Service-Rich Integration Platform”

In order to combine the ease of use of classical e-learning platforms in terms of course preparation with the richness of a specific software development, a different concept of platform needs to be invented. This new concept is illustrated in Fig. 1. While a platform, as its name suggests, hosts complete, autonomous software for activities (as an independent file), the idea is to host partial software, which reuses standard services provided by the platform. Typically, all cognitive remediation exercises need media (content), which should not be hard-coded into the software. Developing a content management interface is always a time-consuming task, which is remote from the core role of the software, namely the pedagogical/educational components. It is proposed that the software delegate the content management to the platform. Similarly, other common functionalities lie inside the platform (see Section 2).
Fig. 1. General principle of Edushare. Instead of hosting fully autonomous external software (left), the new platform contains some services, represented by rails, which complete the external software (right).
Technically, each module is developed in a development language chosen according to the specific needs of the exercises and the expertise of the development team. It can be Flash, Java, or Authorware, provided that the two following conditions are met:
− the software produced by the language/tool can be played in a browser (possibly with a plugin),
− the software can connect to a MySQL database, either directly or by calling PHP functions.
Within the software, developed in a suitable language, calls to the database give access to the various services provided to the software. The platform's services are recurrent components of educational software that are not specific to the learning task. Most of these services (detailed below) benefit from the networking characteristics of the platform. Edushare can be called a “service-rich integration platform”, even if, in a way, it is no longer a platform, since it does not only host documents and exercises but provides them with common services that would otherwise be part of the software. At the same time, the goal is not to impose a specific language or tool for the development itself, besides the conditions mentioned above. It is interesting to compare Edushare with the Educlasse website [19], which enables advanced integration features such as media
Fig. 2. Overview of the home page of the "Edushare" environment
management or logging of learning activities, but does not provide the means for an independent developer to add new exercises to the website. Note that while Edushare is devoted to special education and autism treatment via cognitive remediation, it can be used for various educational needs.
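To make this division of labour concrete, the sketch below shows the shape a thin client for the platform's services could take inside an external activity. It is written in Python for readability rather than the Flash/PHP of the actual implementation, and the endpoint and parameter names (media.php, the query keys) are illustrative assumptions, not the platform's real API.

```python
from urllib.parse import urlencode

# Hypothetical endpoint root; the real platform exposes comparable PHP scripts.
PLATFORM = "https://edushare.example.org/scripts"

def media_request(activity_id):
    """URL an activity would call to fetch its media list from the platform,
    instead of hard-coding images and sounds into the software."""
    return f"{PLATFORM}/media.php?{urlencode({'activity': activity_id})}"

def log_request(session_id, **variables):
    """URL used to push any logged variables back to the platform."""
    return f"{PLATFORM}/logs.php?{urlencode({'session': session_id, **variables})}"

print(media_request(12))
print(log_request(7, correct_answer=1, execution_time=4.2))
```

Any language that can issue such HTTP requests and parse the responses satisfies the two conditions above, which is precisely why the platform does not constrain the development tool.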
2 Functionalities

Before listing the functionalities implemented within Edushare, it is relevant to identify the various roles related to the use of such an environment. In our context, five roles can be distinguished:
− Learners: they are the final users, faced with the educational content. In our case, they are the autistic children. A particularity of Edushare, compared to classical platforms, is that these users do not log in directly (see below).
− Accompaniers: they are the people next to the student, responsible for the proper execution of the learning session. They log in to the platform and specify the identity of the learner. They might contribute to the session actively, either during the exercises or afterwards, for reporting. Accompaniers could be educators, psychologists, or members of the family.
− Program director: the person who is in charge of the learners and responsible for their progress. She/he knows the learners outside of the computer-based learning program.
− Developers: they are the people who create new exercises. They should be proficient in the development language or authoring tool they decide to use.
− Distant analysts: they are interested in analyzing the efficiency of the exercises. Typically, the psychologists who prescribed the exercises are interested in evaluating them, in terms of learners' progress, time of usage, etc. Researchers in cognitive remediation also play this role, as they want to gather data to evaluate statistically the effect of the activities.
These different roles correspond to various functionalities that have been implemented within Edushare.
For the learner:
− Execution of the activities for learning and remediation. During the execution, detailed logs are stored.
For the accompanier:
− Login and learner management: As in any platform, it is possible to create accounts for each user of the environment. Specific admin accounts are created for this purpose. More specific to Edushare is the possibility to create an identity for each learner, which is automatically attached to the corresponding logs. This identity does not correspond to a login account, since learners are not autonomous.
For the program director:
− Modules and activities management: The atomic exercise in Edushare is an activity. Activities are stored within Edushare and can be combined to build modules. Furthermore, several modes of sequencing are proposed: fixed sequencing of activities or free sequencing (order chosen during interaction). This parametrization is performed on the platform by the person who wants to build a
specific learning/training program. This allows some flexibility in the design of a specific course.
− Content management: A main originality of Edushare is that the media are not stored and managed within the exercises but in the database included in the platform. In this way, the program director can upload new media or assets (pictures, images, sounds, etc.) and assign them to activities. Note that one asset can be shared by several activities. Content management allows an activity to be modified/customized by non-technical people.
− Parametrization: variables are attached to activities and media. They can be modified outside the software, on the platform. As for content management, this allows more flexibility for non-programmers. Note that parametrization and content management have been implemented with generic educational games [20], to enable teachers to make their own games.
− Offline communication: a simple forum has been added to the platform, to organize the communication between the program director, the accompaniers, and the analysts.
For developers:
− Development: This activity is performed outside the platform. This strong choice gives developers a maximum of freedom, rather than forcing them into a specific language or approach. However, this freedom is limited by two constraints: (1) use an appropriate development language, (2) use specific functions for accessing Edushare's functionalities: data logging, media storage and parameterizable variables.
For distant analysts:
− Log visualization: This produces visual logs of the learners' activity; it is detailed in Section 4.
− Log export: For statistical analysis, it makes more sense to export raw logs and use specialized software such as SPSS or Statistica.
3 Technical Development and Architecture

Edushare is a web-based platform built with the most popular technologies currently used on the Internet. PHP 5 is at the center of the technical development, together with HTML, the predominant markup language for web pages. In addition to these core technologies, we find other standards such as: CSS (Cascading Style Sheets), to provide more flexibility and control in the specification of presentation characteristics; JavaScript, a scripting language used for the development of dynamic, real-time functionalities; Ajax (Asynchronous JavaScript and XML), which allows data to be retrieved from the server asynchronously in the background without interfering with the display and behavior of the existing page; as well as MySQL, the relational database management system used to store all the data. Depending on the type of user visiting Edushare, two distinct parts are available (Fig. 3). Specially designed for learners, a simplified interface allows them to select and execute the existing exercises easily. We name this side of the platform the “Learner Interface”. The extreme simplicity of the design is important to avoid any source of distraction during the execution of activities, especially for learners with difficulties such as autistic children.
Fig. 3. General outline of the organization of the platform
The other side is much more complete and includes most functionalities. This “administration interface” is built with Joomla [21], an open-source content management system, and allows users such as developers, educators and psychologists to upload new activities, change the images and media used in the exercises, or monitor the progress of learners. Several data models have been established in order to maximize the flexibility of the tool. Activities, media, modules and learners are the entities in the database offered by Edushare to advanced users. The “activity” entity is the one used to create a new exercise and upload or modify the needed files. As shown in Fig. 3, each time a new exercise is uploaded on Edushare, a new directory is created to store the needed files. The platform automatically proposes the newly created activity in the list of all available activities and allows learners to access it directly. Furthermore, with the “media” entity, Edushare proposes a complete system to manage the data used in the exercises. A .php file interacts with the database to get this information and transmits it to the current activity. This system makes it possible to manage directly on the platform which images or texts will be used in the exercise, without any modification of the exercise files. An educator, for example, will be able to adapt a stimulus for a child without knowing anything about programming and without modifying the files of the exercise. The purpose of the “module” entity is to combine several activities. One module contains one or more activities that can be displayed either as an unordered pack of activities or as a sequence of activities. In the latter case, each activity is assigned a rank by the user (see Fig. 4). For example, this feature allows a psychologist to propose sessions with a steady progression. Finally, the “learner” entity is necessary to follow the evolution of the learners on the platform.
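The four entities and their relations can be sketched as a small relational schema. The SQLite version below is only illustrative: the table and column names are our own, not the platform's actual MySQL schema.

```python
import sqlite3

# Illustrative relational sketch of the four Edushare entities
# (table and column names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE learner  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE activity (id INTEGER PRIMARY KEY, title TEXT, directory TEXT);
CREATE TABLE media    (id INTEGER PRIMARY KEY, kind TEXT, path TEXT);
-- One asset can be shared by several activities, hence a link table.
CREATE TABLE activity_media (activity_id INTEGER, media_id INTEGER);
CREATE TABLE module   (id INTEGER PRIMARY KEY, name TEXT);
-- rank orders activities in a fixed sequence; NULL would mean an unordered pack.
CREATE TABLE module_activity (module_id INTEGER, activity_id INTEGER, rank INTEGER);
""")
conn.execute("INSERT INTO module (id, name) VALUES (1, 'Remedion')")
conn.executemany("INSERT INTO module_activity VALUES (1, ?, ?)",
                 [(10, 1), (11, 2), (12, 3), (13, 4)])
# The learner interface would fetch the fixed sequence of a module like this:
rows = conn.execute("SELECT activity_id FROM module_activity "
                    "WHERE module_id = 1 ORDER BY rank").fetchall()
print([r[0] for r in rows])  # [10, 11, 12, 13]
```

The rank column is what the sequencing interface of Fig. 4 manipulates; leaving it unset corresponds to the free-sequencing mode, where the order is chosen during interaction.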
Fig. 4. Management of the sequencing of activities within a module
As a final point on the technical side, the commenting facility should be described. From both interfaces, users can write comments about a learner, an activity or a module at any time. These comments appear in the management section of the entity, but also directly on the home page of the administration interface for each person involved with the entity in question. For example, the user who created an activity will see the comment saying “This exercise has been appreciated by the learner but could offer several levels of difficulty” and can reply directly by writing another comment. This technical overview highlights the fact that Edushare is more than a simple platform where one can only host exercises. We would like to further illustrate this point by describing in detail one of the main features of Edushare, the ability to manage logs.
4 Log Management

As presented before, an important feature offered by Edushare is a complete management of logs. When a learner executes an exercise, “session” logs are automatically saved in the database. Information such as the current learner, the date and time, the module and the activity is stored. In addition to these general data, the activity creators can include a simple call to a .php file that will save any desired variables and link them directly with the session logs. The following programming example works for an exercise created with Macromedia Flash 8 and stores two variables: the correct answer and the execution time of the activity.

loadVariablesNum("../../scripts/logs.php?correct_answer=" + correct_answer_value + "&execution_time=" + execution_time_value, 0, "POST");

This call to the logs.php file must be sent each time a learner gives an answer or finishes an exercise. Of course there is no limit to the number of variables saved as logs, but we consider that a small selection of pertinent data is better than a large amount of useless information in which a precise analysis would be difficult.
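For illustration, the server-side behaviour described for logs.php (store each posted variable and attach it to the current session) could look as sketched below. This Python version is our reconstruction of the described behaviour, not the platform's PHP code, and the table layout is assumed.

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE session_log
                (session_id INTEGER, logged_at TEXT, variable TEXT, value TEXT)""")

def save_log(session_id, posted_vars):
    """Store each POSTed variable and link it to the current session,
    mirroring what logs.php is described as doing."""
    now = datetime.now().isoformat(timespec="seconds")
    conn.executemany(
        "INSERT INTO session_log VALUES (?, ?, ?, ?)",
        [(session_id, now, name, str(value))
         for name, value in posted_vars.items()])
    conn.commit()

# The two variables of the Flash example above:
save_log(42, {"correct_answer": 1, "execution_time": 3.8})
count = conn.execute("SELECT COUNT(*) FROM session_log").fetchone()[0]
print(count)  # 2
```

Storing one row per variable rather than one column per variable is what lets each exercise log an arbitrary set of variables without any schema change.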
Beyond the mere storing of this information, Edushare offers the possibility to extract and view it. For this purpose, a complete section of the platform is dedicated to data analysis. An extensive study was conducted during the creation of the platform to propose an optimized tool to the user. Indeed, the user accesses pre-processed data that are more relevant than raw data. A form allows the desired logs to be selected precisely, with parameters such as learners, modules, activities, variables, dates or sessions. After the validation of these settings, a new window appears containing the results of the query. A table containing all the data, as well as a button to export them as an .xls file, is present, but Edushare directly offers basic analyses without using an external tool. Depending on the type and amount of data to display, different visualization modes are offered. In the case of multiple variables, a timeline is generated and shows only the distribution of the sessions for each learner. Figure 5 shows the sessions made by three learners during one month. In the case of a single variable, a table and a graph are generated in real time. They display a first analysis of the data (Fig. 6). If the variable is numeric, the mean value and the distribution are calculated automatically for each learner and are presented in the table. The graph that accompanies the table is very useful since it allows the evolution of the learners over the multiple trainings and tests they have performed to be visualized quickly.
Fig. 5. Example of time table generated for a multiple variable analysis. Each vertical bar indicates that a session has occurred.
Fig. 6. Example of table and graph generated for a numeric single variable analysis
Fig. 7. Example of table and graph generated for a non-numeric single variable analysis
The last possible case, corresponding to a single non-numeric variable, generates a table containing the distribution of the values for each learner and a bar graph that visually represents this distribution (Fig. 7). Only after this first step, which checks the quality of the data, should the user turn to external tools to perform further analysis. We think that getting a quick overview of the logs saves time and will be appreciated by users.
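The per-learner mean and distribution described above amount to a simple aggregation. The following sketch, using invented toy data, illustrates the computation behind the generated table; the exact spread measure used by the platform is not specified, so a population standard deviation is assumed here.

```python
from statistics import mean, pstdev

# Toy logs: (learner, value of a single numeric variable, e.g. execution time).
logs = [("Ana", 12.0), ("Ana", 9.0), ("Ben", 20.0), ("Ben", 16.0), ("Ana", 6.0)]

def per_learner_stats(rows):
    """Mean and spread per learner, as the generated table displays them."""
    by_learner = {}
    for learner, value in rows:
        by_learner.setdefault(learner, []).append(value)
    return {name: {"mean": round(mean(vals), 2), "sd": round(pstdev(vals), 2)}
            for name, vals in by_learner.items()}

stats = per_learner_stats(logs)
print(stats["Ana"])  # {'mean': 9.0, 'sd': 2.45}
```

Plotting these per-learner summaries over time yields the evolution graphs of Fig. 6; anything beyond this first overview is left to SPSS or Statistica via the raw-log export.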
5 Example: Facial Emotion Recognition

As stated in the introduction, Edushare was designed in the context of cognitive remediation for autistic children. The first phase of the project focused on the content hosted and managed by Edushare. This content consists of educational software for people with learning difficulties. In this framework, a piece of software called “Remédion” was created: a Macromedia Flash program connected to a MySQL database for media management. The purpose of this application was to exercise autistic children in order to improve their recognition of facial emotions. In collaboration with the Service Médico-Pédagogique of Geneva (SMP), a medical and psychological research center, and the Centre des Amandiers (CA), a center for sick children, we were able to study the kind of exercises suitable for those children and how these activities should be presented. Tests were then conducted, and the positive feedback of educators on the children's behavior with Remédion allowed us to proceed to the following step: the Edushare platform itself. Starting from this software, we had to create a platform to host it, to offer the management of the media used by the exercises, and to log the results, as described in Sections 2 to 4. The platform is now fully functional and hosts the Remédion software, a module that contains four distinct activities. The testing of this module within the full version of Edushare is underway. In parallel, another module has been implemented (for memory training) and partially integrated within Edushare.
In the following paragraphs, we outline a few possible routes for users of the platform, who are researchers from the SMP, educators from the CA, and the autistic children. For full access to the different topics and to take full advantage of the features offered by Edushare, it is necessary to log in. Users (researchers and educators, but not the children) need an account in order to have the basic rights. It is necessary to complete the provided form to register, and to use the link in the e-mail automatically sent by the platform to confirm the registration. Once registered, many options offered by the platform are available. One of the common uses of the platform for an accompanying person is to record the learners they are in charge of and help them practice with the activities. For that, the educator starts by creating a new entity in the “Learners” topic of the administration part for the child to whom he wants to propose exercises. Once this first stage is completed, he goes to the learner interface dedicated to the use of activities, which proposes the various modules currently available in the platform. The educator selects “Remedion” in the list of existing modules. This module is made of four cognitive remediation activities dedicated to facial emotion recognition (Fig. 8). He will now let the child try the exercises, while providing assistance and support, since the children received by the center are severely autistic and not autonomous. At any moment during the training, the educator can make a comment about the current module, activity or learner simply by clicking on the corresponding button located in the top left corner. This whole process is described in Fig. 9.
Fig. 8. Selecting the facial emotion recognition module (Remedion)
Fig. 9. Steps for creating a new learner and starting the activities with him
Logs have been automatically generated during the use of the activities by the child. Let us consider the case of a psychological researcher from the SMP who wishes to examine them in order to monitor the progress of a child. After logging in, he goes to the section dedicated to the extraction and visualization of logs and selects the options corresponding to the logs he wants to extract. The first parameter will certainly be the name of the specific child he wants to follow. Then he may choose an activity built for a special purpose (Fig. 11), like Remédion for facial recognition. Defining a start and end date could be relevant too, in order to view the results starting from the last check (Fig. 12). Having validated the selection, the results window opens, showing precisely the logs for this child, for one activity, between the selected dates. The diagram below (Fig. 10) details each step needed to select and visualize logs using the platform's user interface. The use of Edushare within this context was appreciated by educators and cognitive remediation researchers, particularly the ability to change the media and to log comments. But a full experiment with the platform is still to be performed to systematically assess its strengths and weaknesses.
Fig. 10. Steps for selecting and visualizing logs
Fig. 11. Overview of the logs form, selection of an activity
6 Conclusion and Future Work

We have proposed a novel approach for combining the flexibility of specific educational software development with the advantages of learning platforms in terms of integration and communication. In the current implementation, services performed by the platform include user management, data logging management, media management and simple parameterization. Among these services, much effort has been put into data logging, because it moves educational software towards more openness. Indeed, while laboratory-initiated cognitive remediation activities obviously log the learner's activity, this is far from being the case for other educational software. Typically, learning and edutainment products are usually designed as “black boxes”: the learner uses them, but the other actors (the parents, the teachers) have quite limited feedback on what has happened, beyond the mere percentage of completion of the software. We believe that this closed nature is one of the reasons for their limited usage. Adapting such software to be used within a service-rich integration platform such as Edushare should increase its usage. Other services could be developed to help the integration and openness of educational software. We will mention two of them here. Firstly, in the case of complex software (for example a learning game), it is not easy for the potential user (a teacher) to get a clear view of the content of the software, in terms of both general learning goals and specific pedagogical strategies used within the software. We are considering adding this feature to the platform, that is, giving a software developer the possibility to make a
“detailed overview” of his/her product, without executing the whole product, similar to the walkthroughs available (often via “cheat codes”) in some video games. Secondly, the current parameterization capability offered by the platform is limited to modifying variables. We would like to extend this to advanced parameterization interfaces that would allow a non-programmer to gain more authorial control over the final educational activity. The platform would make available templates of parameterization graphical user interfaces that could be used by developers to make their product more visible. Given its current and future possibilities, it appears that Edushare and its underlying approach, initially designed for special education and cognitive remediation, could be of interest to a much wider population, in terms of both learners and trainers.
Acknowledgements

The authors would like to thank the Service Médico-Pédagogique in Geneva for initiating this project on cognitive remediation, in particular Stephan Eliez and Martin Debanné. The authors also thank the Centre des Amandiers and its staff for allowing us to work with the autistic children in their institution.
References
1. Stevenson, C.S., Whitmon, S., Bornholt, L., Livesey, D., Stevenson, R.J.: A cognitive remediation programme for adults with Attention Deficit Hyperactivity Disorder. Australian and New Zealand Journal of Psychiatry 36, 610–616 (2002)
2. Klingberg, T., Fernell, E., Olesen, P.J., Johnson, M., Gustafsson, P., Dahlström, K.: Computerized training of working memory in children with ADHD — A randomized, controlled trial. Journal of the American Academy of Child and Adolescent Psychiatry 44, 177–186 (2005)
3. Medalia, A., Richardson, R.: What Predicts a Good Response to Cognitive Remediation Interventions? Schizophrenia Bulletin 31, 942–953 (2005)
4. Ball, K.K., Wadley, V.G., Vance, D.E., Edwards, J.D.: Cognitive skills: Training, maintenance, and daily usage. Encyclopedia of Applied Psychology 1, 387–392 (2004)
5. Cobb, S.V.G., Neale, H.R., Reynolds, H.: Evaluation of virtual learning environments. In: Proc. 2nd Euro. Conf. Disability, Virtual Reality & Assoc. Tech., Skövde, Sweden (1998)
6. Beardon, L., Parsons, S., Neale, H.: An interdisciplinary approach to investigating the use of virtual reality environments for people with Asperger syndrome. Educational and Child Psychology (2001)
7. Grynszpan, O.: Multimedia Human Computer Interfaces: designing educational applications adapted to high functioning autism. PhD thesis, University of Paris-Sud (2005), http://www.risc.cnrs.fr/detail_memt.php?ID=875 (accessed April 17, 2009)
8. E-Prime, http://www.pstnet.com/products/e-prime/ (accessed April 17, 2009)
9. Moodle, http://moodle.org (accessed April 17, 2009)
10. Dokeos, http://www.dokeos.com (accessed April 17, 2009)
11. Claroline, http://www.claroline.net (accessed April 17, 2009)
Edushare, a Step beyond Learning Platforms
12. Peraya, D.: De la correspondance au campus virtuel: formation à distance et dispositifs médiatiques. In: Charlier, B., Peraya, D. (eds.) Technologie et innovation en pédagogie. Dispositifs innovants de formation pour l'enseignement supérieur, pp. 79–92. De Boeck, Bruxelles (2003)
13. Britain, S.: A Review of Learning Design: Concept, Specifications and Tools. A report for the JISC E-learning Pedagogy Programme (2004)
14. Dalziel, J.R.: From re-usable e-learning content to re-usable learning designs: lessons from LAMS. Macquarie University (2005), http://wiki.lamsfoundation.org/download/attachments/9469955/dalziel_reusable.pdf
15. eXe, http://exelearning.org/ (accessed April 17, 2009)
16. Advanced Distributed Learning – SCORM, http://www.adlnet.gov/Technologies/scorm/ (accessed April 17, 2009)
17. Abuledu, http://www.abuledu.org (accessed April 17, 2009)
18. Paraschool, http://www.paraschool.com (accessed April 17, 2009)
19. Educlasse, http://www.educlasse.ch (accessed April 17, 2009)
20. Sauvé, L., Samson, D.: Rapport d'évaluation de la coquille générique du Jeu de l'oie du projet Jeux génériques: multiplicateurs de contenu multimédia éducatif canadien sur l'inforoute. SAVIE et Fonds Inukshuk, Québec (2004)
21. Joomla, http://www.joomla.org (accessed April 17, 2009)
Design in Use of Services and Scenarios to Support Learning in Communities of Practice

Bernadette Charlier1 and Amaury Daele2

1 University of Fribourg, Centre de Didactique Universitaire, Boulevard de Pérolles 90, 1700 Fribourg, Switzerland
2 University of Lausanne, Centre de Soutien à l'Enseignement, Quartier-UNIL, Bât. Unicentre, 1015 Lausanne, Switzerland
[email protected],
[email protected]
Abstract. This paper presents research carried out in the framework of the PALETTE project (FP6-TEL), which aimed at observing and analysing the design in use of web services and tools in the context of Communities of Practice (CoPs). Design in use consists of trialling the prototypes and their scenarios of use and observing the instrumental genesis carried out by the CoPs with the services. We first present our conceptual framework, based on instrumental genesis theory, and then our methodology for generating and analysing the data. Results of a cross-case analysis of seven cases of design in use of PALETTE services and scenarios by CoPs are then described and analysed. The discussion provides reflections that may inform the use of PALETTE services by other CoPs in other contexts. Finally, in the conclusion, we reflect on our methodological approach and results, and provide guidelines for further research.

Keywords: Community of practice, participatory design, instrumental genesis, uses of services and scenarios to support learning in CoPs.
1 Introduction

This paper presents research carried out in the framework of the PALETTE project (FP6-TEL)1 that aimed at observing and analysing the design in use of web services and tools in Communities of Practice (CoPs). The services provided by PALETTE fall into three categories: information management, knowledge management and collaboration. These three categories of services aim to support three types of CoP activities that we called 'generic scenarios': Knowledge reification, focusing on document production, retrieval and reuse; Collaboration: debate and decide, concentrating on the exposition of opinions, their comparison with others and the selection of the most salient ones; and Animation and moderation: identity building, specialising in the development of a feeling of membership and the evolution of the group. They
PALETTE (Pedagogically sustained Adaptive Learning through the Exploitation of Tacit and Explicit Knowledge - http://palette.ercim.org), Integrated Project co-funded by the European Commission (FP6 Call4 Febr. 2006 – Jan. 2009).
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 298–303, 2009. © Springer-Verlag Berlin Heidelberg 2009
have been designed and developed through a participatory design methodology comprising three main steps: (i) analysing the CoPs' contexts and needs; (ii) designing for use, through the elaboration of scenarios of use of technological services; and (iii) designing in use, carrying out trials of both scenarios and services and making appropriate modifications to them where necessary. Seven CoPs have been involved in this process, all active in the training or education domains. In this paper we report research aimed at understanding how these CoPs have appropriated the services and scenarios we developed with them.
2 Conceptual Framework

This section presents the main concepts used in the research: instrument, instrumental genesis, instrumental genesis in groups and mediation of the instrument. The instrument-mediated approach is based on one fundamental concept: the instrument. An instrument is not only an object, an artefact – or a tool (material or symbolic) – that is used by an actor in order to carry out an activity. It is a "mediator" between the actor and his/her activity [1, p.175]. As a mediator, the instrument is able to act on the activity of the actor and on the actor him/herself, changing his/her relation to the purpose of the activity. In return, the actor can act on the instrument in order to better support his/her activity. This twofold influence is called "instrumental genesis": the progressive construction of uses of an artefact by an actor, which of course depends on the social environment of the actor and his/her purpose. The action of the instrument on the actor and his/her purpose is called instrumentation, and the action of the actor on the instrument to appropriate it is called instrumentalization [2]. A typical example of instrumentalization is catachresis, i.e. the use of an artefact in a way other than that for which it was designed. The emergence of catachresis shows that the uses created by users do not necessarily fit what the designers had expected at the beginning. This highlights the need to design flexible and adaptable artefacts, especially when the user is a distributed group which has to collectively negotiate the use of the artefact. Moreover, it also highlights the usefulness of participatory design approaches, through which the design process becomes a collaboration between designers and users. The definition of these processes highlights the need for a group to use artefacts not only to achieve an objective (by differentiating tasks) but also to coordinate the different tasks (by articulating them).
In other words, in order to observe and analyse the activities of a CoP, we need to observe both how the CoP uses the artefacts to achieve its activity and how it uses them to coordinate and negotiate the different tasks performed by different members. From a methodological point of view, Cerratto [3] suggests observing and analysing two dimensions of a collective activity: the relations within a group of subjects who coordinate their actions, and the integration of different products into a collective production. For her, the crucial point is to analyse the relations between the actions and the individual products of the subjects. This analysis highlights the "activity schemes" of a group, i.e. its "way of acting" for producing outcomes using specific instruments. Understanding these schemes can help in elaborating new instruments for the group, in advising it on the use of its instruments or of new ones, or in analysing the evolution of its activities with regard to its issues and needs. This implies direct observation of the group's activities with precise observation grids.
3 Methodology

The trials of the services with the CoPs were organised into six stages:

1. Selecting the activities of the CoPs to be observed.
2. Describing the initiation/familiarisation processes of the CoPs with the PALETTE services, in order to understand the contexts of use of the services.
3. Observing the PALETTE services in use. Data were collected through direct observations of online or face-to-face CoP activities and coded with a common analysis grid focused on highlighting the instrumental genesis experienced by the CoPs.
4. Analysing the data through content analysis methods: thematic analysis (information content), category analysis (frequency characteristics grouped into significant categories), and evaluation analysis (judgements: frequency, direction – positive or negative – and intensity). We also reflected on the possibility of generalising the results of the analysis in some ways.
5. Reporting to the CoPs and to the developers of the services, in order to inform the evolution of the CoP activities and to provide feedback to the developers.
6. Realising a cross-case analysis [4] dedicated to several questions: To what extent do the analyses suggest developments of the scenarios? Considering the seven analysed cases, what common conditions make them useful and relevant for other CoPs? How could we inform other CoPs in their development process on the basis of our analysis?

Concretely, in order to carry out our cross-case analysis, we proceeded as follows: (i) we wrote the analysis of each individual case based on the same conceptual framework and general research questions; (ii) we combined the analyses into a common matrix so that the cases could be compared along common questions; (iii) we finally wrote a general synthesis.
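The matrix step of the cross-case analysis can be sketched as a simple data structure: each case's analysis is keyed by the same set of common questions, so the cases can be compared question by question before writing the synthesis. In this minimal sketch the case names are CoPs from the paper, but the answer texts are invented placeholders, not the study's actual data.

```python
# Minimal sketch of a cross-case matrix: every case answers the same
# common questions, so a synthesis can compare answers per question.
# The answer strings below are invented placeholders for illustration.

cases = {
    "Did@cTIC": {
        "conditions_of_use": "training on the tools; habit of oral reporting",
        "observed_changes": "shift from oral to written descriptions of practice",
    },
    "Learn-Nett": {
        "conditions_of_use": "members already used to working at a distance",
        "observed_changes": "reification opened debate on tutoring practices",
    },
}

def cross_case_view(matrix, question):
    """Return every case's answer to one common question of the matrix."""
    return {name: analysis[question] for name, analysis in matrix.items()}

changes = cross_case_view(cases, "observed_changes")
for cop in sorted(changes):
    print(f"{cop}: {changes[cop]}")
```

Keying all cases by the same questions is what makes step (ii) of the procedure above possible: any question can be read "across" the matrix without reshaping the individual case analyses.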
For each scenario, we then identified three main questions that could be of interest for other CoPs:
• What are the conditions of use of the services: need, purpose, training of members, mastery of the tools, process of negotiation of use, habit of carrying out such activities, etc.?
• What changes (in CoP activities, communication, social interactions, etc.) occurred through the use of the services?
• What perspectives for developing uses emerged after the first experience?
4 Results

First, here is a brief introduction to the context of the seven CoPs. The description of the services used is available on the PALETTE website (http://palette.ercim.org).
• Did@cTIC, a CoP of young university teachers in Switzerland who regularly meet face-to-face in order to discuss pedagogical issues and needs they face in their daily practice. Their main aim was to report (to reify) their practices and oral discussions in order to share them. This was done through the use of an HTML editor (Amaya) with a template for taking notes during meetings, and a web service (DocReuse) supporting semi-automatic and automatic reuse of parts of different documents in order to produce new ones for other purposes.
• ePrep, a CoP of Higher Education teachers in the domain of sciences in France, collaborating with the French Grandes Ecoles across the world. Their main goal was to produce (multimedia) course documents and share them with the whole community of French-speaking teachers in the world. They used a multimedia open source editor (LimSee3) and a Wiki service.
• Learn-Nett, a CoP of tutors involved in a distance course for student teachers in the domain of the use of ICT for education. Their students form international groups aiming at producing a course sequence based on the use of ICT. The aims of the tutors were to share the pedagogical issues they face when supporting collaborating groups at a distance and to collect documents useful to them. They used a semantic Wiki (SweetWiki) and a web-based repository able to classify documents on the basis of an ontology (BayFac).
• CoPe-L, a CoP of e-learning trainers who aim at sharing resources on e-learning in companies and Higher Education through the use of a web-based repository able to classify documents on the basis of an ontology (BayFac).
• TFT, a young CoP of teaching nurses who aim at defining their common objectives and getting to know each other's individual goals and competences. They used a semantic Wiki (SweetWiki).
• TIC-FA, a community of students participating in a course on ICT for adult learning and aiming at producing and collaboratively editing documents, and debating at a distance. They used a Wiki (SweetWiki), a web-based collaborative tool (CoPe_it!), a web-based repository able to classify documents on the basis of an ontology (BayFac), an HTML editor (Amaya) and a tool for reusing parts of documents (DocReuse).
• TIC-EF, a community of students participating in a course on ICT for teaching and learning and aiming at producing and collaboratively editing documents.
They used a Wiki (SweetWiki), a web-based repository able to classify documents on the basis of an ontology (BayFac), an HTML editor (Amaya) and a tool for reusing parts of documents (DocReuse).

Here, we present only the synthesis of the results of the cross-case analysis regarding the three generic scenarios. When considering the changes that occurred for the CoPs while using the PALETTE services and scenarios, it is interesting to note that the three generic scenarios are interrelated. Changes in the reification process have an impact on the debate-and-decide and identity-building processes of the CoP. More precisely:
• Reification allows a CoP to develop or confirm its identity (Learn-Nett, ePrep), to become more confident in its skills to develop projects (CoPe-L), or to define its domain better (CoPe-L);
• Reification allows practices to be discussed and debated (Learn-Nett);
• Identity building requires debate and decision making about the development and activities of the CoP (TFT). It also requires reification of the "CoP identity": a logo, participants' yellow pages, etc. (TFT, TIC-FA, TIC-EF);
• Reification supports decentration of CoP members from their own practice, by considering other ways of doing things (CoPe-L, Did@cTIC). For newcomers, it is a way to put their minds at rest regarding their first experience (Learn-Nett);
• Reification changes the way of working within a CoP, through the shift from oral to written expression and descriptions of practice (Learn-Nett, Did@cTIC);
• Reification is a way to present the CoP to an external audience (Learn-Nett, CoPe-L) or to motivate peripheral members to participate in the core activities (ePrep).

In order to carry out these changes, at least two conditions seem to be common to the CoPs we have worked with:
• Training: it can take different forms (at a distance, face-to-face, through individual or collective activities, etc.) and serve different objectives (mastery of tools, reification of one's practices, basic notions such as ontology or structured documents, etc.). However, its main purpose, beyond the training of the CoP members, is to develop a sense of belonging and involvement in a common project in a wide sense. Training together is also an opportunity to meet, to discuss the CoP's concerns, to debate the projects, to negotiate the next activities, etc.
• Continuing analysis of needs and reflection on CoP purpose and activities: again, this can take different forms (reflection with a focus group, discussions with external experts, etc.). The point here is never to assume that CoP needs are static. Once they have highlighted their needs and main processes, the CoPs continue to reflect on their activities; they remain dynamic in order to stay consistent and up-to-date with their domain and with members' needs and personal objectives. This continuing reflection also comprises developing uses of tools and curiosity about new tools and uses.

A third condition can be highlighted, though it is peculiar to the PALETTE project: the presence of mediators between the CoPs and the PALETTE developers. This condition has been very important for accompanying the activities and processes of change within the CoPs. As external experts, the mediators have closely participated in the development of the CoPs.
When one of these conditions was missing, the CoPs experienced difficulties in implementing new activities and new tools with their members. It is therefore not surprising that, looking ahead, the CoPs were willing to continue developing training activities and reflection on their internal processes of reification, debate, decision making and identity building.
5 Conclusion and Perspectives

In conclusion, regarding the uses of the PALETTE services, the analysis of our seven cases yields a sharply contrasted picture. Some CoPs trialled PALETTE services and will clearly continue to develop their uses. Others concluded that the PALETTE services are not necessarily the most suitable for their purpose and will either use other tools or change their activities. The fact remains, however, that all of them developed their own ways of reifying their members' practices, organising debates and decision making, and developing their identity through better descriptions of their purpose or activities. In other words, they all learned, changed and developed. This is the lesson we draw from our within-case and cross-case analyses.
Proposing general advice from contrasted individual cases is a difficult exercise [4]. However, on the basis of our analysis, we can propose some important points to other CoPs:
• Evaluate the members' mastery of and attitudes towards ICT. If they are used to working with ICT, new tools can be tested and then accepted or rejected. If they are not, common training is crucial.
• Analysing needs and objectives is important: negotiating the meaning of the CoP activities allows the CoP identity and the members' sense of belonging to develop.
• Elaborate short and concrete activity scenarios with clear added value from the members' point of view and with outcomes that are easy to evaluate.
• Keep connected, even at a distance, in order to keep the members involved in the processes of change.

Our participative methodology (participant observation, interviews, questionnaires, etc.) probably influenced the CoP members in the sense that we paid real attention to them. We were also closely involved in the CoPs' development processes. We worked with them for three years, and they knew our objectives and methodologies very well; they may have answered in ways intended to please us, and more ethnographic observation might have revealed different activity. However, our involvement led to high validity of our within-case analyses. Finally, in terms of perspectives, each CoP has been informed about the conclusions and advice produced from the within-case analysis. The developers have also participated in the follow-up after the observations and will continue to develop their services and tools accordingly. In addition, the four general pieces of advice to CoPs stated above are useful for the development and improvement of learning resources about the mastery of ICT by CoP members, attitudes towards ICT within CoPs, and changes in and development of uses of tools.
References
1. Béguin, P., Rabardel, P.: Designing for instrument-mediated activity. Scand. J. Inf. Syst. 12, 173–190 (2001)
2. Cerratto, T.I.: Pour une conception des technologies centrée sur l'activité du sujet. Le cas de l'écriture de groupe avec collecticiel. In: Rabardel, P., Pastré, P. (eds.) Modèles du sujet pour la conception. Dialectiques activités développement, pp. 157–188. Octarès, Toulouse (2005)
3. Trouche, L.: Des artefacts aux instruments, une approche pour guider et intégrer les usages des outils de calcul dans l'enseignement des mathématiques. In: Actes de l'Université d'été de Saint-Flour "Le calcul sous toutes ses formes", pp. 265–290. Saint-Flour (2005)
4. Miles, M.B., Huberman, A.M.: Qualitative Data Analysis: An Expanded Sourcebook. SAGE, Thousand Oaks (1994)
Creating an Innovative Palette of Services for Communities of Practice with Participatory Design

Outcomes of the European Project PALETTE

Liliane Esnault, Amaury Daele, Romain Zeiliger, and Bernadette Charlier

EM LYON (France), University of Lausanne (Switzerland), Gate-CNRS (France), University of Fribourg (Switzerland)
[email protected],
[email protected],
[email protected],
[email protected]
Abstract. The paper aims at presenting and analyzing the implementation of a Participatory Design methodology within the large multidisciplinary European project PALETTE. This methodology successfully enabled and supported the development and implementation of a "palette" of interoperable services dedicated to Communities of Practice, helping them to manage their knowledge assets, support collaboration, communication and decision making, and better animate community life. Finally, it presents some lessons learnt that could be of interest to other multidisciplinary projects in the Technology Enhanced Learning community.

Keywords: Communities of Practice, Participatory Design, Actor-Network Theory, Web services, widgets, scenarios, mediators, instrumentation, boundary construction, boundary objects.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 304–309, 2009. © Springer-Verlag Berlin Heidelberg 2009

1 Introduction

The objectives of the paper are to: (i) present how Participatory Design was implemented in a large European project called PALETTE, in order to help communities of practice (CoPs) enhance their practice and learning; (ii) explain the elaboration of the participatory design methodology developed for this purpose, present its rationale, principles, main steps and instruments, and analyze key aspects such as the role of mediators and of scenarios as boundary constructions; and (iii) share some lessons learnt that seem of interest for future projects linking multidisciplinary teams of users, researchers and developers, which is particularly relevant to the Technology Enhanced Learning community.

The successful unrolling of the PALETTE project and the quality of its findings and outcomes are mostly due to the conjunction of four factors: (i) the choice of working with a dozen real-life Communities of Practice (CoPs), scattered among different areas of interest and displaying a range of different practices and behaviors; (ii) the choice of providing a broad span of software elements as interoperating services available through widgets within a customizable interface; (iii) the choice of a Participatory Design based methodology and organizational process, relying on an approach of rich
scenarios, enabling a strong commitment on the part of all the stakeholders throughout the project phases; and (iv) the choice of a longitudinal, formative evaluation process, which supported a continuous reflective and self-analytical attitude.
2 Rationale for Using Participatory Design in PALETTE: The PALETTE Actor-Network

The PALETTE project aimed both at developing socio-organizational knowledge, by researching CoPs' functioning, learning and knowledge processing, and at developing technical knowledge, by researching the interoperability of social software intended to sustain and support the functioning of Communities of Practice. Given the nature of PALETTE and its main goals, Participatory Design seemed to be the best framework within which to develop a suitable project methodology [1]. The Participatory Design approach was considered as a process of negotiation of usefulness, achieved by reconciling the contrasting perspectives of the various stakeholders, including users, designers and other researchers or trainers [2]. In PALETTE, Participatory Design was used together with Actor-Network Theory [3]. The PALETTE actor-network – an actor-network being any kind of element (person, object, theory, organization, device, etc.) which has an influence, together with all the interactions between them [3] – comprised the following elements: (i) researchers from education science with a common constructivist perspective; (ii) researchers from computer science fields such as knowledge management, mediation tools, multimedia authoring, document management and structuring, awareness, and collaborative editing; (iii) twelve CoPs involved in PALETTE as external partners; (iv) applications or tools, more or less previously developed, as well as technical standards from the Open Source community (W3C, XML, REST, etc.); (v) organizational actors such as project coordination, project management, project work organization, task groups, and management tools (reports, time-sheets, deliverables); (vi) methodological tools: social science methodologies, interviews, scenarios, data collection methods, data representation methods, and Actor-Network Theory; and (vii) many practices from previous European projects,
research management, previous socio-technical experience, etc. The situation of an actor within an actor-network is not fully defined by the existence of the actor: links have to be knitted with other actors to materialize the actor's presence in the network, through enrolment. Enrolling an actor within an actor-network means that common interests between this specific actor and the actor-network can be constructed and agreed through participative activities. To enrol CoP members, a participative interview process was first used to gather data about the CoPs. The design, trialling, and validation of successive versions of scenarios of use of PALETTE services were likewise organised as participative activities. A feedback process was used continuously to discuss, improve and validate each step of the project's development. A tool inventory and categorization process was applied both to tools used within the project and to other tools used by the CoPs. The move from "tools" to "services" is a process that testifies to the enrolment process as it specifically applied to this kind of actor. Though the PALETTE work plan explicitly mentioned the use of Participatory Design, the alignment of PALETTE members
with the methodological tools was not easily achieved in the first stages of the project. It eventually took up to two thirds of the project's duration to reach a sufficient understanding of the effects, advantages and impacts of the methodology.
3 Organization, Implementation and Documentation of the Participatory Design

The key elements in operationalizing Participatory Design in PALETTE comprise: (i) the process followed; (ii) the participative activities, illustrated by the work of the mediators; (iii) the scenarios; and (iv) the instruments.

The Process: Design-for-Use and Design-in-Use

Three main processes were followed [8]: (i) Analysis related to the first stages of analysing the PALETTE tools and the CoPs' activities, contexts and needs, to their modeling, and to the characterization of tools and services. This was done through interviews and discussions with CoP members. (ii) Design-for-use concerned the development of the services and related scenarios, as well as the validation of the scenarios for each CoP. This was done through first tests of services by CoPs, common elaboration of scenarios, analysis of service usability, training of CoP members, etc. (iii) Design-in-use related to the ongoing development of services and scenarios, with continuous trialling by the CoPs. This was done through "playing" the scenarios in real activities of the CoPs and ongoing discussions and negotiation between the CoPs and the developers.

The Activities, as Illustrated by the Mediators

The PALETTE project was a distributed project from three points of view [8]: (i) Interdisciplinarity: the PALETTE researchers were from different fields, and the CoPs covered different domains; (ii) Time: participative activities were scattered among different moments and stages throughout the project; (iii) Space: the PALETTE researchers and the CoP members were from 5 countries, and some CoPs were themselves distributed in space. This meant working at a distance with distributed teams, and the organization of participative activities had to cope with this situation. To improve these activities, PALETTE introduced "mediators", who played a role as representatives, spokespersons, and interconnectors.
The mediator facilitated the participation of the different actors, especially by organizing participative activities where the collaboration process could take place. In PALETTE there were two kinds of mediators: the CoP mediator, a researcher who builds a bridge between a CoP and some of the PALETTE services, and the Service mediator, also a researcher, acting as the "spokesperson" of a service and its developers. Examples of participative activities including CoP members, CoP mediators, service mediators, and other researchers are: design and writing of generic scenarios; organization of training sessions around different services and their possible uses; organization of the trialling of the generic scenarios; effective trials of generic scenarios and debriefing; validation of the generic scenarios; elaboration and trials of Learning and Organizational Resources (LORs); PALETTE plenary meetings; etc.
The Scenarios

Scenarios are tools for envisioning the future. They convey stories that happen in the real world, as well as stories we imagine happening in possible worlds. According to Carroll [9], scenarios describe key situations of use in terms of actors, goals, context, tools, actions and events. Here lies a first valuable aspect of scenarios: they do not come with strong semantics; in the design process, their vagueness is an affordance. A second important aspect of scenario descriptions is that, in a Participatory Design process, most stakeholders can understand them, even though they bring different perspectives to them. They are thinking tools [9][10]. Scenarios are not requirements – they are deliberately incomplete and easily revised. They facilitate the innovative exploration of design possibilities. For users, scenarios are meaningful because "the elements of the envisioned system appear embedded in the interactions that are meaningful for them to achieve their goals. They describe the future system in terms of the work that people will have to achieve." The initial state of the methodology used in the project was based on the writing of scenarios of use called "specific scenarios", because they were specific to each CoP and to uses of separate tools. There was an "attraction effect" from the currently existing tools and uses, preventing a real boundary construction from taking place [4] and hampering the imagination of really new and enhanced activities and uses. It was then decided to shift towards the design of "generic" scenarios, i.e. scenarios that could sustain the focus on current and future activities requiring and illustrating the interoperability of services, building on the literature about boundary construction processes in relation to Participatory Design [5][6].
The necessity for every actor to emancipate themselves from their current history [7] led to two de-construction processes. On the CoPs' side, activity theory was used to break CoP activities down into actions and operations; on the technical side, the tools were decomposed into components implementing modular functions, including specific services (multimedia authoring, support of debate, ontology development and management, specific editing, etc.) and common support services (single sign-on, global search, single store, notification, annotation and visual integration). After going through a reconstruction process, a generic scenario becomes the description of a set of activities and actions, supported by specific services and common support services, in order to achieve an intention. The intentions taken into account are those that mainly concern a CoP's life: knowledge reification and document management, debate and decision making, and facilitation and animation of CoP life.

The Instruments

According to Activity Theory, an instrument is not only an artifact – or a tool – that is used by an actor in order to carry out an activity. It is a "mediator" between the actor and her activity [11], composed both of an artifact and of the actor's psychological structure (or "scheme") for using the artifact within a situated activity [11]. As a mediator, the instrument is not neutral regarding the achievement of the activity by the actor. Depending on its use, it is able to change the activity… and the actor herself. In order to carry out the methodology, the PALETTE developers constructed their own instruments, either material (a technological tool) or symbolic
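The reconstructed notion of a generic scenario — an intention, the activities and actions pursuing it, the specific services supporting them, and the common support services shared by all scenarios — can be illustrated with a small data-structure sketch. The field layout below is an assumption for illustration, not PALETTE's actual model; the service names and intentions are taken from the paper's lists.

```python
from dataclasses import dataclass, field
from typing import List

# Sketch of the reconstructed "generic scenario" structure described above.
# COMMON_SUPPORT lists the common support services named in the paper;
# the dataclass fields are an illustrative assumption, not PALETTE's model.
COMMON_SUPPORT = ["single sign-on", "global search", "single store",
                  "notification", "annotation", "visual integration"]

@dataclass
class GenericScenario:
    intention: str                 # the CoP-life intention the scenario serves
    activities: List[str]          # activities/actions from the de-construction
    specific_services: List[str]   # modular functions supporting the activities
    common_support: List[str] = field(
        default_factory=lambda: list(COMMON_SUPPORT))

reification = GenericScenario(
    intention="knowledge reification and document management",
    activities=["produce documents", "retrieve documents", "reuse documents"],
    specific_services=["multimedia authoring", "document structuring"],
)

print(reification.intention)
print(len(reification.common_support))
```

Making the common support services a shared default captures the point of the reconstruction: every generic scenario, whatever its intention, is carried by the same interoperability layer.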
308
L. Esnault et al.
(a model, a grid of analysis). These instruments have been widely discussed and negotiated. As such, they have worked as "boundary objects" between the developers, the Service mediators, the CoP members and the CoP mediators, facilitating the appropriation and implementation of the methodology. Main Achievements of PALETTE Regarding Participatory Design At the end of the PALETTE project, several interesting achievements regarding the Participatory Design domain stand out. There is a common vocabulary, strongly influenced by Participatory Design and even by Actor Network Theory. Terms such as "participatory design", "heterogeneity of the network of actors", and "boundary objects" are commonly used. A common ground has been built and appropriated by all the stakeholders. Everybody systematically refers to the generic scenarios to describe functions of services, specific features of the user interface, the activities in a given CoP, the design of learning resources, or the running of training sessions. Though the notion of generic scenario is still fuzzy, this very fuzziness enables its wide use as the most powerful boundary concept and methodological tool. The "Participatory Design culture" is seen as a distinctive value of the project. CoPs, though still external to the project organization stricto sensu, are fully engaged in participating in the project activities (scenarios, trials, validations, trainings, etc.). They agree with the Participatory Design process, they use the common vocabulary, at least to some extent, they share a good part of the common ground for understanding, and they use the palette of Services. They provide the embryos of the future PALETTE services users' community. The process of "negotiation of usefulness" leads to efficient outcomes: the innovative palette of Services is available, usable and used.
It is ready to help CoPs not only to perform their current activities but also, hopefully, to develop and enhance them and to find ways to innovate in their own practice.
4 Conclusion: Lessons Learnt and Further Developments The main lesson learnt from the three years of implementing Participatory Design in PALETTE is that participation is not given by the simple fact that persons and things are declared to be "in", or part of, the project. Participation has to be constructed during the whole process. Considering the Participatory Design methodology as a boundary object, the boundary construction process lasts until the very last minute. A constant will (and good will) is necessary to permanently associate actors in participative activities and to enroll new actors (the PALETTE Portal, the on-line training modules, new CoPs, new researchers, new instances of the generic scenarios, etc.). Pitfalls are numerous, but learning from the challenges is a constantly rewarding process. Eventually, we were able to build a common language (though the discussion about the use of the word "needs" is still pending). We were able to agree upon common representations and common instruments. We were able to cope with time issues; though it sometimes seemed that Participatory Design was "a waste of time", in the end it proved to be a gain in efficiency.
Creating an Innovative Palette of Services
309
The key success factors in the project were clearly the general good will and open-mindedness of all participants; the role of mediators and scenarios as key actors of the process; and the key role played by a multi-level reflective evaluation process that enabled and sustained the successful achievement despite the pitfalls and provisional disagreements. We tried to document the Participatory Design process as carefully and as thoroughly as possible, so that it can be useful in further projects.
References
1. Esnault, L., Zeiliger, R., Vermeulin, F.: On the Use of Actor-Network Theory for Developing Web Services Dedicated to Communities of Practice. In: Tomadaki, E., Scott, P. (eds.) EC-TEL 2006, First European Conference on Technology Enhanced Learning, Crete, Greece, vol. 213, pp. 298–306. CEUR (2006), http://ftp.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-213/paper42.pdf
2. Abreu de Paula, R.: The Construction of Usefulness: How Users and Context Create Meaning with a Social Networking System. Dissertation (2004), http://www.ics.uci.edu/~depaula/publications/dissertation-depaula-2004.pdf
3. Latour, B.: On Recalling ANT. In: Law, J., Hassard, J. (eds.) Actor Network Theory and After. Blackwell Publishing, Oxford (1999)
4. Zeiliger, R., Esnault, L., Vermeulin, F., Cherchem, N.: Experiencing Pitfalls in the Participatory Design of Social Computing Services. In: Proceedings of the Participatory Design Conference 2008, Bloomington, IN, USA, October 1–4 (2008)
5. Holford, W.D., Ebrahimi, M., Aktouf, O., Simon, L.: Viewing Boundary Objects as Boundary Construction. In: Proceedings of the 41st Hawaii International Conference on System Sciences (2008)
6. Esnault, L., Gillet, D., Rossier-Morel, A.: From Personal to Community Spaces: Interplay between Boundary Construction and Deconstruction. In: How Social is my Personal Learning Environment? Symposium, ED-MEDIA 2008, Vienna, June 30 – July 4 (2008)
7. Hansen, T.H.: Strings of Experiments: Looking at the Design Process as a Set of Socio-Technical Experiments. In: Proceedings of the PDC 2006 Conference, Trento (2006)
8. Daele, A., Henri, F., Charlier, B., Esnault, L.: Participatory Design for Developing Instruments for and with Communities of Practice: a Case Study. In: CHI 2008, Distributed Participatory Design Workshop, Florence, Italy, April 6 (2008)
9. Carroll, J.: Scenario-based design: envisioning work and technology in system development. John Wiley & Sons, New York (1995)
10. Campbell, R.L.: Will the real scenario please stand up? SIGCHI Bulletin 24(2), 6–8 (1992)
11. Béguin, P., Rabardel, P.: Designing for instrument-mediated activity. Scand. J. Inf. Syst. 12(1-2), 173–190 (2001)
NetLearn: Social Network Analysis and Visualizations for Learning
Mohamed Amine Chatti1, Matthias Jarke1, Theresia Devi Indriasari1, and Marcus Specht2
1 Informatik 5 (Information Systems), RWTH Aachen University
{chatti,jarke,indria}@dbis.rwth-aachen.de
2 Open University of the Netherlands, Heerlen
[email protected]
Abstract. The most valuable and innovative knowledge is hard to find, and it lies within distributed communities and networks. Locating the right community or person who can provide us with exactly the knowledge that we need, and who can help us solve exactly the problems that we come upon, can be an effective way to move our learning forward. In this paper, we present the details of NetLearn, a service that acts as a knowledge filter for learning. The primary aim of NetLearn is to leverage social network analysis and visualization techniques to help learners mine communities and locate experts that can populate their personal learning environments.
1
Introduction
One of the core issues in Technology Enhanced Learning (TEL) is the personalization of the learning experience. Learners have a variety of learning styles, which are mirrored in the way they learn. There is a shared belief among TEL researchers that TEL models require a move away from one-size-fits-all models toward a learner-centric model that puts the learner at the centre and gives her control over the learning experience. Moreover, learning and knowledge are fundamentally social in nature, as has been emphasized by many researchers [1,2,3,4]. At the heart of learning and knowledge lie people. This requires a change in focus from technology-push to people-driven models of learning. New social skills become increasingly important for better performance and thus have to be learned and continuously improved. Learn-what, referring to the high-quality learning resources that have to be acquired, has to be supplemented with learn-who, referring to the person or the entire community/network with the required know-how that can help achieve better results. Learn-who also involves the ability to navigate on one's own through a constantly shifting network of assistance. Siemens [5] stresses that the challenge today is not what you know but who you know, and states that knowledge rests in an individual and resides in the collective. The author introduces connectivism as a new learning theory that presents learning as a connection/network-forming process [6]. U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 310–324, 2009. © Springer-Verlag Berlin Heidelberg 2009
NetLearn: Social Network Analysis and Visualizations
311
The personal and social nature of knowledge and learning implies a need for new TEL models that start from the learners and enable them to learn by creating meaningful connections. Furthermore, services that enable learners to widen their personal knowledge network circles to embrace new knowledge nodes become crucial. Social network analysis and visualization methods provide one possible solution that can augment and assist in locating valuable knowledge nodes. In this paper, we discuss the details of NetLearn, a learning service that, based on social network analysis and visualization techniques, supports learners in mining communities and finding experts in specific contexts, thus enabling them to extend their personal knowledge networks with new knowledge nodes. The paper proceeds as follows. In section 2, we discuss the importance of community mining, expertise finding, and social network analysis approaches for learning. In section 3, we introduce some of the principles of social network analysis that are relevant to this work. Section 4 presents a case study of how NetLearn has been used. We follow in section 5 with a detailed description of the NetLearn system. In section 6, we briefly review related work. Finally, we summarize our findings in section 7.
2
Personal Learning Environments
Self-organized learning provides a base for the establishment of a model of learning that goes beyond curriculum- and organisation-centric models, and envisions a new learning model characterized by the convergence of lifelong, informal, and ecological learning within a learner-controlled space. In recent years, this way of learning has been increasingly supported by responsive, open, and personal learning environments, where the learner is in control of her own development and learning. The Personal Learning Environment (PLE) concept translates the principles of self-organized learning into actual practice. PLE is a relatively new term, first introduced in 2004 [7]. van Harmelen [7] describes PLEs as: Systems that help learners take control of and manage their own learning. This includes providing support for learners to
– set their own learning goals
– manage their learning; managing both content and process
– communicate with others in the process of learning and thereby achieve learning goals.
A PLE-driven approach to learning suggests a shift in emphasis from a knowledge-push to a knowledge-pull learning model. In a learning model based on knowledge-push, the information flow is directed by the institution/teacher. In a learning model driven by knowledge-pull, the learner navigates toward knowledge. One concern with knowledge-pull approaches is knowledge overload. Therefore, there is a need for knowledge filters to help learners find quality in the Long Tail [8]. These knowledge filters can be services that support learners in locating valuable knowledge nodes. A distinction that is often cited in the literature is made between
312
M.A. Chatti et al.
explicit and tacit knowledge. Explicit knowledge (or information) is systematic knowledge that is easily codified in formal language and is objective. In contrast, tacit knowledge is hard to formalize, difficult to communicate, and subjective [3]. Tacit knowledge resides in people. Hence, tacit knowledge nodes are people performing in diverse, frequently overlapping social domains, who act together and help each other see connections. In the KM literature, there is wide recognition that only a small fraction of valuable knowledge is explicit and that there is a huge mass of high-quality knowledge embedded in people which is not easily expressible and cannot be recorded in a codified form [9]. Thus, searching for information (an explicit knowledge node) becomes a matter of searching the social network for an expert (a tacit knowledge node) who might provide that information. This situation - in addition to the view of learning as the creation of meaningful connections - implies a need for effective community/network mining and expertise finding systems. Community mining systems attempt to extract useful and reliable information from communities [10]. Expertise finding systems are a type of recommendation system, designed to facilitate finding people with specific knowledge in a certain problem domain [11,12,13]. The process of finding an expert can be viewed as a search through the network of social relationships between individuals [14]. Social network analysis and visualization methods provide one powerful means for mining communities and finding expertise.
3
Social Network Analysis and Visualization
Social networks often represent groups of people and the connections among them. Social Network Analysis (SNA) is the quantitative study of the relationships between individuals or organizations. In SNA, a social network is modeled by a graph G = (V, E), where V is the set of nodes (also known as vertices) representing actors, and E is a set of edges (also known as arcs, links, or ties) representing a certain type of linkage between actors. By quantifying social structures, we can determine the most important nodes in the network. One of the key characteristics of networks is centrality, which relates to the structural position of a node within a network and details the prominence of a node and the nature of its relation to the rest of the network [15]. Three centrality measures are widely used in SNA: degree, closeness, and betweenness centrality. Degree centrality finds the node with the most influence over the network. The degree centrality of a vertex v ∈ V is simply the degree of that vertex, C_D(v) = d_G(v), i.e. the number of incident edges [16]. Closeness centrality focuses on how close an actor is to all the other actors in the network; it is defined as the sum of the inverse distances to all other vertices [16]:
\[ C_C(v) = \sum_{t \in V} \frac{1}{d_G(v,t)}, \]
where d_G(v, t) is the distance between v and t, that is, the number of edges to traverse in their shortest path. Finally, betweenness centrality finds nodes that control the information flow of the network. Betweenness centrality is defined as the sum of the fractions of shortest paths between other actors that an actor sits on:
\[ C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}, \]
where σ_st is the number of shortest paths between vertices s and t, and σ_st(v) is the number of shortest paths between s and t passing through v [16]. Visualization also plays a major role in SNA. By representing a social network visually, interesting connections can be seen, explored and communicated at a glance. Node-link diagrams and matrix-based visualizations are the most common visualization techniques for social networks. A detailed discussion of possible social network visualizations and their application is provided in [17].
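To make the three measures concrete, the sketch below computes them for a small undirected graph given as an adjacency list. This is an illustrative Python sketch (NetLearn itself is Java-based and relies on yFiles); the function names are our own, closeness follows the inverse-distance formula above (summing over t ≠ v, since the distance to itself is zero), and betweenness uses Brandes' accumulation, halved for undirected graphs.

```python
from collections import deque

def bfs_distances(adj, s):
    """Single-source shortest-path distances d_G(s, t) via BFS."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def degree_centrality(adj, v):
    # C_D(v) = d_G(v): the number of incident edges
    return len(adj[v])

def closeness_centrality(adj, v):
    # C_C(v) = sum over t != v of 1 / d_G(v, t)
    dist = bfs_distances(adj, v)
    return sum(1.0 / d for t, d in dist.items() if t != v)

def betweenness_centrality(adj):
    # C_B(v) = sum over s != v != t of sigma_st(v) / sigma_st,
    # accumulated per source (Brandes), halved for undirected graphs.
    cb = {v: 0.0 for v in adj}
    for s in adj:
        stack, preds = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                cb[w] += delta[w]
    return {v: c / 2.0 for v, c in cb.items()}

# A path graph a - b - c: b sits between a and c.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(degree_centrality(adj, "b"))       # 2
print(closeness_centrality(adj, "a"))    # 1/1 + 1/2 = 1.5
print(betweenness_centrality(adj)["b"])  # 1.0
```

On the three-node path, b has the highest value under all three measures, matching the intuition that it controls the flow between a and c.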
4
Case Study
Online bibliographies provide a rich source of relationships using the co-author relationship [14]. The effective visualization and analysis of large complex coauthorship networks is often a means of finding a community or a specific person with the required expertise. In fact, a co-authorship network is an expertise network. The way a co-authorship network is structured can be used to rank authors’ expertise. As Wasserman and Faust [15] point out, expertise is closely related to structural prestige measures and rankings in social network studies. NetLearn applies the prestige idea to co-authorship networks in order to mine communities and locate experts in a given research field. In this paper, we explore a case study in which NetLearn has been used. In this case study, we consider an extendible co-authorship network illustrating the relationships and extent of collaboration among over 1000 TEL researchers. We use the PROLEARN publication database to store and extract the bibliography information. The database contains bibliography entries, such as title, authors, publication year, abstract, and keywords. New bibliography entries can be given via the Plone1 -based PROLEARN Academy publication interface2 . Keywords can either be entered via the interface or are automatically generated using the ALOA Framework, which applies data/text mining algorithms to generate keyword values from text bodies [18]. The next sections describe the actual NetLearn system.
5
NetLearn: Design and Implementation
In the ensuing sections, we will describe NetLearn with an eye on the architectural and implementation details. The system design will be followed by a detailed description of the different modules and their underlying functionalities.
1 http://plone.org/
2 http://www.prolearn-academy.org/Publications
5.1
NetLearn Design
NetLearn is built upon a three-tier architecture. An overview of the NetLearn architecture is provided in Figure 1. The Data Storage Tier is an IBM DB2 database which provides persistent storage for publications and authors. In the Web Tier, servlets and JavaServer Pages (JSP) are used to generate the front-end. The Business Tier encompasses a collection of modules that construct, visualize, cluster, filter, and analyze the co-authorship network in order to mine communities and locate experts around a given topic. These modules are initiated in response to learner queries. The visualization modules use the graph model of the yFiles programming toolkit. yFiles (http://www.yworks.com/en/products_yfiles_about.html) is a Java class library that provides efficient analysis and visualization algorithms for viewing, editing, optimizing, laying out, rendering, and animating network graphs. yFiles also provides implementations of important measures and algorithms used in social network analysis. We differentiate between two types of analysis modules, namely community mining modules and expertise finding modules.
Fig. 1. NetLearn Architecture
Community Mining Modules. Typically, a learner is only aware of her own personal knowledge network. By visualizing, analyzing, clustering, and filtering the larger network, the learner can discover connections to people and information that would otherwise be outside her own network. The two community
mining modules in NetLearn, namely the author mining module and the keyword mining module, help learners explore the large co-authorship network, so that they can find potential communities working on particular topics. These communities are in general bound by shared interests among their members. The author mining module clusters researchers based on co-authorship ties, whereas the keyword mining module clusters the scientific network based on keywords. NetLearn uses a community mining approach based on the active mining model. Active mining focuses on active information gathering and data mining in accordance with the purposes and interests of the users [19]. The basic concept of active mining is to utilize spiral interactions between the user and the computer [20]. Community mining in NetLearn is performed in two steps. The first step is user-centered mining in response to a user query, based on graph mining and network clustering techniques. The second step is a refinement of the mining result according to the user's interaction with the system. Expertise Finding Modules. One of the aims of NetLearn is to match learners looking for expertise with individuals likely to have it. In NetLearn, we consider two approaches for expertise finding in scientific co-authorship networks. The first is locating a person with some specific expertise or knowledge based on keywords and topics of interest (who knows what?). The second is an automation of the small world effect [21], used for finding a path to an expert when he or she is known in advance (who knows who knows what?). The expertise finding modules in NetLearn include the local author module, the keyword community module, the interest community module, and the referral chain module. The local author module enables a learner to interactively explore a graphic representation of an author's egocentric network, that is, the portion of the network around that author. Each ego network consists of the author, the ties to other authors he or she interacts with directly, and the interactions between those authors. The keyword community module enables a learner to search for an expert by specifying a keyword. In general, the centrality measures of an author in a keyword community network are highly correlated with his or her expertise. That is, the highest-degree nodes in a keyword community network are potential experts in the research area around a specific keyword. Moreover, we surmise that if an author has a high number of publications associated with a given keyword, it is often the case that he or she has high expertise in the research field around that keyword. The interest community module helps a learner identify researchers who are likely to have expertise in a specified area. Frequent publishing on a given topic is good evidence of a researcher's interests and areas of expertise. The interest community module differs from the keyword community module in that the former accepts query input from learners on keywords, title, and abstract. Once we know who the potential expert with the required knowledge is, we need to know how to reach this expert. This operation is supported by the referral chain module, which finds a path between two nodes in the network, that is, a series of links that leads from the requester node to an expert node [14].
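The egocentric network used by the local author module can be sketched as a simple subgraph extraction: keep the ego, its direct ties, and only those ties among the neighbours. This Python sketch (not the actual NetLearn/yFiles code) uses a hypothetical adjacency list of co-authorship ties.

```python
def ego_network(adj, ego):
    """Return the ego's neighbourhood subgraph: the ego, the authors it is
    directly tied to, and the ties among those authors (including the ego)."""
    nodes = {ego} | set(adj[ego])
    return {v: sorted(w for w in adj[v] if w in nodes) for v in sorted(nodes)}

# Hypothetical co-authorship ties: 'a' has co-authored with 'b' and 'c';
# 'b' also has an outside tie to 'd', which is excluded from a's ego network.
adj = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b"],
    "d": ["b"],
}
print(ego_network(adj, "a"))
# {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b']}
```

Note that the tie b–d is dropped: the ego network contains only interactions between the ego's direct contacts.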
5.2
NetLearn Implementation
The following sections illustrate NetLearn in action. In order to demonstrate how the system works, we show the functionalities of the different modules using actual examples. Author Mining Module. The author mining module provides a graph visualization of the global co-authorship network. In this graph, each node represents an author, and each edge models the co-authorship of a published paper. An example of a global visualization of the co-authorship network is shown in Figure 2. The learner can browse the graph interactively by zooming in/out, switching between the different layouts (i.e. circular, hierarchic, organic, and orthogonal layouts), and setting an edge betweenness clustering threshold. The author mining module implements graph clustering based on edge betweenness centrality. The algorithm is iterative: at each step it computes the edge betweenness centrality and removes the edge with the highest betweenness centrality from the graph when it is above the given threshold. The algorithm terminates if there are no more edges to remove. Figure 2 illustrates the result of clustering the initial graph with an edge betweenness centrality threshold set to 170. The figure clearly shows that the co-authorship network is a reflection of the Long Tail phenomenon [8]. In fact, the majority of nodes are each connected to just a handful of neighbors, but there are a few hub nodes that have a disproportionately large number of neighbors. This is a common feature of many known complex networks, such as Web pages, forums, and open source software development networks [13]. The author mining module computes centrality statistics (i.e. degree, closeness, and betweenness centrality), as discussed in Section 3, for all nodes in the result graph. The module also enables the learner to select a node in the graph to see the egocentric network of the researcher represented by that node. This visualization is suitable for finding communities that are built around a researcher.
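The iterative edge-removal clustering described above is essentially a thresholded Girvan–Newman procedure. The following Python sketch (our own simplified reimplementation, not the yFiles code NetLearn uses) computes edge betweenness with Brandes' accumulation, repeatedly deletes the highest-betweenness edge while it exceeds the threshold, and returns the connected components as clusters.

```python
from collections import deque

def edge_betweenness(adj):
    """Betweenness of every edge (Brandes' accumulation, halved because
    the co-authorship graph is undirected)."""
    eb = {}
    for s in adj:
        stack, preds = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1 + delta[w])
                e = frozenset((v, w))
                eb[e] = eb.get(e, 0.0) + c
                delta[v] += c
    return {e: c / 2.0 for e, c in eb.items()}

def components(adj):
    """Connected components (the resulting clusters) via BFS."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, queue = set(), deque([s])
        while queue:
            v = queue.popleft()
            if v in comp:
                continue
            comp.add(v)
            queue.extend(adj[v])
        seen |= comp
        comps.append(comp)
    return comps

def betweenness_clustering(adj, threshold):
    """Delete the highest-betweenness edge while it exceeds the threshold,
    then return the clusters."""
    adj = {v: list(ws) for v, ws in adj.items()}  # work on a copy
    while any(adj.values()):
        eb = edge_betweenness(adj)
        edge, score = max(eb.items(), key=lambda kv: kv[1])
        if score <= threshold:
            break
        u, w = tuple(edge)
        adj[u].remove(w)
        adj[w].remove(u)
    return components(adj)

# Two triangles joined by a single bridge (3-4); the bridge carries all
# nine cross-pair shortest paths, so its betweenness (9) exceeds the threshold.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4],
       4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
clusters = betweenness_clustering(adj, threshold=5)
print(sorted(sorted(c) for c in clusters))  # [[1, 2, 3], [4, 5, 6]]
```

After the bridge is removed, every remaining edge has betweenness 1, which is below the threshold, so the loop stops with the two triangles as clusters.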
The learner can also filter the co-authorship network graph by specifying a publication year. Keyword Mining Module. The aim of the keyword mining module is to mine communities around specific keywords. This module provides an interactive graph visualization of keyword communities in the co-authorship network. A keyword community is a cluster that is densely connected by the same keyword. In a keyword mining graph, each node represents an author, and each edge models a shared keyword. Figure 3 shows network clusters whose edges are labeled by the keywords "E-Learning", "knowledge management", "social software", and "Web 2.0", highlighted by different colors. The learner can then navigate through the keyword community that he or she is interested in, and browse that community's publication list. Local Author Module. The local author graph models the ego-centric network of an author. Figure 4 depicts the local author graph of M. Jarke.
Fig. 2. Author Mining Clustered Graph
Fig. 3. Keyword Mining Graph
The size of each node in the graph represents the number of publications co-authored by the author modeled by that node. It is defined by a weight as follows:
\[ \mathrm{weight}(a_i) = \frac{|p_i|}{|p_G|}, \]
where a_i is the author whose node size is being calculated, p_i represents the publications of a_i, and p_G represents all distinct publications by authors in the graph.
The thickness of the edge between two authors represents the number of joint papers, which can also be an indicator of the strength of the relationship between the two authors. The tie strength between two authors is calculated by a weight as follows:
\[ \mathrm{weight}(a_i \to a_j) = \frac{|p_i \cap p_j|}{|p_G|}. \]
When a particular edge is selected from the graph, a list of publications co-authored by the two authors at the ends of the edge is shown. Note that these are common features of all graphs produced by NetLearn.
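Assuming the bibliography is available as a list of author sets (one per distinct publication), both weights can be computed in a single pass; the function and the sample data below are our own illustration, not NetLearn's actual database queries.

```python
def node_and_tie_weights(publications):
    """publications: list of author-name sets, one per distinct paper.
    Returns per-author node weights |p_i|/|p_G| and per-pair tie weights
    |p_i intersect p_j|/|p_G|."""
    total = len(publications)  # |p_G|: all distinct publications
    node, tie = {}, {}
    for authors in publications:
        for a in authors:
            node[a] = node.get(a, 0) + 1
        for a in authors:
            for b in authors:
                if a < b:  # count each unordered author pair once
                    tie[(a, b)] = tie.get((a, b), 0) + 1
    node = {a: n / total for a, n in node.items()}
    tie = {p: n / total for p, n in tie.items()}
    return node, tie

# Hypothetical bibliography of four papers.
pubs = [{"A", "B"}, {"A", "C"}, {"A", "B"}, {"B", "C"}]
node, tie = node_and_tie_weights(pubs)
print(node["A"])        # 3/4 = 0.75
print(tie[("A", "B")])  # 2/4 = 0.5
```

Here author A appears in three of the four papers (node weight 0.75), and A and B have two joint papers (tie weight 0.5), which would render as the thickest edge in the local author graph.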
Fig. 4. Local Author Graph
Keyword Community Module. The keyword community module enables expertise finding based on keywords. It supports locating not only researchers working on a specific topic, but also researchers working on topics closely related to that topic. The module consists of four components: (a) keyword graph view, (b) keyword chart view, (c) keyword community view, and (d) keyword table view. Suppose a query asks to find an expert on "knowledge management". The first component, the keyword graph view, provides a graph visualization of the keywords used in conjunction with the keyword "knowledge management". Figure 5 shows that "knowledge management" is associated with 15 other related keywords, such as "learning management", "Web 2.0", and "communities". The thickness of the edge between two keywords represents the number of publications described by both keywords. For instance, the keyword combination "knowledge management - learning management" has been used more frequently than other
combinations, which shows a close relationship between the two research topics. Clicking on the edge between two keywords in the graph lists the publications tagged with both keywords. The second component, the keyword chart view, shows the trend of a keyword over recent years. The third component, the keyword community view, enables the learner to explore the communities around "knowledge management" and related keywords. Figure 6 shows a graph representation of sub-networks of authors who have co-authored papers using those keywords. The graph has two types of nodes, namely author nodes and keyword nodes. Selecting an author node shows the keywords used by the author as well as his or her co-authors, and selecting a keyword node highlights the authors using that keyword. Finally, the fourth component, the keyword table view, lists the publications around the specified keyword in a detailed tabular form.
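The edge weights of a keyword graph like the one in Figure 5 can be derived by counting keyword co-occurrences over the publication metadata. This sketch assumes each publication carries a keyword set (as produced by the ALOA framework or manual entry); the helper name and sample keywords are illustrative.

```python
from itertools import combinations
from collections import Counter

def keyword_cooccurrence(publication_keywords):
    """Edge weights for the keyword graph: the number of publications
    described by each pair of keywords (rendered as edge thickness)."""
    edges = Counter()
    for keywords in publication_keywords:
        # sort so each unordered keyword pair maps to one canonical edge
        for pair in combinations(sorted(keywords), 2):
            edges[pair] += 1
    return edges

# Hypothetical keyword sets of four publications.
pubs = [
    {"knowledge management", "learning management"},
    {"knowledge management", "learning management", "Web 2.0"},
    {"knowledge management", "communities"},
    {"Web 2.0", "communities"},
]
edges = keyword_cooccurrence(pubs)
print(edges[("knowledge management", "learning management")])  # 2
```

The "knowledge management - learning management" edge receives the highest count, mirroring the thick edge discussed for Figure 5.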
Fig. 5. Keyword Graph
Interest Community Module. The interest community module enables the learner to identify researchers involved in a particular research topic, find publications relevant to that topic, group researchers together based on mutual research interests, and rank them based on the different structural centrality measures. The module provides the possibility to query the researchers' network based on publication title, abstract, keywords, or any combination of these.
Fig. 6. Keyword Community Graph
Referral Chain Module. The referral chain module finds the referral chain between two given researchers. In Figure 7, the chain from M. Jarke (pink node) to W. Nejdl (green node) is highlighted. The chain goes through M. Kravcik. The referral chain module uses the shortest path algorithm from yFiles, which computes the shortest distance from a given node to all other nodes in the graph.
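A referral chain of this kind can be recovered with a plain breadth-first search over the co-authorship adjacency list. The Python sketch below is ours, not the yFiles algorithm NetLearn uses; the adjacency data is hypothetical, though the names echo the Jarke-Kravcik-Nejdl chain described above.

```python
from collections import deque

def referral_chain(adj, requester, expert):
    """Shortest chain of co-authorship links from requester to expert (BFS)."""
    parent = {requester: None}
    queue = deque([requester])
    while queue:
        v = queue.popleft()
        if v == expert:
            chain = []
            while v is not None:  # walk the parent pointers back
                chain.append(v)
                v = parent[v]
            return chain[::-1]
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    return None  # no chain exists

# Hypothetical co-authorship ties.
adj = {
    "Jarke":   ["Kravcik", "Chatti"],
    "Kravcik": ["Jarke", "Nejdl"],
    "Chatti":  ["Jarke"],
    "Nejdl":   ["Kravcik"],
}
print(referral_chain(adj, "Jarke", "Nejdl"))  # ['Jarke', 'Kravcik', 'Nejdl']
```

Because BFS explores the graph level by level, the first time the expert is dequeued the reconstructed chain is guaranteed to be a shortest one.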
Fig. 7. Referral Chain Graph
6
Related Work
In this section, we review the rich CSCW and KM literature on searching for people in social networks. We briefly touch upon the various solutions aimed at community mining and expertise finding, and compare them to our proposed solution.
6.1
Community Mining
Several systems for community mining have been proposed. Flink [22], for instance, is a system for the extraction, aggregation and visualization of networks of Semantic Web researchers. It employs semantic technology for reasoning with personal information extracted from a number of electronic information sources, including web pages, emails, publication archives and FOAF profiles. Flink provides a graph visualization of the researchers' network and the ego-networks of the different researchers, as well as some basic statistics of the social network. The system, however, does not use the network to facilitate finding a potential expert in a specified domain. Ichise et al. [20] propose a community mining system that uses bibliography data to find communities of researchers. The system provides interactive visualization of both local and global research communities. In related work, Ichise et al. [10] note that their first system does not identify the research topics of the communities obtained, and propose a new version of the system, which enables the discovery of research communities with identified topics. This system, however, only uses words in the title as keywords. NetLearn, by contrast, makes use of the ALOA framework [18] to generate keywords from the whole paper text. Moreover, Ichise et al.'s system does not support finding people with expertise and the paths to reach them. Chan et al. [23] also provide a system for the visualization and clustering of author social networks. Compared to Chan et al.'s system, NetLearn not only forms clusters of subnetworks for all authors, but also mines the co-authorship network for experts in specified research areas, and familiarizes learners with the social network to which an expert belongs. 6.2
Expertise Finding
Many systems attempt to use social networks as a mechanism to find potential experts. Three related systems in particular, Expertise Recommender [24], ReferralWeb [14,25], and NetExpert [26], are discussed. Expertise Recommender (ER) [24] is designed to assist with expertise location in an organization by recommending sets of potential answerers for queries. ER makes recommendations in two steps. First, it finds a set of individuals who are likely to have the necessary expertise. These potential recommendations are then matched to the person requesting expertise using a social network. In ER, users are never explicitly shown a visualization of the social network. The ER matching approach is thus different from that taken by NetLearn, which explicitly shows a social network visualization of the filtering result. ReferralWeb [14,25] uses social networks to assist in locating experts. In ReferralWeb, co-authoring and co-citation relationships are mined to create a social network, and the resulting visualization is used to find a possible expert. ReferralWeb can provide a chain of personal referrals from the searcher to the expert. The ReferralWeb approach, however, lacks mechanisms for associating authors with topics of expertise. Moreover, it is not suitable for determining the position of an author within a global or local network of researchers.
322
M.A. Chatti et al.
NetExpert [26] also addresses the problem of finding people with expertise in communities and the paths to reach them. The system allows users to search for an expert by name and expertise area, based on knowledge profiles. NetExpert, however, provides no means of discovering potential experts in the whole network through clustering, filtering, and interactive visualization. In summary, NetLearn offers several advances over previously existing community mining and expertise finding systems. Table 1 provides a detailed feature comparison of the studied systems.

Table 1. System comparison of Flink [22], Chan et al. [23], Ichise et al. [20,10], ER [24], ReferralWeb [14], NetExpert [26], and NetLearn along community mining features (author clustering, keyword clustering, network filtering, interactive visualization, centrality measures) and expertise finding features (search by keyword, search by name, search by research interest, ego network, referral chain)

7 Conclusions
In this paper, we presented the details of NetLearn, an interactive learning service that leverages social network analysis and visualization techniques to aid the search for information and expertise in complex social networks, thus enabling learners to enhance their personal knowledge networks with valuable knowledge nodes. We demonstrated how the proposed system works on a co-authorship network of TEL researchers. We also provided an overview of
NetLearn: Social Network Analysis and Visualizations
323
prior systems that support community mining and expertise finding, and compared them to NetLearn. We showed that NetLearn has a number of important functionalities and features that set it apart from such systems. The results of this project were used at the PROLEARN final review, mainly to track and analyze the evolution of the PROLEARN network of researchers over the last four years of the project. All of these efforts have been made available on the NetLearn project homepage4. We welcome feedback from colleagues and users on their experiences with the system and on new ways to harness social network analysis and visualization methods to help learners locate communities of interest and experts.
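The combination of topic filtering and network analysis summarized above can be sketched briefly. The snippet below is not NetLearn code; it is a hypothetical, stdlib-only illustration of one plausible reading of the approach: candidate experts are filtered by keyword and then ranked by degree centrality in the co-authorship graph.

```python
def degree_centrality(graph):
    """Fraction of the other authors each author is directly linked to.
    graph: dict mapping author -> set of co-authors (assumes >= 2 authors,
    with every author present as a key)."""
    n = len(graph)
    return {a: len(nbrs) / (n - 1) for a, nbrs in graph.items()}

def rank_experts(graph, expertise, topic):
    """Authors whose papers mention `topic`, best-connected first."""
    central = degree_centrality(graph)
    hits = [a for a in graph if topic in expertise.get(a, set())]
    return sorted(hits, key=lambda a: central[a], reverse=True)
```

For example, with graph {'a': {'b', 'c'}, 'b': {'a'}, 'c': {'a'}} and expertise {'a': {'tel'}, 'b': {'tel'}, 'c': {'ir'}}, rank_experts(graph, expertise, 'tel') returns ['a', 'b'], placing the better-connected author first.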
References

1. Polanyi, M.: The Tacit Dimension. Anchor Books, New York (1967); based on the 1962 Terry lectures
2. Lave, J., Wenger, E.: Situated Learning: Legitimate Peripheral Participation. Cambridge University Press, New York (1991)
3. Nonaka, I., Takeuchi, H.: The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press, New York (1995)
4. Wenger, E.: Communities of Practice: Learning, Meaning and Identity. Cambridge University Press, Cambridge (1998)
5. Siemens, G.: Knowing Knowledge. Lulu.com (2006)
6. Siemens, G.: Connectivism: Learning as network-creation (2005)
7. van Harmelen, M.: Personal learning environments (2006)
8. Anderson, C.: The Long Tail: Why the Future of Business is Selling Less of More. Hyperion (2006)
9. Chatti, M.A., Jarke, M., Frosch-Wilke, D.: The future of e-learning: a shift to knowledge networking and social software. International Journal of Knowledge and Learning 3(4/5), 404–420 (2007)
10. Ichise, R., Takeda, H., Muraki, T.: Research community mining with topic identification. In: 10th International Conference on Information Visualisation, IV 2006, London, UK, July 5–7, pp. 276–281. IEEE Computer Society, Los Alamitos (2006)
11. McDonald, D.W.: Recommending collaboration with social networks: a comparative evaluation. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Ft. Lauderdale, Florida, USA, pp. 593–600 (2003)
12. Zhang, J., Ackerman, M.S.: Searching for expertise in social networks: a simulation of potential strategies. In: Proceedings of the 2005 International ACM SIGGROUP Conference on Supporting Group Work, pp. 71–80 (2005)
13. Zhang, J., Ackerman, M.S., Adamic, L.A.: Expertise networks in online communities: structure and algorithms. In: Williamson, C.L., Zurko, M.E., Patel-Schneider, P.F., Shenoy, P.J. (eds.) WWW, pp. 221–230. ACM, New York (2007)
14. Kautz, H.A., Selman, B., Shah, M.A.: The hidden web. AI Magazine 18(2), 27–36 (1997)
15. Wasserman, S., Faust, K.: Social Network Analysis: Methods and Applications. Cambridge University Press, Cambridge (1994)

4 http://eiche.informatik.rwth-aachen.de:3333/NetLearn/
16. Brandes, U., Kenis, P., Wagner, D.: Communicating centrality in policy network drawings. IEEE Transactions on Visualization and Computer Graphics 9(2), 241–253 (2003)
17. Bertini, E.: Social networks visualization: A brief survey (2005)
18. Chatti, M.A., Muhammad, N.F., Jarke, M.: ALOA: A web services driven framework for automatic learning object annotation. In: Dillenbourg, P., Specht, M. (eds.) EC-TEL 2008. LNCS, vol. 5192, pp. 86–91. Springer, Heidelberg (2008)
19. Motoda, H. (ed.): Active Mining: New Directions of Data Mining. IOS Press, Amsterdam (2002)
20. Ichise, R., Takeda, H., Ueyama, K.: Community mining tool using bibliography data. In: Proceedings of the 9th International Conference on Information Visualisation, IV 2005, London, UK, July 6–8, pp. 953–958. IEEE Computer Society, Los Alamitos (2005)
21. Travers, J., Milgram, S.: An experimental study of the small world problem. Sociometry 32, 425–443 (1969)
22. Mika, P.: Flink: Semantic Web technology for the extraction and analysis of social networks. Journal of Web Semantics 3(2–3), 211–223 (2005)
23. Chan, S., Pon, R.K., Cardenas, A.F.: Visualization and clustering of author social networks. In: 2006 Distributed Multimedia Systems Conference, Grand Canyon, Arizona, August 30–September 1, pp. 174–180 (2006)
24. McDonald, D.W., Ackerman, M.S.: Expertise recommender: a flexible recommendation system and architecture. In: CSCW, pp. 231–240 (2000)
25. Kautz, H.A., Selman, B., Shah, M.A.: ReferralWeb: Combining social networks and collaborative filtering. Commun. ACM 40(3), 63–65 (1997)
26. Sangüesa, R., Pujol, J.: NetExpert: A multiagent system for expertise location. In: Proceedings of the Workshop on Organizational Memories and Knowledge Management, 17th International Joint Conference on Artificial Intelligence, IJCAI 2001, Seattle, pp. 85–93 (2001)
Bridging Formal and Informal Learning – A Case Study on Students’ Perceptions of the Use of Social Networking Tools Margarida Lucas and António Moreira Department of Didactics and Educational Technology University of Aveiro, Portugal {mlucas,moreira}@ua.pt
Abstract. Social networking tools have been enthusiastically heralded as a means to support different types of learning and innovative pedagogical practices. They have also been recognized as potential tools to promote informal learning. In this paper we describe work carried out using the synergy of social web tools, learning models and innovative pedagogical practices across a Masters Degree Course. Findings suggest that the use of these tools to distribute an open and flexible learning environment fosters informal interactions, and that such interactions are perceived by students to have a significant impact on their formal learning outcomes.

Keywords: Social Web, informal learning, distributed cognition, connectivism, case study.
1 Introduction

Although it has existed ever since individuals started to organize and communicate with each other, informal learning has gained considerable relevance in the knowledge-based society and in lifelong learning. The growing awareness of the importance of a personalized, learner-centred approach has led policy makers and educational agents to look at ways of harnessing the benefits of informal learning. Recent reports and guidelines [1][2][3][4] recommend the adoption and integration of communication technologies (CT) as a means to offer flexible, comprehensive and tailored learning opportunities to all individuals at all stages of their lives. As a result, the open and distance learning paradigm has been increasingly recognized as a requisite for all educational systems that wish to ensure that their students acquire and develop lifelong learning skills. In the past few years, the evolution of Internet applications, now called the social web or web 2.0, has transformed the web from a place for telling into a place for talking, where the emphasis is on sharing, participating and collaborating. The web has become a social platform where we can interact, experiment, create and learn. In it, we are all ‘prosumers’, managers and knowledge builders; we share meanings, [re]build knowledge and [re]define ways of working and learning.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 325–337, 2009. © Springer-Verlag Berlin Heidelberg 2009
This dynamic platform, comprising the integration of several tools, applications and services, has been used as a suitable alternative for supporting different innovative pedagogical practices and different types of learning. From an informal perspective, we can describe it as a ‘land of opportunities’ for exploring and harnessing informal learning, owing to the interaction and connections it enables, the non-linearity on which it is based, the multiple paths it affords and the learner empowerment it provides. The use of social web tools, such as wikis, blogs, social bookmarking sites, etc., to distribute learning environments is known to foster and promote the development of learning communities or learning networks, in which learning can happen unexpectedly as a result of the connections and interactions of their members [5][6][7][8]. Thus, informal learning can become a product of social knowledge through distributed, yet context-situated and highly connected learning sustained by a collective practice. But how can informal interactions be incorporated into formal learning contexts without becoming formal as well? Can the social Web and the social tools it provides be explored to foster informal learning? What are the benefits brought by such tools to formal learning contexts? And what are students’ perceptions of the informal learning opportunities that derive from the use of these tools? Although a range of studies has explored the use of social networking tools and participation in social networking environments [9][10][11][12], as well as the importance and processes of informal learning [13][14][15][16], evidence of such learning as a complement to formal education is scarce, and research exploring students’ perceptions of the use of these tools for learning within formal instructional contexts does not abound.
The present work is an attempt to present data on students’ experiences of using social networking tools to support learning in both formal and informal domains. The study presented is by no means comprehensive due to its limited duration and scope, but it provides data on how the use of these technologies impacted learning outcomes and on students’ perceptions of their use. The case study was conducted in the Multimedia and Cognitive Architectures (MAC) course, which is one curricular component of the Masters Degree Course on Multimedia in Education (MMEdu) of the University of Aveiro. We will present the theoretical scope of the work, followed by a description of the study and the methodology applied. After presenting the findings, we put forward some thoughts that stem both from the researchers’ reflections and from the students’ feedback.
2 Informal Learning and the Social Web

Despite a lack of agreement on its definition and on what exactly distinguishes formal from informal learning, the latter, in the various forms it can assume, be it self-directed, incidental, intentional, non-intentional or social, is perceived today as a fundamental element in the life of all individuals. It is typically described as being “undertaken on our own, either individually or collectively, without either externally imposed criteria or the presence of an institutionally authorized instructor” [17]. Thus, whereas formal learning can be characterized as “typically institutionally sponsored, classroom-based, and highly structured”, informal learning
is “not typically classroom based or highly structured, and control of learning rests primarily in the hands of the learner” [18], with obvious impacts on the evaluation of that same learning [19][20]. Informal learning may be intentional, for example through intensive, significant and deliberate learning ‘projects’ as Tough puts it [21], or it may be accidental: acquiring information through conversations, reading or watching TV, observing the world, or even experiencing an accident or embarrassing situation. Livingstone [17] established a clear distinction between explicit informal learning and tacit informal learning, which is incorporated into other social or ad hoc activities. Both forms of learning result in the acquisition of new knowledge or skills; however, only the explicit informal learning project is motivated by some immediate problem or need, as in Tough’s definition of informal learning. Our assumption is that informal learning is a vital and continuous process, through which people gain skills, attitudes and knowledge derived from their daily activities and from the multiple contexts they experience. It is not directed or controlled by any formal institution or central body other than the individual alone; it is driven solely by individual activity involving the pursuit of understanding without an externally imposed curriculum. Learning control and management lie in the hands of the learners themselves and not in the hands of a teacher, a tutor or any of the structures one finds in formal education [17][22][23][24]. Nevertheless, the fact that it is not ‘highly structured’ does not mean informal learning has no structure at all. As Downes [25] points out, informal learning has a structure, but a “different kind of structure.
(…) one that is not dictatorial, not organized or managed by an organizer and not rule-based.” Instead, this structure is open, decentralized, distributed, dynamic, democratic and, above all, connected – the kind of properties that can only be found in “networks, as opposed to hierarchies.” The social Web, Web 2.0 or second generation of the World Wide Web has become a network based on a philosophy of collaboration, social interaction and participation. The philosophy underlying the Web carries us to a boundless space where we can connect with whatever or whomever we want to see, hear or know, but this would not be possible without the technology to support it. Social networking tools, such as wikis, blogs, social networking sites and social bookmarking sites, among many others, have two specific features in common: the personal control they give users and the interaction they afford. Social networking tools allow the exploration of different learning paths, and learning through exploring, wandering and finding one’s way. They allow students to make connections, to make individual choices and to define their own knowledge areas. As a result, knowledge building can be understood as a process of ‘coming to know’ based on constant sharing, negotiation and readjustment. Interaction thus becomes a learning instrument, specifically in the realm of informal learning. The power and ability to choose both the tools we want to use and the people and contents we want to interact with are principles opposed to those found in structured models. These principles are consistent with connectivism, which draws on many learning theories, social structures and technology to create a theory of learning suited to the needs of the twenty-first century. The synergy of works by McLuhan, Vygotsky, Gibson, Lave and Wenger, Papert, Bandura, Hutchins and other
authors in the fields of education, neurology, mathematics, sociology and physics has led Siemens to present connectivism as the “new theory for learning based on network structures, complex changing environments and distributed cognition” [26]. The notion of distributed cognition within connectivism is specifically relevant to our work, for it opposes the notion of cognitions being “possessed and residing in the heads of individuals” [27]. For distributed cognition, the tools, artefacts and social interactions residing outside people’s heads are not mere “sources of stimulation and guidance, but are actually vehicles of thought. (…) it is not just the ‘person-solo’ who learns, but the ‘person-plus’, the whole system of interrelated factors” [28]. In this way, when we talk about the learning that occurs within a networked structure, we talk about cognitions that are distributed across the entities that comprise the network(s): people, artefacts, tools or contexts. Siemens [29] defines a network as free, non-sequential but organized connections between entities that create an integrated whole. The power of networks rests in their ability to expand, grow, [re]act and [re]adapt; they grow in diversity and value through the connections established with other nodes – entities, people, contents or other networks. The network structure is dynamic, distributed and decentralized, with no need for a central entity to control it; each individual controls his or her own network connections, and learning happens when we connect, when we are able to build, organize, expand and recognize patterns that allow us to interpret and understand the knowledge and cognitions found along the way or left by other nodes. Siemens also stresses that the challenge today is not what one knows but who one knows, for other people’s experiences become a surrogate for knowledge. The more people we know and connect to, and the more diverse they are, the more we can gain from their diversity and knowledge.
Context plays an important role as well, for it “brings as much to a space of knowledge connection/exchange as do the parties involved in the exchange” [26]. The context of the study we are about to describe is a formal one. The purpose of integrating social networking tools to open up and distribute the learning environment was not to ‘informalize’ it, if that were even possible. Our purpose was to improve formal learning outcomes by building upon the informal interactions and processes that the use of such tools affords.
3 Background of the Study and Methodology

The course on MAC is part of the Masters Degree on Multimedia in Education (MMEdu), offered to students under a b-learning regime at the University of Aveiro. It comprised two face-to-face (f2f) sessions, one at the beginning of the course and another at the end, and four weeks of distance work. In this course, students, mainly in-service teachers, were expected to: i) deepen their knowledge of cognitive systems; ii) reflect upon learning theories related to the process of knowledge building; iii) explore the potential of social networking tools to augment interaction; iv) develop a plan for the development of interaction (PDI) in the form of an in-class activity, based on social networking tools, to increase interaction in their pedagogical practices; v) implement the planned activity in a classroom context; and vi) reflect upon the results of the implemented activity.
Along with the aforementioned objectives, the course aimed at developing skills and competences related to students’ professional activities: i) integrating CT into teaching practices; ii) promoting and exploring interaction practices when planning pedagogical activities, curricular or non-curricular; iii) harnessing the informal learning outcomes that derive from the use of CT or from participation in such activities; iv) developing collaborative work; and v) developing research, management and information organization skills. Having embraced the idea that learning is about social participation and knowledge building, and not just about delivery and acquisition, the course teachers adopted a pedagogy in which they served as managers and guides of the learning environment. The course organization was grounded in connectivist learning principles and therefore focused on the inclusion of technology to distribute the learning environment and everybody’s cognition and knowledge. The course was distributed across two blogs, Slideshare, Wikispaces and Ma.gnolia. The institutional e-learning platform, Blackboard, was mainly used for administrative issues related to the course. Table 1 shows the tools used to support MAC and the purposes of their use.

Table 1. Tools chosen to distribute the course learning environment and their uses

- Course Blog: to share content discussed in the 1st f2f session; to publish articles/links/teasers; to support student, teacher and content interaction; to share personal reflections, specific information and questions; to upload the management scale of the blog bestofpdi.
- BestofPdi Blog: to discuss issues related to the project works developed (PDI) and concerns related to technology enhanced learning.
- Group Blogs: to support intra/inter-group and teacher interaction; to publish videos, articles and links related to the PDI.
- Slideshare: to publish content presented at the 1st f2f session.
- Ma.gnolia: to share bookmarks useful to the community.
- Wiki: to write the final reports and reflections collaboratively.
- Blackboard: to provide information about the course organization, assessment and bibliography; to direct students to the alternative learning platform; to inquire about the technological tools being used in the development of the PDI.
Before the course started, teachers launched the first challenge in the course blog (www.mundomac.blogs.ca.ua.pt), related to the bibliography available in Blackboard for reading. Blackboard was used to make the course guidelines and general bibliography available and as a router to the aforementioned blog, which became the main learning platform for MAC. After this moment, the institutional platform Blackboard was only used to deal with administrative issues. During the first f2f session, teachers and students engaged in discussing the principles underlying architectures and cognitive systems, which were already being
discussed in the course blog. Also in this session, working groups were created and the schedule of the different four-week tasks was presented. Students started brainstorming possible interaction development practices to be planned and implemented later on. After this first f2f session, and besides participating actively in the discussions and tasks launched in the course blog, students, divided into 10 groups (of 5-6 members each), started developing their projects, working collaboratively and sharing proposals and ideas in order to plan and prepare the in-classroom activity that used social networking tools to promote interaction among their students (first and second course weeks). During the third week of the course, and on a daily rotating basis, each group was also responsible for the administration and facilitation of the blog http://bestofpdi.blogs.ca.ua.pt, in which they presented ideas for different interaction activities, discussed issues related to the project works being developed and shared concerns related to technology enhanced learning and their teaching practices. Also during this week, students started to implement the planned activities in their schools with their own students. Conclusions and results regarding the implementation of the activities planned during the third and fourth weeks of MAC (the majority of which extended beyond the time span of the course) suggest that they were enthusiastically embraced by both teachers and students and that they contributed to high levels of motivation and engagement.
Activities included, for instance: i) the creation of an online “radio station” to support the teaching of History topics; ii) the use of online games to promote problem solving and critical thinking skills; iii) an online journal to report on a contest of popular sayings aimed at developing oral and language skills in primary school pupils; iv) the use of Second Life to discuss Biology topics; and v) a blog to draw the school community’s attention to a disease affecting pine trees in the school area, among others. All activities enjoyed the participation and interaction of students and teachers from different schools. The results merit a deeper analysis and will be the object of a future communication. Links to, and detailed information about, the activities and projects developed can be found in the final reports available at http://wikimmed.blogs.ca.ua.pt. For the projects developed in MAC and the implementation of the PDI, students were free to explore and use any other tools that best served their purposes. From the inquiry conducted, we know that students used a wider range of tools, which we present in Table 2.
The questionnaire, which used closed questions and an open one, sought to: characterize the students’ professional situation; to acknowledge students’ motivation for the enrolment in the course; to identify and quantify the use of different tools used during MAC and in the development of the project; to identify the skills, attitudes and competences developed by students and to acknowledge the students’ perceptions about the course itself and the importance
Table 2. Tools used and purposes assigned by students in the development and implementation of the PDI

- Email: to exchange information, files and bibliography among group members.
- MSN, GoogleTalk, Skype: to communicate synchronously; to share and exchange ideas and files; to solve problems and find solutions related to the PDI in a fast and practical way; to socialize informally; to give feedback to and support the students participating in the PDI.
- Blog: to publish group projects and results.
- GoogleDocs, Buzzword, Wikispaces, GoogleGroups: to write collaboratively and share proposals; to host interaction activities for collaborative writing among PDI students.
- L(C)MS: to implement and host the PDI; to support and provide feedback to the students participating in the PDI.
- Youtube: to publish videos produced during the PDI; to search for videos related to the topics dealt with; to select ‘teasers’ to be used in the PDI.
- Second Life: to host a virtual meeting.
- PHP Forum: to share ideas, experiences and knowledge about online games.
- Audition, Podomatic: to create podcasts.
- Ma.gnolia, Fa.voritos, Deli.icio.us: to share bookmarks.
- Netvibes, iGoogle: to create a personal learning environment and aggregate RSS feeds.
- Zohocreater: to produce self- and hetero-evaluation documents.
- Slideshare: to share presentations.
- Mindmap: to create concept maps.
afforded to informal learning in the context of formal education. The closed questions were treated statistically and the open one was subjected to content analysis. We chose to submit the questionnaire four months after the conclusion of the course because we wanted students to have the opportunity to reflect upon and internalize attitudes, practices and skills that they may not have been aware of by the time MAC ended, given how demanding and time-consuming the proposed project works were. The data collected and presented in this paper refer specifically to the students’ perceptions of the course organization and of the use of social networking tools as a means to promote and foster informal learning.
4 Findings

Of the 56 students enrolled in MAC, 42 (75%) answered the questionnaire; 39 were teachers and only one was not working at the time the questionnaire was submitted. By the time MAC started, all students felt confident in using social networking tools and used them on a regular basis: 50% of the students stated they used social networking tools very often, 24% always, another 24% sometimes, and only 2% reported rare use. The choice of the tools referred to in Table 1 was classified as good by 52% of the students and as very good by 43%; only 5% considered the choice reasonable. The reason most often given by students for enrolling in the course was the personal desire to learn more, followed by personal interest and the professional perspectives it could imply. The results suggest that the use of social networking tools proved successful in developing skills and attitudes in the social, professional and technological areas: 85% of students reported developing skills/attitudes within these areas, 7% reported they did not, and 8% opted for a ‘no opinion’ answer [31]. The following findings relate to a set of statements presented to students. They had to position themselves on an agreement scale in which SA stood for strongly agree, A for agree, N for neutral, D for disagree and SD for strongly disagree. The statements covered the instructional design applied, the learning model underlying it, the environment created and the impact of the projects developed. The statements presented in Figure 1 are the ones from which the answers to our research questions emerge.
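The percentages reported in this section follow directly from raw answer counts over the 42 respondents. The fragment below is a small illustration of that arithmetic; the counts shown in the example are hypothetical, not the study’s raw data.

```python
def response_percentages(counts):
    """Convert raw answer counts on a Likert scale to percentages
    of respondents, rounded to one decimal place."""
    total = sum(counts.values())
    return {option: round(100 * n / total, 1) for option, n in counts.items()}
```

For example, with 42 respondents, response_percentages({'SA': 10, 'A': 21, 'N': 9, 'D': 2, 'SD': 0}) gives {'SA': 23.8, 'A': 50.0, 'N': 21.4, 'D': 4.8, 'SD': 0.0}, matching the granularity of the figures reported below.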
Fig. 1. Students’ positions regarding some of the statements presented (percentages of SA, A, N, D and SD answers). The statements included: the social networking tools chosen for MAC fostered a collective able to distribute and build shared knowledge; I developed my learning skills and contributed towards other members’ learning; the informal environment was open and flexible and allowed the participation and the integration of different perspectives to build new ones; the learning approach of the topics discussed was one of the reasons you felt motivated to participate; in MAC there was a spirit of community; unity and diversity applied to this community; my participation was valued and contributed to the community’s expertise; informal interactions among the community’s members made me aware of the importance of different types of learning; I recognize that emphasizing learning contexts other than the mere formal ones is very important for new learning outcomes.
On the whole there was an average of 51,3% answers within the agree option and 34,7% within the strongly agree one. These (SA and A) account for 86% of the total options pointed out by students. 12,7% is the average of the neutral answers and 1,3% is the average for disagree answers. There were no strongly disagree answers. The already mentioned idea that using social networking tools to foster dynamic learning communities seems to be suggested by the findings. Almost 86% (SA plus A answers) of the students sensed a spirit of community in MAC and nearly 93% (SA plus A) felt that the social networking tools chosen for MAC - described in table 1 – fostered a collective able to distribute and build shared knowledge. These findings are also consistent with students’ opinions collected in the blogs during the course: “The tree of collective knowledge grows with the sharing among the members of the community”1 or “(...) this community seems to have a life of its own enabled by the dynamics of discussing and sharing”2. The notion of a networked community based on a dynamic, open and decentralized structure, in which members feel motivated to participate and share and find ways to [re]build knowledge is consistent with the total percentage of positive answers 95,2% (SA and A) found in the statement referring the openness and flexibility of the learning environment as an incentive for participation and the integration of new and diverse perspectives, i.e. the construction of new ones. However, we found 4,8% of the students disagreeing with this statement, which despite being a relatively low percentage, seems to be incoherent with, for instance, the statement in which it is referred that the distribution of the learning environment and the tools used fostered a collective able to distribute and build shared knowledge, which obtained no disagreeing answers. 
Half of the students (50%) agreed that they had developed their learning skills and contributed towards other members’ learning, while nearly 24% strongly agreed. Also in relation to participation within the community, circa 67% agreed, but only 4.8% strongly agreed, that their participation was valued and that it added value to the community’s expertise. Interestingly, in both statements 26.2% of the students expressed a neutral opinion, and 2.4% disagreed with the last statement. This may be related to a lack of feedback or understanding regarding students’ participation, or to the fact that not all students engaged in the same way in the discussions and projects developed. If this is the case, it is natural that students may have felt they acted as newcomers rather than leaders within the community and therefore did not acknowledge such recognition. All of the students (100%, SA plus A) recognized the importance of exploring learning contexts other than merely formal ones. We believe this may be related to the projects developed during the course as well as to the course organization and its distribution across the various social networking tools. This suggestion emerges from the findings previously described and from the informal approach conferred both on the course itself and on the interactions involving peers and teachers. We find such evidence in one of the course blogs: “Informal learning may be harnessed by this type of projects [referring to PDI]. We must realize that what our students learn in school is by no means what makes them responsible and critical thinkers.”3 Or: “Informal
1 http://bestofpdi.blogs.ca.ua.pt/2008/02/26/arvore-a-arvore-se-constroi-umafloresta/#comment-478
2 http://bestofpdi.blogs.ca.ua.pt/2008/02/20/isto-e-uma-especie-de-ultimo-post-d/
3 http://bestofpdi.blogs.ca.ua.pt/2008/02/23/etwinning/#comment-288
M. Lucas and A. Moreira
learning emerges from interaction and even in a formal context it is possible to create a collective awareness of knowledge seeking based on interaction.”4 Students were then asked to reflect upon the learning outcomes that, from an informal point of view, had contributed towards the enrichment of their formal learning. They were also asked to account for the importance they attributed to the use of social web tools in fostering informal spaces and contexts from which learning could emerge. Although only 26 of the 42 students answered this question, the findings show that all responding students recognized informal interactions as a way “to learn without people having that immediate notion”. They noted that the dynamics created around the learning environment and the informal interactions contrast with the compulsory ones usually associated with institutionalised and “formatted” learning delivery, and that, for that reason, it was easier and more pleasant to achieve, “with less consciousness but more clarity”, the formal learning objectives outlined for the course. They also shared the opinion that the social networking tools used to distribute the MAC learning environment, but mostly the freedom to explore and use whichever tools they felt comfortable with, enabled collaborative work to develop more easily and motivation to increase. Furthermore, this free control afforded new learning outcomes (“I realized I had been learning alone only by exploring all the tools I came to know during the course”), which suggests that informal learning is fostered by handing control over to people rather than tightening it. The notion of learning as a self-controlled activity, and the importance it has today, is also suggested by one of the students: “(…) informal interactions are even more valuable than the ones considered formal, because they account for positive consequences as far as motivation and learning outcomes are concerned.
Learning derives from almost every informal interaction; whether the outcome is considered formal or informal content is a secondary issue. What matters is that learning happens.” Almost every student mentioned the openness of the learning environment based on social web tools as one of the main reasons underlying their participation and their willingness to share knowledge (cf. Figure 1), something they felt was completely opposed to the nature of f2f environments. Another student said she felt more at ease and motivated to engage in interactions, because she could “participate at her own pace, without feeling pressured or ashamed and with the possibility to reflect.” Two students referred to the impact of “the pedagogy merged with technology” practiced in the “dilution of the hierarchies usually found in traditional settings, for the social web embodies the absence of formality and annuls the social status and the professional labels.” For one of them, in this technological setting, each one is: “(…) uno inter pares. (…) Each person can gain presence and importance in the global world, and from the interrelations that one establishes or the networks one creates, informal interactions emerge and become as relevant as any others.”
5 Conclusions

In line with other studies [19][20][32][33], the findings presented here seem to suggest that, when merged with new pedagogies and innovative methods – transfer of
4 http://bestofpdi.blogs.ca.ua.pt/2008/02/26/aprendizagem-e-web-20-em-vinte-e-umminutos/#comment-463
responsibility for students, autonomous learning, context-situated problem-based learning, intra- and inter-group collaborative work –, social networking tools support the distribution of learning environments in which knowledge building and sharing can emerge in an informal way. Students’ perceptions seem to show that the informal learning outcomes and informal interactions fostered by the use and exploration of social networking tools (see Tables 1 and 2) helped support the formal activity of learning. Despite the appetite for educational uses that the social web has been raising, few studies have addressed its use to promote informal learning. Further reflection and discussion about such learning environments are needed, as well as an evaluation able to account for the outcomes that result from their use. This will be one of our future tasks. We may be led to say that the use of social networking tools helps to blur and blend the lines between formal and informal settings, and that where formal learning fails to deliver, emerging social technologies can bridge the gap. To realize the benefits afforded by social technologies in formal education, the starting point is a change in pedagogy. The challenge lies in introducing new technologies that reflect the new pedagogical principles guiding current educational models. Since formalizing informal practices may destroy the benefits that informal learning offers, we want to understand how tools and processes used informally could be harnessed to help support the formal activity of learning.

Acknowledgments. The authors wish to thank the Portuguese Foundation for Science and Technology (FCT) for supporting the study under Contract No. SFRH/BD/41797/2007.
References

1. UNESCO: ICT-CST Policy Framework (2008), http://cst.unesco-ci.org/sites/projects/cst/The%20Standards/ICT-CST-Policy%20Framework.pdf
2. UNESCO: Open and Distance Learning – Trends, Policy and Strategy Considerations (2002), http://unesdoc.unesco.org/images/0012/001284/128463e.pdf
3. OECD: Education at a Glance 2008: OECD Indicators (2008), http://www.oecd.org/edu/eag
4. European Union: Delivering Lifelong Learning for Knowledge, Creativity and Innovation. Communication from the Commission to the Council, the European Parliament, the European Economic and Social Committee and the Committee of the Regions, Brussels (2007)
5. Wenger, E.: Communities of Practice and Informal Learning. In: Proceedings of the eLearningLisboa 2007 Conference, Lisbon (2008)
6. Siemens, G.: Learning as Network Creation (2005), http://www.elearnspace.org/Articles/networks.htm
7. Downes, S.: Learning Networks and Connective Knowledge (2006), http://it.coe.uga.edu/itforum/paper92/paper92.html
8. Gan, Y., Zhu, Z.: A Learning Framework for Knowledge Building and Collective Wisdom Advancement in Virtual Learning Communities. Educational Technology and Society 10, 206–226 (2007)
9. Price, S., Rogers, Y.: Let’s get physical: the learning benefits of interacting in digitally augmented physical spaces. Computers and Education 43(1-2), 137–151 (2004)
10. Paulus, T.: CMC modes for learning tasks at a distance. Journal of Computer-Mediated Communication 12(4), article 9 (2007)
11. Ducate, L.C., Lomicka, L.L.: Adventures in the Blogosphere: From Blog Readers to Blog Writers. Computer Assisted Language Learning 21, 9–28 (2008)
12. Alexander, B.: Web 2.0: A new wave of innovation for teaching and learning. Educause Review, 33–40 (March/April 2006), http://www.educause.edu/ir/library/pdf/ERM0621.pdf
13. Eraut, M.: Non-formal learning, implicit learning and tacit knowledge. In: Coffield, F. (ed.) The Necessity of Informal Learning. Policy Press, Bristol (2000)
14. Calvani, A., Giovanni, B., Antonio, F., Maria, R.: Towards e-Learning 2.0: New Paths for Informal Learning and Lifelong Learning – an Application with Personal Learning Environments. In: Proceedings of the EDEN Annual Conference 2007, Naples, Italy (2007)
15. Pettenati, M., Ranieri, M.: Informal Learning Theories to Support Knowledge Management in Distributed CoPs. In: Tomadaki, E., Scott, P. (eds.) Innovative Approaches for Learning and Knowledge Sharing, EC-TEL 2006 Workshops Proceedings, pp. 345–355 (2006)
16. Sefton-Green, J.: Literature Review in Informal Learning with Technology outside School. NESTA Futurelab Report No. 7 (2004), http://www.futurelab.org.uk/research/reviews/07_01.htm
17. Livingstone, D.: Exploring the Icebergs of Adult Learning: Findings of the First Canadian Survey of Informal Learning Practices. Ontario Institute for Studies in Education, University of Toronto, Toronto (2000)
18. Marsick, V., Watkins, K.: Informal and Incidental Learning. New Directions for Adult and Continuing Education 89, 25–34 (2001)
19. Iadecola, G., Piave, N.: Evaluating Informal Learning in a Virtual Context. Paper presented at the fourth International Scientific Conference, eLSE, Bucharest (2008), http://adl.unap.ro/else/papers/085.-794.1.%20Iadecola%20e%20Piave%20-%20Evaluating%20informal.pdf
20. Selwyn, N.: Web 2.0 Applications as Alternative Environments for Informal Learning – a Critical Review. Paper presented at the OECD-KERIS expert meeting, Cheju Island, South Korea (2007)
21. Tough, A.: The Adult’s Learning Projects, 2nd edn. Ontario Institute for Studies in Education, Toronto (1979)
22. Schugurensky, D.: The Forms of Informal Learning: Towards a Conceptualization of the Field. NALL Working Paper 19 (2000)
23. Downes, S.: E-Learning 2.0. eLearn Magazine, 10 (2005)
24. Cross, J.: Informal Learning: Rediscovering the Natural Pathways That Inspire Innovation and Performance. John Wiley & Sons, Inc., San Francisco (2007)
25. Downes, S.: The Form of Informal – 2 (2006), http://www.downes.ca/post/38637
26. Siemens, G.: Connectivism: A Learning Theory for the Digital Age. International Journal of Instructional Technology and Distance Learning 2(1) (2005)
27. Salomon, G.: Editor’s Introduction. In: Salomon, G. (ed.) Distributed Cognitions: Psychological and Educational Considerations, pp. xi–xxi. Cambridge University Press, New York (1993)
28. Perkins, D.N.: Person-plus: a distributed view of thinking and learning. In: Salomon, G. (ed.) Distributed Cognitions: Psychological and Educational Considerations, pp. 88–110. Cambridge University Press, New York (1993)
29. Siemens, G.: Knowing Knowledge. Lulu.com (2006)
30. Yin, R.: Case Study Research: Design and Methods, 2nd edn. Sage Publishing, Beverly Hills (1994)
31. Lucas, M., Moreira, A.: Social Web Tools for the Education of Educators. In: Proceedings of the IADIS e-Learning Conference, Algarve, June 17-23, vol. 2, pp. 150–154 (2009)
32. Clough, G., Jones, A.C., McAndrew, P., Scanlon, E.: Informal Learning with PDAs and Smartphones. Journal of Computer Assisted Learning 24(5), 359–371 (2008)
33. Conole, G., de Laat, M., Dillon, I., Darby, J.: LXP: Student Experience of Technologies. Final Report, JISC, UK (2006), http://www.jisc.ac.uk/whatwedo/programmes/elearning_pedagogy/elp_learneroutcomes.aspx
How to Get Proper Profiles? A Psychological Perspective on Social Networking Sites

Katrin Wodzicki, Eva Schwämmlein, and Ulrike Cress

Knowledge Media Research Center, Konrad-Adenauer-Str. 40, 72072 Tübingen, Germany
{k.wodzicki,e.schwaemmlein,u.cress}@iwm-kmrc.de
Abstract. Research on transactive memory systems has shown the importance of knowing who knows what. Going beyond work group settings, this article underlines the importance of such knowledge for finding one’s way through today’s knowledge society. We discuss how social networking sites can be used to manage individual knowledge networks. To this end, we describe characteristics of social networking sites, concluding that user profiles can serve as the basis for an external transactive memory system. Furthermore, we discuss guiding propositions for self-presentation in user profiles that could promote the establishment of a useful external transactive memory system.

Keywords: social networking, transactive memory, profiles.
1 Introduction

People are important knowledge resources, as underlined, for example, by research on transactive memory systems (TMS). Social networking sites (SNS) provide new opportunities to establish and maintain access to people from different domains of expertise and to use them as knowledge resources. These new possibilities may also be utilized for processes of formal and informal learning. In this paper, we demonstrate how user profiles on SNS can be used as the basis for an external TMS, and propose ways to guide self-presentation in user profiles.
2 Applying Transactive Memory Systems to Social Networking Sites

2.1 Transactive Memory Systems (TMS)

The psychological theory of transactive memory [1] makes clear that not only do individuals have an individual memory system, but groups also develop a group memory, referred to as a transactive memory system. A transactive memory system is defined as a specialized division of labor that develops with respect to encoding, storage, and retrieval of knowledge from different domains [2].

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 338–343, 2009. © Springer-Verlag Berlin Heidelberg 2009

As we know from
research on TMS, people not only have knowledge about specific domains, but also about external knowledge resources. These external knowledge resources may be external storage media (e.g., books or digital resources), but also other people. Work groups often distribute domains of expertise among their members. This specialization can be formally assigned, or it can evolve informally over time through interaction and negotiation between members [2]. Consequently, incoming information is subject to transactive encoding and storage, guided by the group members’ responsibility for a specific domain of expertise. Transactive retrieval is based on knowledge about what items of information are needed (i.e., label) and where they can be found (i.e., location) [1]. The development of a well-working transactive memory takes time, as members must learn the others’ expertise; it is supported by interdependence between group members and an incentive structure that rewards specialization [3]. Individuals gain from being part of a smoothly functioning transactive memory in several ways: First, a transactive memory reduces an individual’s cognitive load, because that person will not need to learn every detail of a knowledge domain. It is sufficient to know who knows what and who has to be consulted to obtain further information. Second, a transactive memory results in successful attainment of group goals, which is satisfying for the individual as well. Third, a transactive memory dramatically expands an individual’s expertise. Finally, it provides the opportunity to specialize in a specific domain of knowledge [1]. Previous research on transactive memory systems has mainly concentrated on couples [4], interdependent small group situations [3], or organizations [5]. Beyond that, however, it becomes more and more important to have elaborate networks of people from different professional domains on which to draw on demand. SNS support the establishment and maintenance of such networks.
This is possibly one of the reasons why they are so popular. The theory of TMS might also apply in the context of such broader networks, and may be useful to define optimal conditions under which user profiles in SNS can function as an external TMS.

2.2 Characteristics of Social Networking Sites

Boyd and Ellison [6] define social networking sites (SNS) as web-based services that allow individuals to (1) construct a public or semi-public profile within a bounded system, (2) articulate a list of other users with whom they share a connection, and (3) view and traverse their list of connections and those made by others within the system. Following this definition, user profiles are the main content. But the scope of different SNS can vary from private to professional, or from thematically open to focussed on one specific topic of interest. From a psychological point of view, this variation in scope can be captured by distinguishing two types of groups: common-bond and common-identity groups. This distinction was made for offline groups [7] as well as for online groups [8,9]. Common-bond groups are characterised by bonds and interactions between their members. The affective attachment to these groups is based on the attraction of the other group members. Consequently, such groups are not very stable over time, because members come and go. Moreover, individuality plays an important role, which might undermine cooperation and exchange. In contrast, common-identity groups have developed out of some common characteristic or a
340
K. Wodzicki, E. Schwämmlein, and U. Cress
common goal. The affective attachment to these groups is based on the attraction of the group itself. Consequently, such groups are very stable over time, because the coming and going of individual members does not carry much weight. Moreover, these groups mostly develop strong group norms, which can support cooperation and exchange. Applied to SNS, a site might be characterized as a common-bond group if its users are strongly connected with each other (e.g., Facebook.com). It would be characterized as a common-identity group if it addresses a specific target group or aims at attaining a common goal (e.g., Researchgate.net). When considering existing SNS, it is obvious that common bonds between members are always relevant to some degree; this is inherent in supporting networking. Thus, SNS cannot easily be categorized as either of the two group types; they rather mix features of both [9]. Common identities, however, are not always relevant in this context. But when considering the consequences of the different attachments to groups, it becomes obvious that establishing a common identity or a common goal enhances the stability of the group as well as the orientation toward common norms and common interests. Accordingly, doing so might also be advantageous for promoting information exchange and learning with the help of SNS. The common identity or common goal might be given by focussing on a specific target group or a specific topic of interest. However, operating such an SNS is a real challenge: establishing a common identity is most successful if the information provided about SNS users concentrates on identity-related items, but users are also interested in establishing common bonds, which is promoted by providing personal information. So, operators of SNS will need to control the information provided by users, bearing in mind its relevance to the community and the common goal.
2.3 Profiles within SNS as External TMS

Through their profiles, users provide personal and contact information as well as information about their professional background or private concerns. Depending on the type of SNS, the amount of these different kinds of information varies. The profiles of other users within an SNS (or, under restricted access, the profiles of established contacts) can be considered an external TMS. The processes of transactive encoding, storage and retrieval are then also relevant, but might differ from previously investigated settings. Transactive encoding and storage in SNS differ from those in previously considered contexts. SNS usually have a large number of users. On the one hand, this is an advantage, because more than one individual can be responsible for one domain of expertise. Such a distribution of responsibilities eliminates the risk that knowledge is lost to the collective when one user leaves the group. On the other hand, in contrast to work groups or organizations with clear-cut boundaries and, in most cases, interdependencies between group members, users within an SNS are rather independent of each other. The larger an SNS, the less clearly defined are the responsibilities of single members. To address this problem, a narrow scope and a common goal can help to establish interdependencies among the users, to allocate responsibilities between them, and to promote cooperation and exchange through group norms.
Because of the large number of users, optimizing transactive encoding and storage might not be so central in SNS. However, optimizing transactive retrieval in SNS is all the more important: in general, members of a group or organization have to learn both the location and the label of an item of information (who knows what). In SNS, knowledge about location is no longer necessary, because each user is represented by a profile. Consequently, the label is what matters: only if users agree on labels for relevant fields of expertise, and use these labels to describe their own expertise within their profiles, will searching for locations (i.e., people) with the help of these labels be successful. SNS then substitute, to some degree, for personal experience and conversation between members: users no longer need to learn, in a long process, who knows what. That is a specific advantage for newcomers or people who want to get in touch with unknown users. Profiles can thus supplement or even replace the time-consuming process of developing an internal transactive memory. However, in diverse or interdisciplinary networks, finding common labels and promoting their use is far from trivial. Hence, we will take a closer look at how the usage of common labels can be promoted.
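The label-based transactive retrieval described above can be sketched as a simple lookup over profiles. The profiles and the `find_experts` helper below are hypothetical illustrations, not features of any concrete SNS.

```python
# Sketch: transactive retrieval over SNS profiles via shared expertise labels.
# All profiles and labels are hypothetical illustrations.

profiles = {
    "alice": {"labels": {"machine learning", "statistics"}},
    "bob":   {"labels": {"didactics", "statistics"}},
    "carol": {"labels": {"machine learning"}},
}

def find_experts(profiles, label):
    """Return users whose profiles carry the requested expertise label.

    Retrieval succeeds only when seekers and owners use the same label --
    the precondition for profiles to work as an external TMS.
    """
    return sorted(user for user, p in profiles.items() if label in p["labels"])

# Knowing the label is enough; the 'location' (who) falls out of the search.
experts = find_experts(profiles, "statistics")
```

Note that a profile using a divergent spelling ("stats" instead of "statistics") would simply not be found, which is why the following sections discuss mechanisms for promoting common labels.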
3 Affecting Self-Presentation of the Individual User

Although SNS operators have a certain influence on profile structures and even parts of the content, it is the user who creates his or her profile. Users decide which impression they wish to convey to others by providing certain types of content. They have to keep in mind their individual self-presentation goals as well as the demands of the audience they address. Individual users have to decide, for example, whether they want to be perceived as friendly and likeable or as highly competent [10]. Moreover, bearing in mind the “multiple audience” problem [11] or privacy concerns [12], users have to think about restricting access to their personal data. In the context of groups, individuals have to balance two opposing needs [13]: the need for assimilation and the need for differentiation. Social identification will be strongest when both needs are equally satisfied. Applied to SNS, user profiles should provide the opportunity to balance both needs by simultaneously demonstrating similarities with and differences from other users. By orienting their profile entries toward the entries of other users, users can adopt personal descriptions already presented that fit themselves and, at the same time, present personal descriptions not provided by other users so far.

Recommendations during Profile Completion. Orientation toward others’ personal descriptions in profiles can be technically supported by recommendation systems that, during profile completion, suggest previous inputs by other members. On the one hand, such recommendations save the time of searching for entries of other users and will thus be welcomed by users. On the other hand, and this is even more important, such recommendations support a common labelling practice among all SNS users.
Although it might not be easy to find labels that are accepted by all users and sufficient to describe all relevant expertise, it is worthwhile to invest time in finding them. Because a commonly accepted labelling practice is of crucial importance and the number of SNS users is usually large, the agreement on the labels that are used cannot be left to the individual users themselves.
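One way such a recommendation mechanism could work is to suggest the most frequently used existing labels that match what a user is typing. The frequency-ranked prefix matching below is one possible design sketch, not the mechanism of any system named in the text; the entries are hypothetical.

```python
from collections import Counter

# Sketch: recommend existing expertise labels while a user completes a
# profile, ranked by how often other members already use them.
# The entries are hypothetical illustrations.

existing_entries = [
    "e-learning", "e-learning", "E-Learning", "social psychology",
    "eye tracking", "e-learning 2.0",
]

def recommend_labels(existing, prefix, k=3):
    """Suggest up to k frequent labels starting with the typed prefix.

    Normalizing case nudges users toward one shared spelling, which is
    what makes label-based search across profiles succeed.
    """
    counts = Counter(label.lower() for label in existing)
    matches = [(lbl, n) for lbl, n in counts.items() if lbl.startswith(prefix.lower())]
    matches.sort(key=lambda item: (-item[1], item[0]))  # frequent first, then alphabetical
    return [lbl for lbl, _ in matches[:k]]

suggestions = recommend_labels(existing_entries, "e-l")
```

Ranking by frequency biases new profiles toward the labels the community already agrees on, which is exactly the convergence effect the text argues for.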
Recommendations Based on Profile Entries. Moreover, recommending potentially interesting information owners or experts within an SNS can encourage members to provide useful profile information in order to receive these recommendations. Such recommendations can be based on different types of profile entries. For some types of entries, similarities between information seekers and owners are important, so seekers should receive recommendations about information owners with similar entries. In other cases, differences between seekers and owners are important, so seekers should receive recommendations about information owners with different entries. For example, if information seekers try to obtain information that is specific to an organization, they will try to find users from the same organization. But if they look for cooperation partners, they will rather search for users from other organizations whose profiles match on the item of cooperation. Because users differ in the information they seek, SNS operators could leave the decision about the kind of recommendations to the users themselves. Technically, this could be implemented by providing an individual recommendation page on which each user can select from a list of possible recommendations.
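The two recommendation modes just described (matching on similar vs. different entries) could be expressed as a single filter whose comparison the seeker selects. The field names and user records below are hypothetical, chosen only to mirror the organization/cooperation example in the text.

```python
# Sketch: recommendations matching profile entries either on similarity
# (same organization) or on difference (other organizations, shared topic).
# All field names and records are hypothetical illustrations.

users = [
    {"name": "Dana",  "org": "Uni A", "topics": {"cooperation", "e-learning"}},
    {"name": "Emil",  "org": "Uni B", "topics": {"cooperation"}},
    {"name": "Fritz", "org": "Uni A", "topics": {"statistics"}},
]

def recommend(seeker, users, mode):
    """mode='similar': users from the seeker's own organization.
    mode='different': users from other organizations sharing a topic."""
    out = []
    for u in users:
        if u["name"] == seeker["name"]:
            continue  # never recommend the seeker to themselves
        same_org = u["org"] == seeker["org"]
        shared_topic = bool(u["topics"] & seeker["topics"])
        if mode == "similar" and same_org:
            out.append(u["name"])
        elif mode == "different" and not same_org and shared_topic:
            out.append(u["name"])
    return out

dana = users[0]
partners = recommend(dana, users, "different")   # cooperation partners elsewhere
colleagues = recommend(dana, users, "similar")   # organization-internal information
```

Letting each user pick the `mode` corresponds to the individual recommendation page proposed above, where the kind of matching is left to the users themselves.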
4 Conclusion

This paper applies the theory of TMS to the context of SNS. SNS profiles can function as an external TMS if some preconditions are met. The most important one is a common labelling of relevant domains of expertise. Only if generally accepted labels are used can the search for relevant information owners be successful. Users then no longer need to know the location of information (i.e., who knows), but can search effectively using these labels. However, the usage of common labels has to be supported; we therefore proposed two different guiding mechanisms. Research is still needed on how different guiding mechanisms affect self-presentation. Such research would provide the basis for deducing additional guiding propositions that could further support self-presentation, information exchange and learning.
References

1. Wegner, D.M.: Transactive Memory: A Contemporary Analysis of the Group Mind. In: Mullen, B., Goethals, G.R. (eds.) Theories of Group Behavior, pp. 185–208. Springer, New York (1986)
2. Hollingshead, A.B., Fulk, J., Monge, P.: Fostering Intranet Knowledge Sharing: An Integration of Transactive Memory and Public Goods Approaches. In: Hinds, P., Kiesler, S. (eds.) Distributed Work, pp. 335–355. MIT Press, Cambridge (2002)
3. Hollingshead, A.B.: Communication, Learning and Retrieval in Transactive Memory Systems. Journal of Experimental Social Psychology 34, 423–442 (1998)
4. Wegner, D.M., Erber, R., Raymond, P.: Transactive Memory in Close Relationships. Journal of Personality and Social Psychology 61, 923–929 (1991)
5. Brandon, D.P., Hollingshead, A.B.: Transactive Memory Systems in Organizations: Matching Tasks, Expertise, and People. Organization Science 15, 633–644 (2004)
6. Boyd, D.M., Ellison, N.B.: Social Network Sites: Definition, History, and Scholarship. Journal of Computer-Mediated Communication 13(1), article 11 (2007), http://jcmc.indiana.edu/vol13/issue1/boyd.ellison.html (last access April 17, 2009)
7. Prentice, D.A., Miller, D.T., Lightdale, J.R.: Personality and Social Psychology Bulletin 20, 484–493 (1994)
8. Ren, Y., Kraut, R., Kiesler, S.: Applying Common Identity and Bond Theory to Design of Online Communities. Organization Studies 28, 377–408 (2007)
9. Sassenberg, K.: Soziale Bindungen von Usern an Web 2.0-Angebote. In: Hass, B.H., Walsh, G., Killian, T. (eds.) Web 2.0 – Neue Perspektiven für Marketing und Medien, pp. 57–72. Springer, Heidelberg (2008)
10. Jones, E.E., Pittman, T.S.: Toward a general theory of strategic self-presentation. In: Suls, J. (ed.) Psychological Perspectives on the Self, vol. 1, pp. 231–262. Erlbaum, Hillsdale (1982)
11. DiMicco, J.M., Millen, D.R.: Identity Management: Multiple Presentations of Self in Facebook. In: Proceedings of the 2007 International ACM Conference on Supporting Group Work, Sanibel Island, Florida, USA, pp. 383–386 (2007)
12. Gross, R., Acquisti, A.: Information Revelation and Privacy in Online Social Networks. In: Proceedings of the ACM Workshop on Privacy in the Electronic Society (WPES) 2005, Alexandria, VA, pp. 71–80. ACM, New York (2005)
13. Brewer, M.B.: The Social Self: On Being the Same and Different at the Same Time. Personality and Social Psychology Bulletin 17, 475–482 (1991)
Collaborative Learning in Virtual Classroom Scenarios

Katrin Allmendinger1, Fabian Kempf2, and Karin Hamann3

1 acontrain, Koberleweg 8, 78464 Konstanz, Germany
[email protected]
2 vitero GmbH, Nobelstr. 15, 70569 Stuttgart, Germany
[email protected]
3 Fraunhofer IAO, Nobelstr. 12, 70569 Stuttgart, Germany
[email protected]
Abstract. This article describes possibilities to affect the feeling of social presence and group awareness in desktop collaborative virtual environments, also known as virtual classrooms. Social presence is the feeling of being present with another person in a virtual environment. Awareness information about the activities of other group members serves as a background for one’s own activities. In general, virtual classrooms allow various kinds of verbal and nonverbal communication between tutors and learners. The communication channels can be adapted according to the needs of the users in a specific collaborative learning situation. The article provides an overview of the representation of users (avatars) and the available communication channels in virtual classrooms. In particular, the possibility of conveying nonverbal information is addressed, as it has the potential to affect the feeling of social presence and group awareness in learning situations.

Keywords: nonverbal signals, avatar, social presence, group awareness, collaborative virtual environments, virtual classroom.
1 Introduction

The concept of learning as an interactive and collaborative process has gained attention in recent years [1]. Learning as a fundamentally social phenomenon is also recognized in the domain of computer-supported learning, especially in the growing research area of computer-supported collaborative learning. Computer-supported collaborative learning scenarios offer many opportunities, for instance by overcoming specific barriers of place and time. But they also lead to new challenges: learners and tutors have to construct “meaning” mutually, structure social interaction, and establish and maintain motivation in learning situations [2]. This article deals with possibilities of overcoming the “structure barrier” and the “motivation barrier” in synchronous computer-supported collaborative learning scenarios, also known as real-time, virtual classroom or online learning scenarios.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 344–349, 2009. © Springer-Verlag Berlin Heidelberg 2009
Collaborative Learning in Virtual Classroom Scenarios
The term “social affordances” was coined for social-contextual facilitators relevant to social interaction [3]. These facilitators aim at overcoming specific aspects of the “structure barrier” and the “motivation barrier” through various communication media accompanied by awareness information that helps users to feel present in a virtual environment with other users. This article describes possibilities for influencing the feeling of social presence and group awareness in virtual classrooms. We especially focus on the possibility of conveying nonverbal information, since it is often reported as being connected to the feeling of social presence and group awareness (e.g., [4], [5]). First, the properties of virtual classrooms and the main issues of communicative behavior in collaborative learning contexts are addressed. The article then focuses on social presence and group awareness. The empirical section comprises studies testing the influence of nonverbal signals in several collaborative situations within virtual classrooms. The discussion provides a summary and a research outlook concerning learning in virtual classrooms.
2 Avatars and Communication Channels in Virtual Classrooms
Virtual classrooms can be defined as computer-based environments that provide the possibility to synchronously communicate, collaborate and learn in a computer-mediated context with other users. Virtual classrooms can include multiple communication channels (e.g., text, audio, nonverbal signals) and external representations (e.g., shared software applications). Users are represented in virtual classrooms as virtual images, also called avatars. In learning situations, humanoid avatars are often used (e.g., [6], [7]).

In synchronous computer-mediated contexts verbal signs can be sent via text chat and audio channel. Using keyboard symbols to display facial expressions or emotional states (“emoticons”) and using text (e.g., bold letters, exclamation marks) to display intonation make it possible to give additional nonverbal background information to text chat contributions (e.g., [7], [8]).

As already mentioned, social interaction in computer-supported collaborative learning situations needs structure. Such structure can be provided, for example, by adapting the technology used according to the needs of the learning situation. The following section will provide an overview of aspects that can be adapted prior to or, circumstances permitting, during a synchronous learning session in a virtual classroom. For this purpose, we will focus on a particular virtual classroom, the virtual team room (vitero) system, used for learning as well as collaboration at public organizations and companies in Europe (www.vitero.de). Preliminary versions of the vitero system were the basis of the line of research presented in the empirical section of this article. The vitero system includes a slide projector for rendering and controlling presentations (including marker, pointer, zoom, full screen), application sharing (including joint browsing and creativity techniques such as brainstorming), the possibility of splitting the group up into subgroups, and a language lab.
The users are represented by avatars with name tags placed around a virtual table (see Figure 1).
K. Allmendinger, F. Kempf, and K. Hamann
Fig. 1. Virtual team room (vitero)
In general, users are represented by avatars based on uploaded digital photos, but the tutor can also permit the use of video avatars in vitero. In this case, users sit in front of their respective webcams and their video images are displayed at their virtual chairs. One factor influencing the tutor’s and learners’ decision regarding video avatars is the perceived requirement of achieving a certain level of social presence within the learning setting.
3 Social Presence and Group Awareness

Social presence was originally defined as a quality of the communications medium representing the capacity to transmit nonverbal information [9]. The representation of users plays an important role in achieving a feeling of social presence: “At a very basic level, bodies root us and make us present, to ourselves and to others” ([10], p. 41). In general, the main functions of avatars in virtual environments are identification (e.g., who is tutor, who is learner) and interaction support ([6], [11]). To ensure easy identification in learning contexts, avatars resembling the user and name tags displayed near the avatars are often used (e.g., [6], [7]). Concerning interaction support, two components can be distinguished: avatars give us cues for perceiving the actions of other users and they facilitate communication. Being aware of the activities of others, and thus having “group awareness,” serves as a background for one’s own activities [12].
4 Nonverbal Signs and Their Influence on Social Presence
The evaluation studies were conducted in preliminary versions of the already introduced vitero system. The following communication modes were available: audio channel, text chat, and nonverbal signs (“thumb up”, “thumb down”, “hand raising”,
“applause”, “question mark” for signalling a question or general bewilderment, “wave hello”, as well as a virtual microphone and participants’ arrows to refer to, e.g., information on the slides). All the nonverbal signs required conscious use. User studies had revealed them to be the most relevant signs for collaborative situations.

Our first study is based on the communication data of five groups of two to five employees [13]. Altogether, 19 people participated and filled out a questionnaire after their learning sessions. On average, 0.32 nonverbal signs per minute were displayed per user’s avatar. The “thumb up” sign as well as the arrows were used most often. The questionnaire data show that the participants liked the sessions and felt that they were able to communicate successfully in the virtual environment.

Our second evaluation study was conducted during a seminar, with 21 university students participating in at least 4 of 6 vitero sessions [14]. In a randomly chosen session, objective communication data were observed on the basis of logfiles: 1.28 text chat contributions were made by the group per minute and 5.96 nonverbal signs per minute were displayed over the course of the session (this group-level rate is comparable to the first evaluation study, where the rate was reported per individual). In particular, the signs “thumb up” (2.63 signs per minute), “applause” (2.50 signs per minute) and “thumb down” (0.60 signs per minute) were used often. The questionnaire data show that the students liked the sessions in general (M = 6.1, scale from 1 “absolutely not” to 7 “very good”). They said that the text chat (M = 6.1, scale from 1 “absolutely not” to 7 “very much”) as well as the nonverbal signs (M = 6.1, same scale) contributed to convenient communication. The signs for “thumb up” (M = 6.8), “thumb down” (M = 6.7) and “hand raising” (M = 6.0, scale from 1 “very unimportant” to 7 “very important”) were rated as particularly important.
The students and the lecturer were represented in the desktop virtual environment by photo avatars. They considered it sensible to use avatars for representation in the virtual environment (M = 6.7, scale from 1 “absolutely not” to 7 “very much”) and liked the photo avatars (M = 5.7, same scale). They did not regret the absence of a video representation of themselves and the others (M = 2.1, same scale).

Although few studies have examined social presence in synchronous computer-based learning environments, there is some evidence based on asynchronous settings that social presence has a positive effect on learning (e.g., [15]). Regarding learning in a desktop virtual environment that incorporates avatars and text chat (Active Worlds), the majority of learners said that the use of a personal avatar contributed to the creation of a sense of social presence [7]. Concerning social presence in our own study [14], the learners reported having felt rather present with the others at a remote (virtual) location (M = 5.1, scale from 1 “not at all” to 7 “very much”). It can be speculated that using video representations would have led to a higher feeling of social presence.

Furthermore, the students gave 15-minute presentations during the lecture in vitero. They reported being rather more nervous presenting face to face than presenting in the vitero system (M = 5.0, scale from 1 “more nervous in vitero” to 7 “more nervous face to face”) and that it was slightly irritating not to have the full range of nonverbal feedback from the listeners (e.g., concerning their attention and understanding; M = 4.4, scale from 1 “not at all” to 7 “very irritating”). These results show that using synchronous online learning scenarios has side effects that are strongly connected to the feeling of social presence and group awareness. Overall, the data of the two evaluation studies show that users not only appreciate but
also actually utilize the different communication modes that are provided. The results reveal that virtual classroom scenarios have the potential to support instructional communication and thereby to overcome certain aspects of the “structure barrier” as well as the “motivation barrier” [2].
5 Discussion
In general, it is evident that user representation and the choice of communication channels depend highly on the envisioned collaborative learning situation (e.g., the number of learners, the computer literacy of the learners, etc.). The empirical studies show that relatively basic avatars, for example photo avatars, can contribute to favourable perceptions of a virtual setting [14]. This is in line with other studies (e.g., robotic heads displaying gender and name had the same effect [16]). The assessment of subjective measures has practical significance, because acceptance of a virtual classroom and positive interaction experiences are basic preconditions influencing how learning takes place. Moreover, emotional and motivational aspects are highly connected to successful learning.

Further research is necessary to fully understand what level of user representation is needed for specific types of learning situations and how user representation influences learning outcomes and processes as well as subjective variables. Another open question concerns the trade-off between enriching avatar communication and the cognitive capacity necessary for sending and receiving signals from multiple channels. Future research will have to show which amount of enrichment in synchronous virtual settings is feasible for the users as well as positive for social presence and group awareness.
References

1. Gulz, A., Haake, M.: Design of animated pedagogical agents – A look at their look. International Journal of Human-Computer Studies 64, 322–339 (2006)
2. Bromme, R., Hesse, F.W., Spada, H.: Barriers, biases and opportunities of communication and cooperation with computers: Introduction and overview. In: Bromme, R., Hesse, F.W., Spada, H. (eds.) Barriers and biases in computer-mediated knowledge communication, pp. 1–14. Springer, New York (2005)
3. Kirschner, P.A., Kreijns, K.: Enhancing sociability of computer-supported collaborative learning environments. In: Bromme, R., Hesse, F.W., Spada, H. (eds.) Barriers and biases in computer-mediated knowledge communication, pp. 169–191. Springer, New York (2005)
4. Hesse, F.W., Garsoffky, B., Hron, A.: Interface-Design für computerunterstütztes kooperatives Lernen [Interface design for computer-supported collaborative learning]. In: Issing, L.J., Klimsa, P. (eds.) Information und Lernen mit Multimedia, pp. 252–267. Psychologie Verlags Union, Weinheim (1995)
5. Schweizer, K., Paechter, M., Weidenmann, B.: Sozial wahrnehmbare Merkmale von Agenten in virtuellen Lernumgebungen aus Rezipientensicht [Socially perceivable features of agents in virtual environments from the recipient’s perspective]. Künstliche Intelligenz 2, 22–27 (2000)
6. Allmendinger, K.: Passung von Medium und Aufgabentyp: Der Einfluss nonverbaler Signale in desktop-basierten kollaborativen virtuellen Umgebungen [Fit between medium and task type: The influence of nonverbal signals in desktop-based collaborative virtual environments] (2005), http://w210.ub.uni-tuebingen.de/dbt/volltexte/2005/1658 (retrieved March 24, 2007)
7. Peterson, M.: Learner interaction management in an avatar and chat-based virtual world. Computer Assisted Language Learning 19(1), 79–103 (2006)
8. Walther, J.B., Tidwell, L.C.: Nonverbal cues in computer-mediated communication, and the effect of chronemics on relational communication. Journal of Organizational Computing 5(4), 355–378 (1995)
9. Short, J., Williams, E., Christie, B.: The social psychology of telecommunications. Wiley, London (1976)
10. Taylor, T.L.: Living digitally: Embodiment in virtual worlds. In: Schroeder, R. (ed.) The social life of avatars: Presence and interaction in shared virtual environments, pp. 40–62. Springer, London (2002)
11. Garau, M.: Selective fidelity: Investigating priorities for the creation of expressive avatars. In: Schroeder, R., Axelsson, A.-S. (eds.) Avatars at work and play: Collaboration and interaction in shared virtual environments, pp. 17–38. Springer, Dordrecht (2006)
12. Dourish, P., Bellotti, V.: Awareness and coordination in shared workspaces. In: Proceedings of CSCW 1992, Toronto, pp. 107–114. ACM Press, New York (1992), http://www.informatik.uni-trier.de/~ley/db/conf/cscw/cscw1992.html (retrieved May 6, 2004)
13. Müller, K., Kempf, F., Leukert, S.: Besser Kollaborieren durch VR? Evaluation einer VR-Umgebung für kollaboratives Lernen [Better collaboration using VR? Evaluation of a VR environment for collaborative learning]. In: Beck, U., Sommer, W. (eds.) Learntec 2002, Bd. 2, pp. 475–482. Karlsruher Messe- und Kongress-GmbH, Karlsruhe (2002)
14. Allmendinger, K., Richter, K., Tullius, G.: Synchrones Online-Lernen in einer kollaborativen virtuellen Umgebung: Evaluation der interaktiven Möglichkeiten [Synchronous online learning in a CVE: Evaluation of the interactive possibilities]. In: Merkt, M., Mayrberger, K., Schulmeister, R., Sommer, A., van den Berk, I. (eds.) Studieren neu erfinden – Hochschule neu denken, pp. 95–104. Waxmann, Münster (2007)
15. Richardson, J.C., Swan, K.: Examining social presence in online courses in relation to students’ perceived learning and satisfaction. Journal of Asynchronous Learning Networks 7(1), 68–88 (2003)
16. Clayes, E.L., Anderson, A.H.: Real faces and robot faces: The effects of representation on computer-mediated communication. International Journal of Human-Computer Studies 65, 480–496 (2007)
Review of Learning in Online Networks and Communities

Kirsti Ala-Mutka*, Yves Punie, and Anusca Ferrari

Institute for Prospective Technological Studies (IPTS), European Commission, Joint Research Centre, Edificio Expo, C/Inca Garcilaso 3, 41092 Seville, Spain
{Kirsti.Ala-Mutka,Yves.Punie,Anusca.Ferrari}@ec.europa.eu
Abstract. This paper reports on a review of learning opportunities that are emerging in online networks and communities. Participation in these new virtual spaces is not mandatory, but rather motivated by an interest to know, share, create, connect and find support, and these activities lead to a range of learning outcomes. New technologies offer the tools and means for people to participate in online networks and communities in a personally meaningful way. However, not all individuals are necessarily equipped with the skills or knowledge to benefit from these opportunities for their lifelong learning. It is suggested that educational institutions should find ways to connect with and draw inspiration from these new learning approaches and settings in order to bring about their own transformation for the 21st century, and also to support competence-building for new jobs and personal development with a learner-centred and lifelong perspective.

Keywords: Online Communities, Learning Networks, Lifelong Learning, Social Computing, Key Competences, Informal Learning.
1 Introduction

Lifelong learning plays a crucial role in contemporary society, where jobs and required skills are changing [1], [2], [3]. Cedefop [4] forecasts that the qualification structure of jobs in Europe will change significantly by 2020 and that the new generation entering the labour market will not be able to fulfil all the demands for qualified employees. New ways to support, value and acknowledge learning are therefore needed in order to provide high-quality learning opportunities for all, and to foster skills for innovation and lifelong learning.

At the same time, there is an increase in the use of social computing applications, which provide new platforms where people can connect, share and create together. Already two-thirds of the world’s Internet population visit social networking or blogging
* The views expressed in this article are the sole responsibility of the authors and do not necessarily reflect the views of the European Commission.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 350–364, 2009. © European Communities 2009
sites [5]. A great variety of collaborative initiatives exists and more are continuously emerging and being used for work, leisure, learning, and civic activities [6]. Furthermore, online networking and collaboration attract not only young people, but also workers and older people [5], [7].

The Institute for Prospective Technological Studies (IPTS) launched a project in collaboration with DG Education and Culture of the European Commission to study the innovative approaches to learning that are emerging in the various new ICT-enabled online spaces and communities. The main research questions of the study are: what contributes to the emergence and success of learning in ICT-enabled communities, and how can they promote quality and innovation for lifelong learning and education systems in Europe? The study combines desk research, literature review, case studies and expert consultations. This paper elaborates on the results of the literature and online resource review, focusing on how learning opportunities are emerging in these online settings.

The paper is structured as follows. After the introduction, Section 2 reviews theoretical perspectives for considering and discussing informal learning in online collaborative settings. Section 3 gives an overview of different types of online networks and communities, approaching them through the drivers for participation in them. Section 4 discusses how learning in these settings emerges in ways that differ from traditional classroom education, and Section 5 raises challenges for learning through informal collaboration. Finally, Section 6 concludes the paper by pointing out messages for educational institutions from these emerging learning settings.
2 Relevant Theories and Concepts

Learning in communities and technology-mediated learning are not new phenomena. The current novelty is the variety of new learning opportunities which are now becoming available. The new learning environments are no longer restricted by physical distances and the traditional ways of connecting with and getting to know people. This section reviews learning theories and concepts and the new ways in which they are now starting to be supported in online collaborative settings.

2.1 Learning in Social Context

Current learning approaches often aim to support constructivist theories, emphasizing the active role of the learner and interaction with the environment. Within this perspective, learning includes assimilation of new knowledge into existing structures as well as accommodation of existing knowledge and structures to new situations [8]. Based on his/her present knowledge and interest, the learner is actively inquisitive and thereby discovers new facts, relationships and truths [9].

Kolb [10] defines learning as a process whereby knowledge is created through the transformation of experience. He describes learning as an ongoing cycle of phases in which concrete experiences generate an opportunity for observation and reflection. Some people prefer concrete experience to abstract conceptualization, or active experimentation to reflective observation, giving rise to four main learning styles as different combinations of these.
352
K. Ala-Mutka, Y. Punie, and A. Ferrari
In networked learning literature, learning is often seen as a social activity that takes place through a social process of knowledge construction, highlighting the importance of discussion, the creation of shared meanings and opportunities for reflection with others [11]. Each learner has a 'zone of proximal development' (ZPD) describing knowledge that can be learned with the guidance of an expert [12], and shared cultural artefacts and language are important mediators of the process. Externalizing ideas is essential for communicating with others, as it is possible to negotiate meaning only through tangible forms [13]. Bandura [14] emphasizes the importance of observing and modelling the behaviours, attitudes, and emotional reactions of others. People do not need to learn everything by trying it out themselves; they can learn from observing others.

Situated learning literature [15] emphasizes that learning occurs as a function of the activity, context and culture. Learning is not necessarily deliberate; it can happen in unintentional ways. Participation itself can be seen as a process of appropriation and transformation, with social and cultural aspects of learning [16]. Learners participate in a wide variety of joint activities which provide the opportunity for synthesizing several influences into the learner's modes of understanding and participation. By internalizing the effects of working together, the novice acquires useful strategies and crucial knowledge [17].

In informal settings, the responsibility for learning often falls on the shoulders of the learner, emphasizing the need for specific personal skills. Self-regulated learning is guided by metacognition (thinking about one's thinking), strategic action (planning, monitoring, and evaluating personal progress against a standard), and motivation to learn [18].
It is important to take into account learners’ personal goals, which play an important role in learning and may conflict with goals imposed from outside [19]. Strong individual interests can even help to overcome low ability and/or perceptual disabilities [20]. However, a certain level of understanding of the topic is needed for developing further interest and curiosity in it.

2.2 Approaching Learning in Online Networks

The availability of ICT and the internet gives new opportunities for finding, forming, managing and participating in networks and communities with members from different locations and with diverse characteristics. Siemens [21] argued that the traditional learning theories were developed at a time when learning was not impacted by technology. According to him, the network itself is the basis of the learning process. As the knowledge society requires individuals to continuously update their knowledge, this cannot happen as a process of progressive “knowledge accumulation”, but through building, maintaining and utilizing connections.

Community of Inquiry (CoI) is a framework developed for modelling online collaborative learning processes based on Dewey's problem-based learning cycle [22]. The framework consists of three elements – social, cognitive and teaching presence. Evidence shows the importance of all three elements and that care needs to be taken to encourage social interaction and to provide structure and support early on [22]. Although socio-emotional communication is an important aspect of learning, it is not enough for educational purposes, as reflective discussions are needed to develop towards real inquiry.
Another relevant framework for approaching online collaboration and work is activity theory, which provides a means to study actions and interactions with artefacts within a historical and cultural context [23]. Activity systems are composed of actors, community and objects, which through labour division, rules and mediating artefacts (instruments) engage in a transformation process. Analysing the elements and the relationships between them aids the understanding and development of the overall activity system.

Informed by the models, concepts and theories reviewed, we believe it is useful to take into account several aspects, as depicted in Fig. 1, when discussing learning in informal online settings. The following sections, which discuss learning in online collaborative settings, will show links with all these aspects.
Fig. 1. Concepts and aspects for learning in online networks. [Figure: four interrelated groups of concepts – Social context (expert–novice interaction, ZPD; situated learning; observing & modelling; externalizing); Prerequisites (curiosity and motivation; awareness, previous knowledge; perceived self-efficacy; abilities for mediating artefacts: ICT access, digital skills, language skills); Learning outcomes (individual meaning making; reforming cognitive structures, internalizing; inquiry and discovery; experience and reflection; situated knowledge; network formation; strategies for tasks and further learning; becoming part of a community; identity development); Structure/process (self-regulation; guided participation; culture – rules, tools and labour division).]
3 Emerging Online Networks and Communities

Information and Communication Technologies (ICT), and especially social computing (SC) applications, have brought with them a large number of networking opportunities and collaborative initiatives, with both looser and tighter relationships between network members. Preece [24] defines an online community as a place where i) people interact socially, ii) people have a shared purpose that provides a reason for the community to exist, iii) there are policies for interactions, and iv) there are mediating computer systems. Collective activities also follow on from individual intentions [25]. In these cases, there is not necessarily an overall ‘sense of togetherness’ but small reciprocal circles and communities which dynamically form around interactions, under a larger community-enabling framework. This has been labelled ‘networked individualism’ [26], whereby individuals have opportunities to express and build their identity by being connected and mutually reliant with others.
Based on a literature review and surveys of online networks, platforms and communities, it is suggested that there are three major drivers for participation in online networks and communities: i) a common interest; ii) a common task or production; iii) a social connection. Furthermore, some online communities are driven by an organizational setup (educational institution, workplace, associations), while others connect and invite members horizontally in an open manner.

3.1 A Common Interest

Topic-driven participation gathers together people who have a common knowledge of or interest in sharing and learning from each other’s experiences. Sometimes, they also share a common background. For example, communities relating to specific professions or tools provide interaction and collaboration platforms for professionals within and between different organizations, and for novices learning about these issues. In addition to job-related interests, various communities have formed around topics like personal well-being, health, culture, leisure and also specific learning interests, such as LiveMocha (www.livemocha.com) for social language learning.

Participation in these communities is motivated by the desire to connect with others in a similar situation, and to receive and give support and knowledge. In professional communities, not only do novices learn new skills and concepts, but so do experts, who learn new aspects of their work and develop their professional identities through interaction with each other [27]. Participation in a professional community can provide opportunities to learn situated and current knowledge, to make more informed decisions about professional practices and to keep abreast of the latest changes in a specialty area [28]. In the personal sphere, health-related communities, such as TuDiabetes (www.tudiabetes.com), can help people to learn about managing their health condition with targeted and on-demand support.
3.2 A Common Task or Production

Some communities have an explicit goal or production activity to which community participants jointly contribute, although participants with only a spectator function are also accepted. In these communities, situated and social learning results when interaction tools link members working on a joint task, and serve as a means of negotiation for creating and building the collaborative product or resource. For example, Wikipedia and open source software development gather people together around a larger production goal and allow task-specific communities to form.1

The motivation to participate in production-driven activities can be both intrinsic (usefulness of the product, enjoyment in accomplishing tasks) and extrinsic (participation is part of, or supports, work tasks). In these communities, the members learn the skills needed for the joint activity and also the negotiation and collaboration skills necessary to be part of the productive community. Furthermore, in this social context they can learn from the experts involved in the collective production by observing their practices and getting feedback and comments on their own work. In open source
1 Lulu (www.lulu.com) and WreckAMovie (www.wreckamovie.com) are examples of new types of platforms that enable collaborative production communities to emerge around joint products.
software communities, the main reasons for contributing are participation in an intellectually stimulating project (44.9%) and improvement of one’s own programming skills (41.8%) [29].

3.3 Social Connection

New tools, technologies and the internet allow people from different places to gather online in order to interact with each other asynchronously and in real time. Socially oriented participation in communities arises from people's need to express themselves and to connect with others, in ways not always related to certain work objectives, topics or joint contexts. Platforms for media sharing, social networking, gaming communities and the blogosphere allow people to pursue their personal goals as well as support the formation of groups and communities around common interests.

Research shows that the motivation for using social media is often linked to expressing oneself, having fun, achieving fame through one's creations, or sharing knowledge [6], [30]. Young users appreciate social networking and media especially because 'it is fun' and 'friends do so', whereas for users aged 50 and over, the most common motivations were 'it is useful' and 'to be part of a community' [31]. Social computing empowers users to develop and share their knowledge with others and for the benefit of others. For example, blog writers are more active when they receive feedback and sometimes even spend more time responding to comments than on writing the blog posts [25].

3.4 Closed and Open Communities

Many online platforms are established by organisations with a view to facilitating work processes or social integration. Communities are often used as working tools, e.g. for courses at universities or work teams in workplaces, and as interaction mechanisms within organisations for cross-departmental collaboration but also for cross-organisational activities. The literature reviewed shows examples of companies using wiki approaches, blogs and social knowledge management [32], [33].
Educational institutions have set up platforms for sharing information, thus enabling communication and supporting collaboration between their students and staff, and also for supporting students in integrating their educational life with other activities [34]. A lot of learning in organisations takes the form of informal interactions in the working process and collaborative settings. Therefore, facilities for shared knowledge management, networking and dynamic communities in a common context can support learning and working on tasks, and develop new knowledge at a collective level. On the one hand, closed communities, where participation is, for instance, restricted to people within one organisation, can enhance effectiveness and trust in knowledge sharing. On the other hand, the openness of a community can increase the diversity of participants and ideas and thus contribute to developing new ideas and innovations. Both types of approach can have positive effects on learning, and task-based teams can develop into communities where people jointly engage in developing themselves and the community, and in achieving a collective goal.
K. Ala-Mutka, Y. Punie, and A. Ferrari
4 Learning in Online Networks and Communities

The literature and data reviewed show that the motivations for participation in informal online networking and collaboration are not dominated by an explicit desire to learn, although examples of work- and profession-related communities show this motivation as well [27], [29]. However, the literature suggests that people are learning a variety of topics and skills in these settings. Online networks and communities often support learning and meaning-making processes in ways which differ from traditional instructional approaches. Connecting with others and participating in online networks provide important emotional and cognitive support for learning, not only replacing face-to-face interaction or enhancing access to information resources but also increasing effectiveness and allowing personalisation of learning. Online networking and collaboration can facilitate learning related to all the key competences for lifelong learning2 [35]. These new collaborative approaches can also promote several transversal skills, such as collaboration and critical thinking, which are important elements of lifelong learning competences. In addition to topic-specific and transversal competences, participation in a community enables members to learn how to be a part of it, which entails various skills and knowledge relating to the values, practices and attitudes developed in the community, thus contributing to one’s own identity as a practitioner and a person. ICT is crucial for online networks and communities, as it allows them to form and provides specific affordances for learning by enabling new ways of encouraging reflection, experimentation, and creativity. It supports a social experience which is different from face-to-face settings, and provides tools for personalising learning paths and knowledge management.
Furthermore, ICT provides new ways of gathering and tracking explicit and implicit knowledge which comes from online interactions and activities.

4.1 New Ways and Skills to Learn

ICT and, in particular, social computing tools allow easy creation and sharing of a variety of media materials, which develop personal creativity and give the learner a sense of ownership and responsibility with regard to learning. Multimedia opportunities and the diverse availability of resources and connections can help individuals use their imagination, make new connections, and draft and explore ideas and creations [36], thereby personalising the inquiry and discovery processes. Creating and sharing photos, videos or podcasts, or writing a blog, enables users to practise skills such as using one's mother tongue or a foreign language, writing, and media production, through creating and discussing with others. Blogs and community profiles are also tools for building professional identities and for showing skills and competences acquired via individual learning paths. Blogs, wikis and online writing can also enable one to learn important transversal competences such as critical and reflective thinking, active participation, and metacognition [37], [38]. Participating in a global community with members from different cultures allows the participant to learn about and become aware of cultural expressions and differences. Furthermore, evidence shows that certain virtual environments, like the blogosphere, enhance the ability to participate in social and civic activities: 61% of blog writers want, to a greater or lesser extent, to motivate people to take action [30], and on average two in ten people have been spurred into action as a result of reading a blog [39]. Online profiles and identities provide young people with a new tool for identity exploration and development [40]. Young people use social networking sites in the micro-management of their social lives, as an arena for social exploration and to develop social networking skills [41]. Sharing stories enables learning through narratives situated in different contexts, providing new sources for reflection. For example, 52% of teenagers surveyed said that they had thought about moral issues during social online gaming [42]. Adults, too, find learning opportunities online: 62% of the adults participating in online social networking reported learning-related activities, such as reflection, sustaining social bonding, acquiring specific knowledge, and cultivating a constructive life [43].

2 Communication in mother tongue; communication in a foreign language; mathematical competence and basic competences in science and technology; digital competence; learning to learn; social and civic competences; entrepreneurship; and cultural awareness and expression.

4.2 Different Social Contexts for Learning

Social computing tools enable collaboration with large scale and reach, allowing experts and novices to connect, discuss and work together. Online communities of practice empower practitioners to communicate and share knowledge, and let novices learn from their expertise. Furthermore, global communities make it possible to quickly connect with someone to ask for advice.3 Of the IT professionals using online IT communities, 75% said that communities help them to do a better job, and 68% stated that participating in an online community helped their professional development [44].
Traditional educational settings often target the development and measurement of individual achievement, and therefore discourage, or sometimes do not even allow, informal interaction and collaboration. Online collaborative settings differ: the objective is not to obtain individual grades but to collaborate in building a joint product and to gain personal satisfaction from sharing and connecting with others. This develops social skills (leadership, group work) and also professional skills through social contexts, as for instance with collaborative production. Being able to connect with others in a similar situation and context, but via virtual facilities (possibly allowing anonymous interaction), can provide important personal and emotional support for learning and development. A virtual learning community can be a safe place for exploring roles and identities, and can help adult learners to enlarge their professional horizons and even make significant life changes [11]. Online communities relating to personally difficult topics can help personal development and integration in society, as studied in the case of the GayTV community in Italy [45].
3 An example of the educative responsiveness of a global community: in the World of Warcraft game community, novices get the first answer to their question on average in 32 seconds, and the community culture is to educate novices into the rules and ethos of the game environment [48].
4.3 New Ways to Access and Structure Learning

The new availability of a broad range of multimedia resources enables learning that is based on inquiry and exploration, where users are free to select the resources, communities and activities that match their interests and learning styles. Following a broad diversity of online networks and weak ties provides exposure to new information, opinions, and ideas different from our own, and new approaches to problem-solving [46]. Availability of, and participation in, the various networking opportunities also support the development of personal knowledge management and skills for further learning. Students are already appropriating networked technologies in multiple ways to support their personal and school-related needs [47]. Communities with established policies help newcomers learn to participate according to the norms and objectives of the community. For example, Wikipedia provides a context where, through the guidance and comments of more experienced members, newcomers learn about quality requirements for contributions, negotiation and collaboration. New contributors may then become committed Wikipedians themselves, taking responsibility for the initiative as a whole [49]. Learning and skills obtained are recognized by the community, and can lead to more responsibilities and a ‘higher’ position, which is acquired through contributions and not by official external certificates. This is happening, for instance, in the Englishforums community (www.englishforums.com), where there are hundreds of 'trusted users', i.e. regular users who have taken on more responsibility for answering questions posted on the site [45]. In addition to active productive participation, users also learn by observing and following the experts and activities in the communities [50], [51].
Although the majority of community members are often 'lurkers', studies show cases where around 40% of community members were 'active lurkers', who did not contribute to discussions but propagated and transferred knowledge developed in the community [52]. Furthermore, the freedom to participate through observing is an important route to active participation [28].
5 Challenges

Major barriers to learning in online social environments relate to a series of prerequisites, i.e. access, interest, previous knowledge, skills and awareness of the new learning opportunities (see Fig. 1). Not all individuals have the necessary skills for self-regulated learning, and they need support and structure. The internet is creating new opportunities to become informed, to raise issues for discussion, to connect and learn; it is thus important that everyone gets a chance to develop the capability to participate in communities and benefit from the information, resources and connections available. Furthermore, in order to encourage and to nurture lifelong learners, educational institutions themselves should be able to transform and develop their practices, taking into account these new opportunities for learning.

5.1 Access and Skills for Digital Participation

Access to ICT and the internet is still a concrete barrier for many in European society. According to 2008 Eurostat surveys, 62% of the EU27 population on average had accessed
the internet in the previous three months [53]. However, there are large differences between and within countries, especially in rural and poor areas, where internet penetration can be low. As a lot of internet content exists only in the English language, this may hinder participation for people with other native languages and/or low English proficiency. Furthermore, there are different social groups at risk of exclusion, such as older people, the less educated or the unemployed. In 2008 in the EU27, 63% of 55-74 year-olds surveyed had never used the internet, as opposed to only 7% of 16-24 year-olds. The level of education also has a strong impact: the percentage of people with no internet experience was 33% for the total population aged 16-74, but only 8% for the highly educated and 55% for those with low or no education [53]. Effective participation and learning in online communities requires that users acquire advanced digital competence with critical evaluation skills in producing and using resources and collaborating with others. These advanced digital skills do not result automatically from basic ICT usage skills [54]. Critical skills are required to ensure awareness of the unverified quality of content, of privacy and security aspects, and of respect for intellectual property rights. User-generated content that has not gone through traditional quality checks may reflect ill-informed or biased viewpoints. For instance, 13% of Wikipedia articles have been shown to have mistakes [55]. Online activities also raise new questions concerning the visibility and traceability of people and opinions, as online contributions and discussions can build up a visible and permanent digital trail. For example, 22% of managers hiring staff in the US use social networking sites to screen potential employees [56].

5.2 Skills and Interest for Learning

Surveys suggest that the level of education is strongly related to participation and interest in lifelong learning.
In 2006, of the 25-64 age group, only 3.7% of people with low education, as opposed to 18.7% with tertiary qualifications, participated in education and training activities [57]. Among older adults, high income is a significant factor with regard to the desire to learn, and low education can be considered an obstacle to being capable of learning more [58]. Furthermore, the perception of 'learning' may be at odds with actual performance, as collaborative knowledge construction activities are not necessarily perceived as being effective by the learners themselves [59], who are often used to defining learning as traditionally measured knowledge acquisition. Therefore, they may not perceive the value of, or take an interest in, the learning opportunities in informal collaborative settings. Few lifelong learners have systematically practised self-directed learning in their basic education, and it is likely that many people start participating in online communities without strong self-regulation skills for learning. They need a framework to learn the topics of their interest, as well as to acquire better skills for self-regulated learning. Studies suggest that for learners with low prior knowledge, externally facilitated learning is more effective than self-regulated learning [60], although they may themselves prefer self-directed activities [61].

5.3 Effectiveness of Community for Learning

A community with strongly linked members and joint engagement for collective meaning-making facilitates individual members' learning in the community. However,
communities run the risk of becoming static, as knowledge that supports the identity and current practices of their members is likely to be adopted more readily than knowledge that challenges current identity and practice [62]. Online community studies often bring up the need for moderating discussions in order to facilitate knowledge construction [63], [64]. Evaluating and ensuring the quality of a community as a place for learning is difficult, and in the longer term, communities might need dedicated facilitators to maintain the knowledge construction. Studies show that those benefiting most from online collaborative activities are the ones who already have previous knowledge and skills on the topic [65], [66]. Furthermore, a lot of what happens in the offline world cannot be learned or expressed through online interaction. Therefore, it is important to study further which types of learning are well supported in networked online environments, and which skills should be ensured for enabling people to develop the capability to benefit from these opportunities. Overall, there is a need for more research on how to nurture communities in order to enable them to renew their content and structure and to develop effective models for guiding newcomers, encouraging observers to become active members and existing members to engage as bottom-up leaders.

5.4 Implications for Education and Training

It is clear that online networking is becoming an important part of the activities related to work, leisure, learning and citizenship in the knowledge society. Educational institutions need to prepare people for taking part in these activities, and focus on the skills that are needed for lifelong learning. Students should be encouraged to participate in relevant communities during their studies, and, furthermore, educational institutions should aim to develop learners' digital and self-regulated lifelong learning skills throughout all levels of their educational path.
Research shows that these skills can be practised from early on, even in primary and secondary school [67], [68]. The examples reviewed demonstrate that online communities enable the learning of important transversal competences, such as collaboration, critical thinking, personal knowledge management and identity development. These are important skills for everyone, and institutions should consider what they could learn from informal online communities about providing learning opportunities for these skills – or whether they should encourage participation in such communities as part of self-directed learning activities during formal education. Both for initial education and for lifelong learning, there is a need to change assessment systems, which should move from assessing traditional knowledge on an individual basis to recognizing the new competences acquired in different ways.
6 Conclusions

ICT applications are enabling a large variety of communities to emerge, along with new ways for people to reach them and to collaborate in them. These online social contexts are becoming increasingly important among students in schools and universities, workers in the workplace, and citizens in society, supporting the learning of contemporary and relevant skills and knowledge. Furthermore, they provide environments for learning vital transversal skills for future jobs and personal development
through collaboration with others. As a result, educational institutions need to prepare learners for acquiring and developing the capability to participate in them. In order to achieve and identify personally relevant learning in these informal settings, people need skills and support for managing their learning trajectories through the many different networking opportunities available. The literature reviewed and discussed in this paper suggests that efforts are needed i) to close the digital gaps by fostering basic and advanced digital competence, ii) to support people with low initial skills and a low perception of their learning capabilities to start participating in networked learning opportunities, iii) to improve the awareness and appreciation of the different forms of learning available in the emerging settings, iv) to support and encourage developing networks and communities with effective community models that also guide the learning of newcomers, v) to study further the limitations and opportunities of learning in online networks and communities, and vi) to acknowledge the vital role of these informal online networks for learning, employment, participation and self-development. Currently, online communities typically exist separately from learning institutions, although they often provide spaces for the learning of similar topics. Integrating learning in these informal environments with recognised educational systems would call for innovative transformation of practices through new technologies as well as encouraging, valuing and acknowledging different forms of learning. This would require a paradigm shift in the objectives, management, and funding of organizations and educational institutions. Teachers play a key role in changing education and training practices, and special attention should be paid to professional communities which support their work and professional development. 
By tapping into the knowledge of the various experts, communities can provide many important innovations for transforming the educational institutions for the 21st century.
References

1. European Commission: Commission Staff Working Document. The use of ICT to support innovation and lifelong learning for all - A report on progress. SEC(2008) 2629 final (2008)
2. European Commission: New Skills for New Jobs: Anticipating and matching labour market and skills needs. COM(2008) 868/3 (2008)
3. European Commission: An updated strategic framework for European cooperation in education and training. COM(2008) 865 final (2008)
4. Centre for the Development of Vocational Training: Skill needs in Europe: Focus on 2020. Cedefop Panorama series 160. Office for Official Publications of the European Communities, Luxembourg (2008)
5. Nielsen Online: Global Faces and Networked Places. A Nielsen report on Social Networking’s New Global Footprint (2009)
6. Ala-Mutka, K.: Social Computing: Use and Impacts of Collaborative Content. IPTS Exploratory Research on Social Computing. Institute for Prospective Technological Studies (IPTS), Joint Research Centre, European Commission. EUR 23572 EN (2008)
7. Facetime: The Collaborative Internet: Usage Trends, Employee Attitudes and IT Impacts. Fourth Annual Survey (2008)
8. Piaget, J.: The essential Piaget. Basic Books, Inc., New York (1977)
9. Bruner, J.S.: The act of discovery. Harvard Educational Review 31(1), 21–32 (1961)
10. Kolb, D.A.: Experiential learning: experience as the source of learning and development. Prentice-Hall, Englewood Cliffs (1984)
11. Allan, B., Lewis, D.: The impact of membership of a virtual learning community on individual learning careers and professional identity. British J. of Educational Technology 37(6), 841–852 (2006)
12. Vygotsky, L.: Mind in Society: The Development of Higher Mental Processes. Harvard University Press, Cambridge (1978)
13. Ackermann, E.K.: Constructing knowledge and transforming the world. In: Tokoro, M., Steels, L. (eds.) A learning zone of one’s own: Sharing representations and flow in collaborative learning environments, pp. 15–37. IOS Press, Amsterdam (2004)
14. Bandura, A.: Social Learning Theory. Prentice Hall, Englewood Cliffs (1977)
15. Lave, J., Wenger, E.: Situated learning: Legitimate peripheral participation. Cambridge University Press, Cambridge (1991)
16. Rogoff, B., Paradise, R., Mejía Arauz, R., Correa-Chávez, M., Angelillo, C.: Firsthand learning through intent participation. Annual Review of Psychology 54, 175–203 (2003)
17. John-Steiner, V., Mahn, H.: Sociocultural approaches to learning and development: A Vygotskian framework. Ed. Psychologist 37(3/4), 191–206 (1996)
18. Zimmerman, B.J.: Self-regulated learning and academic achievement: An overview. Ed. Psychologist 25(1), 3–17 (1990)
19. Boekaerts, M., Niemivirta, M.: Self-regulated learning: Finding a balance between learning goals and ego-protective goals. In: Boekaerts, M., Pintrich, P.R., Zeidner, M. (eds.) Handbook of self-regulation, pp. 417–450. Academic Press, San Diego (2000)
20. Hidi, S.: Interest: A unique motivational variable. Ed. Research Review 1, 69–82 (2006)
21. Siemens, G.: Knowing Knowledge (2006), http://www.knowingknowledge.com/
22. Garrison, D., Arbaugh, J.: Researching the community of inquiry framework: Review, issues, and future directions. The Internet and Higher Education 10(3), 157–172 (2007)
23. Engeström, Y.: Learning by expanding: An activity-theoretical approach to developmental research. Orienta-Konsultit, Helsinki (1987)
24. Preece, J.: Online Communities: Designing Usability, Supporting Sociability. Wiley, Chichester (2000)
25. Cardon, D., Aguiton, C.: The Strength of Weak Cooperation: an Attempt to Understand the Meaning of Web 2.0. Communications & Strategies 65, 51–65 (2007)
26. Ryberg, T., Larsen, M.: Networked identities: understanding relationships between strong and weak ties in networked environments. J. Comp. Assisted Learning 24, 103–115 (2008)
27. Gray, B.: Informal Learning in an Online Community of Practice. Journal of Distance Education 19(1), 20–35 (2004)
28. Hew, K.F., Hara, N.: An online listserv for nurse practitioners: A viable venue for continuous nursing professional development? Nurse Education Today 28, 450–457 (2008)
29. Lakhani, K., Wolf, R.: Why Hackers Do What They Do: Understanding Motivation and Effort in Free/Open Source Software Projects. In: Feller, J., Fitzgerald, B., Hissam, S., Lakhani, K. (eds.) Perspectives on Free and Open Source Software. MIT Press, Cambridge (2005)
30. Lenhart, A., Fox, S.: A portrait of the internet’s new storytellers. Pew/Internet (2006)
31. OCLC: Sharing, Privacy and Trust in Our Networked World. A Report to the OCLC Membership (2007)
32. Osimo, D.: Web 2.0 in Government: Why and How? Institute for Prospective Technological Studies (IPTS), Joint Research Centre, European Commission. EUR 23358 EN (2008)
33. Majchrzak, A., Wagner, C., Yates, D.: Corporate Wiki Users: Results of a Survey. In: Proceedings of WikiSym 2006, pp. 99–104 (2006)
34. Redecker, C.: Review of Learning 2.0 Practices: Study on the Impact of Web 2.0 Innovations on Education and Training in Europe. Institute for Prospective Technological Studies (IPTS), Joint Research Centre, European Commission. EUR 23664 EN (2009)
35. European Parliament and the Council: Recommendation of the European Parliament and the Council on key competences for lifelong learning. Official Journal of the European Union, L394 (2006)
36. Loveless, A.: Report 4 update: Creativity, technology and learning – a review of recent literature. Futurelab series (2007)
37. Xie, Y., Ke, F., Sharma, P.: The effect of peer feedback for blogging on college students’ reflective learning processes. The Internet and Higher Education 11, 18–25 (2008)
38. Antoniou, P., Siskos, A.: The Use of Online Journals in a Distance Education Course. In: Proceedings of the EDEN Annual Conference 2007 (2007)
39. Edelman, A.: Corporate Guide to the Global Blogosphere (2007)
40. Cachia, R.: Social Computing: The Case of Social Networking. IPTS Exploratory Research on Social Computing. Institute for Prospective Technological Studies (IPTS), Joint Research Centre, European Commission. EUR 23565 EN (2008)
41. Perkel, D.: Copy and paste literacy: Literacy practices in the production of a MySpace profile. In: DREAM Conference of Digital Media and Informal Learning (2006)
42. Lenhart, A., Kahne, J., Middaugh, E., Macgill, A., Evans, C., Vitak, J.: Teens, Video Games, and Civics. Pew Internet & American Life Project (2008)
43. Park, Y., Heo, G., Lee, R.: Cyworld is my world: Korean adult experiences in an online community for learning. Int. Journal of Web Based Communities 4(1), 33–51 (2008)
44. King Research: The Value of Online Communities: A Survey of Technology Professionals (2007)
45. Aceto, S., Dondi, C., Marzotto, P.: Case Studies on Pedagogical Innovations in New Learning Communities. Report prepared for IPTS (to be published)
46. Haythornthwaite, C.: Learning relations and networks in web-based communities. International Journal of Web Based Communities 4(2), 140–158 (2008)
47. Conole, G., de Laat, M., Dillon, T., Darby, J.: ‘Disruptive technologies’, ‘pedagogical innovation’: What’s new? Findings from an in-depth study of students’ use and perception of technology. Computers & Education 50(2), 511–524 (2008)
48. Nardi, B., Ly, S., Harris, J.: Learning Conversations in World of Warcraft. In: Proceedings of the 40th Hawaii International Conference on System Sciences. IEEE Press, Los Alamitos (2007)
49. Bryant, S., Forte, A., Bruckman, A.: Becoming Wikipedian: Transformation of Participation in a Collaborative Online Encyclopaedia. In: Proc. of ACM GROUP Conference (2005)
50. Dennen, V.: Pedagogical lurking: Student engagement in non-posting discussion behaviour. Computers in Human Behaviour 24, 1624–1633 (2008)
51. Holliman, R., Scanlon, E.: Investigating cooperation and collaboration in near synchronous computer mediated conferences. Computers and Education 46, 322–335 (2006)
52. Takahashi, M., Fujimoto, M., Yamasaki, N.: The Active Lurker: Influence of an In-house Online Community on its Outside Environment. In: Proceedings of ACM GROUP 2003 (2003)
53. Eurostat database, http://epp.eurostat.ec.europa.eu/
54. Ala-Mutka, K., Punie, Y., Redecker, C.: Digital Competence for Lifelong Learning. Institute for Prospective Technological Studies (IPTS), European Commission, Joint Research Centre. Technical Note: JRC 48708 (2008)
55. Chesney, T.: An empirical examination of Wikipedia’s credibility. First Monday 11 (2006)
56. Careerbuilder: One-in-Five Employers Use Social Networking Sites to Research Job Candidates. Press release (September 10, 2008)
57. Eurostat: The social situation in the European Union 2007 (2009)
58. Boulton-Lewis, G., Buys, L., Lovie-Kitchin, J.: Learning and active aging. Educational Gerontology 32, 271–282 (2006)
59. Benbunan-Fich, R., Arbaugh, J.B.: Separating the effects of knowledge construction and group collaboration in learning outcomes of web-based courses. Information & Management 43, 778–793 (2006)
60. Azevedo, R., Moos, D., Greene, J., Winters, F., Cromley, J.: Why is externally-facilitated regulated learning more effective than self-regulated learning with hypermedia? Education Technology Research and Development 56(1), 45–72 (2008)
61. Kopcha, T., Sullivan, H.: Learner preferences and prior knowledge in learner-controlled computer-based instruction. Ed. Tech. Research and Development 56(3), 265–286 (2008)
62. Roberts, J.: Limits to Communities of Practice. J. of Management Studies 43(3), 623–639 (2006)
63. Song, L., Singleton, E., Hill, J., Koh, M.H.: Improving online learning: Student perceptions of useful and challenging characteristics. Internet and Higher Education 7, 59–70 (2004)
64. Dennen, V.: Looking for evidence of learning: Assessment and analysis methods for online discourse. Computers in Human Behaviour 24, 205–219 (2008)
65. Turvey, K.: Towards deeper learning through creativity within online communities in primary education. Computers & Education 46, 309–321 (2006)
66. Prinsen, F., Volman, M., Terwel, J.: The influence of learner characteristics on degree and type of participation in a CSCL environment. British J. of Ed. Tech. 38(6), 1037–1055 (2007)
67. Steffens, K.: Self-Regulated Learning in Technology Enhanced Learning Environments: lessons of a European peer review. European J. of Education 41(3/4), 353–380 (2006)
68. Dignath, C., Buettner, G., Langfeldt, H.-P.: How can primary school students learn self-regulated learning strategies most effectively? A meta-analysis on self-regulation training programmes. Educational Research Review 3, 101–129 (2008)
Self-profiling of Competences for the Digital Media Industry: An Exploratory Study

Svenja Schröder, Sabrina Ziebarth, Nils Malzahn, and H. Ulrich Hoppe

Collide Research Group, Department for Computer Science and Applied Cognitive Science, University of Duisburg-Essen, Forsthausweg, 47057 Duisburg, Germany
{schroeder,ziebarth,malzahn,hoppe}@collide.info

Abstract. The IT and media sector is characterized by rapid changes in market-relevant competences. These include "creative", technical as well as other competences. In an ongoing R&D project, we study the interrelation of competence development and innovation in this field. In this context, a study of different interfaces for self-profiling has been conducted with students from two related but different study programs as subjects. The aim was to find dependencies between personal characteristics (especially creativity), self-profiling behavior and the perception of matched job offers with respect to innovativeness, attractiveness and overstrain. Although the different student groups (interactive media and computer science) do not show differences on the personal creativity scale, they differ considerably in their competence preferences. More flexible options in the profiling interfaces are not used for a stronger differentiation between competences.

Keywords: self-profiling, creativity, competences, ontology, matching.
1 Introduction

Competence modeling and assessment are important issues in vocational education and training [1]. Especially in the emerging field of the digital economy, job requirements change rapidly. New trends and the continuing convergence of technologies in this sector create a need for ongoing competence development [2]. Avoiding issues of operationalization, competences can be considered as the central terms describing the abilities and skills that potential applicants should possess [3]. We assume that the development of competences will become a key issue for innovativeness and thus form a critical success factor in this field, especially for small and medium-sized companies. Against this background, the project KoPIWA, which is funded by the German BMBF (01FM07067-72), aims at developing a comprehensive model for software-supported competence management in the IT and media industry. Specific requirements are derived from the dynamics of "open innovation" developments. One fundamental part of this approach is the creation of user competence profiles. Conventional self-profiling interfaces often provide lists of competences to be rated on a pre-defined scale and therefore leave little freedom for creating individual profiles¹.
¹ http://www.tencompetence.org/node/182, http://www.stepstonesolutions.de/Loesungen/Skill_Kompetenz_Management/Skill_Kompetenz_Management.php
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 365–378, 2009. © Springer-Verlag Berlin Heidelberg 2009
366
S. Schröder et al.
In this paper, we report on our findings regarding user interfaces for self-profiling. We focus on dependencies between personality characteristics (especially creativity), self-profiling behavior and the perception of matched job offers with respect to innovativeness, attractiveness and overstrain. Additionally, we take a deeper look at the impact of the students' study programs. The study itself consists of two parts: a questionnaire asking for demographic data, study program and creativity indicators, and the creation of a competence profile with our profiling system. The profiling system design used in this study allows examining the profiling behavior of the participants. They were asked to create a profile from competences they considered relevant to their job interests. Then their profiles were matched against all job profiles in a database. The three most relevant profiles according to the matching were then presented, and several feedback measures were collected. The profiling system consists of several parts:
• Profiling interfaces: Each user is provided with one of three profiling interfaces differing in their degrees of freedom.
• Job database: A sample of 152 job offers originating from the online job portal of the Federation of German Digital Economy (BVDW)².
• Matching algorithm: The matching of user and job profiles considers the user's weighted choice of competences and the taxonomic relations of the competences, resulting in a list of ranked job offers.
• Job offer presentation: The three job offers with the highest rank are presented to the user.
• Feedback: The user is asked to characterize the presented job offers with regard to innovation (Does this job deal with innovative tasks?), attractiveness (Does this job offer fit my interests?), and demands (Am I able to comply with the job offer's requirements?).
2 Tools and Interfaces

2.1 Interfaces for Profiling

The interfaces for competence profiling provide different degrees of freedom to the user with regard to the possibilities during profile creation. Competence profiles consist of at least one and at most seven competences. The competences are chosen from an underlying competence ontology consisting of 225 competences of the digital media sector. The user interface for profiling consists of two parts: the first part shows a list of all competences available for profiling (see figure 2.1). The list on the left shows the general concepts of the ontology (categories) and is used for filtering. The specific competences are listed on the right. Depending on the provided interface, there are different ways in which the user can express their individual order of importance of the chosen competences.
² http://www.bvdw.org/index.php?id=stellenangebote
Self-profiling of Competences for the Digital Media Industry
367
Fig. 2.1. Interface with list of available competences
The first interface arranges the selected competences in an ordered list (see figure 2.2). Therefore, the competences have to be strictly prioritized, forcing the user to decide which competence matters most for the job search.
Fig. 2.2. Profiling interface 1 with sorted list
The second interface allows the user to assign a limited number of 28 ranking points to the selected competences (see figure 2.3). The user can freely spend the points on the competences. Thus, the user has more possibilities to weight the competences according to their preferences, but is still bounded by the overall total.
Fig. 2.3. Profiling interface 2 with bounded number of ranking points
Fig. 2.4. Profiling interface 3 with relative ranking
The number of ranking points in the third interface (see figure 2.4) is not limited, so the user can assign as many points as they want to each competence. The visualization shows the ranking of each competence relative to the other competences according to the assigned points. Table 2.1 summarizes the degrees of freedom, where the first interface (Sorted) has the fewest degrees of freedom with regard to weighting and the third interface (Relative) has the most.

Table 2.1. Degrees of freedom of the three profiling interfaces

Interface     Forced order   Flexible weighting interval steps   (Nearly) unlimited possibilities
1) Sorted     Yes            No                                  No
2) Bounded    No             Yes                                 No
3) Relative   No             Yes                                 Yes
2.2 Ontology-Based Matching

Job offers contain several terms describing the same competences. For instance, the terms "team work", "team player", "work in teams" and "team spirit" (which occur in the job offers) are instances of the competence concept "ability to work in teams". These terms were extracted and aggregated into competence concepts, resulting in a competence vocabulary.
Fig. 2.5. Extraction of the ontology
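The aggregation of surface terms into competence concepts can be sketched with a simple term-to-concept dictionary; the mapping below is illustrative, not the project's actual vocabulary:

```python
# Map surface terms found in job offers to competence concepts.
# The term lists are illustrative examples, not the actual KoPIWA vocabulary.
TERM_TO_CONCEPT = {
    "team work": "ability to work in teams",
    "team player": "ability to work in teams",
    "work in teams": "ability to work in teams",
    "team spirit": "ability to work in teams",
    "java": "object oriented programming",
    "php": "scripting",
}

def extract_concepts(job_offer_text):
    """Return the set of competence concepts whose instance terms occur in the text."""
    text = job_offer_text.lower()
    return {concept for term, concept in TERM_TO_CONCEPT.items() if term in text}

print(extract_concepts("We seek a team player with Java experience"))
```

A production pipeline would of course need tokenization and morphological normalization rather than substring search; this sketch only illustrates the term-to-concept aggregation step.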
The competence concepts have been arranged in a taxonomy to represent the relations (generalization/specialization) between the single competences (see figure 2.5). The root concept "competence" divides into the concepts "professional competence", "conceptual competence", "personal competence" and "social competence", which in turn have sub concepts. Because most of the extracted competences were professional competences, this taxonomy branch has the deepest hierarchy. The controlled vocabulary and its taxonomic layout ease the creation of the user profiles, because the number of available competences is constrained and the meaning of each competence is defined by its semantic environment, avoiding the problems of synonymy and polysemy. Additionally, the taxonomic relations of the competences can be used for matching.

Table 2.2. Example of user and job profiles

User Profile:    Object oriented Programming (0.5), Scripting (0.3), Data bases (0.1), Ability to work in teams (0.1)
Job Profile 1:   Scripting, Data bases
Job Profile 2:   Java, PHP, SQL, teamwork
A standard Boolean matching only compares the appearance of terms in documents, so in the example (table 2.2) the user profile and job profile 1 share two of four terms, giving a 50% match. On the other hand, the user profile and job profile 2 share no terms, resulting in a 0% match. Boolean matching does not consider that there are relations between the terms, e.g. that Java is an object-oriented programming language, or that there are terms meaning the same, like "ability to work in teams" and "teamwork". Thus, the matching between the user profile and job profile 2 should be much better than 0%. This can be achieved by including the taxonomy and its instances in the matching. Our algorithm maps the competence concepts of the user profile to their instances in the job offers. If the job offer contains one of the instances, it is considered to contain the competence. The taxonomy is involved by also considering all of the instances of the sub competences of the user profile's competences (term expansion). This leads to a match of 100% in example 2. The users weight their competences by assigning ranking points or by sequencing them according to their importance. To avoid the need for different ranking functions, interface variant 1 (sorted order) is mapped to equidistant weights, i.e. the first competence in the list gets as many points as there are competences in the profile, and the last one point. If the user e.g. chooses seven competences, the first competence gets seven points, the second six etc.; if the user chooses only three competences, the first gets three points, the second two points and the last one point. The weighting of a competence is the fraction of its ranking points compared to the total amount of ranking points in the profile. Considering the weightings, our ranking function is defined as follows. Let U be a user profile containing competence concepts uc, each with a weighting uw ∈ [0,1], and let J be a job profile consisting of competence instances. Furthermore, containsCompetence(uc, J) is a function indicating whether an instance of a given uc from U is contained in J:

    containsCompetence(uc, J) = 1, if uc or a sub concept of uc has an instance included in J; 0, otherwise.

Then the ranking of a J according to a given U is calculated by rank(J, U):

    rank(J, U) = Σ_{uc ∈ U} containsCompetence(uc, J) · uw
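A minimal executable sketch of containsCompetence and rank, with a toy taxonomy and invented instance lists standing in for the 225-concept ontology:

```python
# Toy taxonomy: concept -> direct sub concepts (illustrative, not the real ontology).
SUB_CONCEPTS = {
    "object oriented programming": ["java"],
    "scripting": ["php"],
    "java": [], "php": [], "data bases": [], "ability to work in teams": [],
}
# Instances (surface terms) of each concept, as extracted from job offers.
INSTANCES = {
    "object oriented programming": {"object oriented programming"},
    "java": {"java"},
    "scripting": {"scripting"},
    "php": {"php"},
    "data bases": {"data bases", "sql"},
    "ability to work in teams": {"teamwork", "team player"},
}

def contains_competence(uc, job_terms):
    """1 if uc or any sub concept of uc has an instance occurring in the job offer, else 0."""
    stack = [uc]
    while stack:
        c = stack.pop()
        if INSTANCES.get(c, set()) & job_terms:
            return 1
        stack.extend(SUB_CONCEPTS.get(c, []))  # term expansion over the taxonomy
    return 0

def rank(job_terms, user_profile):
    """user_profile: dict mapping concept -> normalized weight uw in [0, 1]."""
    return sum(contains_competence(uc, job_terms) * uw for uc, uw in user_profile.items())

user = {"object oriented programming": 0.5, "scripting": 0.3,
        "data bases": 0.1, "ability to work in teams": 0.1}
job2 = {"java", "php", "sql", "teamwork"}  # job profile 2 from table 2.2
print(rank(job2, user))                    # all four concepts match via term expansion
```

With this expansion, job profile 2 from table 2.2 reaches the full weight sum of 1.0 although it shares no literal term with the user profile, exactly as argued above.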
3 Hypotheses and Research Questions

The study examines the profiling behavior of people with different degrees of creativity and from different courses of study. How do participants from the media and IT sector model their competence profiles when they search for a job? Do creative minds have other requirements or needs when it comes to creating competence profiles than non-creative people? Do they make use of the degrees of freedom provided by the interfaces? What are the differences between creative and non-creative persons? Another strand of questions concerned how the participants perceive the resulting job offers presented by the system. How does the perceived innovativeness of a job offer influence its perceived attractiveness? Are innovative job offers more attractive to the participants? How do perceived demands correlate with perceived attractiveness? Do the participants prefer challenging job offers? Furthermore, differences in competence preferences during the profiling process between the student groups are examined. Do students in interactive media have other preferences than students in computer science? How do their preferences differ?
4 Study Setup

4.1 Participants and Setting
Sixty-six students of either "Applied Communication and Media Science" or "Applied Computer Science" participated in the study. The majority of these participants were in their introductory study period (semesters 1 to 4). Both study programs qualify for jobs in the sector of IT and digital media. The study was performed locally at the university to avoid interfering influences. The study had two phases: the first phase consisted of a survey with basic questions about demographics (e.g. course of studies) and questions for measuring creativity indicators. In the second phase, the participants were asked to enter their profiles into the profiling system described above.

4.2 Measuring Creativity
The concept of creativity is fuzzy and difficult to operationalize. Schuler et al. [4] list several facets of the concept of creativity: creativity as a character trait, as a product, as a requirement and as an aim of career advancement. Measuring creativity requires the
use of several different measuring methods [4], e.g. personality test procedures, biographical interviews and simulations to provoke typical creative behavior. In this study, only personality test procedures were used. Therefore, the outcome of those test procedures is referred to as 'creativity indicators', as it is not possible to determine the exact creativity of the participants. A questionnaire was adopted that consists of three factors to measure indicators of creativity. The three factors assessed the degree of "investigative interest", "artistic interest" and "work interest". The first two factors were derived from the General Interest Structure Test (AIST) [5], which is based on the RIASEC model by Holland [6]. It is a standard instrument in the area of career counseling. According to Schuler [4], the two factors 'investigative' and 'artistic' of the AIST can be seen as creativity indicators. Additionally, a third factor was developed to determine specific work interest, i.e. whether a person prefers to work creatively or to deal with routine tasks. All three factors use a five-point Likert scale where "1" represents "not interested at all" and "5" represents "very interested".

Table 4.1. Reliability analysis of the scales AIST-I ("investigative"), AIST-A ("artistic") and "work interest"

Scale                   Number of items   Cronbach's Alpha
AIST-I                  10                0.848
AIST-A                  10                0.826
Work interest           9                 0.850
All 3 scales together   29                0.717
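The reported internal consistencies follow the standard Cronbach's alpha formula, α = k/(k−1) · (1 − Σ σ²_item / σ²_total). A minimal pure-Python sketch, computed on toy data rather than the study's responses:

```python
def cronbach_alpha(items):
    """items: list of per-item score lists, one inner list per item, aligned by participant."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance (consistent use of sample variance works too)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Three perfectly correlated items yield alpha = 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

In practice one would use a statistics package (e.g. the reliability procedures of SPSS, which the authors' analysis register suggests), but the formula itself is this simple.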
Those scales measure different aspects of creativity. The results of the three scales were then combined into an overall score, since the internal consistency (Cronbach's Alpha) of all three scales together was 0.717. The combined value is then an indicator for the creative potential of the person.

4.3 Profiling and Job Offer Evaluation
During the second phase of the study, the students were randomly assigned to one of the three profiling interfaces to create their personal interest profile. They were asked to specify their profile with up to seven competences. These competences represented their requirements concerning the search for suitable job offers. The competence profiles were saved for further examination.
Fig. 4.1. Start of the second part of the study
A list of the three highest-rated job offers was presented to every user, who had to read the job offers thoroughly. Afterwards she or he was asked to assess each offer with respect to attractiveness, innovativeness, and overstrain. These factors use a five-point Likert scale where "1" represents "not at all true" and "5" represents "very true".

Table 4.2. Reliability analysis of the scales "result attractiveness", "result innovativeness" and "result overstrain"

Scale            Number of items   Cronbach's Alpha
Attractiveness   5                 0.943
Innovativeness   4                 0.734
Overstrain       6                 0.745
After the participants finished the assessment of the job offers, they had to answer questions concerning the effort needed to create their personal profile with the profiling interface. Those questions used a five-point Likert scale ranging from "very easy" (1) to "highly demanding" (5).

Table 4.3. Reliability analysis of the scale "mental effort"

Scale           Number of items   Cronbach's Alpha
Mental effort   5                 0.943
5 Results

The competence preferences in both student groups were different. Table 5.1 shows the most frequent competences in the users' profiles.

Table 5.1. Most frequent competences overall

Place   Competence                     Frequency of occurrence
1)      Psychology of Advertising      19
2)      Internet Search                13
        Ability to Communicate         13
4)      Ability to Work in a Team      12
5)      Graphic Design                 11
        Creativity                     11
        Image Editing                  11
8)      Photoshop                      10
        Reliability                    10
10)     Ability to Work Autonomously   9
        Project Management             9
        Usability                      9
Table 5.2. Most frequent competences overall by students of interactive media (N = 48)

Place   Competence                     Frequency of occurrence
1)      Psychology of Advertising      18
2)      Internet Search                13
3)      Image Editing                  10
        Graphic Design                 10
        Ability to Communicate         10
6)      Creativity                     9
        Reliability                    9
8)      Ability to Work in a Team      8
9)      Ability to Work Autonomously   7
        Innovativeness                 7
        Project Management             7
        Willingness to Learn           7
Students of interactive media focused on competences from traditional media fields (like "Graphic Design" or "Psychology of Advertising") and on social and personal competences (see table 5.2). Only few chose technical competences from the list, although sometimes "soft" technical skills like "Internet Research" or "Photoshop" were picked. Students of computer science mostly preferred "hard" technical competences like "Database Systems" and "Programming" (see table 5.3).

Table 5.3. Most frequent competences overall by students of computer science (N = 18)

Place   Competence                  Frequency of occurrence
1)      Database Systems            7
2)      Programming                 5
3)      Linux                       4
        Microsoft Windows           4
        Modeling                    4
6)      Enthusiasm                  3
        Java                        3
        Ability to Communicate      3
        Software Development        3
        Ability to Work in a Team   3
        Usability                   3
Although it seems that both student groups have different preferences, a deeper look reveals that their competence choices overlap at certain points. Clustering the competence profiles with the K-Means algorithm [7] results in two clusters (see table 5.4).

Table 5.4. Clusters with K-Means

Student group       Cluster 1   Cluster 2
Interactive Media   26          21
Computer Science    13          4
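The clustering step can be reproduced in outline with a plain k-means over 0/1 competence vectors; the four toy profiles below are illustrative, not the study's data:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on competence vectors (e.g. 0/1 indicators per competence)."""
    rnd = random.Random(seed)
    centers = [list(p) for p in rnd.sample(points, k)]
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        # update step: each center becomes the mean of its assigned points
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Two obvious groups: "technical" profiles vs "social/management" profiles
profiles = [(1, 1, 0, 0), (1, 1, 0, 0), (0, 0, 1, 1), (0, 0, 1, 1)]
labels = kmeans(profiles, 2)
print(labels)
```

Which numeric label each cluster receives depends on the random initialization; only the partition itself is meaningful, as in the paper's two-cluster result.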
While the students of Interactive Media split into nearly equal parts across the two clusters, almost all students of Computer Science are in cluster 1. Thus, cluster 1 contains profiles of both groups. Because the profiles are clustered by their competences, there seems to be an overlap in the self-perception of both groups. Cluster 1 highlights technical surface-level competences like image editing and (web) design, but also programming, data base systems and operating systems. Profiles in cluster 2 emphasize on the one hand social and personal competences and on the other hand non-technical competences like marketing and (project) management. These results reflect the profiles of the two courses of study. Computer Science focuses on technical competences from computer and electrical engineering, math and physics. Interactive Media combines technical aspects from computer science with psychology, business studies and additional courses in art, design and languages. Thus, both courses have a certain overlap in basic technical competences from computer science, like modeling and programming. A Pearson correlation revealed that the answers for attractiveness, innovativeness and overstrain correlate positively at a medium level: high values in perceived attractiveness go along with high values in innovativeness, etc.

Table 5.5. Pearson correlations between perceived attractiveness, perceived innovativeness and perceived overstrain

Value ↔ Value                     Correlation
Attractiveness ↔ Innovativeness   0.436
Innovativeness ↔ Overstrain       0.463
Overstrain ↔ Attractiveness       0.483
The participants' creativity indicator values have a mean of M = 3.32, SD = 0.41, where the lowest value is 2.31 and the highest value is 4.52. To take a closer look at how the creative persons differ from the less creative persons, the participants were split into three groups: a lower quarter (with creativity values below 3.04), an upper quarter (with creativity values higher than 3.56) and the medium range, which was left out to pinpoint the effects. People with high values in the creativity indicator tend to have higher values in perceived attractiveness and perceived innovativeness of the job offers (see table 5.6). This was analyzed using an independent t-test, which showed significance with t(90) = -3.36, p < .05 (for attractiveness) and t(90) = -3.32, p < .05 (for innovativeness)³.

Table 5.6. Job offer evaluation values according to creativity

Value            Group           Mean   Std. Deviation   N
Attractiveness   More Creative   3.51   1.16             48
                 Less Creative   2.71   1.12             44
Innovativeness   More Creative   3.37   0.50             48
                 Less Creative   3.00   0.57             44

³ "Independent t-test: a test using the t-statistics that establishes whether two means collected from independent samples differ significantly" [8].
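The independent t-tests reported here use the standard Student statistic with pooled variance; a pure-Python sketch on invented scores (not the study's data):

```python
import math

def independent_t(sample_a, sample_b):
    """Student's independent-samples t statistic with pooled variance.
    Returns (t, degrees of freedom); the p-value is then read from a t table."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2

less_creative = [2.0, 2.5, 3.0, 2.5, 3.5]  # invented attractiveness ratings
more_creative = [3.0, 3.5, 4.0, 3.5, 4.5]
t, df = independent_t(less_creative, more_creative)
print(round(t, 2), df)
```

A negative t here simply reflects the order of the two samples (first mean minus second mean), matching the negative t values reported in the paper where the "less creative" group comes first.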
There is also a positive relationship between the participants' creativity indicator value and the ratings of the job offers in perceived attractiveness and perceived innovativeness, as the values correlate on a medium level (see table 5.7).

Table 5.7. Correlation between perceived attractiveness, perceived innovativeness and creativity

Value ↔ Value                 Correlation r
Creativity ↔ Attractiveness   0.320
Creativity ↔ Innovativeness   0.384
Although students of interactive media are said to be more creative than students of computer science, a comparison revealed that both groups show almost identical levels of creativity according to our scale (see table 5.8). The difference was not significant, t(64) = -.39, p > .05.

Table 5.8. Creativity mean according to student groups

Student group       Mean Creativity   Std. Deviation   N
Interactive Media   3.310             0.451            48
Computer Science    3.356             0.304            18
"Creativity" could also be used as an item in self-profiling. Interestingly, those who chose "creativity" for their competence profile (N = 11) scored significantly higher on the creativity assessment scale (t(64) = -2.074, p < .05). Therefore, the persons who profiled themselves as "creative" were indeed the more creative persons according to the creativity scale. In order to take a closer look at how the participants made use of their freedom of choice during the profiling process, the ranges of the normalized competence weights in the profiles were calculated (see table 5.9). The range of the profile weights indicates whether the participants made use of their degrees of freedom, e.g. by assigning extreme values in the third interface type. The majority of participants chose seven competences for their profile, which means that the opportunity to pick seven competences was mostly used. In the first profiling interface the weighting intervals between the competences were equidistant, due to the forced ordering of the competences. Therefore the distribution of the normalized weights only differs between profiles with different numbers of chosen competences. The second profiling interface, using a limited number of rating points, led to different profiling behaviors: some participants chose a "flat" distribution with rather equal weights, and some chose rather widespread, i.e. divergent, weightings for their competences. In the third profiling interface the degrees of freedom due to the unlimited number of ranking points were rarely used. Apart from one outlier, most of the profiles are "flatly" distributed.
Table 5.9. Profile weight ranges

Profiling Interface   Min     Max     Mean Range   Std. Deviation   N
1) range "sorted"     0.208   0.238   0.210        0.006            22
2) range "bounded"    0.0     0.393   0.144        0.087            24
3) range "relative"   0.0     0.625   0.136        0.139            18
As we can see in table 5.9, the weighting ranges for profiling interface 2 ("bounded") and profiling interface 3 ("relative") do not differ significantly, since both range means are on the same level (interface 2 range M = 0.14, interface 3 range M = 0.14). The range for profiling interface 3, which provided the most degrees of freedom, has the lowest range value, which underlines that the participants left these affordances unused. Thus, we can say that the participants did not utilize the provided degrees of freedom. To examine whether the profiling behavior of the more creative participants differs from that of the less creative, the weight ranges were analyzed using an independent t-test. The test results showed that there is no significant difference between both groups, t(29) = -.35, p > .05; this means that creative people do not differ in their profiling behavior in general. To take a closer look at how the more creative participants used their possibilities in the third profiling interface, the ranges were analyzed using an independent t-test. There was no significant difference between the ranges of both groups, t(7) = -.3, p > .05. On the other hand, creative persons were slightly more hampered by the profiling interfaces (M = 4.36, SD = 0.64 on perceived mental effort) than the less creative people (M = 2.74, SD = 0.66). This difference is significant, t(30) = -7.06, p < .05. Finally, the effects of the perceived attractiveness regarding the matched job offers were examined. There was no positive correlation between the values of the matching and perceived attractiveness (r = -.10). Some descriptives of the job offer matching values are provided in table 5.10. Accordingly, there was also no significant difference in perceived attractiveness between the different profiling interfaces. This was analyzed using an independent ANOVA, F(2, 189) = 0.70, p > .05⁴.

Table 5.10. Descriptives of the matching values

Value             Min     Max     Mean    Std. Deviation   Range   N
Matching values   0.214   1.000   0.552   0.161            0.827   192
6 Discussion

Since perceived attractiveness, perceived innovativeness and perceived overstrain correlate at a medium level, it might be assumed that innovative job offers are often also perceived as attractive. Since overstrain and attractiveness correlate in a positive way, it is also possible that the applicants are searching for challenges. Nevertheless,

⁴ "Independent ANOVA: analysis of variance conducted on any design in which all independent variables or predictors have been manipulated using different participants (i.e. all data come from different entities)" [8].
both assumptions are to be made with caution, since the participants are students in introductory courses who may be overstrained by job offers in general. Furthermore, the sample size was relatively small (N = 66). Notably, students from interactive media and computer science show on average very similar levels on the creativity scale. This appears to contradict the classification of "creative jobs" in the media industry, especially in media design. Indeed, the verbal labeling of such jobs does not coincide fully with a psychologically plausible notion of creativity, which would also include programming aptitudes and not only design skills. Astonishingly, the more creative participants do not show a different profiling behavior from the less creative participants, although their perceived mental effort with the profiling user interface is higher. These findings suggest that creative people generally find profiling exhausting: they do not want to spend much time and effort on creating a detailed profile. Thus, it might be helpful to minimize their effort by providing them with proposals of possibly relevant competences based on their records. Tracks like chat protocols can be evaluated automatically to generate indicators of a user's competences from their contributions to the discussed topics, considering the feedback of the other users (agreement/disagreement, direct and indirect references) [9]. This method may be applied to forum discussions, too. Furthermore, there are patterns in the co-occurrence of competences in digital job offers, which can be detected by data mining and information retrieval methods like association rules or latent semantic indexing [10]. These relations may either be used to suggest further competences based on the current user profile, or they may be directly considered for matching the user profiles with the job profiles.
As demonstrated in section 5, there is no correlation between the ranking and the users’ perceived attractiveness of a job offer. Furthermore, the perceived attractiveness averages at 3.21, which shows that the results were perceived only as slightly attractive. One problem might be that the matching algorithm considers the instances of all sub concepts of a chosen competence in a user profile with the same weighting as the instances of the competence itself. It might be better to use a dedicated measure of taxonomic similarity like the shortest path [11] or the taxonomic overlap [12]. Furthermore, section 5 shows that the participants do not seem to distinguish clearly between the weightings of the competences, and although the weighting is considered in ranking the job profiles, it does not seem to influence the perceived attractiveness of the results. Thus, ranking the profiles according to the directly entered user weights seems to be the wrong approach. New ways of weighting especially for creative people need to be found. One approach may be to perform the competence weighting automatically. The terms in the job offers could be weighted by their tf/idf values [13] indicating their importance in the document. Besides, the job offers could be analyzed to include the importance of specific competences at the job market into the ranking.
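The tf/idf weighting proposed here can be sketched as follows, on a toy job-offer corpus and in one common variant (normalized term frequency, natural-log idf):

```python
import math

def tf_idf(corpus):
    """corpus: list of token lists. Returns one dict per document: term -> tf*idf weight."""
    n = len(corpus)
    df = {}                              # document frequency per term
    for doc in corpus:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in corpus:
        w = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)    # normalized term frequency
            idf = math.log(n / df[term])       # rarer terms weigh more
            w[term] = tf * idf
        weights.append(w)
    return weights

jobs = [["java", "sql", "teamwork"],
        ["java", "php", "teamwork"],
        ["design", "photoshop", "teamwork"]]
w = tf_idf(jobs)
print(w[0])
```

Note how "teamwork", which occurs in every offer, receives weight 0: exactly the behavior that would let the ranking discount ubiquitous competences in favor of distinctive ones.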
7 Outlook

Since there is definitely a gap between the current profiling mechanisms and the needs of creative workers as they are sought in the digital media economy, we are currently developing new approaches for the profiling and recommendation of profiles.
As stated in section 6, we plan to make use of user tracks and semantic relations between competences. Therefore, we have built a framework for the evaluation of algorithms for matching competence profiles based on different (combined) data sources. This framework uses the open source library SimPack⁵, which already provides similarity measures for use in ontology-based matching. Furthermore, we are developing a user interface that does not comply with the traditional form-based profiling approach but tries to offer the user a two-dimensional "workspace" that allows them to specify their attitude towards the selected competences along dimensions like "already known and interesting" and "interested to learn". This interface shall help to incorporate the user's tendency to perceive as attractive those jobs that demand more than what is already known, without overstraining them at the same time.
References

1. Burke, J.W.: Competency Based Education and Training. Routledge (1989)
2. Draganidis, F., Chamopoulou, P., Mentzas, G.: An ontology based tool for competency management and learning paths. In: Proceedings of I-KNOW 2006, Graz (2006)
3. Ziebarth, S., Malzahn, N., Hoppe, H.U.: Using Data Mining Techniques to Support the Creation of Competence Ontologies. To be published in: Proceedings of AIED 2009, Brighton, England (2009)
4. Schuler, H., Görlich, Y.: Kreativität. Ursachen, Messung, Förderung und Umsetzung in Innovation. Hogrefe, Göttingen (2007)
5. Bergmann, C., Eder, F.: Allgemeiner Interessen-Struktur-Test / Umwelt-Struktur-Test. Manual (AIST/UST), 2nd edn., Göttingen (1999)
6. Holland, J.L.: Making vocational choices: A theory of vocational personalities and work environments. Psychological Assessment Resources, Odessa (1997)
7. MacQueen, J.B.: Some methods for classification and analysis of multivariate observations. In: Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, pp. 281–297. University of California Press (1967)
8. Field, A.: Discovering Statistics Using SPSS, 2nd edn. SAGE Publications, London (2005)
9. Rebedea, T., Trausan-Matu, S., Chiru, C.-C.: Extraction of Socio-semantic Data from Chat Conversations in Collaborative Learning Communities. In: Dillenbourg, P., Specht, M. (eds.) EC-TEL 2008. LNCS, vol. 5192, pp. 366–377. Springer, Heidelberg (2008)
10. Ziebarth, S., Malzahn, N., Hoppe, U.: Using Data Mining Techniques to Support the Creation of Competence Ontologies. To be published in: Proceedings of the 14th International Conference on Artificial Intelligence in Education (AIED 2009), Brighton (July 2009)
11. Rada, R., Mili, H., Bicknell, E., Blettner, M.: Development and Application of a Metric on Semantic Nets. IEEE Transactions on Systems, Man and Cybernetics 19(1), 17–30 (1989)
12. Maedche, A., Staab, S.: Measuring Similarities between Ontologies. In: Gómez-Pérez, A., Benjamins, V.R. (eds.) EKAW 2002. LNCS (LNAI), vol. 2473, pp. 251–263. Springer, Heidelberg (2002)
13. Salton, G., Fox, E.A., Wu, H.: Extended Boolean information retrieval. Communications of the ACM 26(11), 1022–1036 (1983)
5 http://www.ifi.uzh.ch/ddis/simpack.html
PPdesigner: An Editor for Pedagogical Procedures

Christian Martel1, Laurence Vignollet2, Christine Ferraris2, Emmanuelle Villiot-Leclercq3, and Salim Ouari2

1 Pentila Corp., 73370 Le Bourget-du-Lac, France
2 Université de Savoie, 73370 Le Bourget-du-Lac, France
3 IUFM Grenoble, UJF, 38100 Grenoble, France

[email protected], [email protected], [email protected], [email protected]
Abstract. The success of the Learning Design field in helping teachers and instructional designers to create technology-enhanced collaborative learning activities depends largely on its ability to offer eLearning professionals well-adapted tools. A number of approaches are coming to maturity in this field; however, the associated authoring tools are still too difficult for instructional designers to manipulate. This paper presents an authoring tool designed to simplify the task of instructional designers who are confronted with the design of particular scenarios that are frequently used by teachers, here called Pedagogical Procedures.

Keywords: Learning Design, Pedagogical Procedures, Pedagogical scenarios, Modelling, Authoring Tool.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 379–384, 2009. © Springer-Verlag Berlin Heidelberg 2009

1 Introduction

The Learning Design (LD) approach [1] [2] has stressed the importance of the design phase in the overall process of elaboration and delivery of learning activities. It has emphasized the necessity both to define models allowing the formalization of scenarios, and to develop computer tools capable of reading and executing these formalized scenarios. eLearning companies have recently become aware of this importance. This is probably related to the qualitative and quantitative increase in the demand for eLearning products, largely due to the development of lifelong learning. In these companies, instructional designers are in charge of designing and implementing the eLearning products. LD seems to be a promising way to support them in this specific educational engineering. Indeed, on the one hand, it provides models to describe pedagogical activities, and languages (IMS LD [3], LDL [4], etc.) to formalize the descriptions produced (scenarios). On the other hand, it provides means to interpret the languages and transform the models of activities into actual activities running in virtual learning environments. Although the results obtained in this field are significant, there remains progress to be made in order for the instructional designers to be able to get hold of them, to
appropriate them and to put them into practice naturally. One of the reasons for this difficulty is that the existing modeling languages integrate concepts that are very abstract, and the associated authoring tools, though offering graphical interfaces, are based on the manipulation of these concepts. The current instrumentation of the design phase thus seems to be inadequate for instructional designers: the languages and concepts to handle are too far from their “world” [5]. In order to provide actual and effective support during this phase, we have worked with a team of instructional designers on defining a new modeling language, based on concepts that are meaningful and eloquent for them. First of all, we have considered a reduced set of scenarios: the Pedagogical Procedures (PP). These are scenarios that are commonly known and used by instructional designers. We have then defined a language dedicated to the description of these particular scenarios. This language can be considered a Domain-Specific Language (DSL). It is described in this paper, together with the associated graphical editor developed to support the design of PPs using this DSL. In carrying out this work, we have made the hypothesis that defining such domain languages can be a solution to the problem of the appropriation and dissemination of the LD approach in eLearning companies. This hypothesis still has to be validated with a larger population of instructional designers.
2 The Pedagogical Procedures

Pedagogical Procedures are specific scenarios that are distinguishable from others by their relatively codified character and the degree to which they are shared in the teacher community. They can be defined in the following way: A Pedagogical Procedure is a particular scenario which contributes to the organisation of the learning activity. It is not linked to one particular subject or domain. It includes a set of instructions given to the future participants of the activity which describes what they have to do. Considering learning objectives, the application of these instructions leads to a quasi-certain result, as only scenarios validated by teachers’ experience can be considered Pedagogical Procedures.
When observing actual learning situations, several pedagogical procedures can easily be identified, as they are frequently used within these situations [6]. It is impossible to establish an exhaustive list of PPs, in particular because they are regularly revised and modernized. As an example, one may compare the very first description of the La Martinière PP, which dates from 1951 [7], with the one currently in use, produced in 2007 [8]. For practical and methodological reasons, we have selected eight PPs to be part of a reference list of PPs: “Case Study”, “Guided Case Study”, “Debate”, “Treasure Hunt”, “Controversy”, “Conduct a Survey”, “Give a Talk”, “Role-Playing Game”. Of course, making an inventory of these PPs together with their description in terms of participants, phases, instructions and artifacts is not enough. They have to be described formally if we want them to be usable within a LD approach in which they are supposed to become computational objects. The meta-model of the PPs (represented in figure 1) is built from the definition of a PP, which is the result of the exchanges between the instructional designers and the computer scientists involved in the project.
As described in figure 1, a PP can involve several participants (learners, teachers, tutors, etc.). Each participant contributes to the PP by way of her or his score. Thus, a PP is defined by all the participants’ scores. As in music, a score can be divided into “movements”. For instance, in the pedagogical context, such movements correspond to a selection, a search, etc. The instructional designers involved in this project preferred to call them phases. A phase is composed of at least one instruction (collect information, analyse information, etc.). An instruction is in fact the transformation of one artifact into another. This meta-model is the basis of the development of the editor of Pedagogical Procedures: PPdesigner.
Fig. 1. The Pedagogical Procedures Meta-Model
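Read this way, the meta-model can be sketched as a handful of nested data structures; the class and attribute names below are our own reading of the described meta-model, not the project’s actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Instruction:
    """An instruction transforms one artifact into another."""
    input_artifact: str
    output_artifact: str

@dataclass
class Phase:
    """A 'movement' of a score; holds at least one instruction."""
    name: str  # e.g. "selection", "search"
    instructions: List[Instruction] = field(default_factory=list)

@dataclass
class Score:
    """One participant's part in the procedure, as in a musical score."""
    participant: str  # learner, teacher, tutor, ...
    phases: List[Phase] = field(default_factory=list)

@dataclass
class PedagogicalProcedure:
    """A PP is defined by the scores of all its participants."""
    name: str
    scores: List[Score] = field(default_factory=list)

# A toy fragment of a "Debate" PP
debate = PedagogicalProcedure("Debate", [
    Score("learner", [Phase("search", [Instruction("topic", "arguments")])]),
])
```

An editor like PPdesigner then only has to let the designer assemble instances of these four kinds of objects.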
3 PPdesigner

In order to transform these descriptions into formal models via an editor, we have analysed the way in which an instructional designer could build a PP by manipulating the ingredients it is made of: participants, phases, instructions and artifacts. We have also identified the entities and the relations that s/he can easily use to express the scenario, whilst keeping it as intelligible as its textual description. The resulting building process is described in [9]. During this participatory design phase, the designers expressed that it must be possible to build a PP starting from objects, or from instructions. These conclusions only serve to strengthen the necessity of offering multiple entries to the designers: by the contents, by the interactions, by the phases, etc. Then, even if the construction of a PP can be done in various manners, it is definitely necessary to first identify the participants and then to define their respective partition (score).
PPdesigner has been co-designed by the computer scientists and the instructional designers involved in the project. Model-Driven Engineering methods and tools have been used to specify the interface, following the method described in [], and also to develop the editor. For that purpose, the EMF/GMF Eclipse plugins have been used. Figure 2 shows a screenshot of the editor.
Fig. 2. PPdesigner interface
The next objective is to be able to transform PPs into computational scenarios so that the corresponding learning activity can be deployed and performed. As the language of the PPs is not a computational one, there is a need to transform the formalisation of a PP into another, computational language. LDL has been chosen as the target language. The resulting description will then benefit from the LDL environment: the operationalisation module and the execution engine.
4 Discussion

The term "Pedagogical Procedure" can be bracketed with several other terms that have long been used in the field of educational technologies: tested "educational strategies" [10] [11], "pedagogical formulas" [12], "teaching techniques" or "educational tactics" [1] [13]. For all these authors, these various terms indicate some kind of routine used by teachers and identified according to their more or less strict degree of codification. The codification is above all useful for their memorization and their use in specific learning contexts. In this perspective, this codification differs from pedagogical pattern codification [14], which captures and expresses expert knowledge in a domain through patterns based on pedagogical problems and solutions. The pedagogical procedures proposed in this paper have similar characteristics, but we define a more precise and more "formal” codification. Indeed, we consider that the codification has to take into account not only the characteristics of the PPs but also
the fact that these PPs are going to be used within digital collaborative learning situations that need to be instrumented in order to run in virtual learning environments. Other research works have introduced the codification of collaborative learning situations. For instance, Hernández-Leo et al. [15] propose various types of design patterns, called "Collaborative Learning Flow Patterns" (CLFPs), which correspond to winning practices identified from practitioners. They aim at helping teachers in their design process by providing them with a set of reusable, adaptable and combinable patterns (Jigsaw, Brainstorming, etc.). These collaborative patterns differ from the Pedagogical Procedures in several aspects: 1) They aim at structuring and organizing the learning situation (learning activity, roles, resources organization), while the pedagogical procedures support the four pedagogical pillars (organisation, learning, evaluation, observation) [16] of the pedagogical situation; 2) The proposed CLFPs come from the socio-constructivist approach while the PPs allow various educational approaches to be supported; 3) The CLFPs can be considered as guidelines while the PPs are codified and shared scenarios; 4) Considering evaluation, new patterns are proposed that have to be combined with the original CLFPs, whereas in the PP approach, the evaluation is expressed in the PP itself when necessary; 5) Finally, the number of CLFPs is fixed while, in the PP approach, the designer can add new PPs to be shared with others.
5 Conclusion

The design aspect is more than ever at the center of the process of production of eLearning training. This orientation, now followed by instructional designers, is promising in that it improves the reusability and the sharing of the scenarios produced. This helps to increase the supply of existing scenarios and thus the dissemination of the eLearning approach. At the same time, instructional designers should participate more clearly in the conception of the tools dedicated to the design of these training modules. In this paper, an initiative for the participatory design of a tool of this family has been presented. This joint work highlighted the necessity of clarifying the means by which the instructional designer tries to build a scenario, since s/he relies on a precise definition of this scenario and on some simple objectives [17]. Via the example of Pedagogical Procedures, which are very codified scenarios and recognized as such by teachers, the main objects that are useful for the conception of the editor have been determined. The definition of the central concept, that of the Pedagogical Procedure, and of the various associated models on which the conception of the editor is based have led to the development of the editor using a model-driven approach. The same kind of approach is now being followed in order to instrument the transformations of PPs into LDL scenarios to allow their deployment and execution. The challenge is to provide the means for all teachers to be able to use these tools and to develop them according to their wishes. This initiative joins others in the same field, such as that of TenCompetence with the ReCourse tool development [5]. This work is part of an ambitious project, called OpenScenario [16], that aims to develop a malleable and flexible Integrated Development Environment of Activities based on pedagogical Scenarios (an IDEAS). OpenScenario will allow, through a
unique interface, access to all the tools and services required to flexibly create, deploy, monitor and assess scenario-based activities, and in particular pedagogical activities.

Acknowledgments. We would like to thank Maïa Wentland from the University of Lausanne for her support in this project, and Aurélie Despont, from Symetrix Corp., for her participation in the work on the Pedagogical Procedures.
References
1. Paquette, G.: Instructional Engineering in Networked Environments, 304 p. Pfeiffer/Wiley Publishing Co. (2003)
2. Koper, R., Tattersall, C. (eds.): Learning Design: A Handbook on Modelling and Implementing Network-based Education & Training. Springer, Heidelberg (2005)
3. IMS LD 2003 (2003), http://www.imsglobal.org/learningdesign/index.html
4. Martel, C., Vignollet, L., Ferraris, C., Durand, G.: LDL: A Language to Model Collaborative Learning Activities. In: Proc. of ED-MEDIA 2006 (2006)
5. Griffiths, D., et al.: D6.1 Annex 1 - IMS LD Authoring, TenCompetence project, IST-2005-027087 (January 7, 2008), http://hdl.handle.net/1820/1149
6. Villiot-Leclercq, E.: Modèle de Soutien pour l’Elaboration et la Réutilisation de Scénarios Pédagogiques. PhD Thesis, Université Joseph Fourier/Université de Montréal (June 2007)
7. Rossignol, A.: Le procédé La Martinière. Imp. Fabregue (1951)
8. BO n°10 du 8 mars 2007, Mise en œuvre du socle commun de connaissances et de compétences: l’enseignement du calcul, C. n° 2007-051 du 2-3-2007 (2007)
9. Martel, C., Vignollet, L., Ferraris, C., Villiot-Leclercq, E.: A Design Rationale of an Editor for Pedagogical Procedures. In: Proceedings of CSCL 2009, Rhodes, Greece, June 8-13 (2009)
10. Pernin, J.-P., Emin, V., Guéraud, V.: Intégration de la dimension utilisateur dans la conception de systèmes pour l’apprentissage, Scénarisation pédagogique dirigée par les intentions. Revue ISI, Ingénierie des Systèmes d’Information (to appear) (2009)
11. Lebrun, N., Berthelot, S.: Plan pédagogique: une démarche systématique de planification de l’enseignement. Éditions Nouvelles, Québec (1994)
12. Chamberland, G., Lavoie, L., Marquis, D.: 20 formules pédagogiques. Presses de l’Université du Québec, Sainte-Foy (1995)
13. Ministry of Education of Saskatchewan: Approches pédagogiques: infrastructures pour la pratique de l’enseignement (1993), http://www.sasked.gov.sk.ca/docs/francais/tronc/approches_ped/index.html (retrieved October 2008)
14. Pedagogical Patterns Project Home, http://www.pedagogicalpatterns.org
15. Villasclaras-Fernández, E.D., Hernández-Leo, D., Asensio-Pérez, J.I., Dimitriadis, Y.: Incorporating Assessment in a Pattern-based Design Process for CSCL Scripts. Computers in Human Behavior (2009), doi:10.1016/j.chb.2009.01.008
16. Jullien, J.M., Martel, C., Vignollet, L., Wentland, M.: OpenScenario: A Flexible Integrated Environment to Develop Educational Activities Based on Pedagogical Scenarios. In: IEEE ICALT 2009, Riga, Latvia (to be published, 2009)
17. Henri, F., de la Teja, I., Lundgren, K., Ruelland, D., Maina, M., Basque, J., Cano, J.: Pratique de design pédagogique et objets d’apprentissage. Actes du colloque Initiatives 2005, Thematic debate (March 2007), http://www.initiatives.refer.org/Initiatives-2005/document.php?id=143 (accessed April 2009)
Ontology Enrichment with Social Tags for eLearning

Paola Monachesi, Thomas Markus, and Eelco Mossel
Utrecht University, Utrecht, The Netherlands

[email protected], [email protected], [email protected]
Abstract. One of the objectives of this paper is to verify whether it is possible to extract meaningful related tags from a limited set of tagged resources and from resources tagged by only a few users. This is the expected situation in a learning community. An additional goal is to assess whether the related tags extracted can be a useful source for enriching an existing domain ontology. A user-centered evaluation has been carried out to analyze the effect of the enriched ontology in supporting a learning task, in comparison with clusters of related tags. The experiment has been carried out with both beginners and advanced learners.

Keywords: e-learning, ontology enrichment, social tagging, delicious, informal learning.
1 Introduction
One of the objectives of the Language Technology for LifeLong Learning project1 is to develop services that facilitate learners in accessing formal and informal knowledge sources in the context of a learning task. To this end, a Common Semantic Framework (CSF) is being developed. Its purpose is to support the learner through a personalized knowledge discovery path. Ontologies constitute the core of the Common Semantic Framework. They can play an important role within eLearning, as concluded in [4]. However, this formalization might not correspond to the representation of the domain knowledge available to the learner, which might be more easily expressed by the tags emerging from communities of peers via available social media applications. It is for this reason that we want to complement the formal knowledge, represented by domain ontologies, with the informal knowledge emerging from tagging, as well as the social networks which emerge from social media [5]. Our goal is to improve the retrieval of appropriate learning material and to allow learners to connect to other people who can have the function of learning mates and/or tutors. Thus, it will be possible to provide a more personalized experience able to fulfill the needs of different types of learners. More specifically, we relate social tags, given by learners to organize learning objects or by other communities of users within social media applications, to the

1 www.ltfll-project.org
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 385–390, 2009. © Springer-Verlag Berlin Heidelberg 2009
concepts of an existing domain ontology on computing. In this paper, we focus on a case study in which we employ Delicious data. One of the objectives of this study is to verify whether it is possible to extract meaningful related tags from only a small set of tagged resources and from resources tagged by only a few users. This is the expected situation for a learning community, which would be smaller than that of a social media application such as Delicious. An additional goal is to assess whether the related tags that have been found can be a useful source for enriching a given domain ontology. In order to evaluate the result, a preliminary evaluation has been designed to compare the ontology enriched with social tags and clusters of related tags (without the ontological backbone) in the context of a learning task carried out by both beginners and advanced learners.
2 Case Study: Extracting Data from Delicious for Ontology Enrichment
With around five million users, Delicious is one of the biggest social bookmarking/social tagging websites on the internet. It thus seems appropriate to use this social media application to carry out our case study. Various aspects of social tagging have been described in the literature in recent years. A good overview of the basic characteristics of social tagging systems is given by [2]. In our case study, we have taken the work of [1] on similarity measures as a starting point, as well as [6] and [7] concerning the relation between tagging and the labels of ontology concepts. Our main goal is to identify how many users and how many resources are necessary to find tags relevant to a given seed tag, which can then be employed to enrich an existing ontology. To this end, we have crawled Delicious to create our data set, which consists of 598379 resources, 154476 users and 221796 tags. In order to find tags related to a seed concept, we have employed the standard co-occurrence measure, but with a restriction on the number of resources and users considered in the calculation. Tags need to co-occur on a user-resource pair instead of just the resource itself, meaning that each user needs to have added at least the seed term and one additional tag to be taken into consideration. For each resource, n users meeting this criterion are picked at random. The specified limits on users and resources determine the results of the co-occurrence measure. We have performed experiments on common and less common types of tags. The common tags used are: linux, tools, design, blog, software, music, programming, politics, science. The uncommon tags used are: java, touchscreen, xml, xhtml, tex, standards, html5, text-to-speech, gwt, django, mortgage. They have been used as seed tags to obtain the corresponding co-occurring tags. We have run experiments considering 2, 3, 4, 5, 10, 15, 20 and 25 users in combination with 10, 15, 20, 30, 40, 50, 100, 150, 200, 300, 400, and 500 resources.
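The restricted co-occurrence computation can be sketched as follows; the `tagging` input format, the seeded random sampling, and all names are our own illustration rather than the authors' implementation:

```python
import random
from collections import Counter

def related_tags(seed, tagging, max_resources, users_per_resource, top_k=10, rng=None):
    """Restricted co-occurrence: a user on a resource counts only if s/he
    assigned the seed tag plus at least one other tag; at most
    `users_per_resource` such users are sampled per resource, and at most
    `max_resources` resources are considered.
    `tagging` maps resource -> user -> set of tags (an assumed format)."""
    rng = rng or random.Random(0)
    counts = Counter()
    # keep only resources where the seed co-occurs with another tag for some user
    eligible_resources = [
        r for r, users in tagging.items()
        if any(seed in tags and len(tags) > 1 for tags in users.values())
    ]
    for resource in eligible_resources[:max_resources]:
        eligible_users = [u for u, tags in tagging[resource].items()
                          if seed in tags and len(tags) > 1]
        # randomly sample n users per resource, as described in the text
        for user in rng.sample(eligible_users,
                               min(users_per_resource, len(eligible_users))):
            counts.update(tagging[resource][user] - {seed})
    return [tag for tag, _ in counts.most_common(top_k)]
```

Varying `max_resources` and `users_per_resource` over the grids listed above then yields one ranked tag list per (users, resources) combination.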
Our hypothesis is that it is easier to find related tags for common tags than for uncommon tags because they are more frequent. Therefore, fewer users and resources would be required to acquire them adequately.
2.1 Analysis of the Data
Delicious offers a list of the top-11 related tags for a given tag. This list has been considered as our gold standard, and we have compared the related tags found against it, thus measuring precision and recall. The underlying assumption is that this gold standard provides an acceptable basis to assess whether or not a specific number of users and resources can provide related tags. Analysis of the resulting data shows that for common terms 10-15 users and between 200 and 300 resources provide relevant tags with an average precision of about 0.7 compared to the gold standard. For the less common terms, the precision drops to 0.45 for 5 to 10 users and 100-150 resources. In this case, we have taken fewer users and resources into consideration, since increasing their number does not noticeably improve performance.
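The comparison against the top-11 gold list reduces to a simple set-overlap precision per seed tag; the tag lists in the example are illustrative values, not the measured data:

```python
def precision_at_gold(found, gold):
    """Fraction of the extracted related tags that also appear in the
    gold-standard list (e.g. the Delicious top-11) for the seed tag."""
    if not found:
        return 0.0
    return len(set(found) & set(gold)) / len(found)

# Hypothetical example for some seed tag
gold = ["tools", "web2.0", "free", "design", "opensource"]
found = ["tools", "free", "windows", "online"]
print(precision_at_gold(found, gold))  # 2/4 = 0.5
```

Recall is computed symmetrically, dividing the overlap by the size of the gold list instead.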
Fig. 1. Fig. 1(a) shows the precision for various amounts of resources and users for specific tags and Fig. 1(b) shows the precision for common tags
Some examples of seed tags and their associated outputs compared to the gold standard (in bold) are given below:
– software: tools, web2.0, free, design, opensource, web, development, freeware, programming, windows, online
– linux: software, opensource, tools, ubuntu, windows, howto, backup, programming, unix, tutorial, free
– html5: html, webdesign, web, webdev, w3c, google, canvas, standards, markup, xhtml, reference
– django: python, programming, webdev, development, web, framework, opensource, tutorial, blog, scalability, rest

The next step is to relate the found tags to the concepts in the ontology. As a starting point, we have taken the ontology on computing that was developed in the FP6 EU project "Language Technology for eLearning" (cf. [3]).2 It contains 1002 domain concepts, 169 concepts from OntoWordNet and 105 concepts from DOLCE Ultralite. The connection between words/tags and concepts is established by means of language-specific lexicons, where each lexicon specifies one

2 http://www.lt4el.eu/
or more lexicalizations in one language for each concept. Social tags can thus represent an additional lexicalization of an existing concept (i.e. a synonym) or (the lexicalization of) a new concept. If, for a concept in the ontology, a related tag is found that is not in the ontology yet, a new concept can be added for which this tag represents a lexicalization, and it is connected to the seed concept by a basic kind of relation (e.g. “related to”). However, this does not give the new concept a place in the hierarchical structure. It should be noted that the related tags could also be used to create a different representation than ontologies, that is, clusters of related tags. To summarize: the tags extracted from social media applications constitute an efficient way to enrich existing ontologies with new concepts and their lexicalizations. They also allow for the addition of the user dimension to the domain knowledge validated by experts.
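The enrichment step just described can be sketched as follows; the dictionary-based ontology and lexicon representation is our own simplification, not the LT4eL data model:

```python
def enrich(ontology, lexicon, seed_concept, related_tag, lang="en"):
    """Attach `related_tag` to the ontology: as an existing lexicalization
    if a matching concept is already lexicalized by it, otherwise as a new
    concept linked to the seed by a generic 'related to' relation.
    `lexicon[lang]` maps concept -> set of lexicalizations;
    `ontology` maps concept -> set of (relation, concept) pairs."""
    for concept, words in lexicon[lang].items():
        if related_tag in words:
            return concept  # the tag is a synonym of an existing concept
    # otherwise introduce a new, unplaced concept for the tag
    new_concept = f"concept:{related_tag}"
    lexicon[lang][new_concept] = {related_tag}
    ontology.setdefault(seed_concept, set()).add(("related_to", new_concept))
    return new_concept
```

As in the text, the new concept gets only a flat “related to” link; placing it in the subsumption hierarchy would require further evidence.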
3 Evaluation in eLearning Context
A preliminary experiment has been designed to compare the support provided by an ontology enhanced with social tagging and clusters of related tags (lacking ontological structure) in the context of a learning task. The underlying assumption is that conceptualization can guide learners in finding the relevant information that can help them carry out a learning task, which was a quiz in our case. The hypothesis was that learners might differ with respect to the way they look for information in a certain domain depending on whether they are beginners or more advanced learners. While beginners might profit from the informal way in which knowledge is expressed through tagging, more advanced learners might profit from the way knowledge is structured in an ontology. Thus, an ontology enriched with social tagging might be the best option. The evaluation we have envisaged is based on a quiz that could be solved with the support of an ontology enriched with social tags and clusters of related tags. The experiment was limited in scope and was carried out with 6 beginners and 6 advanced learners, all with an academic background. All of them use the internet regularly for search in everyday life, and they all employ search engines regularly for study- and research-related tasks. The beginners had no background in computer science or in the domain of the quiz, that is, markup languages, whereas the advanced learners had a computer science background and were familiar with most of the terms. Both groups were asked to answer 3 questions using a visualization of the ontology enriched with social tags and 3 questions using a visualization of the clusters of related tags. The quiz questions were specifically designed to be answered using the visualizations and consisted of multiple-choice questions. The learners doing the quiz were then able to assess which parts of the graph helped in answering the questions and which offered little value.
We elicited this knowledge by having the subjects answer a short questionnaire consisting of 10 questions. These questions were about what elements they used
of the graph they used (concepts, relations, documents, related terms, structure). After the questionnaire was filled in, a short interview with the learners was held to query them about potential problems, dislikes or advantages they experienced while doing the experiment, specifically tailored to their filled-in questionnaire.

3.1 Results
The responses to the questionnaire by beginners showed that the enriched ontology can be a valid support for answering certain types of questions, because the relations between the concepts are given in an explicit way. This was not the case for all the beginners, though. There is great diversity between visually oriented people and textually oriented ones with respect to the preferred type of knowledge representation.

Beginners
  Ontology-based: documents (4.4), social tags (3.6), structure (2.8)
  Tag-based:      documents (3.4), social tags (2.8), structure (1.8)

Advanced
  Ontology-based: social tags (4.33), structure (3.17), documents (3.17)
  Tag-based:      documents (3.5), structure (3.0), social tags (3.0)

Fig. 2. Scores given by learners (1 = strongly disagree and 5 = completely agree)
The table above lists the scores that beginners and advanced learners gave to the various aspects of the visualisation and their value in helping them find the appropriate answer. Documents were the most used in the case of beginners using the enhanced ontology. However, the tags coming from the social media also played a relevant role, mainly because they were combined with the conceptual structure, since the results of the tag visualization (and the interview) reveal that the tag structure did not help beginners in finding the answer. Also in the case of the tag visualization, beginners mainly rely on documents to find the relevant information and rely less on related terms and tag structure. This attitude is probably influenced by the way search occurs in standard search engines. On the other hand, advanced learners indicated that they were able to find the answer to the quiz quickly by using the enriched ontology, and they also used fewer documents to find the answers. They gave the structure of the ontology a higher rating than the beginners did. Interestingly, advanced learners were more positive about the structure of the tag visualization than the beginners. This is probably due to their background knowledge, which enabled them to interpret the graph better. In general, both advanced learners and beginners profited from the clear ontology structure present in the graph, even though it is not always easy to infer this result from the questionnaire. The interviews following the experiment revealed that the learners found the enriched ontology very helpful in navigating through the information, though they did not mark it as such under the label ‘structure’. Some subjects reported that they did not use the structure at all, but did
find that the enriched ontology was easier to navigate than the cluster of related tags. The majority of our users also recognized that structuring the information through an ontology enriched with social tags has great potential for knowledge discovery, since it would allow them to find out about possible new topics in a more effective and structured way than via search engines. In this respect, users see more potential in the ontology enriched with tags than in the simple tag visualization, at least with respect to learning tasks.
4 Conclusions
The approach described in this paper can be employed to extract related tags, not only from Delicious, but more generally wherever there is a collection of tagged resources, as for example on YouTube or Flickr. It constitutes a valuable method to enrich an existing ontology with new concepts and to add a social dimension to it, even if only few people have tagged the resources, which might be the case within a community of learners. The ontology enriched with social tags constitutes a useful approach in the context of a learning task. Both beginners and advanced learners agreed that it can be a valuable tool in knowledge discovery tasks, since it can provide structure to the heterogeneous list of documents which constitutes the output of search engines.
How Much Assistance Is Helpful to Students in Discovery Learning?

Alexander Borek 1,2, Bruce M. McLaren 2,3, Michael Karabinos 2, and David Yaron 2

1 University of Karlsruhe, Germany
2 Carnegie Mellon University, USA
3 German Research Center for Artificial Intelligence, Germany

[email protected], [email protected]
Abstract. How much help helps in discovery learning? This question is one instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal-assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level-assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high-assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near-transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.

Keywords: assistance dilemma, intelligent tutoring, inquiry learning, chemistry learning.
1 Introduction

A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called assistance dilemma is central to this goal: "How should learning environments balance assistance giving and withholding to achieve optimal student learning?" [1]. Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 391–404, 2009.
© Springer-Verlag Berlin Heidelberg 2009
Although the "assistance dilemma" is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct instruction or guided learning. Supporters of this position (e.g., [2,3,4]) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g., [5,6,7,8]) advocate a much lower assistance approach (i.e., assistance withholding), often called discovery or inquiry learning. They argue that assistance withholding lets students construct knowledge on their own. Other researchers have suggested that the optimal instructional design depends on the students' level of understanding [9,10]. For instance, it has been suggested that giving full instructions to novices, and then fading support as the novices' knowledge improves, is best for learning [9]. On the other hand, this work has not precisely identified the amount or timing of assistance that should be provided. More recently, some researchers have suggested that the assistance dilemma can be viewed along various "dimensions," such as timing of feedback and example study vs. problem solving, and that it may be possible to develop predictive models of the right level of assistance necessary for optimal learning along these dimensions [11,12]. In general, this work, while still preliminary, suggests that a mid-level assistance approach is usually optimal. For instance, McLaren, Lim and Koedinger [12] investigated the example-problem dimension of the assistance dilemma in three studies in stoichiometry and found that a mid-level assistance approach, i.e., alternating worked examples and tutored problems, led to the most efficient learning. In the work reported in this paper we investigate the optimal amount of assistance in a discovery-oriented domain.
In contrast to many domains in which problems are more structured, discovery-learning problems usually involve more open-ended experimentation and thus may require a different level of assistance. Our interest in this work is in taking a first step toward identifying the optimal amount of assistance in such discovery-learning domains. Our approach focuses on three dimensions of assistance that have been explored in more structured and formalized domains: 1) Should immediate yes/no feedback be given to students? 2) What type of feedback content should be given to students? 3) When, how much, and what kind of hints should be given to students? In our experiment we used a virtual chemistry laboratory [13], which we integrated with an intelligent tutor built with the Cognitive Tutor Authoring Tools (CTAT), an authoring system for cognitive and example-tracing tutors [14]. We tested three widely varying points along an assistance continuum in a real classroom setting: from minimal assistance, in which only very basic feedback was provided, to medium assistance, in which help was given upon request or on incorrect steps, to high assistance, in which students were coaxed to take an optimal problem-solving approach. Our goal was to determine which level of assistance leads to the best learning outcomes in a discovery-learning environment. To our knowledge, there has been no prior study that has compared these three (quite different) levels of assistance in a discovery-learning context. A secondary aim of the work was to experiment with CTAT in building intelligent tutors for domains involving simulation and discovery learning. The work we have done and the results we have obtained, although still preliminary, indicate that CTAT can be successfully employed in such a discovery-like domain.
2 The Technology

The virtual chemistry laboratory, called the VLab for short, is a computer-based learning environment that simulates an actual chemistry laboratory [13]. The VLab was developed to support introductory-level chemistry learning and can be used to perform virtual experiments in various branches of chemistry, such as thermochemistry and stoichiometry. To design and run experiments, students choose from various tools and substances, e.g., beakers, pipets, flasks, foam cups, Bunsen burners, and different solutions (see Fig. 1). The VLab serves as "a bridge between the traditional paper-and-pencil activities from the textbook and actual chemical phenomena" [13], enabling a new kind of interactive learning of chemistry phenomena [15]. The key idea behind the VLab is to help students connect factual and procedural knowledge through authentic chemistry activities, such as experimental design and interpretation.
Fig. 1. A screenshot of the VLab as integrated with the Cognitive Tutor Authoring Tools
In the base version of the VLab no instructions, hints, or feedback messages are provided. To provide such assistance to the student, we integrated the VLab with an intelligent tutor [17] built with CTAT [14]. CTAT supports the capability to build example-tracing tutors, a type of intelligent tutor built using programming-by-demonstration techniques [16]. Example-tracing tutors emulate the behavior of Cognitive Tutors [18], but with lower authoring costs and without the requirement of programming skills. Example-tracing tutors work by comparing student problem solving steps to examples of desirable problem-solving behavior. As part of this project, the “Messages” box, “Done” button, and “Help” button shown at the bottom of Fig. 1 were added to the VLab’s user interface, and a CTAT tutor was integrated with VLab. These changes allow the VLab to be more supportive,
providing the student with more help and feedback regarding on-going activities than was previously possible. Two of the study conditions utilize this extended version of VLab, as explained in the description of conditions below.
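Example-tracing tutors, as described above, work by comparing each student step against recorded examples of desirable problem-solving behavior. The sketch below is a much-simplified illustration of that matching loop, not CTAT's actual representation (CTAT uses behavior graphs); the step names and messages are invented.

```python
# Simplified example-tracing: match student steps against recorded steps.
# Step names and messages are illustrative, not CTAT's actual format.
CORRECT_STEPS = {"take_flask_1M_X", "take_flask_1M_Y", "mix_in_foam_cup"}
FAR_OFF_TRACK = {
    "take_bunsen_burner": "No heating is needed for this problem.",
}

def trace_step(step, ordered=False, expected=None):
    """Return (verdict, message) for one student step.

    ordered=False: Tutored-style tracing (any order; flag only
    far-off-track steps, stay silent on correct ones).
    ordered=True: Direct-instruction-style tracing (the step must
    match the single expected next step).
    """
    if ordered:
        if step == expected:
            return ("correct", "Yes, that is the right step.")
        return ("incorrect", f"No, this is wrong. Please perform: {expected}")
    if step in CORRECT_STEPS:
        return ("correct", None)       # silent on correct steps
    if step in FAR_OFF_TRACK:
        return ("incorrect", FAR_OFF_TRACK[step])
    return ("unknown", None)           # neither confirmed nor flagged

print(trace_step("take_bunsen_burner"))
print(trace_step("take_flask_1M_X", ordered=True, expected="mix_in_foam_cup"))
```

The unordered and ordered branches correspond, roughly, to the Tutored and Direct-instruction Conditions described in the next section.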
3 Method

3.1 Design

The study compared three conditions in which students used different versions of the VLab to solve problems in thermochemistry: 1) the Inquiry-learning Condition, in which students worked with a version of the VLab with no hints and minimal feedback, 2) the Tutored Condition, in which students could request hints and received feedback only when they were severely off track¹, and 3) the Direct-instruction Condition, in which students were directed to follow a prescribed problem-solving path. Students were given the "discovery" task of mixing chemical solutions that lead to a desired final temperature. This goal was posed in the context of a real-world task: preparing food while on a camping trip. The VLab contained solutions of two chemical species, X and Y, that react to form Z via the reaction X + Y → Z. The reaction releases heat that goes into the water and raises the temperature of the solution. The central conceptual basis for solving this problem is the realization that the change in temperature is proportional to the concentration of the initial solutions, where concentration is the number of molecules per unit volume (1 M = 1 mole of particles per liter of solution). The student's task was to discover this concept through experimentation with different concentrations. In the following, we take a closer look at how the assistance differed for each of the study conditions as students solved this task (and related subtasks) in the VLab.

¹ One of the authors, David Yaron, is a chemistry professor and provided guidance on diagnosing when a student is "far off track". One example is the obvious misuse of a chemistry tool, e.g., when a Bunsen burner is taken from the lab cupboard for a problem not involving the application of heat to substances.

Inquiry-learning Condition. This condition used the base version of the VLab. Students were given the general problem description and received no hints, and only minimal feedback, on how to solve a particular problem, as outlined in Table 1. The only feedback provided was on the correctness of the final solution (i.e., the concentrations of the solutions mixed together), a prompt to continue after completing subtasks, and provision of the correct final solution if the student reached an incorrect one. After solving, students were asked to type an explanation of their observations into a textbox, but no feedback was given on the explanation.

Table 1. The three dimensions of assistance in the Inquiry-learning Condition

  Immediate yes/no feedback: No immediate yes/no feedback on intermediate steps, but feedback given on the correctness of the final solution.

  Feedback content: Only two types of basic feedback content are provided: (1) the student is told to move on to the final solution after completing three explanatory tasks (described later); (2) if the student provides an incorrect final solution, the correct solution is given.

  Hint content and timing: No hints available.

The Tutored Condition. Students in the tutored condition (see Table 2) were provided with the extended version of the VLab, which used example-tracing CTAT tutors in unordered mode, i.e., students could perform actions in any order. They received no instruction before or after a step unless they explicitly asked for help. By clicking on
the help button, the first level of hint appeared, which gave an implicit instruction, for instance reminding the student of the goal of the current task, e.g., "Your goal is to mix 10 mL of 1M X with 10 mL of 1M Y in a foam cup", or a leading question used to steer the student in the right direction, e.g., "What else do you need to make the solution?" If the first level of hints was not sufficient, students could proceed to the second and third levels, which gave gradually more explicit instructions, as shown in Table 2. Students in this condition also received immediate error messages whenever they strayed far off any solution path.

Table 2. The three dimensions of assistance in the Tutored Condition

  Immediate yes/no feedback: Immediate feedback on incorrectness only when the student is far off track. No feedback on correct steps.

  Feedback content: Feedback says which step is wrong but does not explain why it is wrong. Example: "No, this was not the right amount of water."

  Hint content and timing: Three levels of hints, starting with implicit instructions and gradually becoming more explicit until the specific step is given; each level is given upon request only. Examples:
    Level 1: "What else do you need to make the solution?"
    Level 2: "The 1 M X might be useful."
    Level 3: "Click on the flask labeled '1 M X' in the stockroom and drag it to the workbench."
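The three-level, on-request hint escalation of the Tutored Condition can be sketched as a simple request counter. The hint texts below are copied from Table 2; the class and method names are ours, for illustration only.

```python
# Three-level on-request hint escalation, as in the Tutored Condition:
# each click on "Help" advances one level, from implicit to explicit.
HINTS = [
    "What else do you need to make the solution?",             # Level 1: implicit
    "The 1 M X might be useful.",                              # Level 2: more direct
    "Click on the flask labeled '1 M X' in the stockroom "
    "and drag it to the workbench.",                           # Level 3: explicit step
]

class HintChain:
    """Serve hints for one tutoring step, escalating on each request."""

    def __init__(self, hints):
        self.hints = hints
        self.level = 0

    def request_hint(self):
        """Return the next hint level; repeat the most explicit one."""
        hint = self.hints[min(self.level, len(self.hints) - 1)]
        self.level += 1
        return hint

chain = HintChain(HINTS)
print(chain.request_hint())  # Level 1
print(chain.request_hint())  # Level 2
```

Note that hints are only ever produced on request, matching the "upon request only" timing in Table 2; the tutor never volunteers them.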
The Direct-instruction Condition. In the direct-instruction condition (see Table 3) students also used the extended version of the VLab, but had to follow a specific solution path in a specific step order (ordered mode). A message was given before each step telling the students the precise action they should perform next. Depending on the current step, an explanation of the goal and why the step is sensible was additionally given to the student, e.g., "The goal is to mix 10 mL of 1M X with 10 mL of 1M Y." If students did not follow the instruction, even if they took an action on an alternative correct path, an immediate feedback message was displayed requesting that the expected step be taken, e.g., "No, this is wrong. Please reconnect 1 M Reagent Y to the foam cup!" The pedagogical rationale for this form of direct instruction was to ensure that students learn and stay on the optimal solution path.

Table 3. The three dimensions of assistance in the Direct-instruction Condition

  Immediate yes/no feedback: Immediate yes/no feedback on every correct and incorrect step.

  Feedback content: Explicit instructions and explanations for each incorrect step. Example: "No, this is not correct. Remove this item from the workbench and take out the foam cup. A foam cup is better, because it is insulated and will prevent the heat generated by the reaction from escaping into the surroundings."

  Hint content and timing: Explicit instruction, containing an explanation of the goal, is given automatically before each action; one additional explicit hint, specifying the instruction in more detail, is available upon request. Examples:
    Automatic instruction before an action: "The goal is to mix 10 mL of 1M X with 10 mL of 1M Y. To begin, select the flask labeled '1 M Reagent X' in the stockroom and drag it to the workbench."
    Additional hint upon request: "Take out the foam cup, which is in the glassware cabinet, and drag it to the workspace. A foam cup is used because it is insulated and will prevent the heat generated by the reaction from escaping into the surroundings."
3.2 Hypothesis

Our hypothesis was that students would learn most effectively when assistance giving and withholding are balanced, i.e., in the Tutored Condition.

3.3 Participants and Condition Assignment

Participants were 87 undergraduate students in a "Modern Chemistry II" course given during the spring term of 2009 at Carnegie Mellon University (U.S.A.). Most students
were in their freshman year and had either science or engineering as a major. The materials were presented to students as an optional exercise, with the score on the activity replacing the lowest of the student's quiz grades. (The average of 10 quizzes counts as 20% of the final course grade.) Students were randomly assigned to one of the conditions by pulling a number, either "1," "2," or "3." Altogether there were 47 students in the Inquiry-learning Condition, 16 in the Tutored Condition, and 23 in the Direct-instruction Condition². Each student worked alone on his or her own machine using the version of the VLab appropriate to the assigned condition, as described above. All students were unaware of the experimental design and the existence of other conditions.

Table 4. Activities during the study

  Activity | Time | Medium | Same in all conditions
  Introduction and Consent | 4 min | Instructor + paper-based | Yes³
  Pretest | 6 min | Paper-based | Yes
  Intervention (different per condition): Camping Problem in the VLab | 40 min | Computer-based (VLab) | No
  Posttest & Questionnaire | 10 min | Paper-based | Yes
  TOTAL | 60 min | |
3.4 Procedure and Materials

The study consisted of four basic activities, as shown in Table 4. First, students received a general introduction and a consent form. Second, students took a six-minute paper-based pretest. Third, in the intervention part of the study, students in the three conditions had to solve problems in the VLab according to the different conditions discussed above. They were allotted 40 minutes for this part of the study. Finally, all students completed the paper-based posttest, which was followed by a short questionnaire; they were allowed 10 minutes to complete this portion of the study. The entire study took 60 minutes. In the following, a short overview of each activity is provided.

² Although 150 students actually participated, technical problems led to the elimination of 63 students from the Tutored and Direct-instruction Conditions. Note that while this elimination of subjects led to lower numbers in the Tutored and Direct-instruction Conditions, it did not alter the random nature of assignment or adversely impact the results.
³ Except that the URL to the problem website was given in the Inquiry-learning Condition.
Introduction and Consent. Before the study started, students were asked to read a document describing the study. They were then allowed to decide whether or not to participate. No students elected not to participate.

Pretest. The study began with a pretest, which was the same for all conditions. The pretest consisted of a reaction equation and an example reaction; students were asked to solve four tasks on their own based on these items in six minutes. These questions probed the direct proportionality between the change in temperature and the enthalpy of reaction (an underlying concept covered in the course lectures) and the direct proportionality between the change in temperature and solution concentration (a concept that had never been explicitly discussed in the course).

Intervention: VLab "Camping Problem". Next, the students were presented with the "Camping Problem" (see Table 5) and a paper explaining how to use the VLab. They then worked on the "Camping Problem" in the VLab, which differed for each condition according to the level of assistance provided, as described above. In each condition, the intervention began with an exploratory phase designed to focus student attention on the relationship between the change in temperature and the concentrations. In the exploratory phase, students were asked to make the following mixtures and measure the resulting change in temperature (as shown in Table 5):

  Mixture A: 10 mL 1.0 M X + 10 mL 1.0 M Y
  Mixture B: 5 mL 1.0 M X + 5 mL 1.0 M Y
  Mixture C: 10 mL 0.5 M X + 10 mL 0.5 M Y

Mixtures A and B lead to identical final temperatures. Reducing the volume by one-half also reduces by one-half the amount of X and Y that react and thus halves the amount of heat generated. However, the amount of water that must be heated is also halved, such that the final temperature is the same for mixtures A and B.
For mixture C, the temperature change is half that of mixture A, since the amounts of X and Y that react are cut in half while the amount of water to be heated is kept fixed. Since halving the concentration halves the temperature change, students should infer that the temperature change is proportional to the concentration. This direct proportionality may then be used to determine the concentration required to reach the target temperature. Following this exploratory phase, students were given the task of creating solutions that would give the desired final temperature for the "Camping Problem." In the Inquiry-learning Condition, only the problem was presented and the students had to find solutions on their own. In contrast, students in the Tutored Condition and Direct-instruction Condition were guided through the camping problem by four subtasks (see Table 5: Solution Approaches 1 and 2a, b, c), which demonstrate two different problem approaches. In Solution Approach 1, the proportionality relation was used to explicitly calculate the concentration needed to achieve the target temperature. Creating solutions of the desired concentrations required an additional set of dilution computations. In Solution Approach 2, explicit mathematical computations were avoided by designing
experiments that achieve the same goal. First, equal volumes of the starting X and Y solutions (1 M each) were mixed together, leading to a solution that exceeded the target temperature. Water was then added until the temperature was reduced to the target temperature. This amount of water was then used to determine the required concentrations.

Table 5. Intervention: "Camping Problem" in the VLab

  Explanatory Task 1: Mixture A: Mix 10 mL 1 M Reagent X with 10 mL 1 M Reagent Y in a foam cup. Type in the change in temperature of the created solution.

  Explanatory Task 2: Mixture B: Mix 5 mL 1 M Reagent X with 5 mL 1 M Reagent Y in a foam cup. Type in the change in temperature of the created solution.

  Explanatory Task 3: Mixture C: Mix 10 mL 0.5 M Reagent X with 10 mL 0.5 M Reagent Y in a foam cup. Type in the change in temperature of the created solution.

  Solution Approach 1⁴: Create a solution with the given temperature by calculating the needed concentrations and creating those concentrations by diluting X and Y.

  Solution Approach 2a: Mix 5 mL 1 M Reagent X with 5 mL 1 M Reagent Y in a foam cup. Add 10 mL of water. Compare the change in temperature of the created solution to that of Mixture C, to show that the order (dilution before versus after mixing) does not matter.

  Solution Approach 2b: Mix 10 mL of 1 M Reagent X with 10 mL of 1 M Reagent Y in a foam cup. Add water until you have reached the desired temperature.

  Solution Approach 2c: Divide the amount of added water by two and dilute 10 mL 1 M Reagent X and 10 mL 1 M Reagent Y with this amount of water to create the needed concentrations. Mix both solutions in a foam cup and type in the resulting change in temperature.
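The proportionality the explanatory tasks are designed to reveal can be checked with a short calculation. The sketch below uses a deliberately simplified model we wrote for illustration (the paper does not describe the VLab's internal chemistry model, and the enthalpy value is invented): heat released is proportional to the moles of limiting reagent, and the temperature change is that heat divided by the heat capacity of the total solution volume, treated as water.

```python
# Simplified model of the VLab mixtures. The enthalpy value is
# hypothetical; only the proportionalities matter here.
DELTA_H = 50.0      # kJ per mole of reaction (invented for illustration)
HEAT_CAP = 4.184    # J per mL per K (water)

def delta_t(vol_x_ml, conc_x, vol_y_ml, conc_y, extra_water_ml=0.0):
    """Temperature change for mixing X and Y solutions (1:1 reaction)."""
    # Moles reacted = moles of the limiting reagent.
    moles = min(vol_x_ml / 1000.0 * conc_x, vol_y_ml / 1000.0 * conc_y)
    heat_j = moles * DELTA_H * 1000.0
    total_ml = vol_x_ml + vol_y_ml + extra_water_ml
    return heat_j / (total_ml * HEAT_CAP)

dt_a = delta_t(10, 1.0, 10, 1.0)   # Mixture A
dt_b = delta_t(5, 1.0, 5, 1.0)     # Mixture B: same concentrations, half volume
dt_c = delta_t(10, 0.5, 10, 0.5)   # Mixture C: half the concentration
print(dt_a, dt_b, dt_c)            # A and B match; C is half of A
```

The `extra_water_ml` parameter also models Solution Approach 2b: starting from Mixture A and adding water lowers the temperature change, and the amount of water needed determines the required dilution.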
Posttest and Questionnaire. The study concluded with a paper-based posttest, which contained a near-transfer component with standard exercises and a conceptual-understanding component. The near-transfer part was subdivided into Task 1, which was a collection of several multiple-choice questions, and Task 2, in which students had to use the proportionality of temperature change to concentration for a calculation. The near-transfer portion of the posttest probed the student's understanding of the direct proportionality between temperature change and solution concentration. The conceptual-understanding portion of the posttest consisted of two items for which responses were given as free-form text. In the first item, students were asked to write a general design strategy for how to create a solution with a desired temperature. The second item restated the goal of the activity (heating food while on a camping trip) and asked students to list the factors of this approach that would limit meeting this goal. The responses were coded on a rubric that assigned points to each of the factors listed. At the end of the posttest, but within the allotted 10 minutes, students answered a brief questionnaire containing six Likert-scale questions (1-5 scale), probing for students' self-assessment of task difficulty, frustration level, usefulness of the materials, etc., and a single free-form "Comments" section.

⁴ Students in the Inquiry-learning Condition were asked to try different solution approaches on their own instead of having to solve Solution Approaches 1 and 2a, b, c explicitly.

3.5 Results

We first scored and ran an ANOVA on students' pretests, with condition as a between-subjects factor, to check the equivalence of the conditions. Tasks had only one acceptable solution and were graded by a program. As there was no significant difference in the pretest between the three conditions, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers (i.e., authors 1, 3, and 4 of this paper) graded the conceptual-understanding tasks of the posttest, answered in free-form text, using the same rubric to ensure objectivity. In approximately 90% of cases there was agreement by at least two graders; in the other 10%, the average of all three grades was taken.
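The grade-aggregation rule just described (take the agreed value when at least two graders match, otherwise average all three) can be written out explicitly. The function name is ours, for illustration.

```python
def aggregate_score(g1, g2, g3):
    """Combine three rubric scores: the agreed value if any two graders
    match, otherwise the mean of all three."""
    scores = [g1, g2, g3]
    for s in set(scores):
        if scores.count(s) >= 2:
            return float(s)
    return sum(scores) / 3.0

print(aggregate_score(3, 3, 5))  # two graders agree -> 3.0
print(aggregate_score(2, 3, 4))  # no agreement -> average 3.0
```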
We removed seven outliers from the population: students who scored less than a quarter of the maximum reachable points on the posttest. Fig. 2 shows the means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and conceptual-understanding scores). We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138 and F(2,77)=0.057, p=.944, respectively. However, we did find a significant result on the conceptual-understanding part of the posttest: students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: students in the Tutored Condition, the mid-level assistance approach, showed better learning results than students in the other two conditions. Finally, we segmented students into strong (best 50%) and weak (worst 50%) groups based on their pretest scores. In another ANCOVA, again using pretest scores as the covariate, students in the Tutored Condition who did better on the pretest benefited more in conceptual understanding than students in the other conditions, F(2,37)=4.699, p=.015. Weaker students in the Tutored Condition also did better on the conceptual-understanding part than weaker students in the other conditions, but not significantly, F(2,37)=1.193, p=.315.
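An ANCOVA of the kind reported above compares condition means on the posttest after regressing out the pretest. A minimal version of that F-test can be built from two least-squares fits: a full model with condition dummies and a reduced model with the covariate only. The sketch below uses synthetic data with invented effect sizes, purely for illustration; the study's actual scores are not reproduced here.

```python
import numpy as np

def ancova_f(pre, post, cond):
    """F-test for a condition effect on post, with pre as covariate:
    full model (intercept + pretest + condition dummies) vs.
    reduced model (intercept + pretest only)."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    groups = sorted(set(cond))
    n, k = len(post), len(groups)

    def rss(X):
        beta = np.linalg.lstsq(X, post, rcond=None)[0]
        resid = post - X @ beta
        return float(resid @ resid)

    X_reduced = np.column_stack([np.ones(n), pre])
    dummies = [np.array([1.0 if c == g else 0.0 for c in cond])
               for g in groups[1:]]          # first group is the baseline
    X_full = np.column_stack([X_reduced] + dummies)
    rss_r, rss_f = rss(X_reduced), rss(X_full)
    df1, df2 = k - 1, n - X_full.shape[1]
    return ((rss_r - rss_f) / df1) / (rss_f / df2)

# Synthetic data: the "tutored" group gets a boost beyond what the
# pretest predicts (effect sizes invented for illustration).
rng = np.random.default_rng(0)
pre = rng.normal(50, 10, 90)
cond = ["inquiry"] * 30 + ["tutored"] * 30 + ["direct"] * 30
boost = np.array([5.0 if c == "tutored" else 0.0 for c in cond])
post = 0.8 * pre + boost + rng.normal(0, 3, 90)
print(ancova_f(pre, post, cond))
```

With a genuine group effect, as here, the F statistic is large; comparing it against the F(df1, df2) distribution yields the p-values reported in the text.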
Fig. 2. Means of the posttest scores across all conditions
4 Discussion

In summary, we observed differences between the three conditions in conceptual understanding, where students in the Tutored Condition scored higher than students in the other conditions. In addition, stronger students in the Tutored Condition had better results than stronger students in the other conditions on the conceptual questions. So why did students in the Tutored Condition achieve greater conceptual understanding? One possible explanation is that the tutored students were able to make more active decisions, leading to higher motivation. At the same time, they received help when they needed it, which may have prevented frustration. Both of these aspects may, in turn, have led to more learning. In contrast, students in the Direct-instruction Condition may have been demotivated, unable to make their own decisions; that is, they may have received too much assistance for learning. This was hinted at by some comments in the feedback questionnaire, e.g., "I disliked having to follow the instructions. It's like communist chemistry." Students in the Inquiry-learning Condition, on the other hand, may have gotten frustrated when they did not know what to do and did not work as hard at learning; that is, they may have received too little assistance. This was suggested by some feedback in the questionnaire, e.g., "It makes me feel really stupid." Both of these comments are consistent with our classroom observation of the students in the two conditions⁵. It is possible that students in the Direct-instruction Condition were hurt by too little instruction on how to use their version of the VLab. As the study time was very short, an introduction on how to use the CTAT-enabled VLab was given on paper instead of through trial use of the software. Some students in the Direct-instruction Condition
⁵ Unfortunately we were unable to analyze these, and other possible explanations, through process analysis, since all logging data was lost due to a technical problem.
may not have understood that they were supposed to follow the instructions explicitly. In the Inquiry-learning Condition, lack of information may have led to extraneous load [19]. That is, there may have been insufficient cognitive resources available for learning, given the variety of tasks the students had to do simultaneously (i.e., trying to solve the problem, navigating an unfamiliar environment, choosing the next substance, etc., all without guidance), thus explaining the lower learning outcomes compared to the Tutored Condition. Furthermore, as discussed above, the differences in conceptual learning relative to the other conditions were larger, and significant, for stronger students than for weaker students. We have two possible interpretations for this finding. First, stronger students are likely to have higher metacognitive awareness than weaker students [20] and thus may have used the available hints and feedback of the Tutored Condition more effectively. Second, stronger students, who tend to be more independent learners, may have simply been more motivated to learn since they were allowed to make their own decisions and construct their own knowledge, asking for help only when they really felt they needed it. Finally, why were differences only observed for conceptual questions? This can be explained by the nature of the camping problem, which is focused on conceptual aspects of thermochemistry. That is, the camping problem, and use of the VLab to solve it, focused students on running experiments to learn concepts, rather than procedures or calculations. The procedures and calculations necessary to solve the near-transfer problems were done outside of the VLab in all conditions; thus, we would not (necessarily) expect any of the conditions to do better than the others on the near-transfer part of the posttest.
5 Conclusion

The assistance dilemma is concerned with the subtle choices involved in offering assistance to students as they engage in problem-solving activities and how to make choices that will optimize learning. The assistance dilemma becomes especially cogent when students engage with a software environment, such as the chemistry VLab, focused on inquiry and discovery. By integrating the VLab with an intelligent tutor, we were able to experiment with different levels of assistance, varied along different dimensions (i.e., timing, content, and type of feedback and hints). The study presented in this paper was conducted in a real science classroom setting using three conditions that span the assistance continuum of discovery learning. We found that students in a Tutored Condition (mid-level assistance) learned better on conceptual tasks than students in both a greater- and lesser-assistance condition. That is, it appears that the Tutored Condition provided the best balance of giving and withholding assistance. Moreover, stronger students benefited more from the Tutored Condition than weaker students. The results support the notion that the optimal level of assistance lies between the extremes of direct instruction and pure discovery, and that the learning gains from a given level of assistance vary based on student characteristics, such as student pre-knowledge. Furthermore, the results suggest that assistance should be given only when students are far off track or in response to student
How Much Assistance Is Helpful to Students in Discovery Learning?
requests for help (as opposed to assistance being offered immediately at each step, as in the direct-instruction condition). On the other hand, our study was of limited duration (60 minutes), with a single student pool in a single domain of science, and did not include any measure of long-term retention (sometimes argued to be the only real measure of learning [21]). In addition, a process analysis of student activities during the intervention was not possible due to technical problems; log data might have revealed how students utilize assistance, as well as the origins of the learning effects. Nevertheless, the work presented here, i.e., merging an open-ended discovery-learning environment with an intelligent tutor and confirming hypothesized learning results in a controlled study with variations on this system, is encouraging. We intend to replicate this experiment in more chemistry classrooms during the next school year. In addition, we believe the results may extend to other areas of science in which discovery learning is often used (e.g., "discovering" Darwin's theory of evolution) and intend to apply this experimental model in such domains. In sum, this research represents an important step towards our long-term goal of developing a predictive model of the optimal amount of assistance to provide to students as they engage in a range of authentic learning activities.

Acknowledgments. The Pittsburgh Science of Learning Center (PSLC), NSF Grant #0354420, provided support for this research.
References

[1] Koedinger, K.R., Aleven, V.: Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239–264 (2007)
[2] Klahr, D., Nigam, M.: The Equivalence of Learning Paths in Early Science Instruction: Effects of Direct Instruction and Discovery Learning. Psychological Science, 661–667 (2004)
[3] Mayer, R.E.: Should There Be a Three-Strikes Rule Against Pure Discovery Learning? The Case for Guided Methods of Instruction. American Psychologist, 14–19 (2004)
[4] Kirschner, P.A., Sweller, J., Clark, R.E.: Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 75–86 (2006)
[5] Bruner, J.S.: The Act of Discovery. Harvard Educational Review 31, 21–32 (1961)
[6] Barrows, H.S., Tamblyn, R.M.: Problem-Based Learning: An Approach to Medical Education. Springer, New York (1980)
[7] Jonassen, D.: Objectivism vs. Constructivism. Educational Technology Research and Development 39(3), 5–14 (1991)
[8] Steffe, L., Gale, J.: Constructivism in Education. Lawrence Erlbaum Associates, Hillsdale (1995)
[9] Renkl, A., Atkinson, R.K., Große, C.S.: How Fading Worked Solution Steps Works - A Cognitive Load Perspective. Instructional Science 32, 59–82 (2004)
[10] Cronbach, L., Snow, R.: Aptitudes and Instructional Methods: A Handbook for Research on Interactions. Irvington Publishers, New York (1977)
[11] Koedinger, K.R., Pavlik Jr., P.I., McLaren, B.M., Aleven, V.: Is it Better to Give than to Receive? The Assistance Dilemma as a Fundamental Unsolved Problem in the Cognitive Science of Learning and Instruction. In: Love, B.C., McRae, K., Sloutsky, V.M. (eds.) Proceedings of the 30th Annual Conference of the Cognitive Science Society, Austin, TX, pp. 2155–2160. Cognitive Science Society (2008)
[12] McLaren, B.M., Lim, S., Koedinger, K.R.: When and How Often Should Worked Examples be Given to Students? New Results and a Summary of the Current State of Research. In: Love, B.C., McRae, K., Sloutsky, V.M. (eds.) Proceedings of the 30th Annual Conference of the Cognitive Science Society, Austin, TX, pp. 2176–2181. Cognitive Science Society (2008)
[13] Yaron, D., Evans, K., Karabinos, M.: Scenes and Labs Supporting Online Chemistry. Paper presented at the 83rd Annual AERA National Conference (2003)
[14] Aleven, V., McLaren, B.M., Sewall, J., Koedinger, K.R.: Example-Tracing Tutors: A New Paradigm for Intelligent Tutoring Systems. International Journal of Artificial Intelligence in Education (IJAIED), Special Issue on Authoring Systems for Intelligent Tutoring Systems (2009)
[15] Yaron, D., Freeland, R., Lange, D., Milton, J.: Using Simulations to Transform the Nature of Chemistry Homework. In: CONFCHEM (CONFerences on CHEMistry): On-Line Teaching Methods. Online Conference: American Chemical Society (2000), http://www.ched-ccce.org/confchem/
[16] Lieberman, H. (ed.): Your Wish is My Command: Programming by Example. Morgan Kaufmann, San Francisco (2001)
[17] VanLehn, K.: The Behavior of Tutoring Systems. International Journal of Artificial Intelligence in Education (IJAIED) 16, 227–265 (2006)
[18] Koedinger, K.R., Anderson, J.R., Hadley, W.H., Mark, M.A.: Intelligent Tutoring Goes to School in the Big City. International Journal of Artificial Intelligence in Education (IJAIED) 8, 30–43 (1997)
[19] Sweller, J., Van Merriënboer, J.J.G., Paas, F.G.W.C.: Cognitive Architecture and Instructional Design. Educational Psychology Review 10, 251–296 (1998)
[20] Bransford, J.D., Brown, A.L., Cocking, R.R. (eds.): How People Learn: Brain, Mind, Experience, and School. National Academy Press, Washington (2000)
[21] Schmidt, R.A., Bjork, R.A.: New Conceptualizations of Practice: Common Principles in Three Paradigms Suggest New Concepts for Training. Psychological Science 3(4), 207–217 (1992)
A Fruitful Meeting of a Pedagogical Method and a Collaborative Platform

Bénédicte Talon1,*, Dominique Leclet2, Grégory Bourguin1, and Arnaud Lewandowski1

1 Université Lille Nord de France, Maison de la Recherche Blaise Pascal, BP 719, 62228 CALAIS CEDEX
{talon,bourguin,lewandowski}@lil.univ-littoral.fr
2 UPJV-MIS, 33 rue St Leu, 80039 AMIENS CEDEX 1
[email protected]
Abstract. This paper describes the work done to instrument a pedagogical method named MAETIC on a tailorable collaborative platform named CooLDA. MAETIC facilitates the acquisition of professional skills in universities through a collaborative project-based approach. The method has been described and modeled according to the CooLDA meta-model, and the resulting platform was tested experimentally during the 2007-2008 academic year. This article presents the scientific context of this work, the CooLDA platform and its underlying activity model, the MAETIC pedagogical method, and the various student activities associated with it.

Keywords: Collaborative environment, activity model, project-based pedagogy.
1 Introduction

The work presented in this paper is the result of a fruitful collaboration between researchers working on two complementary concerns. Focusing on the learning process, two of the authors [1][2] have produced a pedagogical method named MAETIC and have designed pedagogical devices to support it. MAETIC organizes a project-based pedagogy, and the first pedagogical device designed to implement it was an environment [1] using Weblogs [3]. After conducting experiments [2] to test the applicability of MAETIC in various teaching fields, our intention was to promote its deployment on a larger scale (more teachers, more domains) by proposing guides that describe the method (the student and teacher activities to set up). A layered model was therefore designed to ease device development while supporting the organizational choices (field of competences, calendar, actors, etc.) and the instrumentation choices (tool implementations for the MAETIC activities). Despite these efforts, one remaining challenge is to find appropriate technological tools that easily support the activities defined by MAETIC. It is essential for us that teachers can design an environment suited to their needs and can easily master its use and evolution.
* Corresponding author.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 405–417, 2009. © Springer-Verlag Berlin Heidelberg 2009
B. Talon et al.
Although collaborative environments dedicated to learning (Ganesha, Moodle, Claroline, etc.) [4] already exist and offer sets of collaborative tools (forums, chat, wikis, etc.), we found that these platforms lack the flexibility needed to adapt to the variety of teaching and learning situations found in Higher Education. This lack of flexibility seems to result from the fact that these platforms are constructed mainly around course resources rather than around the actors' activities. According to Britain and Liber [5], adopting a single Virtual Learning Environment (VLE) across an institution may often not be appropriate, as different departments and/or modules may make radically different demands on their e-learning tools. Many institutions surveyed in 2003 reported using two or more VLEs. Moreover, the high level of in-house development strongly suggests [5] that commercial systems were not always appropriate to learning and teaching needs. Metaphors such as the noticeboard and the common room are derived from traditional campus-based situations without questioning whether they are still appropriate. Indeed, if teachers want to use an existing environment to support their own approaches, they have to adapt to the framework of the underlying platform. This seems backwards: it is the platform that should adapt to support the teacher's pedagogical method and organization. Centered on competence acquisition, MAETIC was deliberately built without any assumption about its future technical support. The intention is that instrumentation remains at the teacher's discretion, since we think teachers do not willingly adapt to overly prescriptive environments. However, from our point of view, none of the many existing platforms currently seems able to easily support the teacher creativity fostered by the MAETIC approach.
The other two authors of this paper have been working for many years in the Computer Supported Cooperative Work (CSCW) research field, trying to create better software environments to support collaborative human activities [6]. This work has been strongly inspired by results from the Social and Human Sciences (SHSs), which have demonstrated that human activity is reflective and continuously evolves. Thus, a 'good' environment supporting collaborative activities should be generic, tailorable, and flexible enough to support dynamic adaptation to its users' emerging needs. This approach has been synthesized into the co-evolution concept, and the CooLDA platform was created to support it. This is why CooLDA does not rely on a model dedicated to specific activities, but rather offers a generic framework able to support any kind of computer-mediated collaborative activity. The aim of this paper is to present how a very generic approach like CooLDA can bring a solution in an educational context by supporting a project-based pedagogical method like MAETIC. In the following, we first describe the key points of the CooLDA platform and its generic activity model. We then introduce the MAETIC method, explain how it has been modeled in the CooLDA platform, and present the resulting pedagogical environment. We describe the experiment started in January 2008 to validate this work, and finally conclude with current work and the resulting perspectives.
2 What Is CooLDA?

CooLDA is a platform designed to support Computer Supported Cooperative Work (CSCW). Its design has been strongly inspired by results from Activity Theory [7]. In this paper, we focus only on the principles that constitute the bases of the environment; further information about this approach and the results we obtained can be found in [6]. A key point of our approach in designing a CSCW environment is to consider that many tools useful in supporting the activities we are interested in already exist. Thus, our goal is not to (re)create such tools, like a new discussion tool or a new text processor; rather, we want to offer an environment able to integrate and articulate them. From our point of view, inherited from Activity Theory [7], each tool supports one kind of activity. When several tools are used in parallel by a group of actors, they generally support a more global activity than the one they were originally designed for. For example, a group may concurrently use a chat tool, a web browser, a text processor, and an email client. Each of these tools supports a particular activity (online discussion for the chat, etc.), but they do not know about each other. They are nevertheless used in a complementary way, since they are all involved in the context of a global cooperative activity: co-writing a paper for EC-TEL. In such a case, the coherence of the environment is mainly managed mentally by the users. Our purpose, then, is to provide a global environment that creates the context of use of the different tools involved in a global cooperative activity (such as collaborative writing) while managing the links between its different sub-activities (synchronous discussion, text writing, etc.). Assuming that each tool supports a specific activity, our environment is intended to manage what we call inter-activities [8].
To achieve this, we have created an activity model [6] that allows the inter-activity links to be specified. In this model, each (sub-)activity is linked to a resource that proposes operations. A resource is a mapping onto a software tool, such as an email component. A user is an actor in an activity when he or she plays a role in it. A user's role allows him or her to perform actions, which are realized by triggering operations on the linked resource. Operations are mapped onto object methods found by dynamic introspection on the associated tool component. An activity can be linked to other activities when the role of one of its actors implies that this user plays another role in another activity. Finally, an activity is an instance of a task, which constitutes an activity model, or pattern, that can be reused. These concepts have been implemented in CooLDA, providing a distributed environment as a collaborative extension of the Eclipse platform [6,9]. The client side resides in a meta plug-in named CooLDev, whose role is to articulate the other plug-ins in the context of global cooperative activities. When users connect to CooLDA, they retrieve the activity models in which they are involved and configure the environment according to the activity they want to join (by 'reading' and 'running' the corresponding activity models). Thanks to several mechanisms detailed in [6], the system can be adapted to users' emergent needs during the activity, through cooperative (re)definition of the inter-activity links (i.e., of the activity models). We made the environment as generic as possible, in order to be able to support any kind of activity, provided the appropriate tools to integrate exist.
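As a rough illustration, the relations just described (activity, role, resource, operation) can be sketched in a few lines of Python rather than CooLDA's Java. All class and method names below are illustrative assumptions, not CooLDA's actual API; the point is only how operations can be discovered by dynamic introspection on a tool and triggered through roles.

```python
class Resource:
    """Mapping over a software tool; operations are found by introspection."""
    def __init__(self, tool):
        self.tool = tool
        # dynamic introspection: every public method of the tool becomes an operation
        self.operations = {
            name: getattr(tool, name)
            for name in dir(tool)
            if callable(getattr(tool, name)) and not name.startswith("_")
        }

    def trigger(self, operation, *args):
        return self.operations[operation](*args)


class Activity:
    """An activity (instance of a reusable task) links roles to a resource."""
    def __init__(self, name, resource):
        self.name = name
        self.resource = resource
        self.roles = {}  # role name -> actions the role may perform

    def add_role(self, role, actions):
        self.roles[role] = list(actions)

    def perform(self, role, action, *args):
        # a role allows actions, realized by triggering operations on the resource
        if action not in self.roles.get(role, []):
            raise PermissionError(f"role {role!r} may not perform {action!r}")
        return self.resource.trigger(action, *args)


class ChatTool:
    """Toy stand-in for an integrated discussion plug-in."""
    def post(self, message):
        return f"posted: {message}"


discussion = Activity("synchronous discussion", Resource(ChatTool()))
discussion.add_role("student", ["post"])
print(discussion.perform("student", "post", "hello"))  # posted: hello
```

In the real platform the introspection happens on Eclipse plug-in components and the resulting operations are wrapped into the activity models; the sketch keeps only the shape of that mechanism.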
3 What Is MAETIC?

MAETIC is a pedagogical method that organizes a project-based pedagogy. It is independent of the learning domain and has been built to develop professional skills. In the project-based approach, learners build up their knowledge and know-how through the project [10]; students have to identify and formulate problems themselves [11]. MAETIC describes activities managed by a group of students and directed towards a concrete production. The teacher's role is to facilitate, not to decide. MAETIC induces a set of activities in which all the students are involved and play an active role. The method rests on several principles: the regular maintenance by the student group of a project logbook (to exercise communication skills), the maintenance by the teacher of a unit logbook (to instigate the activity), session assessments (to report on what has been acquired and evaluate any delay), and a regular review of the student logbooks by the teacher to analyze the problems encountered and make students aware of them. These principles are integrated into procedures that the actors have to respect. MAETIC thus prescribes a procedure (the project cycle, Fig. 1) that the group must follow. The project rests on five steps corresponding to traditional project-management activities: launching, framing, planning, piloting, and assessment. During each step, students work on sub-activities and produce deliverables. Five technical booklets bring them precise information on the working techniques to adopt and offer document templates. Students are responsible for distributing roles, planning tasks, organizing the internal and external communication of the project, producing the deliverables according to a schedule, etc. For example, the launching step requires four activities:

1. role definition,
2. graphic charter realization,
3. logbook opening,
4. answer-to-demand writing.
Activities require deliverables to be produced and are described in technical booklets given to students to help them carry out their activities. The students' procedure is described in a pedagogical guide, in which they find a detailed description of the activities and document templates. MAETIC also describes a teacher procedure consisting of three steps. Each step requires activities to be realized and deliverables to be produced; these activities are directed towards organizing the students' activities. For example, the preparation step involves activities such as logbook opening, task planning, scenario writing, and resource preparation. The teacher's procedure has not yet been formalized and remains, for the moment, part of our tacit know-how. It is possible to practice MAETIC without Communication and Information Technologies (CITs), but CITs bring many advantages for students [12], such as working remotely, distributing online documentation, easing communication, and providing traceability. CITs also present advantages for the teacher, such as centralized remote monitoring, communication with students, regular follow-up, and so on.
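The step structure described above can be expressed as plain data: five sequential steps, each producing deliverables through activities. In the sketch below (Python, illustrative only), the launching-step activities are the four listed in the text; the other steps' activity lists are left empty, since the text delegates their detail to the technical booklets, and the strictly sequential ordering is an assumption drawn from the description of the cycle.

```python
# The MAETIC project cycle as plain data (illustrative sketch).
MAETIC_CYCLE = [
    ("launching", ["role definition",
                   "graphic charter realization",
                   "logbook opening",
                   "answer-to-demand writing"]),
    ("framing", []),     # details are given in the technical booklets
    ("planning", []),
    ("piloting", []),
    ("assessment", []),
]

def next_step(current):
    """The group must respect the sequential order of the cycle."""
    names = [name for name, _ in MAETIC_CYCLE]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None

print(next_step("launching"))   # framing
print(next_step("assessment"))  # None
```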
Fig. 1. MAETIC Life-cycle
A device making use of Weblogs was produced and evaluated from 2005 to 2007 with different audiences. A first experiment [1] (including a control group who did not use the MAETIC device) aimed to demonstrate the value of the initial method, coupled with a Weblog-based device, for promoting the learning of project-management skills. A second experiment was used to validate the ability of MAETIC to train other skills. Our desire was to test the genericity of the guide (its ability to be used in various fields) as well as to validate the relevance of Weblogs for supporting the logbook activity. We tested the application of the MAETIC method in various areas of education, such as professional seminars, database design, and learning-scenario design. These evaluations [1][13] have shown the value of MAETIC for developing varied competences, but also highlighted certain difficulties. The evaluations brought to light a significant workload for the teacher, who must regularly monitor student work via the Weblogs. Moreover, it is not easy for a teacher to adapt MAETIC to existing platforms such as Moodle, Ganesha, or Webcity without an important change of practice. The CooLDA environment seemed to have what was needed to answer MAETIC's implementation needs: an activity-centered model, and a collaborative environment integrating useful tools fitted to the needs of the users.
4 Modeling MAETIC in the CooLDA Platform

4.1 The Model of the MAETIC Pedagogical Method

The model of the MAETIC method is synthesized in Figure 2. In this model, we have removed all concepts related to organization and instrumentation, keeping only what is generic, that is, what applies whatever the training domain or the selected mode of instrumentation. This model contains the concepts that must be respected to use the method, as described in the previous section. During the early organization phase, the teachers are responsible for adapting this model to their teaching unit by adding specific objects (instances of organization classes) such as session dates, meeting rooms, resources related to the training area, etc. The instrumentation phase corresponds to the definition of the tools that each activity will exploit during the remote and in-presence sessions. It must also address the way in which resources (of MAETIC and of the training area) and deliverables are placed at the actors' disposal for their activities.
Fig. 2. UML model of MAETIC method
4.2 Mapping CooLDA and MAETIC

The major part of the work consisted in 'translating' the MAETIC method into CooLDA's activity model. This was done by representing the MAETIC method as five inter-connected activity models (also called tasks) corresponding to the five steps of the method; each activity model describes a particular step of the process. In order to be as little constraining as possible, the five steps have been modeled as sub-tasks of a main (system) task. Inside this main task, we have identified two roles, namely 'teacher' and 'student'. Playing one of these roles in the main task may imply another
particular role (the MAETIC role of Figure 2) in the sub-tasks (each step of the method). In this kind of platform, designed to offer tools, users choose the activity they want to join (among the five activities of the method). The sequential cycle of MAETIC is the responsibility of the student group and is controlled by the teacher during the regular review activity. According to the specification of MAETIC, we identified existing tools that could suit the needs of the activities bound to each phase of the method. These tools had to be pluggable into CooLDA, which relies on the Eclipse platform; that is why, given the inter-activities approach, we mainly use existing Eclipse plug-ins. For instance, we chose the Office Integration Editor plug-in, which integrates OpenOffice.org tools into Eclipse. We also chose Eclipse's integrated CVS (Concurrent Versioning System) client, in order to offer document sharing and versioning; we adapted Eclipse's integrated web browser; and we developed a chat plug-in allowing users to discuss synchronously. Other collaboration-fostering tools are provided by CooLDA itself (awareness view, perspective-sharing facility, etc.). These tools were then 'inscribed' in the different activity models (corresponding to the five MAETIC steps). The contextualization of a tool is realized through the creation of a particular sub-activity (describing the use of this tool) in each MAETIC step it is involved in. Indeed, an activity model actually describes the use of one tool, which is mapped by a Resource. In order to integrate these tools finely, we identified (by introspection on the tools themselves) which methods could be of use for their integration and piloting; these methods were wrapped into Operations in the activity models. Finally, specific Actions were defined for each role, in order to determine how these tools should be configured and piloted.
This fine-grained modeling is the key to creating a highly configurable environment able to manage the inter-activity links [6,7].
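The mapping described in this section, with the five MAETIC steps as sub-tasks of a main task and main-task roles implying step-specific roles, can be sketched as follows (Python, illustrative only). The tool names are plausible choices taken from the text; the step-specific role names ('facilitator', 'project member') are assumptions, since only 'teacher' and 'student' are named in the main task.

```python
class Task:
    """A reusable activity model; sub-tasks carry the inter-activity links."""
    def __init__(self, name, tools=()):
        self.name = name
        self.tools = list(tools)   # tools 'inscribed' in this activity model
        self.sub_tasks = []
        self.role_map = {}         # main-task role -> role in this sub-task

MAIN = Task("MAETIC unit")
for step in ["launching", "framing", "planning", "piloting", "assessment"]:
    sub = Task(step, tools=["chat", "office editor", "CVS client", "web browser"])
    # playing a role in the main task implies a role in each sub-task
    sub.role_map = {"teacher": "facilitator", "student": "project member"}
    MAIN.sub_tasks.append(sub)

# A user joining the 'planning' activity retrieves that step's tool context:
planning = next(t for t in MAIN.sub_tasks if t.name == "planning")
print(planning.tools[0])             # chat
print(planning.role_map["student"])  # project member
```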
5 Experimenting with CooLDA-MAETIC

To validate the genericity of CooLDA on the one hand, and to measure the value of this type of instrumentation for MAETIC on the other, we began a field experiment in January 2008.

5.1 The Platform

The prototype of the environment supporting the activities connected to the project-based pedagogy was developed between September 2007 and January 2008, following an iterative prototyping process in association with the teachers concerned. Here is a brief description of how it works. First, a student connects to the environment (identification by the CooLDA server, which opens the right perspective) and chooses to join, for example, the 'planning' activity of a particular project. CooLDA then automatically retrieves the latest versions of the shared documents, which are managed through a Concurrent Versioning System; it pilots the Office Integration Editor plug-in to open the right document(s); and it connects the chat plug-in to the right server and channel, with the
Fig. 3. Screenshot of a MAETIC student environment (after connection)
Fig. 4. Screenshot of a MAETIC student environment (planning Activity activated)
user's name. This is possible thanks to activity modeling: CooLDA 'reads' the activity model of the 'planning' step and its sub-activity models (one model per integrated tool, as explained previously), and 'runs' the start actions tied to the role of the user in this specific activity. All of this is completely transparent to the user. Figures 3 and 4 are screenshots of student environments. The environment has been modeled for students carrying out a software-testing project. Several tools can be identified in these screenshots: a teacher's logbook that opens automatically and shows the latest information on the learning unit; a chat for communicating with other members of the group or with the teacher; a project explorer for sharing different kinds of documents (MAETIC deliverables, project deliverables, and learning-unit lectures); and some other tools. Choosing to join an activity in the activity manager triggers the activation of the tools necessary for that activity. For example, in Figure 4, activating the Planning activity ('planification' in French) starts the OpenOffice spreadsheet tool and opens the planning sheet in it. At the beginning of the project, document templates are placed in the CVS, in the MAETIC Documentation Repository; the students use these templates to produce their own documents.

5.2 The Protocol

The experiment was built around three assumptions:

• Assumption 1: CooLDA is a platform with the capacity to adapt to a collaborative learning activity.
• Assumption 2: MAETIC may be instrumented in different technological environments (without changing the guide).
• Assumption 3: The malleability offered by CooLDA supports the adoption of the device (methods, tools, procedures, principles of action, actors) by its users.

To confirm or refute these assumptions, the protocol consists of:
• Observations during sessions.
• Collection of the students' slides produced at the conclusion of their project.
• Collection of posts on the student Weblogs.
• Collection of traces gathered on the server.
• Teachers' interviews.
The target population comprises two different cohorts. The first consists of 10 students in a Professional License program (hereafter promo A), that is, 2 groups of 5 students. The second concerns a cohort of 61 first-year students in a Technological Certificate program (hereafter promo B), that is, 10 groups of 5 to 7 students. The platform was installed on each computer in the classroom. A downloadable version was also available, allowing students to install the working environment on their own computers. Promo A used the platform within the framework of a 24-hour project-management unit taught in the classroom. The students were in charge of building a Web site while respecting the phases of MAETIC. This first part of the experiment notably revealed a number of technical difficulties and allowed them to be resolved quickly.
Promo B then used the improved platform, in a 12-hour software-testing unit taught in the classroom. The students were in charge of testing programs written in C, respecting the phases of MAETIC. The environment was extended with a perspective (CDT) to facilitate the editing and execution of the C programs, and was also supplemented with resources uploaded by the teacher.

5.3 Some Results

Elements to confirm or refute assumption 1 (CooLDA is a platform with the capacity to adapt to a collaborative learning activity). The two teachers who deployed a MAETIC device built on the CooLDA platform formalized their needs through the specification of the interface. They had been experimenting with MAETIC devices for 3 years and knew what kinds of activities the platform had to implement. They collaboratively sketched the needed environment in terms of tools supporting activities, and specified the required functionalities through comments. For example, they described a document-management activity, in which students can create documents and share them with other actors. The teachers also asked for a logbook-management activity, realized with a blog accessible via Internet Explorer, as well as awareness and chat functionalities. The platform developer modeled the environment as described in Section 4.2. This kind of prototyping was carried out fairly easily. The prototype was then tested in the classroom (Windows environment) before the first session. We discovered some technical problems due to security management at the university. The teachers then downloaded a version onto their own computers (one teacher works in a Windows XP environment and the other in a Mac OS environment). The installation in the Mac OS environment required installing the X11 system, but was then able to run correctly. The CooLDA environment has thus proved its capacity to adapt to a learning environment.
However, the robustness of the environment remains to be proven: the server on which it was installed showed some signs of weakness that disturbed the course of the experiment.

Elements to confirm or refute assumption 2 (MAETIC may be instrumented in different technological environments without changing the guide). The two educational scenarios were built in total respect of the MAETIC method. The teachers could thus use the MAETIC guide designed beforehand, without any need to adapt it to their specific pedagogical context. After defining the organizational context (6 weeks in the case of the project-management unit, with meetings of 4 to 6 hours; 8 weeks in the case of the software-testing unit, with meetings of 1h30, etc.), the prototype was easily deployed in their own environments (with the same device pattern for both experiments).

Elements to confirm or refute assumption 3 (The malleability offered by CooLDA supports the adoption of the device by its users). The teachers' interviews, the log analysis, the post analysis, and the slide analysis allow us to be relatively positive.
A Fruitful Meeting of a Pedagogical Method and a Collaborative Platform
Extracts from the slides show numerous reactions confirming adoption of the method: "The MAETIC method puts us in the same conditions as those of a professional project. It prepares us for the professional world"; "The MAETIC method requires great rigor". Many comments confirm interest in the platform: "The means placed at our disposal made it possible to be involved in the project. With all these communication tools at our disposal, it was easy to invest ourselves in trying to produce a suitable result"; "The platform has an unquestionable advantage which makes it very useful, even essential: being able to publish documents that are visible to any member, wherever he or she is, lets everyone know how far each one has advanced in the project"; "It is a really practical tool for all the members of the group because it makes it possible to keep the various documents up to date, with the right versions of all of them"; "The platform provides the group with a common work space (storage space) and also allows us to communicate (chat)." However: "for me the platform is like a deer that has to be tamed. Even if publishing a file is not very complex, you have to learn how to do it. In the same way, it is not easy to navigate through the various menus: the platform gives no explanation of its use. A user guide would have been quite useful." The log analysis enabled us to note that the platform was also used remotely. The teacher left no instructions requiring work at home; the project could be completed entirely during the classroom sessions. He had, however, asked the students to download a version of the environment onto their machines if… Some students connected remotely (29 out of 75), essentially the project managers or those responsible for communication. Some connections were short, most likely to retrieve files from, or deposit them in, the CVS repository.
Sometimes these connections were longer (3 hours), in particular at the end of the project when many deliverables were due. At the end of the project, remote connections were most frequent, with numerous connections between 10 pm and midnight. We noticed a regular and rather long connection by the teacher each week to monitor the progress of the work (between 1h40 and 3h00 each week, and 8 hours for the final evaluation).
6 Conclusion and Perspectives

Porting MAETIC to CooLDA was achieved rather easily. The logs confirm that the students worked outside the classroom sessions and thus downloaded the platform onto their computers, which tends to support its usability. However, inspection of the data already shows that the students did not modify their working environment but rather adapted to the proposed one. Document sharing through CVS during the project activities was a much-appreciated feature. The teachers strongly appreciated the centralization of group follow-up in their environment: having the documents and log books available in the same environment made it easier to follow the groups' activities and, consequently, to assess them.
B. Talon et al.
The teachers outlined requests for improvements, notably the wish to integrate new tools: an asynchronous tool for leaving messages to the students, a centralized grading tool, and a commenting tool for leaving information about the evaluation. We plan in particular to focus on the malleability of the platform: will users adapt the environment to their use and, if so, how will they make this adjustment? Until now, the platform has been tested with an audience of information technology specialists. It will be interesting to also test it with non-IT specialists (teachers and students) to determine its suitability for them. It will then be necessary to adjust the tools, in particular by facilitating access to CVS. A second wave of testing has been delayed until September to complete the ongoing adjustments and incorporate the features requested by the teachers.
A Model of Retrospective Reflection in Project Based Learning Utilizing Historical Data in Collaborative Tools Birgit R. Krogstie Norwegian University of Science and Technology (NTNU)
[email protected]
Abstract. In project based learning, learning from experience is vital and necessitates reflection. Retrospective reflection is a conscious, collaborative effort to systematically re-examine a process in order to learn from it. In software development student projects, it has been empirically shown that project teams' retrospective reflection can help the teams collaboratively construct new knowledge about their process, and that historical data in the collaborative tools used in daily project work can aid the teams' recall of, and reflection on, different aspects of project work. In this paper we draw on these results, as well as other findings on the use of collaborative tools in a similar setting. We use the framework of distributed cognition to develop a model of retrospective reflection in which collaborative tools used as cognitive tools for daily project work are also utilized as cognitive tools in retrospective reflection, aiding the creation of individual and shared representations of the project process. Keywords: project based learning, reflection, distributed cognition, cognitive tools.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 418–432, 2009. © Springer-Verlag Berlin Heidelberg 2009

1 Introduction

In project based learning [1], learning from experience is essential [2-4]. To achieve this learning, reflection is necessary, both during day-to-day work and with some distance to the activity reflected upon [4]. In this paper, we will focus on what we call retrospective reflection, a form of reflection-on-action in which participants in a collaborative process systematically re-examine their process. There exist many techniques and tools to support reflection in project based learning. Examples include the writing of diaries and reflection notes, and user annotation of information in the collaborative tools used to support their work. Computerized tools can be specifically introduced to support learning. Reflection-supporting tools comprise tools showing users their learning process and its result, tools providing guidance for users' monitoring of their learning process, and tools providing scaffolding for comparison with expert thinking [5]. In an organizational setting, advanced collaboration platforms may include knowledge management functionality [6], but such tools are untypical in formal education. The project based learning focused on in this paper is that based on project work involving the development of artifacts and aided by lightweight collaborative
tools. These include tools frequently associated with Web 2.0, e.g. wikis, instant messaging tools, and discussion forums. Today's students regularly use such tools to coordinate and perform their activities, whether imposed by school or not [7, 8]. Adding to the picture, students in higher education generally expect flexibility of time and place for work. From the point of view of course organizers, lightweight tools are inexpensive options providing flexibility of use and functionality well known to students and course staff. A core argument in what follows is that, by supporting various aspects of work throughout a project, lightweight tools collect historical data with a potential value in helping the students recall what happened in the project and reflect on it. The objective of the research in this paper is to provide a general account of retrospective reflection in the type of project based learning setting described above. We draw on empirical research on software development (SD) student projects: mainly published work [9-11] and some additional findings. We provide a substantial new contribution by situating the results in a theoretical framework, generalizing to project based learning independent of work domain, and presenting a model which can be used to aid the organization of retrospective reflection in project based learning. In Section 2, we outline the empirical research on SD student projects. We present our theoretical framework in Section 3, and in Section 4 use it to analyze findings from the student projects. Section 5 outlines our conceptual model of retrospective reflection. Section 6 concludes the paper.
2 Empirical Basis: Research on SD Student Projects In this paper we draw on findings from research on SD student project teams in the period 2006-2008 focusing on the teams’ use of collaboration technology. The project course is arranged in the last (6th) semester of an IT undergraduate program. Teams of 3-5 members develop software for genuine customers. The main learning objective is to get experience with SD work in a team for a customer. Project deliveries include a software product and a report. The teams choose which collaboration and development tools to use, depending on customer requirements, team members’ prior experience, team members’ wish to learn, and the availability of the technology. The data collection in the studies on these teams includes in-depth, longitudinal non-participant observation, semi-structured interviews across the cohorts, and examination of project documentation. A constructivist view of learning as well as guidelines for interpretive field research [12] guided the data collection and analysis. 2.1 Retrospective Reflection Based on Participants’ Memory The adoption of a retrospective reflection technique from SD industry [13] in the SD project course was subject to a qualitative study [10]. Facilitated reflection workshops were conducted with 10 student teams at the end of their project. Each workshop lasted about an hour and started with participants’ individually drawing project timelines incorporating events they perceived as important to the project, and along the timelines, ‘satisfaction curves’ indicating their feelings about the project over time. From the individual timelines, the teams constructed shared timelines on a
Fig. 1. Shared timeline from a reflection workshop (One event emphasized by the authors)
whiteboard. Individual satisfaction curves were drawn along it (see Fig. 1) and explained by the participant. Next, the participants individually answered questions about tasks, roles and lessons learned and presented their answers. After the workshop, the teams made reflection notes. The study showed that individual timelines and satisfaction curves reflected different perspectives on a project process. Fig. 1 illustrates how experience curves may differ within a team. The study also showed that shared timelines often reflected views of the project not found in any individual timeline. Closer examination of the individual timelines used in the creation of the shared timeline in Fig. 1 revealed that most of the events from the individual timelines had been included in the shared one, and some had been transformed through the co-constructive effort. For instance, the event ‘a bit ineffective work’ in an individual timeline was modified into an event marking a point in the project process when the team realized that they had to work more efficiently (‘insight: need to work more efficiently’). The study [10] concludes that the satisfaction curves gave the students new insights, that the workshop helped them take new perspectives on important issues, and that they considered it useful. 2.2 Retrospective Reflection Aided by Historical Data in Collaborative Tools In SD industry it is acknowledged [14] that project retrospectives would benefit from better data to help participants create a shared understanding while avoiding oversimplification and time-consuming examination of unimportant information. This was addressed in a study of SD student projects in which historical data in project wikis, used as lightweight project management tools by several teams [15], were used to aid project participants’ reflection on their project process [9]. 
In retrospective interviews with project members, wiki contents were chronologically examined, and particular types of information were seen to trigger recall of, and reflection on, project events, project phases, and collaboration within the team and with other stakeholders. In [11] it was shown that historical data in an issue tracking tool, a lightweight tool for SD project management, were useful for aiding retrospective reflection. The historical data were accessed by traversing a timeline showing team members' updates to development artifacts. The reflection effort was organized in line with the approach of
creating timelines and satisfaction curves [10], but conducted as a two-day, video-recorded workshop with one team in order to investigate in depth the use and role of historical data in the reflection process. Examination of the historical data helped team members individually identify project events that had not been included in the timelines made from (individual) memory alone and that were later included in the collaboratively constructed timeline and accounts of lessons learned. Historical data in the issue tracker were also used by the team to adjust details in the shared timeline.
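In data terms, the co-construction of a shared timeline from individual ones can be viewed as a chronologically ordered union of events, with duplicates recognized along the way. The sketch below illustrates only the mechanical part of that view; the TimelineEvent fields and the de-duplication rule are our own illustrative assumptions. The actual workshops were conducted with paper and a whiteboard, and the transformation of events (as in the 'insight: need to work more efficiently' example) was a product of discussion, not of any algorithm.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TimelineEvent:
    week: int    # position along the project trajectory
    label: str   # the participant's description of the event
    author: str  # who recalled the event


def merge_timelines(individual_timelines):
    """Union the events of all individual timelines in chronological order.

    Events with the same week and (case-insensitive) label are kept once,
    attributed to whoever recalled them first."""
    seen = {}
    for timeline in individual_timelines:
        for event in timeline:
            key = (event.week, event.label.lower())
            seen.setdefault(key, event)
    return sorted(seen.values(), key=lambda e: e.week)
```

A merged result of this kind corresponds only to the union step; in the workshops, the shared timeline often contained reformulated events found in no individual timeline.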
Fig. 2. Historical data in a collaborative tool aiding recall of a project event [11]
Some features for navigation and retrieval of historical data were found to be important in retrospective reflection [9, 11]. These features included chronological overview and traversal, and the possibility to switch between overview and details. Interviews with the 2007-2008 teams and examination of their reflection notes showed that the collaborative tools used were generally lightweight. The teams were clear about which tools were used for which purpose. While the specific tool selection and usage differ among teams, we see a general pattern: Lightweight project management tools (typically project wikis or issue tracking tools) are used for managing team-internal coordination of tasks, e.g. to create and follow up on a project plan, and to define, assign and track the status of tasks. The tool provides links to project documentation. Development tools are used to write, test and integrate source code. The storage and versioning of project artifacts are managed in a file versioning system that may or may not be integrated with the project management tool. Email is used for formal and documented communication internally and in communication with other stakeholders. Typically, the project team has its own mailing list. Instant messaging chat is used for informal, team-internal messages and as a substitute for face-to-face conversation during synchronous, distributed work. Less often, it is used for communication with other stakeholders. Internet sites are used to get
information about technology; in most cases, simple web search or FAQ lists provide answers, but occasionally project members participate in discussion forums [16]. These patterns of use, combined with the functionality of the tools, imply that data resulting from various types of project work are generally logged. Wiki revisions are automatically stored, and email clients store all mail unless otherwise specified by the user. It is tacitly expected that an email user keeps an archive of work-related email. With instant messaging, team members frequently choose to enable logging. On an internet forum, postings remain as long as the community hosting the forum wants to keep them. Looking into the historical data stored in these tools, they can be seen as a trace of the work undertaken with the aid of the tools. The studies investigating reflection aided by historical data [9, 11] showed that data in tools used for project management and coordination reminded participants of events related to those aspects of project work and thus helped them reflect on that type of issue. To examine the potential for historical data to shed light on other types of issues, we started out from issues that, according to the teams' own reflection, had been of great importance in their projects (e.g. misunderstandings in team-customer collaboration, problems of getting timely information from a service provider). Examining historical data in collaborative tools used by those teams, including email archives, instant messaging logs, and discussion forums, we looked for historical data that could shed light on those issues. The data found were, as seen by the researcher, rich enough to have such a potential, partially because the data shed light on the use of the collaborative tool itself, which was often at the core of the project challenge.
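The observation that the tools' stored data jointly trace the work can be made concrete with a small sketch: if each tool's history is read as a stream of time-stamped entries, the streams can be merged into a single chronological trace suitable for the kind of traversal described above. All tool names and entries below are hypothetical, and real tools expose their histories through quite different interfaces.

```python
import datetime


def merged_trace(*tool_logs):
    """Merge per-tool histories into one chronologically ordered trace.

    Each entry is a (timestamp, tool, summary) tuple."""
    return sorted((entry for log in tool_logs for entry in log),
                  key=lambda entry: entry[0])


# Hypothetical per-tool histories (entries invented for illustration):
wiki = [(datetime.datetime(2008, 2, 4, 10, 0), "wiki", "project plan revised")]
issues = [(datetime.datetime(2008, 2, 3, 15, 30), "issue tracker", "task closed")]
chat = [(datetime.datetime(2008, 2, 4, 22, 15), "chat", "late-night debugging")]

trace = merged_trace(wiki, issues, chat)
```

Such a merged view is one way to provide the chronological overview and traversal found important above, while each entry can link back to the tool holding the detail.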
We end this section by outlining some more characteristics of the SD projects relevant to our agenda of understanding and supporting retrospective reflection in the teams. Work in the projects is typically diversified, project participants having different roles, dividing work and using different tools to address different tasks. Team members’ roles affect their day-to-day use of collaborative tools. Consequently, historical data in a tool typically reflects work in which some team members have been more involved than others. Project artifacts (e.g. requirements specifications and project plans) frequently play a role in collaboration with project stakeholders (e.g. customer [17] and course staff) having different goals for their project involvement. Project artifacts in various states can be seen as more or less well defined versions. These may be deliberately saved by the user (as when a text document is renamed and saved) or automatically stored in a tool. A file versioning system stores the contents of and differences between every file version ‘checked in’ by the users. In our studies of retrospective reflection aided by historical data in collaborative tools, going into detail often meant exploring specific artifact versions.
3 Theoretical Background Taking the view of constructivism, seeing learning as integral to activity that is basically social and situated, and focusing on the role of tools, several theories [18, 19] may shed light on project based learning. They include activity theory (AT), actor network theory (ANT), symbolic interactionism, situated learning, and distributed cognition [20, 21]. In [11] it was shown that distributed cognition is an adequate framework for understanding retrospective reflection in SD student projects. It has
been used to analyse learning in educational [22-24] as well as work [25, 26] settings. In the present study, the main reason for choosing distributed cognition among the candidate frameworks is its focus on transformation between representations. Such transformations can be seen as a core element in a process of retrospective reflection incorporating the construction of timelines and the examination of historical data. The concept of cognitive tools [23, 27] can shed light on how such representations aid work and learning. In selecting distributed cognition as a framework, we focus on its descriptive power and on what we want to achieve by applying it [18]. We want to develop a model which not only descriptively accounts for the elements and dynamics of retrospective reflection in project based learning but which also informs its organization. To this end we augment our theoretical framework with theory addressing how the process of learning and reflection may be facilitated. Reflection can be seen as active and careful consideration of any belief or supposed form of knowledge [28], implying the reviewing and judging of present knowledge or beliefs [29]. A model of the learning process linking reflection to the experience reflected on in a reciprocal relationship is provided by Kolb and Fry [30]. Boud et al. [31] present a model of the reflective process in which the returning to experience is essential. The experience comprises the behaviour, ideas and feelings involved in the situation reflected upon. In the reflective process, feelings about the experience should be attended to and the experience re-evaluated. The desired outcomes of reflection include new perspectives on the experience, change in behaviour, readiness for application and commitment to action. The model is descriptive with respect to steps typically occurring in reflection but is also meant to provide guidance about how to achieve successful reflection.
Project work experience can be understood in terms of the project trajectory [32], a concept in line with the idea of temporally distributed cognition. Strauss refers to Mead on the issue of how a social world reflects on its history: "The temporal spans of group life mean that the aims and aspirations of group endeavor are subject to reviewal and recasting. Likewise past activities come to be viewed in new lights, through reappraisal and selective recollection. … History, whether that of a single person or of a group, signifies a 'coming back at self' (Mead 1936)". By 'trajectory' Strauss means "the course of any experienced phenomenon as it evolves over time". Distributed cognition "implies that the learners are enabled to think deeply and create certain types of artifacts that represent their thinking by working with cognitive tools" [23] p.209. Cognitive tools are tools that "help users represent what they know as they transform information into knowledge and are used to engage in, and facilitate, critical thinking and higher order learning" [27]. Stahl [33] explicitly addresses the collaborative aspect of learning in his model of collaborative knowledge building, "a cyclical process with no beginning or end" (p.75). At the heart of the model are processes of building personal and collaborative knowing. The model comprises the transformation of cultural and cognitive artifacts and sheds light on the interplay of individual and collective reflection and learning. Our aim is to develop a model that sheds light on the elements and dynamics of retrospective reflection in the particular setting of project based learning in which lightweight collaborative tools are used to support both work and retrospective reflection. We want the model to account for the return to experience in the light of the project trajectory
and to express how work and learning involve individual as well as collective reflection and the use of cognitive tools. We clarify our use of some concepts: activity is used as a generic, commonsense term and not as a reference to activity theory. By project artifact we mean something used and produced in project work, e.g. a diagram or a report. In distinguishing between work and the retrospective reflection on that work, we deliberately 'hide' the reflection and learning in day-to-day project work inside the concept of project work. This is not to suggest that project based learning only happens in 'chunks' of retrospective reflection, but it is our agenda to shed light on the latter. We proceed with an analysis of retrospective reflection on the above theoretical basis, from the empirical grounding described in Section 2 and with the objective of developing a model. With a sidelong glance at the work on organizational memory in [25], we structure our analysis in terms of retrospective reflection being socially distributed, temporally distributed, and involving transformation of representations.
4 Analysis

We use findings from the research on SD teams outlined in Section 2 to shed light on how retrospective reflection may be supported in similar settings of project based learning, from the perspective of distributed cognition.

4.1 The Social Distribution of Cognition in Retrospective Reflection

The social distribution of the cognition involved in retrospective reflection has two main components: the social distribution of the process reflected upon, and that of the retrospective reflection activity itself. The social distribution of the process reflected upon in the case of SD student projects was largely described in Section 2 and illustrates the complexity of the experience to be returned to in retrospective reflection. We noted that tasks relating to different aspects of work were distributed in the teams, resulting in a distribution of tool use. Historical data were generally stored in the tools as a result of the work. The data reflect the aspects of project work in which the tool has played a role, including the tool use itself. E.g., in Fig. 2, the historical data reflect software development work, more specifically coding. Crucially, in our context, a new version of a project artifact stored in a tool represents different data than the previous version, this distinction serving to capture the temporal and partially also the social distribution of the project work. We now take a closer look at the social distribution of the retrospective reflection with reference to the SD student teams. The timelines in [10] (see Fig. 1) were drawn with the aid of simple physical tools: paper and pencil, whiteboard and pens. These tools had a flexibility in the situation of knowledge sharing that appeared to make them adequate for externalizing and transforming participants' representations.
When historical data in collaborative tools were introduced in similarly organized retrospective reflection [11], particular tool features for retrieving and navigating data were found to be important to the utility of the tool. For instance, the view in Fig. 2 allows easy chronological traversal with direct access to project artifacts in their there-and-then versions. The likelihood of being able to retrieve interesting data from a specific tool also depends on the actual tool usage in the team's work. Further, what is
worthwhile examining retrospectively depends on what the team considers important issues. These might have been identified prior to retrospective reflection, but may also emerge from the examination of the historical data, as seen in [11] and Fig. 2. Synchronous, distributed work among pairs of SD team members is frequently supported by instant messaging chat. Such conversation has an oral flavor, often combining there-and-then problem resolution with social chit-chat. Instant messaging logs exemplify historical data that for privacy reasons may be inadequate for retrospective reflection even if the contents are of potential interest to the team. The social distribution of retrospective reflection is also about which participants are involved, e.g. whether reflection is done individually or by the whole team together. In the SD teams, recall of events was essential in the construction of individual and shared timelines [10, 11]. Research on transactive memory, "a set of individual memory systems in combination with the communication that takes place between individuals" [34], shows that in collaborative remembering, when groups of people who have a history of remembering together (e.g. as colleagues or as family members) are compared with groups who have not, those who are used to repeatedly remembering together have the most efficient systems for doing so [35, 36]. In close relationships, responsibility for knowing is distributed, with the effect that some information is differentiated and non-redundant whereas other information is shared. However, research indicates that the collective remembering skills that a group of people has developed through experience are only an advantage in a situation of remembering if the group is allowed to use these skills [37].
Translated to the context of project based learning, we may take these results to indicate that recall of project events benefits if project members can draw on the collaboration expertise developed through their project work. The retrospective use of collaborative tools well known from day-to-day work may be part of this. Findings from the research field of social contagion of memory indicate that there is a tendency for participants in collaborative remembering to be influenced by others in the remembering process, individual memory thus being distorted [38, 39]. This research is, however, based on experiments far from real-life work. Recent research on memory of events that are real, complex and significant in people's lives shows that collective memory is qualitatively different from individual memory in these settings [40, 41]. In project based learning, the inclusion of individual as well as collective steps of recall and reflection may serve as a way of exploiting the positive effects of recalling alone as well as those of doing it in concert. The empirical research on SD student projects outlined in Section 2 indicated that creating a shared representation based on individual ones results in more knowledge than that found in the individual, external representations [10]. However, those who had the most expertise with a certain aspect of work tended to dominate the team's shared, externalized view of that aspect of work [11]. Research stating that expertise is a combination of the expert, his environment and the tools he uses [23] underpins these findings. Apart from being a source of power differences, however, expertise is an important resource for using collaborative tools as cognitive tools for reflection.
We conclude from this section that the historical data in collaborative tools are important resources in unveiling different aspects of project work involving different team members, although the value of using a particular tool in a specific case must be considered. A combination of individual and collective reflection seems feasible.
426
B.R. Krogstie
4.2 The Temporal Distribution of Cognition in Retrospective Reflection

Retrospective reflection is part of the trajectory of project based learning spanning the entire project, but it also constitutes a separate sub-activity within the overall project. The distributed cognition of retrospective reflection can thus be seen as situated in two different contexts with different objectives for the ongoing activity: there-and-then project work focused on completing project tasks, and here-and-now retrospective reflection focused on making sense of there-and-then work in the light of knowledge of the entire process. Events of there-and-then work may be interpreted as belonging to different sub-trajectories, and can be seen as the core of what is reflected upon. We illustrate this by going into some detail about the findings reported in [11] and illustrated in Fig. 2. In the retrospective reflection workshop of the team in question, there was a project event not remembered by any team member in the individual reconstruction of the project trajectory based on memory alone, i.e. not included in any individual timeline. The event marked the onset of an activity in the project, more specifically coding work done to get familiar with technology to be used later in the real development. As such, the event could be seen both as a turning point in the trajectory of the entire development activity and as the starting point of a sub-trajectory: that of the early investigation of technology. The team members' individual examination of historical data in the issue tracking tool made two of them recall the event as important to the project (Fig. 2) and include it in their timelines. Later, as the team created a shared project timeline, the event was included and brought into the discussion of how things might have been done differently in the project.
The event was discussed in the light of the specific activity for which it marked the starting point and in the light of the entire development project. The trajectory of the project as represented in the shared timeline was compared to a process described in SD literature as ideal for a certain type of development work. These findings illustrate that comparing trajectories is at the core of the retrospective reflection process. Considering retrospective reflection as part of a trajectory of project based learning, we see a possible effect on tool usage if it is known to the team that historical data will later be systematically examined. The dual role of the tool may lead project members to adjust their day-to-day tool usage, e.g. by providing more frequent or elaborate comments associated with ongoing tasks, which may affect project work in a positive way. However, changes may have adverse effects, e.g. if participants cease to communicate issues in a tool because the logged data may give an unfavourable impression in retrospect. Adjusting tool use to meet the needs of both retrospective reflection and day-to-day work may be seen as part of project based learning itself. We conclude from this section that the 'return to experience' of retrospective reflection involves examination and comparison of sub-trajectories of project work, and that retrospective use of tools may affect project work through participants' awareness of the dual role of the tools.

4.3 Transformations of Representations and the Use of Cognitive Tools

Looking at the timeline technique as used in [10], and illustrated in Fig. 1, we see many examples of the use of representations as cognitive tools. The internal, individual representations of the project process were transformed into externalized individual
timelines on paper. Both types of representations were used in the collaborative session through which the knowledge captured in the representations was transformed into a shared, external representation in the form of a timeline, a process likely to make the individual, internal representations change. The internal and external representations created and modified through retrospective reflection served as cognitive tools for the teams' later writing of collaborative reflection notes. The experience curve helped participants reflect on a particular sub-trajectory, corresponding to 'attending to feelings' in the reflective process [31]. The transformations outlined above were achieved by use of the timeline as a cognitive tool. The successful use of timelines in the study indicates that the timeline is a good form of representation to aid both individual and collective recall and reflection. Salomon [24] argues that "even in the most radical formulations of activity-in-setting [..] there is no way to get around the role played by individuals' representations" (p. 134). The timeline approach as conducted in the SD studies pays heed to this by including the steps of creating external, individual representations. We see the following as favourable characteristics of the timeline: the metaphor is easily grasped by participants, it maps well to the conception of process as trajectory, it is easily understood when presented to others, and different timelines representing the same process are easily combined. The timeline provides a framework for contextualizing historical data in collaborative tools, in which the timing and chronological sequence of events associated with the data are generally available. In the collaborative tools used in project work, some of the data originating in the work and stored in the tools are used as cognitive tools in the work process, thus contributing to the transformation of participants' internal project representations.
Turning to the step of retrospective reflection, the same data may partially be accessed in the same way as in day-to-day work (as when previous events in the timeline of the issue tracker were examined; a tool use similar to day-to-day coordination work) but may also be retrieved in a way not typically used in day-to-day work (as when early revisions of the project wiki main page were chronologically examined in retrospective reflection). Thus, the representations of the project trajectory retrieved from the tool vary with the purpose and procedure of retrieval. When collaborative tools are used as cognitive tools for retrospective reflection, externalized representations originally transformed through day-to-day project work take part in the transformation of other representations of the project trajectory: internal and possibly external ones (e.g. timelines), individual and/or shared ones. In the studies outlined in Section 2, tools used to aid retrospective reflection were of two types: those created for the purpose of retrospective reflection, i.e. the timelines and curves on paper and whiteboard, and tools primarily supporting day-to-day project work that were re-used for the purpose of retrospective reflection, e.g. the project wiki and the issue tracker. In the case of the latter category of tools, even if the process of accessing historical data had some transformative effect on the representations retrieved, the historical data in the tool remained fixed. The timelines and curves created for the purpose of retrospective reflection, on the other hand, underwent continuous transformation during the retrospective reflection activity. Whereas the empirical studies of SD projects indicate that the two types of tools independently benefit retrospective reflection [9, 10], there appears to be added value
in combining them [11], using the historical data as a means for transforming the timeline and the timeline as a means for making sense of the historical data. By (re-)using a computerized tool that has been used in one context, e.g. that of (some aspect of) day-to-day project work, and employing it as a cognitive tool for retrospective reflection, some of the expertise inherent in the learner’s there-and-then usage of the tool may be applicable to the tool use in retrospective reflection. This can be understood as utilizing the expertise of the joint learning system [23] of day-to-day work in the transition to retrospective reflection on that work. There are limits to the insight that can be gained by the use of one type of representation of a project, e.g. a timeline. This is the case even if the representation is made more sophisticated (for instance by including representation of sub-trajectories like the individual experience curves). Not all aspects of project work fit within a temporal/linear perspective, even if it is possible to revisit most issues along a timeline. There may be aspects of project work that are better expressed in other ways, e.g. with textual descriptions, diagrams or even role play. For instance, a representation outlining the structure of an artefact, showing who contributed to what part, may be useful to aid reflection on the work process. Some representations providing synthesized information about the project are likely to be available in the historical data of collaborative tools; project plans are an example. The timeline should be seen not only as a valuable project representation in itself, but also as a good starting point for identifying and creating other representations by helping participants get an overview and create a context for exploration of particular issues. In the studies outlined in Section 2, retrospective reflection was conducted when the projects were more or less finished. 
According to the learning objectives of the course, the students were meant to be able to use their experience from the project in other projects, but implicitly this was taken to happen through individual learning. Revisiting the model of the reflective process [31], its outcomes comprise new perspectives on experiences, change in behaviour, readiness for application, and commitment to action. We would expect a successful process of retrospective reflection to result in all of these being somehow incorporated in participants' internal representations of the project experience, and some of them to be expressed in the external representations resulting from reflection. However, the actual learning from experience is best seen in further project work, and retrospective reflection should be an element of a learning cycle [30]. If process improvement is part of the purpose of retrospective reflection, lessons learned should be captured in representations that are applicable in later project work. How to make lessons learned applicable in practice is a challenging issue at the core of organizational learning and knowledge management. Our conclusion from this section is that many types of transformations of representations may be involved in, and add value to, retrospective reflection. The transformations we have discussed were achieved by the use of tools primarily supporting day-to-day work as well as tools introduced for retrospective reflection.
5 Retrospective Reflection in Project Based Learning: A Model

Based on the previous analysis, we outline a model of retrospective reflection in project based learning and briefly illustrate it with examples from Section 2.
A Model of Retrospective Reflection in Project Based Learning
429
The model in Fig. 3 illustrates retrospective reflection incorporating the main elements elaborated in Section 4, and serves as a generic description of retrospective reflection utilizing individual and shared timeline representations as well as historical data in the various collaborative tools used in the project. The rectangles in the diagram are representations (internal or external); the ovals are processes in which the representations are transformed. These processes can be seen as learning processes in the sense that new knowledge is constructed. A representation with an outgoing arrow pointing to a process is a cognitive tool in that process. Where arrows point in both directions between a representation and a learning process, the representation used in the process is itself changed through the process. The finer, dashed arrows show the use of historical data to identify events and sub-trajectories.

In more detail, process (1) is the daily project work. Collaborative tools are used as cognitive tools, resulting in data remaining in the tools as historical data. The tools in daily work can be seen as continuously updated 'representations' of the project in that they contain data reflecting aspects of the project work. Internal representations of the project in participants' heads also impact on, and result from, the work. Turning to the retrospective reflection, there is an individual step (2) in which each individual makes an external representation of the project process in the form of a timeline. We have indicated the possibility of representing sub-trajectories (e.g. a satisfaction curve). In this step, the collaborative tools with their historical data potentially serve as cognitive tools, as illustrated in Fig. 2. In making sense of the historical data, the learner recognizes them as associated with project events and sub-trajectories. Participants' individual timelines are cognitive tools for collaborative reflection (3), in which a shared timeline is created (Fig. 1), mediated also by the individual, internal representations. Elements from the individual timelines are included and may be transformed (see Fig. 1, the emphasized event). Again, the collaborative tools with their historical data may be used as cognitive tools. In addition to the shared timeline, the team may create other representations (e.g. textual reflection notes). Some of the resulting representations may serve as cognitive tools in further project work, closing the cycle of project based learning.

The model is a simplification with respect to the number of representations and transformations. For instance, the step of orally presenting the contents of individual timelines to the team, and the transformation of historical data that happens through retrospective use of the tools, are only implicitly assumed. The model outlines the main elements in the distributed cognition of retrospective reflection and the dynamics among these elements. It can be used to guide the organization of retrospective reflection in project based learning. While our analysis addressed the benefit of various representations and transformative processes, retrospective reflection may be arranged without the inclusion of all the elements. Depending on constraints and objectives, a specific organization of the process may leave out the individual or the collective step of creating external timeline representations. Also, the use of collaborative tool history can be omitted. To decide whether a specific collaborative tool should be used in retrospective reflection, a team should take into account the tool's features for data navigation and retrieval, its usage in daily work, and the issues seen as important to the team and thus worth reflecting on.
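The model itself is diagrammatic, but the core data manipulation assumed in step (3), combining individual timelines into one shared, chronologically ordered timeline, can be sketched in code. The following is a purely illustrative Python sketch; the `Event` record, the day numbers and the example labels are our own hypothetical constructs, not data from the studies:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """A project event as it might appear on a timeline."""
    day: int    # day number within the project
    label: str  # short description of the event

def merge_timelines(individual_timelines):
    """Combine individual timelines into one shared timeline,
    keeping each distinct event once and ordering chronologically."""
    shared = set()
    for timeline in individual_timelines:
        shared.update(timeline)
    return sorted(shared, key=lambda e: (e.day, e.label))

# Two team members recall partly overlapping events; one event
# (the early investigation of technology) is remembered by only one,
# echoing the episode described in Section 4.2.
alice = [Event(3, "project start"), Event(10, "first delivery")]
bob = [Event(3, "project start"), Event(7, "early technology investigation")]

shared = merge_timelines([alice, bob])
for e in shared:
    print(e.day, e.label)
```

In the workshops themselves this merging was of course a negotiated, face-to-face activity in which events could also be reinterpreted; the sketch only captures the union-and-order aspect of combining timelines.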
Fig. 3. A model of retrospective reflection in project based learning
6 Conclusion

We have used empirical data from SD student projects to analyse retrospective reflection in the context of project based learning and to develop a model outlining its elements and dynamics. On the basis of distributed cognition, the model describes retrospective reflection as a set of transformations of representations of the project process, internal and external. A timeline of events is used as a cognitive tool. It fits the idea of process trajectories, is well suited to situating historical data in context, and lends itself to being shared and combined into a collaboratively constructed representation. The scope of our analysis and model is project based learning in which lightweight collaborative tools are used to support various aspects of work. While the domain of our empirical studies is software development, the model may be used to aid the organization of retrospective reflection in project based learning within any domain. Taking a constructivist view of knowledge, there is no single, 'real' story of a project that can be revealed through the use of timelines and collaborative tool history. Our approach can help participants unveil issues of importance to them and aid individual and collective sense making of various aspects of the project. We believe that the systematic approach, the explicit incorporation of individual contributions into collective, co-constructive learning activity, and the utilization of available resources in the form of historical data together result in reflection outcomes that are likely to be useful with respect to the learning objectives and valid in the sense that more stones have been turned.
The analysis presented in this work would have benefited from empirical data on the use of more types of lightweight collaborative tools in retrospective reflection in SD student projects or similar settings of project based learning. Further research should examine the use of different collaborative tools within the framework of the model. Results of this research can be used to validate the model.
References

1. Blumenfeld, P.C., et al.: Motivating Project-Based Learning: Sustaining the Doing, Supporting the Learning. Educational Psychologist 26(3&4), 369–398 (1991)
2. Boud, D., Keogh, R., Walker, D.: Reflection: Turning Experience into Learning. Routledge Falmer (1985)
3. Dewey, J.: Democracy and Education: An Introduction to the Philosophy of Education. The Free Press, New York (1997, first published 1916)
4. Schön, D.: Educating the Reflective Practitioner. Jossey-Bass, San Francisco (1987)
5. Lin, X., et al.: Designing Technology to Support Reflection. Educational Technology, Research and Development 47(3), 43–62 (1999)
6. Edwards, J.S., Shaw, D., Collier, P.M.: Knowledge management systems: finding a way with technology. Journal of Knowledge Management 9(1), 113–125 (2005)
7. Garrett, R.K., Danziger, J.N.: IM=Interruption Management? Instant Messaging and Disruption in the Workplace. Journal of Computer-Mediated Communication 13(1) (2008)
8. Grinter, R.E., Palen, L.: Instant Messaging in Teen Life. In: CSCW 2002, New Orleans, Louisiana, USA. ACM, New York (2002)
9. Krogstie, B.R.: Using Project Wiki History to Reflect on the Project Process. In: 42nd Hawaii International Conference on System Sciences, Big Island, Hawaii. IEEE, Los Alamitos (2009)
10. Krogstie, B.R., Divitini, M.: Shared timeline and individual experience: Supporting retrospective reflection in student software engineering teams. In: CSEE&T 2009, Hyderabad (2009)
11. Krogstie, B.R., Divitini, M.: Collaboration tools as a resource for retrospective reflection (submitted, 2010)
12. Klein, H.K., Myers, M.M.: A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems. MIS Quarterly 23(1), 67–94 (1999)
13. Derby, E., Larsen, D., Schwaber, K.: Agile Retrospectives: Making Good Teams Great. Pragmatic Bookshelf (2006)
14. Kasi, V., et al.: The post mortem paradox: a Delphi study of IT specialist perceptions. European Journal of Information Systems 17, 62–78 (2008)
15. Krogstie, B.R.: The wiki as an integrative tool in project work. In: COOP 2008, Carry-le-Rouet, Provence, France. Institut d'Etudes Politiques d'Aix-en-Provence (2008)
16. Krogstie, B.: Power through brokering: OSS participation in SE projects. In: ICSE 2008, Leipzig. IEEE Computer Society, Los Alamitos (2008)
17. Krogstie, B., Bygstad, B.: Cross-Community Collaboration and Learning in Customer-Driven Software Engineering Student Projects. In: CSEE&T 2007, Dublin. IEEE, Los Alamitos (2007)
18. Halverson, C.: Activity Theory and Distributed Cognition: Or What Does CSCW Need to DO with Theories? Computer Supported Cooperative Work 11, 243–267 (2002)
19. Nardi, B.A.: Studying context: a comparison of activity theory, situated action models, and distributed cognition. In: St. Petersburg International Workshop on Human-Computer Interaction, St. Petersburg, USSR. Int. Centre Sci. & Tech. Inf., Moscow, Russia (1992)
20. Hutchins, E.: Cognition in the Wild. MIT Press, Cambridge (1995)
21. Salomon, G. (ed.): Distributed Cognitions: Psychological and Educational Considerations. Cambridge University Press, New York (1993)
22. Karasavvidis, I.: Distributed Cognition and Educational Practice. Journal of Interactive Learning Research 13(1/2), 11–29 (2002)
23. Kim, B., Reeves, T.C.: Reframing research on learning with technology: in search of the meaning of cognitive tools. Instructional Science 35, 207–256 (2007)
24. Salomon, G.: No distribution without individuals' cognition. In: Salomon, G. (ed.) Distributed Cognitions: Psychological and Educational Considerations. Cambridge University Press, Cambridge (1993)
25. Ackerman, M.S., Halverson, C.: Organizational Memory as Objects, Process, and Trajectories: An Examination of Organizational Memory in Use. Computer Supported Cooperative Work 13(2), 155–189 (2004)
26. Sharp, H., Robinson, H.: A Distributed Cognition Account of Mature XP Teams. In: Abrahamsson, P., Marchesi, M., Succi, G. (eds.) XP 2006. LNCS, vol. 4044, pp. 1–10. Springer, Heidelberg (2006)
27. Kirschner, P.A., Erkens, G.: Cognitive tools and mindtools for collaborative learning. Journal of Educational Computing Research 35(2), 199–209 (2006)
28. Dewey, J.: How We Think: A Restatement of the Relation of Reflective Thinking to the Educative Process, revised edn. D.C. Heath, Boston (1933)
29. Kim, D., Lee, S.: Designing Collaborative Reflection Support Tools in e-project Based Learning Environment. Journal of Interactive Learning Research 13(4), 375–392 (2002)
30. Kolb, D.A., Fry, R.: Towards an applied theory of experiential learning. In: Cooper, C.L. (ed.) Theories of Group Processes, pp. 33–58. John Wiley, London (1975)
31. Boud, D., Keogh, R., Walker, D.: Promoting Reflection in Learning: a Model. In: Boud, D., Keogh, R., Walker, D. (eds.) Reflection: Turning Experience into Learning, pp. 18–40. Routledge Falmer (1985)
32. Strauss, A.: Continual Permutations of Action. Aldine de Gruyter, New York (1993)
33. Stahl, G.: Building collaborative knowing. In: Strijbos, J.-W., Kirschner, P.A., Martens, R.L. (eds.) What We Know About CSCL and Implementing It in Higher Education, pp. 53–85. Kluwer Academic Publishers, Boston (2002)
34. Wegner, D.M.: Transactive memory: A contemporary analysis of the group mind. In: Mullen, B., Goethals, G.R. (eds.) Theories of Group Behaviour, pp. 185–208. Springer, New York (1987)
35. Hollingshead, A.B.: Retrieval processes in transactive memory systems. Journal of Personality and Social Psychology 74, 659–671 (1998)
36. Wegner, D.M., Giuliano, T., Hertel, P.T.: Cognitive interdependence in close relationships. In: Ickes, W. (ed.) Compatible and Incompatible Relationships. Springer, Heidelberg (1985)
37. Hollingshead, A.B.: Communication, learning, and retrieval in transactive memory systems. Journal of Experimental Social Psychology 34, 423–442 (1998)
38. Gabbert, F., Memon, A., Allan, K.: Memory conformity: Can eyewitnesses influence each other's memories for an event? Applied Cognitive Psychology 17, 533–543 (2003)
39. Roediger, H.L., Meade, M.L., Bergman, E.T.: Social contagion of memory. Psychonomic Bulletin & Review 8, 365–371 (2001)
40. Alea, N., Bluck, S.: Why are you telling me that? A conceptual model of the social function of autobiographical memory. Memory 11, 165–178 (2003)
41. Barnier, A.J., et al.: A conceptual and empirical framework for the social distribution of cognition: The case of memory. Cognitive Systems Research 9, 33–51 (2007)
Fortress or Demi-Paradise? Implementing and Evaluating Problem-Based Learning in an Immersive World

Maggi Savin-Baden

Coventry University, UK
[email protected]
This other Eden, demi-paradise,
This fortress built by Nature for herself
Against infection and the hand of war,
(Richard II, Act 2, Scene 1)

Abstract. This paper suggests that there is a lack of pedagogical underpinning relating to the use of virtual worlds in higher education; for example, there are currently few research papers that suggest why such worlds are being used. The paper presents a project, Problem-based Learning in Virtual Interactive Educational Worlds (PREVIEW), that sought to combine pedagogy with technology and has been tested in health, medicine, social care and education, physiotherapy and psychology.

Keywords: Problem-based Learning, Immersive Virtual Worlds, E-Learning.
1 Introduction

Learning in immersive virtual worlds (simulations and virtual worlds such as Second Life) has become a central learning approach in many curricula. Most research to date has been undertaken into students' experiences of virtual learning environments, discussion forums and perspectives about what and how online learning has been implemented. Immersive virtual worlds (IVWs) offer different textualities and are ushering in new issues, with temporality and spatiality becoming not just contested but dynamic and intersected by one another. This paper suggests that there is a lack of pedagogical underpinning relating to the use of virtual worlds in higher education. The paper presents a project, Problem-based Learning in Virtual Interactive Educational Worlds (PREVIEW), that sought to combine pedagogy with technology. It is argued that the current lack of pedagogical underpinning has introduced a number of difficulties which might be overcome by using approaches that readily combine pedagogy with technology, thereby shifting from the fortresses of VLEs to the (demi-)paradise of IVWs.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 433–440, 2009. © Springer-Verlag Berlin Heidelberg 2009

2 Background

Problem-based learning (PBL) was popularised in the 1980s, partly in response to the predominantly content-driven transmission educative model of the time. It arose out of a desire to give students the opportunity to apply practices and theoretical
knowledge to problems or scenarios within the professional or clinical setting, crucially in interactive collaboration with colleagues, thus replicating features of the real-life context of application. It has become an increasingly influential approach in curricula in a variety of settings, across a range of subject areas. The increasing adoption of problem-based learning (PBL) and the growth in online learning each reflect the shift away from teaching as a means of transmitting information, towards supporting learning as a student-generated activity. To date, problem-based learning has been seen as a relatively stable approach to learning, delineated by particular characteristics and ways of operating. Most of the explanations of and arguments for problem-based learning, thus far, have tended to focus on (or privilege) cognitive perspectives over the ontological position of the learner. However, facilitating this collaborative approach to participation and learning is considerably more challenging in self-directed and distance learning contexts, due to difficulties associated with effective discussion between geographically and spatially disparate learners. Linking PBL with IVWs brings other challenges, which are evident in the IVW literature: there has been growth in the research into students' experiences of virtual learning environments, discussion forums and perspectives in terms of what and how online learning has been implemented. Authors [1] [2] [3] have explored students' perspectives of e-learning, and the findings indicate that students' experiences of e-learning are more complex and wide-ranging than was first realised. At the same time there is an increasing interest in the use of immersive worlds for learning.
One of the reasons for such interest appears to be a recognition that for students in workplace or competency-led courses, learning through case-based scenarios is an excellent method for acquiring sound knowledge and developing decision-making and problem-solving skills [4] [5]. Thus an increasing number of curricula are based on a particular variant of case-based learning: problem-based learning (PBL), an approach in which students work in teams to manage or solve a problem [6]. Guided by a tutor, they share their existing knowledge and understanding relevant to the scenario, agreeing on what they need to learn and how to carry it out. Medicine and healthcare education have used this approach in the UK since the mid-1980s, but there has been a shift in the last three years toward moving into online and immersive spaces [7] [8]. The rationale for using IVWs in higher education, it is suggested here, is that practising skills within a virtual environment online offers advantages over learning through real-life practice: in particular, the exposure of learners to a wide range of scenarios (more than they are likely to meet in a standard face-to-face programme) at a time and pace convenient to the learner, together with consistent feedback.
3 Informing Literature

It could be argued, and increasingly is, that cyberspace has resulted in a sense of multiple identities and disembodiment, or even different forms of embodiment. The sense of anonymity, and the assumption that one is understood through one's words rather than one's bodily presence, is becoming increasingly unmasked through worlds such as Second Life. However, before this is explored it is perhaps helpful to delineate current forms of PBL. Face-to-face problem-based learning was an approach popularised by Barrows and Tamblyn [9] following their research into the reasoning abilities of medical students at McMaster Medical School in Canada. This
was because they found that students could learn content and skill, but when faced with a patient could not apply their knowledge in the practical situation. Barrows and Tamblyn's study and the approach adopted at McMaster marked a clear move away from problem-solving learning, in which individual students answered a series of questions from information supplied by a lecturer. In this early version of problem-based learning certain key characteristics were essential. Students in small teams would explore a problem situation, and through this exploration were expected to examine the gaps in their own knowledge and skills in order to decide what information they needed to acquire in order to resolve or manage the situation with which they were presented. Problem-based learning online is defined here as students working in teams of four to six on a series of problem scenarios that combine to make up a module or unit that may then form a programme. Students are expected to work collaboratively to solve or manage the problem. Students may work in real time or asynchronously, but what is important is that they work together. Synchronous collaboration tools are vital for the effective use of PBLonline because tools such as chat, shared whiteboards, video conferencing and group browsing are central to ensuring collaboration within the problem-based learning team. Students may be working at a distance or on campus, but they will begin by working out what they need to learn to engage with the problem situation. This may take place through a shared whiteboard, conferring or an email discussion group. What is also important is that students have both access to the objectives of the module and the ability to negotiate their own learning needs in the context of the given outcomes. Facilitation occurs through the tutor having access to the ongoing discussions without necessarily participating in them.
Tutors also plan real-time sessions with the PBLonline team in order to engage with the discussion and facilitate the learning. However, it is questionable whether there is value in using real-time PBLonline for students undertaking the same programme at the same university, unless it is used because of long distances between campus sites where students are using the same problem-based learning scenario. Questions also need to be asked about whether having asynchronous teams adds something different to PBLonline. Certainly, in distance education, across time zones and campus sites, this would be useful and would suit different students' lives and working practices. Yet it raises problems about how cooperative and collaborative it is possible to be, in terms of sharing learning and ideas and developing forms of learning that are genuinely dialogic in nature.

Although PBLonline combines problem-based and online learning, it is recognised that students learn collaboratively through web-based materials including text, simulations, videos and demonstrations. Resources such as chatrooms, message boards and purpose-built PBL environments have been developed for synchronous and asynchronous use, on campus or at a distance. Practising skills within a virtual environment online offers advantages over learning through real-life practice, in particular the exposure of learners to a wide range of scenarios (more than they are likely to meet in a standard face-to-face programme) at a time and pace convenient to the learner, together with consistent feedback. It also offers learners the chance to make mistakes without real-world repercussions. One such example is the PREVIEW project, which is investigating the implementation and evaluation of a user-focused approach to developing scenarios and materials, linking the emerging technologies of virtual worlds with interactive PBL online to create immersive collaborative tutorials.
M. Savin-Baden
4 Fortress to Demi-Paradise?

New learning spaces and emerging technologies such as wikis and podcasts offer new possibilities in terms of communication in distance learning, but also present limitations and barriers in terms of the presentation of the self, meaningful synchronous interaction, and team-building. For these reasons, caution must be exercised when making claims for their equivalence to the communicative modalities of the face-to-face setting. When seeking to implement PBLonline, purpose-built educational virtual learning environments (VLEs) such as Blackboard may also be limited and limiting. These digital spaces have prompted concerns about both containment and exteriorisation in online environments [10]; containment is particularly evident in VLEs, inherent in their structuring and management of learning. However, in order to facilitate meaningful engagement in PBL in the online environment, the need for creative and authentic self-representation, a sense of co-presence, immediacy, and rich multimodal communicative spaces should also be addressed. This would provide an environment in which a pseudo-authentic feel, complexity and a sense of ‘messy’ decision-making that occurs in real time can be achieved. Although resources and environments for PBLonline have been developed, they have not hitherto provided this degree of immersion. With these issues in mind, the PREVIEW (Problem-based learning in virtual interactive educational worlds) project was initiated, in order to investigate the feasibility of using a virtual world to deliver problem-based learning to distance-learning students, and to better understand the potential of participation in this environment and the benefits and challenges it offers for collaborative working and learning.
The project team, led by Coventry University and its partner St George's University of London (SGUL), implemented and evaluated a user-focused approach to developing problem-based learning environments and ‘good practice’ materials. This was achieved by linking the emerging technologies of virtual worlds with interactive PBL online to create immersive, collaborative tutorials in the virtual world of Second Life (SL), allowing distance learners from the geographically distant institutions to meet ‘in-world’ and collaborate around a case. This environment differs radically from the VLE in that it draws on a primarily visual set of semiotic resources, with each participant having an online presence, or avatar, to aid their communication. The aims of the PREVIEW project were to (1) deliver problem-based learning in Second Life, (2) develop eight interactive PBL scenarios, (3) guide development and evaluation alongside users, (4) develop guidelines and best practice on delivering PBL in virtual environments and (5) share outputs and technology.

A variety of problem-based learning scenarios were developed within SL for distance-learning students at the two institutions. The project was introduced to the part-time MA in Health and Social Care Management, a distance online programme for students across the Midlands of England. The project was also implemented on the second year of the Paramedic Foundation Degree at Coventry University. This is a three-year in-service blended learning programme, with 70% of its materials provided via Blackboard, to practice-based students based in London and various locations in the south of England. The PBL scenarios were categorised in two ways: information-driven
scenarios via machinima videos, and avatar-driven scenarios using artificially intelligent SL avatars, otherwise known as chat bots. PBL scenarios were developed for use within Second Life. For each course, two avatar-driven scenarios were to be developed, as well as two information-driven scenarios.

Avatar-Driven: The PBL was set in appropriate surroundings (e.g. at the patient's home, in the hospital ward) and the patient or staff member was represented by a non-player character (NPC). Initial information was given by the NPC or pre-recorded avatars, such as (an avatar discussion in) a machinima, and the students would then discuss how to proceed, as in any PBL. Additional information was presented on display screens (via text, image, video, animation or external links), notecards, touchable objects or sound streams, or through the ‘chat’ function of any NPCs involved in the scenario.

Information-Driven: This scenario was to be presented through multiple interactive screens in SL. These screens presented text, images, sound and video footage as necessary. The information on display changed depending on the students’ decisions, similar to the virtual patient model already used at SGUL; the difference being that SL allowed multiple information screens and a collaborative environment, so that the students could interact with one another as well as with the scenario.

An example of one of the problem-based learning scenarios at Coventry University, based in a virtual care home for those with learning disabilities, concerns a difficult situation involving an outbreak of disease within the facility (see Figure 1 below).

Scenario: You each represent a part of the management of an NHS residential/nursing care service – The Cedars Care Complex. There is community concern about Clostridium difficile (C. diff) infection and your own service is experiencing higher rates of deaths than the average. A front-page newspaper article published today is not helping matters.
Explanation: The students arrive in-world at the Cedars care complex. In the office area there is a ringing phone which, when answered, plays a message from the local councillor, who says he will be along shortly to discuss the C. diff crisis. There is information in the room, such as web links and the newspaper article. When the students are ready, they can press a button on the table to call the councillor, who subsequently arrives within a few seconds. The students then interact with the councillor (a chatbot) and discuss his concerns. When the interaction is finished, the councillor instructs them to create a plan for what to do next and is scripted to disappear. At this point the students must work together on a plan for the next course of action for the care home.
Fig. 1. Example of PBL Scenario in Second Life
The role of the students, as a collaborative exercise, is to gather as much information about the situation and the disease as possible, using a variety of information-driven methods, before moving on to an avatar-driven method. The students are required to interact with a ‘chat bot’ to determine what their next actions should be. Feedback suggested that the information-driven scenarios did not work as well as the avatar-driven ones, and the scenarios were restructured slightly in response to the students’ comments that they did not feel as immersed in the environment with information-driven scenarios. The decision was made to design all the health care scenarios as avatar-driven, to provide a truly immersive and realistic experience. An iterative process was used when implementing and evaluating the PBL scenarios. At several stages throughout the project, testing of each scenario was undertaken, and the feedback from the students’ experiences was analysed to improve the scenarios. The scenarios were then reviewed further alongside students to ensure the feedback had been beneficial to the project.
5 Project Outcomes and Future Possibilities

It was anticipated that the technological demands and initial lack of user-friendliness of SL would be a barrier to participation. However, when the PREVIEW project underwent testing by staff and students, few access barriers were reported, although this may become more of an issue with wider implementation of this approach. Students who were beginners to the SL environment did need more time than anticipated to explore and experiment with the virtual world and familiarise themselves with the new environment; mock scenarios became an important strategy in this process. This suggests that a degree of initial strangeness and discomfort may have been experienced by the participants, which is significant when considering that they would need a tolerable degree of comfort with the visual/kinetic/semiotic resources of the world and their avatar identity before they could devote meaningful attention to group collaboration around a problem.

Preliminary results from the project indicate that SL holds a great deal of potential for PBL. Students seemed able to use their avatars to communicate, collaborate and problem-solve effectively. The level of realism and immersion of the scenarios seemed to be enhanced by the virtual world environment, including the option to use voice in addition to text-based communication, and students reported that it felt like a more ‘authentic’ learning environment than PBL based in VLEs. Students responded enthusiastically to the environment, interestingly tending at first to treat it as a ‘game’. This (common) association of the look and feel of SL with online gaming may arguably be a limitation in the educational setting, in that it could encourage individualism rather than collaboration, and may simplify scenarios in which more nuanced critical engagement is required and no one clear solution is available.
However, it is also likely to be an advantage, in that it may increase student enjoyment and motivation via memorably novel forms of participation. This project developed an innovative approach to address problems faced by courses which wish to use collaborative scenario-based learning as a tool for the learning of competency, but are restricted in their opportunities for face-to-face learning. The approach took advantage of the new opportunities offered by immersive, three-dimensional multi-user virtual environments (MUVEs), which provide the authenticity of a simulated real-world environment and the open-ended nature of in-world activity. This may not be the first time that an attempt has been made to develop immersive scenarios; however, we believe this may be the first use of PBL in immersive worlds in this way. Furthermore, we believe this work goes some way towards engaging with the taxonomy suggested by Schmidt and Moust [11] for using problems in order to acquire different kinds of knowledge, rather than solving problems or covering subject matter. The importance of the work undertaken by Schmidt and Moust lies not only in the way they provide and explicate different problem types, but also in their exploration of the way in which the questions asked of students guide the types of knowledge with which students engage. A particular strength of Second Life as a learning environment is that it provides a variety of communication tools which are particularly important for PBL. Furthermore, to date problem-based learning has been seen as a relatively stable approach to learning, delineated by particular characteristics. Using PBL in Second Life embraces issues connected with complex curriculum design and the need for complex PBL scenarios to be developed. All the planned scenarios were
delivered, and significant changes were made during development to take most advantage of Second Life’s strengths. Students appreciated the value of Second Life as a collaborative environment, but also viewed such practice-based simulations as valuable for individual work. An interesting consequence of the richness and authenticity of the Second Life scenarios is the large amount of detail provided, much more than is usual in paper-based face-to-face PBL sessions. Second Life can provide a more authentic learning environment than classroom-based PBL and therefore changes the dynamic of facilitation, but at this stage it is not clear how such detailed virtual reality affects the way the scenario is used and facilitated. While the facilitators expressed the view that the scenarios produced were appropriate and fit for purpose, it is revealing that none would currently consider adopting them in a live presentation of a course. The main reasons concerned the time needed for facilitation, usability, and access to Second Life. Teachers considered that the time required for Second Life facilitation of large groups of students was very high compared to their normal commitment to face-to-face sessions with students. Clearly an approach which required such tutor facilitation would not succeed, and in retrospect it would have been appropriate to examine an alternative approach based on lead-student facilitation. The relatively high technology demands were considered a major barrier for students, since the Second Life programme issues severe warnings during initial downloading to those with PCs which fall below its optimum specification. This can be confusing: in many instances of such warnings the programme will in practice run quite adequately, but users will not necessarily proceed to install the programme to discover this for themselves.
During testing on university premises, there were minimal access problems and these caused only minor annoyance. However, students reported difficulties in obtaining access from halls of residence, and inadequate specification of their own machines. Facilitators highlighted this as an issue for distance learning with cohorts on these courses. Savin-Baden [12] points out that facilitation of PBL is itself a source of concern for many teachers and that there are differences and tensions to be resolved between online and face-to-face facilitation. This is an important area for further research. The user-guided collaborative development model enabled rapid and timely modification of the scenarios, and the complementary expertise in the multi-disciplinary team provided effective sharing of learning both within and beyond the team. Given the success of PREVIEW as a demonstrator, it is essential to build on these results to promote the embedding of scenarios in courses in terms of:

• Further development and research to develop models and understanding of good practice in areas such as scenario design in Second Life/MUVEs,
• Exploration of technology reuse and repurposing,
• Locating mechanisms to improve usability,
• The development of PBL facilitation practices for MUVEs.
6 Conclusion

The curriculum to some extent remains a partitioned-off space where the policy and expectations of governments are increasingly seen as given rather than negotiable,
contingent or contextual, in terms of space, place and discipline. This project has taken a user-centred approach and has provided a strong pedagogical underpinning to the use of virtual worlds in higher education. Developing open-source, pedagogically driven PBL scenarios such as these may offer a new liquidity to learning, combining technology with pedagogy in ways that are mutually beneficial not only in distance education, but also as a means to enrich the face-to-face learning environment. However, these environments must be examined not only in terms of the new freedoms they may afford, but also in recognition of their intermittently strange and ‘troubling’ nature, which may in itself provide potential for creativity [13]. In doing so, we may extend the scope of our enquiry: not only considering what ‘learning’ means in such spaces, but also addressing more fundamental questions raised, such as the nature of emergent modalities of educational communication, practices and identities in the ‘digital age’. Such a vision, however, will require that we stop seeing the curriculum as a predictable, ordered and manageable space, and instead re-view it as an important site of transformation characterised by risk, uncertainty and radical unknowability.
References

[1] Sharpe, R., Benfield, G., Lessner, E., DeCicco, E.: Learner Scoping Study: Final Report (2005), http://www.jisc.ac.uk/index.cfm?name=elp_learneroutcomes (November 19, 2007)
[2] Creanor, L., Trinder, K., Gowan, D., Howells, C.: LEX: The learner experience of e-learning final report (2006), http://www.jisc.ac.uk/uploaded_documents/LEX%20Final%20Report_August06.pdf (March 14, 2007)
[3] Conole, G., de Laat, M., Dillon, T., Darby, T.: JISC LXP: Student experiences of technologies – final report. JISC report (November 2006)
[4] Scalese, R.J., Obeso, V.T., Issenberg, S.B.: Simulation technology for skills training and competency assessment in medical education. J. Gen. Intern. Med. Suppl. 1, 46–49 (2008)
[5] Bergin, R., Fors, U.: Interactive simulation of patients – an advanced tool for student-activated learning in medicine & healthcare. Computers and Education 40(4), 361–376 (2003)
[6] Savin-Baden, M.: Problem-based Learning in Higher Education: Untold Stories. Open University Press/SRHE, Buckingham (2000)
[7] Conradi, E., Kavia, S., Burden, D., Rice, D., Woodham, L., Beaumont, C., Savin-Baden, M., Poulton, T.: Virtual patients in a virtual world: Training paramedic students. Medical Teacher (2009)
[8] Savin-Baden, M.: A Practical Guide to Problem-based Learning Online. Routledge, London (2007)
[9] Barrows, H.S., Tamblyn, R.M.: Problem-based Learning: An Approach to Medical Education. Springer, New York (1980)
[10] Land, R.: Paradigms lost: academic practice and exteriorising technologies. E-Learning 3(1), 100–110 (2006)
[11] Schmidt, H.G., Moust, J.: Towards a taxonomy of problems used in problem-based learning curricula. Journal on Excellence in College Teaching 11(2/3), 57–72 (2000)
[12] Savin-Baden, M.: Problem-based Learning in Higher Education: Untold Stories. Open University Press/SRHE, Buckingham (2000)
[13] Bayne, S.: Temptation, trash and trust: the authorship and authority of digital texts. E-Learning 3(1), 16–26 (2006)
Project-Based Collaborative Learning Environment with Context-Aware Educational Services

Zoran Jeremić1, Jelena Jovanović1, Dragan Gašević2, and Marek Hatala3

1 FON-School of Business Administration, University of Belgrade, Serbia
2 School of Computing and Information Systems, Athabasca University, Canada
3 School of Interactive Arts and Technology, Simon Fraser University, Canada

[email protected], [email protected], [email protected], [email protected]
Abstract. Teaching and learning software design patterns (DPs) is not an easy task. Apart from learning individual DPs and the principles behind them, students should learn how to apply them in real-life situations. Therefore, to make the learning process of DPs effective, it is necessary to include a project component in which students, usually in small teams, develop a medium-sized software application. Furthermore, it is necessary to provide students with the means for easy discovery of relevant learning resources and possible collaborators. In this paper, we propose an extensive project-based collaborative learning environment for learning software DPs that integrates several existing educational systems and tools on a common ontological foundation. The learning process in the suggested environment is further facilitated and augmented by several context-aware educational services.

Keywords: Semantic web, ontologies, collaborative learning, project-based learning, software patterns, context-awareness.
1 Introduction

The major concern of today’s software engineering (SE) education is to provide students with the skills necessary to integrate theory and practice; to have them recognize the importance of modeling and appreciate the value of a good software design; and to provide them with the ability to acquire special domain knowledge beyond the computing discipline for the purposes of supporting software development in specific domain areas. Software engineering students should learn how to solve different kinds of software problems both on their own and as members of a development team. As stated by many researchers [1], problem solving in SE is best learned through practice, and taught through examples. In addition, it is essential that students learn how to exploit previous successful experiences and the knowledge of other people in solving similar problems. This knowledge about successful solutions to recurring problems in software design is also known as software design patterns (DPs) [2]. However, the increasing number of software DPs, described in several different pattern forms and stored in many online repositories, makes it difficult for students to find an appropriate pattern that could be used in a specific situation.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 441–446, 2009. © Springer-Verlag Berlin Heidelberg 2009
All the above-mentioned specificities of learning software DPs indicate the need for a social constructivist approach in SE education, as well as for educational services that provide students with timely advice about learning resources and possible collaborators. In particular, an active learning paradigm is needed which recognizes that student activity is critical to the learning process [3]. Following this paradigm, we have developed DEPTHS (Design Patterns Teaching Help System) [4], a learning environment for project-based and collaborative learning of DPs. DEPTHS integrates an existing Learning Management System (LMS), a software modeling tool, diverse collaboration tools and relevant online repositories of software DPs. The integration of these different learning systems and tools into the DEPTHS environment is achieved by using ontologies. We have also built context-aware educational services that are available throughout the DEPTHS environment. These services enrich and foster learning processes in DEPTHS in two main ways: 1) by recommending appropriate learning content (i.e., Web page(s), lessons or discussion threads), and 2) by recommending peers (students, teachers or experts) who are currently dealing with or have experience in the same or a similar software problem. In this paper, we describe the pedagogical background that this comprehensive learning environment is based on – project-based learning and collaborative learning – as well as the context-aware educational support provided in the form of educational services available within each system and tool integrated into DEPTHS.
2 Project-Based Learning in DEPTHS

Bearing in mind that effective learning of software DPs requires a constructivist approach in the teaching process, we have identified the two most relevant theories in this field: project-based learning (PBL) and engagement theory. PBL is a teaching and learning model that organizes learning around projects. Projects comprise complex tasks and activities that involve students in a constructive investigation that results in knowledge building. Engagement theory is based upon the idea of creating successful collaborative teams that work on tasks that are meaningful to someone outside the classroom [5]. Its core principles are summarized as “Relate”, which emphasizes characteristics such as the communication and social skills involved in team effort; “Create”, which regards learning as a creative, purposeful activity; and “Donate”, which encourages learners to position their learning in terms of wider community involvement. Later research inspired by this approach suggests a generic framework, the Genex framework [6], that describes four phases a creative process will most likely pass through: “Collect”, which covers searching and browsing digital libraries and visualizing data and processes, followed by “Relate”, “Create” and “Donate”. Based on the guidelines for teaching SE to students (e.g., [1]) and our own teaching experience, we believe that the presented theories provide a solid basis for effective learning of software DPs. Accordingly, we based the DEPTHS framework on them. A typical scenario for learning software patterns with DEPTHS assumes a PBL approach with collaborative learning support (Figure 1). In particular, a teacher defines a specific software design problem that has to be solved in a workshop-like manner by performing several predefined tasks: brainstorming, creating and submitting solutions, evaluating solutions, etc.
These activities enable and even foster active learning, which has a strong foundation in engagement theory and the Genex framework.
Brainstorming has its foundation in two Genex phases, collect and relate. First, a student is asked to present his ideas about possible ways of solving the problem under study and to discuss and rate his peers’ ideas. In order to get enough information to perform this task, he needs to search online repositories of software DPs and other related course content. DEPTHS makes this search more effective by providing semantically-enabled, context-aware learning services for finding related online and internally produced resources (Figure 1B). Moreover, to get some initial directions on performing the task, the student uses the semantically-enabled peer-finding service (Figure 1A) to find people who share his interests and are engaged in similar problems.
Fig. 1. An example learning scenario with DEPTHS: problem-based learning with collaborative learning support (titles in clouds indicate Genex’s framework phases)
The Genex phase create is reflected in several DEPTHS activities, namely exploring earlier work on similar problems, creating design artifacts using the software modeling tool, and evaluating peers’ solutions. Having acquired the required knowledge, students complete the deliverable using the software modeling tool. This kind of learning activity requires students to externalize their knowledge, to analyze possible solutions and to provide a design rationale. After completing the project, students are asked to evaluate their own and other students’ projects. Students reflect critically on their own and others’ contributions, and acquire knowledge about other possible solutions. The Genex donate component in DEPTHS stresses the benefits of having authentic deliverables that will be meaningful and useful to someone else.
3 Educational Services in DEPTHS

Over the last couple of years, context-aware learning has gained constantly increasing attention from the e-learning research community. Despite different interpretations of the term ‘context’ by different authors, researchers seem to agree that a learning context comprises the environment, tools, resources, people (in terms of social networking), and learning activities. More specifically, context in learning systems is mostly characterized by the learners, the learning resources and a set of learning activities that are performed in the light of a specific pedagogical approach [7]. In order to provide effective, context-aware learning in DEPTHS, we have developed supporting services: a Semantic Annotation Service and Context-aware Learning Services.

The Semantic Annotation Service is used for annotating online resources in publicly accessible repositories of DPs, as well as software models created by students. It analyses the text of each resource, recognizes specific domain topics (i.e., software DP names), creates semantically rich metadata (based on the ontology of software DPs) and stores it in the DEPTHS repository of learning objects. The context-aware learning services are accessible to all systems and tools integrated in the DEPTHS framework and are exposed to end users (students) as context-aware learning features. They are based on Semantic web technologies, and include:

- Web resource finding. Based on the student’s current learning context, this service generates a list of recommended Web resources from publicly accessible repositories of software DPs. To do this, it computes the relevance of each resource (i.e., Web page) available from these repositories for the student’s current learning context and selects the most relevant pages for the student. The computation of the relevance of a Web resource is based on two kinds of semantic metadata: 1) the semantic metadata assigned to the resource by the Semantic Annotation Service and 2) the formal (i.e.
ontology-based) representation of the student’s current learning context.

- Discovery of relevant internally produced resources. This service suggests internally created resources (e.g., discussion threads, comments, etc.) that could be useful for a student solving a problem at hand in the given learning context. The computation of relevance is done in a similar manner to the one applied for external Web resources.

- Discovery of experts, teachers and peers. Based on the current learning context, this service suggests other students or experts as possible collaborators. Collaborators are selected and sorted using an algorithm which considers their competences on three different levels: the same content (i.e., the current software problem), similar or related learning content (i.e., a similar software problem) and broader content (i.e., a software problem in the same course). Estimation of a peer’s competence on each level is performed by assessing three types of competence indicators: 1) participation in learning activities, 2) knowledge level estimated by the teacher and through peers’ evaluations, and 3) social connections with the peer asking for help.

We have implemented DEPTHS by leveraging open-source solutions and extending them with Semantic web technologies. Specifically, we have integrated the Moodle (http://moodle.org) LMS, the ArgoUML (http://argouml.tigris.org) software modeling tool, the OATS (Open Annotation and Tagging System) tool for collaborative tagging and highlighting (http://ihelp.usask.ca/OATS) and the LOCO-Analyst tool to provide
teachers with feedback regarding students’ activities [7]. Moreover, we have extended both Moodle and ArgoUML with the above-described context-aware educational services and developed a Moodle module for project-based collaborative learning.
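The collaborator-ranking idea behind the peer-discovery service can be sketched as follows. The paper names the three content levels and the three competence indicators but does not publish the actual formula, so the level weights and the equal weighting of the indicators in this Python sketch are illustrative assumptions, not the implemented algorithm:

```python
from dataclasses import dataclass

# Hypothetical level weights: competence on the exact problem counts more
# than competence on similar or broader course content.
LEVEL_WEIGHTS = {"same": 1.0, "similar": 0.6, "broader": 0.3}

@dataclass
class Competence:
    participation: float  # indicator 1: participation in learning activities (0..1)
    knowledge: float      # indicator 2: teacher- and peer-estimated knowledge (0..1)
    social: float         # indicator 3: social connection to the help-seeker (0..1)

    def score(self) -> float:
        # Equal weighting of the three indicators (an assumption).
        return (self.participation + self.knowledge + self.social) / 3.0

def rank_collaborators(candidates: dict) -> list:
    """Sort candidate peers by competence aggregated over the content levels.

    `candidates` maps a peer name to a dict {level: Competence}, where level
    is one of "same", "similar" or "broader".
    """
    ranked = [
        (peer, sum(LEVEL_WEIGHTS[level] * comp.score()
                   for level, comp in per_level.items()))
        for peer, per_level in candidates.items()
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```

Under this weighting, a peer who is moderately competent on the exact problem outranks one who is highly competent only on broader course content, which matches the level ordering the service description implies.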
4 Evaluation

The evaluation of DEPTHS was conducted in February 2009, in the context of a software development course taught at the Department of Computer Science of the Military Academy in Belgrade, Serbia, with a group of 13 fifth-year students of the computer science program. The students were divided into four groups (three groups of three students and one group of four), based on the teacher's subjective assessment of their knowledge and their grades in SE-related courses. The aim of the evaluation was to determine how effective DEPTHS is for learning DPs. Specifically, we wanted to check whether students' active involvement in real-world problems and the employment of context-aware educational services could make imparting knowledge in the domain of software development more effective. We used an interview to collect data about the students' satisfaction with and attitudes towards learning with the DEPTHS system, as well as their perceptions regarding the effectiveness of learning with DEPTHS. The DEPTHS system received high marks from the students. The majority (53.85%, i.e., 7 of 13) reported that they had learned as effectively as in a traditional setting, and 30.77% (4 of 13) reported that they had learned more. The students found the system intuitive and very easy to use (76.92%), although they also reported some technical issues caused by a bug in the software modeling tool. The students considered the educational services provided in DEPTHS very helpful: Web resource recommendation, 92.30%; course content recommendation, 84.61%; and peer recommendation, 76.92%. They also thought that the activities provided within the tasks contributed considerably to the learning process (brainstorming, 76.92%; evaluating each other's work, 100%).
5 Discussion and Conclusions

The work presented in this paper relates to two prominent research fields: collaborative learning in the domain of SE, and context-aware learning. Even though extensive work has been done in both fields, to the best of our knowledge there have been very few attempts to develop collaborative learning environments that support knowledge creation and sharing through a collaborative learning process based on active learning principles. Unlike other learning environments that rely on a similar PBL-based approach (i.e., students learn collaboratively by solving practical problems), such as the one presented in [8], DEPTHS offers higher learning potential by providing access to relevant learning resources, allowing for sharing and commenting on produced learning artifacts, and facilitating context-aware learning (i.e., context-aware retrieval of peers and of formal and informal learning content). In addition, having grounded our approach in pedagogical theories and best practices of collaborative learning, we can expect to provide students with a better learning experience than systems (such as the
446
Z. Jeremić et al.
one presented in [9]) that do consider the learner's current context when recommending learning resources but fail to provide an adequate pedagogical baseline, or systems (e.g., [10]) that fail to consider students' participation in learning activities as one of the relevant factors determining their competence in a given topic. Aiming to develop a learning environment that allows for effective learning of software DPs, we leveraged semantic technologies to integrate several existing learning systems and tools, and to develop context-aware educational services. Our present implementation and first evaluation results convince us that this environment could significantly contribute to effective teaching and learning of DPs. Semantic Web technologies facilitate the development of educational services that make the search for relevant resources and potential peers fast and effective. We are encouraged by the results of the initial evaluation study, which show a very positive student attitude toward learning in the DEPTHS environment. The students' perception of the system's usefulness is valuable and encouraging for our further research. However, the results do not yet have statistical power, as the participant sample was too small. Further research with a sufficient number of participants is required to ensure the general applicability of the findings. In addition, in future work we intend to conduct a more precise evaluation of each specific educational service, as well as a quantitative evaluation of the students' learning effectiveness.
References

1. Jazayeri, M.: The Education of a Software Engineer. In: Proc. of the 19th IEEE Int'l Conf. on Automated Soft. Eng., pp. xviii–xxvii (2004)
2. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading (1995)
3. Warren, I.: Migrating to a Teaching Style that Facilitates Active Learning. CiLTHE Stage 1 Dissertation, Lancaster University (2002)
4. Jeremic, Z., Jovanovic, J., Gasevic, D.: Towards a Semantic-rich Collaborative Environment for Learning Software Patterns. In: Dillenbourg, P., Specht, M. (eds.) EC-TEL 2008. LNCS, vol. 5192, pp. 155–166. Springer, Heidelberg (2008)
5. Kearsley, G., Shneiderman, B.: Engagement Theory: A Framework for Technology-Based Learning and Teaching (1999), http://home.sprynet.com/~gkearsley/engage.htm
6. Shneiderman, B.: Creating Creativity: User Interfaces for Supporting Innovation. ACM Trans. on Computer-Human Interaction 7(1), 114–138 (2000)
7. Jovanović, J., Gašević, D., Brooks, C., Devedžić, V., Hatala, M., Eap, T., Richards, G.: Using Semantic Web Technologies for the Analysis of Learning Content. IEEE Internet Computing 11(5) (2007)
8. Baghaei, N., Mitrovic, A., Irwin, W.: Supporting Collaborative Learning and Problem-Solving in a Constraint-Based CSCL Environment for UML Class Diagrams. International Journal of CSCL 2(2-3), 150–190 (2007)
9. Ghidini, C., Pammer, V., Scheir, P., Serafini, L., Lindstaedt, S.: APOSDLE: Learn@work with Semantic Web Technology. In: I-Know 2007, Graz, Austria (2007)
10. Yanlin, Z., Yoneo, Y.: A Framework of Context Awareness Support for Peer Recommendation in the E-learning Context. British Journal of Educ. Techn. 38(2), 197–210 (2007)
Constructing and Evaluating a Description Template for Teaching Methods

Michael Derntl¹, Susanne Neumann², and Petra Oberhuemer²

¹ University of Vienna, Research Lab for Educational Technologies, Rathausstrasse 19/9, A-1010 Wien, Austria
² University of Vienna, Center for Teaching and Learning, Porzellangasse 33a, A-1090 Wien, Austria
{michael.derntl,susanne.neumann-heyer,petra.oberhuemer}@univie.ac.at
Abstract. There are numerous formats for representing good teaching practice, for instance, case study collections, pattern repositories, and experience reports. Yet, none of these representation formats has found broad acceptance among teaching practitioners. In this paper, a new description template for teaching methods is proposed. The template was constructed based on an analysis of previous formats for describing teaching methods, and by synthesizing findings regarding user requirements during browsing, selecting, developing, and implementing teaching methods. The resulting template was then evaluated. In a first evaluation phase, more than twenty international participants were asked to use the template to describe their teaching methods and to judge the template against validation criteria. In a second evaluation phase, the teaching method descriptions were exchanged and participants suggested modifications. This paper presents the first version of the description template, the results of the two evaluation phases, and a discussion of the findings.
1 Introduction
Teaching practitioners possess considerable experience related to teaching strategies and methods applied to achieve intended learning outcomes. Their knowledge, however, is mostly implicit, and the concepts they draw on when deciding on teaching strategies are based upon prior examples [1]. Beyond that, they are generally little accustomed to systematically documenting their teaching strategies and methods for exchange and further use. The questions thus are how instructors obtain knowledge about different teaching methods and how they might want to capture the essence of the methods they apply. In 2003, the IMS Global Learning Consortium published the IMS Learning Design (IMS LD) specification [2], which prescribes a standardized modeling language for representing teaching methods as descriptions of teaching and learning processes that can be executed by a software system coordinating all involved people, resources and services. The downside of the specification, however, is that it is not easy to understand and work with [3]; only specialists, who have spent considerable time working with the specification, are able to use it properly and to its
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 447–461, 2009.
© Springer-Verlag Berlin Heidelberg 2009
448
M. Derntl, S. Neumann, and P. Oberhuemer
full extent. Because of the slow uptake of the IMS LD specification, several initiatives have begun to foster the distribution and reuse of teaching methods, resulting in a great number of competing formats for describing teaching methods. None of these formats, however, has yet reached wide acceptance (compare, e.g., [4]). In the context of the ICOPER project¹, which is co-funded by the European Commission in the eContentplus programme, the two abovementioned efforts, the documentation of teaching methods and the use of the standardized modeling language IMS LD, will be brought together. This is achieved by first developing a fit-for-purpose teaching method description template, and second, by examining the potential of the IMS LD specification's concepts and language to model the teaching methods described with it. This paper describes the development efforts and evaluation results regarding the teaching method description template. The use of a template-based approach to describing teaching methods, as opposed to free-form or narrative approaches, is grounded in the fact that a structured template provides a scaffold for more effective and efficient authoring, communication, searching, and comparison of descriptions, particularly when descriptions are provided by a multitude of practitioners and authors. This is aligned with the current trend towards more consistent and common ways of documenting teaching methods [5]. The paper is structured as follows. The next section introduces the methodology behind the construction of the description template and the resulting template. Section 3 details the process and outcome of collecting teaching methods from instructors using the new template; it also presents the process of evaluating the description template and discusses the outcomes of the evaluation. Section 4 concludes the paper and gives an outlook on further research.
2 Constructing the Teaching Method Description Template

2.1 Methodology
To approach the task of constructing a new teaching method description template, fourteen existing description schemas for teaching methods were reviewed, including pattern catalogs (e.g., [6]), learning design repositories (e.g., [7]), and pedagogic scenario collections (e.g., [8]). These collections employed different descriptive elements (e.g., title, objectives, learning outcomes, activities) for describing teaching methods. In a first step, we analyzed the frequency of occurrence of descriptive elements. The process of distilling elements from the collections was executed as follows:

1. Extract the elements that are used to describe teaching methods in the collection;
2. Match each element to a previously identified element of another collection, or define a new element that is distinct from the existing ones.

¹ http://icoper.org
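The two-step distillation procedure described above amounts to building a frequency matrix of description elements across collections. A minimal sketch, with invented element names and collections (the paper's actual fourteen schemas are not reproduced here):

```python
from collections import Counter

# Hypothetical mapping of reviewed collections to the descriptive
# elements they use; names are illustrative, not the actual data.
collections = {
    "pattern_catalog": ["title", "context", "sequence/activities", "resources"],
    "design_repository": ["title", "learning outcomes", "sequence/activities"],
    "scenario_collection": ["sequence/activities", "group size", "title"],
}

# Steps 1+2: extract each collection's elements (already matched to a
# common vocabulary here), then count in how many collections each occurs.
frequency = Counter(el for els in collections.values() for el in els)

# The most frequently occurring elements are candidates for the template.
top = frequency.most_common(2)
```

In practice the matching in step 2 is a manual judgment call (e.g., deciding that "aims" and "goals" are the same element); the counting itself is the mechanical part.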
Constructing and Evaluating a Description Template for Teaching Methods
449
While the frequency of occurrence of an element in existing collections is certainly useful information, it cannot reasonably be used as the exclusive criterion for creating a new description template. Therefore, a method for the final selection of elements was established. The foundation for this process was the JISC-funded Mod4L project [9], which investigated the information needs of instructors during browsing, selecting, developing, and implementing teaching methods. The Mod4L researchers collected those information needs during practitioner workshops. They identified over 60 information elements and collected practitioner "requests" for these elements during the four phases mentioned above. Since the outcomes of this project were obtained through practitioner involvement, we decided to rely, as a first step, on a selection of the elements that were requested most frequently. However, we discovered that after this step some elements that were prominently included in the collections we analyzed were still missing. In a second step, we therefore complemented the elements from the Mod4L workshops with frequently occurring elements from our aggregation matrix and with necessary elements such as copyright.

2.2 Results
As a result of the above methodology we identified more than forty distinct elements from existing collections of teaching practice. The top ten elements were: sequence/activities (14 occurrences), delivery context (13), educational approach (12), learner profile (12), name/title (9), group size (9), author (8), resources (8), goals/aims (7), and tools (7). After the second step, in which the identified elements were complemented and adapted based on findings from the Mod4L project, the resulting set of description elements was split into two sections: a "teaser" section offering essential information for browsing and selecting, and a detailed description section including information oriented more towards application. The teaser section includes the following elements²:

– Name: Name of the teaching method (Example: Jigsaw)
– Author and copyright: Name and optional contact information of the person who filled out this teaching method description, as well as copyright information (Example: Jane Doe, University of Nowhere, [email protected]; Copyright: Creative Commons)
– Summary/thumbnail: Overview of teaching and learning activities in this teaching method; quick information about key points of the teaching method (Example: The Ten-Plus-Two method is used to break up long presentations. The instructor presents for ten minutes, then learners reflect for two minutes. Repeat.)
– Rationale for teaching method: Why and when the method is being used (Example: To foster active participation of and communication between students.)

² The brief description and example for each element provided in this list was also available to teaching method authors in the template, which we distributed as a Microsoft Word document.
– Subject/discipline: In what (topical) area of study this teaching method can be used (Example: Civil engineering, geo-technology and hydraulic engineering)
– Learning outcomes: The intended goals for learning (Example: Learners are able to calculate forces on dams.)
– Group size: The approximate number of participants suitable for this teaching method (Example: The method is ideal for 15–20 participants, max. 30 participants.)
– Duration: The amount of time it takes to complete the teaching method when it is being used/implemented (Example: 2 hours; if it is a large group, 3 hours)
– Learner characteristics: Description of the "target group" of this teaching method, i.e., the learners' age, level within the curriculum, previous knowledge, special attributes, or qualities (Example: 15–35 years of age, introductory stage in college, high knowledge of technology)
– Type of setting: The setting in which the teaching method is intended to be implemented (Example: Distance learning, blended learning, face-to-face)

Following the teaser, the detailed information section includes the following descriptive elements:

– Graphical representation: A depiction of the teaching method (Example: flow chart, activity diagram, swim lanes). The template included an example screenshot of an activity diagram here, and a hyperlink to the Graphical Learning Modeler [10], an open-source tool for modeling teaching methods and units of learning in compliance with IMS Learning Design [2].
– Sequence of activities: Detailed description of all activities (including assessment) performed by the participants as part of the teaching method, as well as the activities' temporal sequence (Example: 1. [Presenter] Present the concepts to be learned for ten minutes; 2. [Learner] Share and reflect together with another learner on what has been presented in the last ten minutes; 3. [Presenter] Repeat steps 1 and 2 as necessary.)
– Roles: Name and short description of the roles that participants take within the teaching method (Example: tutor, moderator, discussion participant, expert)
– Type of assessment: The intended method for assessing learners' progress and learning outcomes (Example: portfolio, multiple-choice test, oral exam)
– Resources: Detailed description of the requirements for implementing the teaching method, including room equipment, IT infrastructure, software, virtual learning environment, personnel resources, learning materials, and other supports (Example: flip chart, projector, forum or chat, at least 5 tutors, facilitator's toolkit, study guide)
– Alternatives: Description of possible variations of the teaching method (Example: To ensure that all participants contribute ideas during brainstorming, you may use note cards for collecting ideas instead of contributing ideas by shouting. Each participant writes their ideas on note cards and then shares them publicly.)
– Teacher reflection: Description of experiences that teachers have had when implementing the teaching method, benefits and opportunities, risks and threats (Example: Method works well when learners are active contributors. Preparatory effort of this method is high.)
– Student feedback: Description of feedback that students have given when they learned with the teaching method (Example: Students liked the active participation during this method. Some students were afraid of the ill-structured nature of the method, because a lot of the responsibility is shifted to the students' side. This may cause discomfort.)
– Peer review: Evaluation of the quality of the teaching method by a qualified peer or a colleague instructor (Example: The teaching method fulfills five of the seven principles of good teaching practice [11].)
– Comments: Any comments from people who have read or applied the teaching method.
– References: Any references to the original source of the teaching method, background literature, or resources used within the method (Example: Reigeluth, C.M. (1999). Elaboration Theory. In Reigeluth: Instructional Models – The New Paradigm. Mahwah, NJ: Lawrence Erlbaum.)
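For tool builders, the teaser and detailed sections listed above could be rendered as a simple machine-readable structure. The field names below are our own flattening of the template's elements, not part of any specification:

```python
# Hypothetical machine-readable rendering of the description template.
# Field names mirror the template's elements; nothing here is standardized.
TEASER_FIELDS = [
    "name", "author_and_copyright", "summary", "rationale",
    "subject_discipline", "learning_outcomes", "group_size",
    "duration", "learner_characteristics", "type_of_setting",
]
DETAIL_FIELDS = [
    "graphical_representation", "sequence_of_activities", "roles",
    "type_of_assessment", "resources", "alternatives",
    "teacher_reflection", "student_feedback", "peer_review",
    "comments", "references",
]

def new_description(**values):
    """Create a description with every template element present, so that
    empty elements remain visible instead of silently missing."""
    fields = TEASER_FIELDS + DETAIL_FIELDS
    unknown = set(values) - set(fields)
    if unknown:
        raise ValueError(f"unknown elements: {unknown}")
    return {f: values.get(f, "") for f in fields}

jigsaw = new_description(name="Jigsaw", group_size="15-20, max. 30")
```

Keeping empty elements explicit matters for the phase II findings below, where elements left blank (peer review, comments) drew removal suggestions simply because they were invisible when unused.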
3 Evaluating the Template

The evaluation of the teaching method description template proceeded in two consecutive phases. In phase I (see Section 3.1), evaluators were asked to describe one of their teaching methods using the description template, and to judge the template on various criteria of good descriptions. In phase II (see Section 3.2), evaluators were asked to read teaching methods written by other evaluators, to judge their confidence in implementing the method, and to make suggestions on modifying the elements included in the template. Both evaluation phases, the obtained results, and a discussion of the results are presented in this section.

3.1 Evaluation Phase I
Methodology. Evaluators were asked to partake in the evaluation if they had teaching experience in higher education. Some evaluators additionally had research experience regarding higher education teaching. Evaluators performed two tasks during the first phase of the evaluation. They first described one teaching method from their own teaching context using the template. They were not told what level of specificity to choose when describing their teaching method, as it was part of the evaluation to see what the instructors would choose as their preferred level of description, and how much of the context of the actual implementation of the teaching method would remain in the description. After the evaluators had described their teaching method using the template, they rated the appropriateness of the template. For judging, criteria for good
descriptions were used, which were derived, among others, from Lambe [12]. The criteria were formulated as statements, and the participants had to rate each statement on a five-point Likert scale: strongly agree (5), agree (4), neither agree nor disagree (3), disagree (2), or strongly disagree (1). The participants (n = 22) were also asked to provide additional comments on any of the criteria. The following criteria/statements were used:

– Completeness: the template covers all relevant aspects of a teaching method; there are no descriptive elements missing.
– Clarity: it was clear what the descriptive elements in the template meant.
– Allocation: it was easy to allocate information regarding the teaching method within the descriptive elements of the template.
– Understandability: the descriptive elements in the template support the reader's understanding of the teaching method.
– Distinctiveness: the descriptive elements of the template are distinctive, i.e., they do not overlap.
– Appropriateness: a structured template (compared to, for instance, a narrative) is an appropriate instrument for representing teaching methods to support readers in browsing, selecting and implementing teaching methods.
– Reusability: the template supports reusability of described teaching methods for myself as well as for others.
– Added value: filling out the template provides added value for my own work/teaching (e.g., it fosters personal reflection, supports documentation, fosters exchange with colleagues, etc.).
– Durability: the template seems durable, i.e., it seems unlikely that it needs to be changed in the (near) future.
Results. We collected 34 highly diverse teaching method descriptions. These descriptions included well-known teaching methods such as role play, brainstorming and reflection, creative workshops, e-portfolios, peer-to-peer teaching, and project-based learning; in addition, they included a wealth of teaching methods that are not so commonly known, e.g., resource-based analysis, online reaction sheets, image sharing, or constellation. More than half of the teaching methods had a typical duration of less than a day. Blended and online teaching methods predominantly had longer durations, i.e., days to weeks. Most of the teaching methods had online elements (i.e., are either blended or purely online), and about one third had a setting that was primarily face-to-face. Only a handful of teaching methods were described using all or nearly all elements of the template. Also, the amount of information provided varied greatly, with some evaluators providing extensive descriptions, while others provided rather brief, bullet-list descriptions. The ratings that the evaluators gave in their second task are summarized in the histogram in Fig. 1. Results are reported in terms of the average rating (M), the standard deviation of ratings (SD), and Pearson's correlation coefficient (r) with its associated significance level (p).
Fig. 1. Evaluation of the teaching method description template with respect to nine different criteria (Likert scale: 1 = strongly disagree . . . 5 = strongly agree; n = 22). Mean ± SD per criterion: Completeness 4.00 ± 0.77; Clarity 4.33 ± 0.66; Allocation 4.05 ± 0.80; Understandability 4.10 ± 0.68; Distinctiveness 3.29 ± 1.27; Appropriateness 3.90 ± 1.04; Reusability 4.05 ± 0.74; Added value 3.62 ± 1.07; Durability 3.43 ± 1.08
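The statistics reported in this section (M, SD, Pearson's r) can be reproduced from raw Likert ratings. A minimal sketch with invented ratings, since the study's raw data are not published here:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    # Sample standard deviation, as commonly reported for Likert data
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two rating vectors
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den

# Invented ratings on a 1-5 Likert scale for two criteria
clarity = [5, 4, 4, 5, 3, 4, 5, 4]
distinctiveness = [4, 3, 4, 5, 2, 3, 5, 3]

M, SD = mean(clarity), stdev(clarity)
r = pearson_r(clarity, distinctiveness)
```

Note that the significance levels (p) quoted in the text additionally require a significance test on r (e.g., a t-test with n - 2 degrees of freedom), which is omitted here.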
Five of the above criteria were rated with an average score of at least 4 points: completeness, clarity, allocation, understandability and reusability. These criteria also have the lowest standard deviation values of all nine criteria, indicating considerable agreement among participants. The template was broadly considered complete (M = 4.05, SD = .79). However, some participants provided additional comments on what aspects were still missing. In particular, participants who rated the completeness of the template with a value of 3 or lower missed elements where activities are described in more detail or where one could link the teaching method to theory, or wished for information on how the method is embedded within other teaching methods. Another remark was that it was hard to capture the essence of a teaching method in purely written form. Some of these comments evidently refer to missing elements that are in fact included in the template: for instance, it was not prohibited to include pictures in the text, and there was a dedicated element for providing a graphical representation of the teaching method. Clarity received the highest average rating (M = 4.32, SD = .66), indicating that the descriptive elements of the template have meaningful titles and purposes. Correlation analysis using Pearson's correlation showed that clarity has a significant positive correlation with the judgment of the template's distinctiveness (r = .753, p < .01) and of its appropriateness (r = .552, p < .01), respectively. This seems plausible, since a clear meaning of elements also helps in distinguishing the elements, and it supports a positive judgment of the appropriateness of a structured description template. Every person has his/her own mental representation of a teaching method. The allocation of those "mental chunks" to the elements of the template seemed
to be easy for most participants (M = 4.05, SD = .79). Note that this criterion has a positive correlation with understandability (r = .440, p < .05), appropriateness (r = .481, p < .05) and added value (r = .576, p < .01), indicating that people who find it easy to allocate information to descriptive elements in the template also perceive added value (e.g., support for reflection and documentation) in filling out the template. It also seems reasonable to assume that people who find it easy to allocate information to elements of a structured description tend to expect that this description will be easy for readers to understand. The most controversial criterion was the distinctiveness of the template's elements, which had the lowest average value (M = 3.32) and the highest deviation (SD = 1.25). However, low distinctiveness is not necessarily a negative property of a set of description elements. For example, the elements describing roles, sequence of activities, and graphical representation were deliberately designed to be non-distinctive in the sense that they view the same concepts from different perspectives. As one comment reads, "there is some overlap but I think that may be inevitable." The appropriateness of using a structured description template for describing teaching methods was rated fairly high (M = 3.91, SD = 1.02). Participants who gave high ratings for clarity, allocation, distinctiveness and added value also tended to give a high rating to appropriateness (p < .05, respectively). Reusability of teaching method descriptions using the template was rated high (M = 4.00, SD = .76). However, as one participant commented, this judgment is "based on expected benefit, not on real experience." Another participant stated that the template might help her reuse the teaching method but may not help others. The actual reusability of teaching method descriptions needs to be tested in practice.
The added value perceived during the writing of the teaching method description received a moderately high average rating with considerable deviation (M = 3.55, SD = 1.10). The comments were mixed. For instance, one participant mentioned that "working on activity descriptions is [hard but] this is the core of reflecting on learning situations." Others mentioned that "there's no particular incentive" in filling out the template and that "reflecting and sharing might be quite different processes [needing] quite different types of descriptions and templates." The rating data showed that judgments of added value correlate with easy allocation of information to template elements (r = .576, p < .01) and with perceiving structured templates as an appropriate description format for teaching methods (r = .428, p < .05). Finally, the durability of the template was rated moderately high (M = 3.41, SD = 1.05), with some hoping for changes ("I hope the order of presentation changes") and some simply stating that this would be "difficult to predict."

3.2 Evaluation Phase II
Methodology. The purpose of phase II was to evaluate the effectiveness of the template with regard to transferring a teaching method described by someone else into one's own teaching context. For this phase, evaluators received an evaluation form consisting of three parts:
1. A teaching method description that was written by another evaluator;
2. A continuous rating scale ranging from "very confident" to "not at all confident" for judging how highly evaluators estimated their ability to transfer the described teaching method into their own teaching contexts;
3. A form that asked evaluators to choose up to three description elements (like duration or group size) they would remove from the template, and to specify up to three description elements they would add to the template. This part also included a text box to justify each suggestion for adding or removing elements.

Of the 34 teaching methods collected in phase I, three were selected for the phase II evaluation. These three teaching methods were selected because they were described with extensive and cohesive information, and because they represented a range of teaching methods differing especially in terms of duration, type of setting, and the number of resources needed during implementation. The selection of three diverse teaching methods was grounded in the assumption that the final evaluation of the description template (third part of the phase II evaluation form) would be less biased towards a particular type of teaching method, and thus more representative, if the teaching methods differed inherently. The three teaching methods chosen for phase II were:

– Role Play: During a lecture class, learners team up to take on the roles of student, teacher, and observer in order to teach each other topic-related concepts.
– WebQuest: An inquiry-oriented activity in which learners complete a task using (pre-selected) websites. Afterwards, the learners' task results are evaluated.
– Image Sharing: Pre-service teacher students document practical teaching activities using images with captions. The image collection is shared with all class members.
Evaluators participating in phase II all received evaluation forms set up as described above. However, sixteen evaluators read all three selected teaching methods and assigned a transfer confidence rating to each, while seventeen evaluators read and rated a single teaching method. All evaluators filled out one form to suggest modifications to the template by eliminating or adding elements.

Results. During phase II of the evaluation, 22 confidence ratings for the Role Play teaching method, 21 confidence ratings for the WebQuest teaching method, and 22 confidence ratings for the Image Sharing teaching method were collected. Overall, there were 33 evaluations suggesting modifications of elements in the template. The evaluators suggested 15 distinct elements to be removed from the template, amounting to almost three quarters of all elements currently in the template. Eight elements received three or more nominations for removal. These elements are listed in Table 1, along with their number of nominations and examples of comments provided by evaluators.
M. Derntl, S. Neumann, and P. Oberhuemer
Table 1. Elements proposed by evaluators to be removed from the description template, including number of nominations and example comments

Element                  | # | Extracts from comments
Peer review              | 9 | “Usually empty” — “It seems interesting, but not available” — “Not useful in any of the three cases” — “I do not understand what that is” — “Might be very subjective”
Graphical representation | 7 | “Is repeated in sequence part” — “Too complicated for rather self speaking methods” — “I am used to other graphical representation, so I do not consider the current one very helpful or intuitive” — “Sketch or situation photo might be more helpful”
Alternatives             | 5 | “Not very important” — “I cannot see how one can find alternatives – maybe variations” — “I should be able to find my own alternatives for my own settings”
Roles                    | 5 | “Do not need to be redefined separately if you already describe the activity first” — “Already described in the sequence of activities” — “The introduction makes it quite clear which roles are taken by whom, thus I think this point is just not necessary”
Comments                 | 4 | “Redundant to teacher reflection” — “Usually empty”
Duration                 | 4 | “Depending on the target group” (3x) — “The duration can be roughly estimated [by the reader]”
As evident from Table 1, “peer review” received the most nominations (9) for removal. The problem with this element was that it was empty in all three of the selected teaching methods. Accordingly, some evaluators who suggested removing it commented that “it was usually empty.” If provided, this element can be a useful place for colleagues’ opinions on the teaching method, e.g. regarding criteria of good teaching practice. One evaluator suggested renaming this element to “evaluation of the method”, which might be a more intuitive title. Similar feedback was given regarding the elements “comments” and “student feedback”, since they were also mostly empty in the teaching methods. Naturally, empty description elements are not seen as helpful. The frequent votes to remove “graphical representation” came as a surprise, since this element was intended as a complementary visual representation of the “sequence of activities” to help readers gain a quick overview and maintain it while working through the teaching method. The frequent nominations for removal of this element may have been caused by the visualization using the software Graphical Learning Modeler³: all three of the selected teaching methods were visualized using this software. Evaluators often mentioned that they could not interpret this particular activity diagram (e.g. they did not understand the icons used in the activities), which is specific to the software environment and not targeted towards general teaching method depictions. On a positive note, the evaluations of the teaching methods frequently contained comments
³ http://sourceforge.net/projects/prolix-glm
Constructing and Evaluating a Description Template for Teaching Methods
Table 2. Elements proposed by evaluators to be added to the description template, including number of nominations and example comments

Element            | #  | Extracts from comments
Examples           | 14 | “Briefly described, it would clarify the method” — “Image of a real setup: to get it fast explained” — “Useful to have a comparison with a real case” — “Would make it easier to understand the teaching method” — “The general description helps to understand the idea [...], but an example is very useful for the fine tuning”
Potential problems | 5  | “Key Issues: A section with main clues and critical issues” — “Threats/weaknesses of the teaching methods: To know, what could be a possibility for ‘failure’” — “Liabilities/drawbacks: Is there anything that could go wrong?”
Background         | 4  | “Description of the background (theory, research) to get a better insight” — “Foundations: [...] the theoretical background of a method” — “Source: where does the method come from”
Preparation        | 3  | “Preparation time and reusability: [...] how much time [the teacher] will spend for this” — “Preparation and post-processing activities” — “What has to be done in advance by the teacher and students?”
in which evaluators mentioned that the graphical representation helped them reach a better understanding by visualizing the sequence of activities, explaining the method, and providing a quick overview. The element “alternatives” seems to have caused some confusion; as one evaluator commented, it should probably be called “variations”, since the intent is to provide variations of the teaching method, not alternatives to it. This was also suggested twice among the elements to be added to the template (see below). The “roles” element was also proposed for removal, since the participating roles are evident from the description, e.g. from the “sequence of activities”. Although seemingly redundant, a “roles” element in the template allows describing a role that would not otherwise be covered in the sequence of activities. However, higher education instructors are not necessarily accustomed to defining roles in teaching situations [13], and may therefore not perceive this element as useful. “Duration” and “subject/discipline” are, as evaluators mentioned, heavily dependent on the actual implementation of the method in a specific context; they can nevertheless be useful when searching for and selecting teaching methods [9]. In addition to the elements proposed for removal, evaluators proposed more than twenty different elements they would like to see added to the current set. Among these, four elements were mentioned more than twice. They are listed in Table 2, along with the number of nominations and example extracts from comments.
Almost half of the evaluators suggested including descriptions of concrete examples in the template. The teaching methods are described in a generic way, since it is important to distil properties of teaching practice that are transferable to other contexts [14]. However, it is often difficult to imagine a method in practice without concrete examples, and some studies found that practitioners prefer to implement teaching methods based on concrete examples rather than generic descriptions [15,16]. We deliberately did not include an example section in the template; our plan is to provide examples as units of learning alongside the generic teaching methods. A unit of learning is a contextualized, complete, self-contained unit of education or training that consists of a teaching method and associated content (adapted from [17]). One goal of our work is to create a repository of (generic) teaching methods, each paired with a number of concrete examples in the form of units of learning. The suggested element “potential problems”, also referred to as “key issues” or “threats” by some evaluators, would give practitioners useful hints during implementation of the teaching method. The call to include “background” on the theory and foundations of a teaching method is certainly meaningful. The problem with such an element is that it would presumably remain empty in most teaching method descriptions, because collecting and presenting theoretical background information is a challenging and time-consuming task for authors. Moreover, background information could be integrated into the summary, rationale, and other existing elements. Information on “preparation” for implementation was suggested by three evaluators; this seems to fit into the “teacher reflection” element.
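The planned repository pairing of a generic teaching method description with concrete units of learning can be sketched as a simple record structure. The following Python sketch is purely illustrative: the class and field names are our own shorthand, not the official template element names, and the WebQuest field values are invented for illustration.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class UnitOfLearning:
    """A contextualized, self-contained example: a teaching method
    plus its associated content (adapted from [17])."""
    title: str
    content_refs: list[str] = field(default_factory=list)

@dataclass
class TeachingMethod:
    """Generic description covering a subset of the template;
    field names are illustrative, not the official element names."""
    title: str
    summary: str
    sequence_of_activities: list[str]
    roles: list[str]
    duration: str | None = None     # judged context-dependent by evaluators
    peer_review: str | None = None  # often empty in practice (see Table 1)
    examples: list[UnitOfLearning] = field(default_factory=list)  # addition requested by ~half of evaluators

# A hypothetical repository entry pairing the generic method with one example
webquest = TeachingMethod(
    title="WebQuest",
    summary="Inquiry-oriented activity using pre-selected websites.",
    sequence_of_activities=["Introduce task", "Explore websites", "Evaluate results"],
    roles=["Teacher", "Learner"],
    examples=[UnitOfLearning(title="WebQuest on renewable energy")],
)
```

A repository built on such records could render the generic description and its example units of learning side by side, addressing the evaluators’ main suggestion.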
The fact that two evaluators suggested adding “student skills” as a separate element may indicate either that not all evaluators read the teaching method descriptions carefully, or that this information was missing from the “learner characteristics” or “learning outcomes” element of the teaching method they read, depending on whether the evaluators meant prerequisite or target skills when proposing this additional element.

3.3 Discussion of Different Views During Phase I and II
A look at the results of the two evaluation phases reveals an interesting observation: even though the template was considered highly complete by authors during phase I of the evaluation, readers in phase II provided numerous suggestions for modifying it. Of the 22 participants who provided evaluations of the appropriateness of the description template along with their description of a teaching method in phase I, 16 also participated as evaluators in phase II. As one would expect, those participants who judged the template to be complete in phase I had fewer suggestions for extending it in phase II: there is a significant negative correlation between the completeness rating and the number of elements nominated for addition to the template (r = −.544, p < .05). Nevertheless, the high number of suggestions by evaluators to extend the template in phase II clearly indicates that understanding teaching method
descriptions written by someone else can be a very difficult endeavor. One explanation could be that authors use the template to describe “their own” teaching method, while readers are confronted with a representation provided by someone else, and may thus require additional information on issues that were obvious, or not worth mentioning, to the author. Put another way, authors seem more likely to perceive the transformation of their mental representation of a teaching method into written form as “lossless” than readers trying to recreate a mental representation of the teaching method from the written description. This may be because authors closely connect the implementation context of the teaching method with their mental representation, even when they describe the method in a generic way. Readers, however, have only the generic description and lack information on the implementation context, making it harder for them to create a vivid representation. The unit of learning accompanying a generic teaching method may resolve this conflict, as the example could support the buildup of the mental representation. Offering additional representations of the teaching method in formats beyond the currently proposed structured description may also remedy this issue; such formats could be storyboards, case study descriptions, or even videos, allowing multiple access points to the teaching method. A similar argument was brought forward by Falconer and Littlejohn [4,15], who concluded that multiple perspectives on the teaching method are necessary, since different user groups may need different forms of representation that meet their requirements.
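The reported correlation between completeness ratings and the number of suggested additions is a standard Pearson product-moment correlation. The following Python sketch shows the computation; the toy data are invented for illustration and are not the study’s actual ratings.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented toy data: higher completeness ratings in phase I paired
# with fewer suggested additions in phase II yields a negative r,
# mirroring the direction of the reported result (r = -.544)
completeness = [5, 4, 4, 3, 2]
additions    = [0, 1, 2, 3, 5]
r = pearson_r(completeness, additions)
```

The sign of r, not its magnitude, carries the claim here: authors who rated the template more complete nominated fewer additional elements.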
4 Conclusions and Future Work
In this paper we have presented the construction and evaluation of a new teaching method description template, which can be used to capture teaching methods for sharing and reuse by practitioners. While many different representations of teaching practice are available, we argued that few of them have been successfully adopted by practitioners. This paper reported research efforts on the construction of a description template for teaching methods based on (a) previous research and project outcomes, and (b) the evaluation of the template by practitioners and instructors in a two-phase evaluation procedure. The template was rated highly with respect to various criteria of good description by the authors using it to describe teaching methods: it was broadly considered complete, clear and understandable, and able to facilitate reuse of captured teaching methods. In a subsequent evaluation by readers of the provided teaching methods, we collected suggestions for improving the template in terms of elements that should be added, changed or removed. The most significant suggestion concerns the lack of examples for the teaching methods, which could help teaching practitioners get a “feel” for the method in action. We also found a discrepancy between the results of the two evaluation phases concerning the completeness of the template. While authors describing their teaching methods widely considered the template complete, readers nominated numerous description elements
that in their opinion were missing and should therefore be added. We discussed how this supports existing claims in the literature that different target groups may need different forms of representation. The construction and evaluation of this template was the initial step of a broader research agenda aiming to evaluate the usability of IMS LD with end users, i.e. instructors, instructional designers and learning designers. Based on the collected teaching methods and the description template, the next step will be to define a method for providing concrete examples of the teaching methods in the form of units of learning. Further steps include (1) the definition of a metadata schema for teaching methods based on the description template and the evaluation results presented in this paper, and the integration of the template into a web-based teaching method repository; (2) the specification of a methodology for transforming teaching methods and examples into IMS LD units of learning. These steps will enable us to identify and collect evidence for the strengths and weaknesses of IMS LD in supporting practitioners in documenting, sharing and reusing teaching methods and units of learning.

Acknowledgments. This paper was written in the context of the ICOPER Best Practice Network (http://icoper.org), which is co-funded by the European Commission in the eContentplus programme, ECP 2007 EDU 417007.
References

1. Beetham, H.: Review: Developing e-learning models for the JISC practitioner communities (2004), http://tinyurl.com/jisc-review-models
2. IMS Global: IMS Learning Design Specification (2003), http://www.imsglobal.org/learningdesign/index.cfm
3. Griffiths, D., Blat, J.: The role of teachers in editing and authoring units of learning using IMS Learning Design. Advanced Technology for Learning 2(4), 1–9 (2005)
4. Falconer, I., Littlejohn, A.: Designing for blended learning, sharing and reuse. Journal of Further and Higher Education 31(1), 41–52 (2007)
5. Agostinho, S.: Learning design representations to document, model, and share teaching practice. In: Lockyer, L., Bennett, S., Agostinho, S., Harper, B. (eds.) Handbook of Learning Design and Learning Objects: Issues, Applications, and Technologies, vol. I, pp. 1–19. Information Science Reference, Hershey (2009)
6. Voigt, C., Swatman, P.M.C.: Describing a design pattern: Why is it not enough to identify patterns in educational design? In: 23rd ASCILITE Conference, Sydney, Australia, pp. 833–842 (2006)
7. Learning Designs Project: Learning Designs (2003), http://www.learningdesigns.uow.edu.au/
8. Flechsig, K.H.: Kleines Handbuch didaktischer Modelle. Neuland, Eichenzell (1996)
9. Falconer, I., Beetham, H., Oliver, R., Lockyer, L., Littlejohn, A.: Mod4L final report: Representing learning designs (2007), http://tinyurl.com/mod4l-final
10. Neumann, S., Oberhuemer, P.: Supporting instructors in creating standard conformant learning designs: the Graphical Learning Modeler. In: World Conference on Educational Multimedia, Hypermedia and Telecommunications 2008, Vienna, Austria, pp. 3510–3519. AACE (2008)
11. Chickering, A.W., Gamson, Z.F.: Seven principles for good practice in undergraduate education. AAHE Bulletin 39(7), 3–7 (1987)
12. Lambe, P.: Organising Knowledge: Taxonomies, Knowledge and Organisational Effectiveness. Chandos, Oxford (2007)
13. Neumann, S., Oberhuemer, P.: User evaluation of a graphical modeling tool for IMS Learning Design (manuscript in preparation)
14. Conole, G.: The role of mediating artefacts in learning design. In: Lockyer, L., Bennett, S., Agostinho, S., Harper, B. (eds.) Learning Design and Learning Objects: Issues, Applications, and Technologies, vol. I, pp. 188–208. Information Science Reference, Hershey (2009)
15. Falconer, I., Littlejohn, A.: Representing models of practice. In: Lockyer, L., Bennett, S., Agostinho, S., Harper, B. (eds.) Handbook of Learning Design and Learning Objects: Issues, Applications, and Technologies, vol. I, pp. 20–40. Information Science Reference, Hershey (2009)
16. Bennett, S., Agostinho, S., Lockyer, L.: Reusable learning designs in university education. In: Montgomerie, C., Parker, J.R. (eds.) International Conference on Education and Technology, Calgary, pp. 102–106. ACTA (2005)
17. Olivier, B., Tattersall, C.: The Learning Design specification. In: Koper, R., Tattersall, C. (eds.) Learning Design: A Handbook on Modelling and Delivering Networked Education and Training, pp. 21–40. Springer, Berlin (2005)
Model and Tool to Clarify Intentions and Strategies in Learning Scenarios Design

Valérie Emin1,2, Jean-Philippe Pernin1,2, and Viviane Guéraud1

1 Laboratoire Informatique de Grenoble, 110 av. de la Chimie, BP 53, 38041 Grenoble Cedex 9, France
2 EducTice - Institut National de Recherche Pédagogique, 19 Allée de Fontenay, BP 17424, 69347 Lyon Cedex 07, France
[email protected],
[email protected],
[email protected]
Abstract. For several years, research has addressed the process modelling of learning situations that integrate digital technologies. Educational Modelling Languages (EMLs) aim to provide interoperable descriptions of learning scenarios. To generalize the use of EMLs, it is necessary to provide authoring environments that allow users to express their intentions and requirements. This paper presents the core concepts of ISiS (Intentions, Strategies, and interactional Situations), a conceptual framework elaborated to structure the design of learning scenarios by teachers-designers. The framework is based on a goal-oriented approach and proposes a specific identification of the intentional, strategic, tactical and operational dimensions of a scenario. The paper also presents how these concepts have been implemented in ScenEdit, a specific authoring environment dedicated to teachers-designers based on the ISiS goal-oriented framework.

Keywords: technology enhanced learning, learning scenarios, authoring approach, requirements engineering, goal-oriented approach.
1 Introduction

Since the beginning of the 2000s, certain research in the field of Technology Enhanced Learning has been concerned with Learning Design [1]: the process modelling of learning situations integrating digital technologies. Its purpose is to produce a description (called a “learning scenario”) of the organization and time scheduling of learning situations involving many actors (students, teachers, tutors, designers, etc.). At the international level, various Educational Modelling Languages (EMLs) have been proposed, such as IMS-LD [2] or LDL [3]. The main challenge of EMLs is to propose a neutral, shared formalism capable of expressing the widest range of learning situations and of being implemented more or less automatically in specific information systems (called Learning Management Systems). An EML allows the definition of relationships between learning goals, the roles of staff and learners in the learning process, performed activities, and the environment and resources

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 462–476, 2009. © Springer-Verlag Berlin Heidelberg 2009
necessary in a learning situation. Current research work focuses on analysing the expressiveness of these languages (for example, to express complex collaborative learning situations) or on solving the problems raised by deploying learning scenarios on technical platforms [3]. As pointed out by the IMS-LD authors themselves [4], an EML, which mainly aims at expressiveness and interoperability, is not intended for direct manipulation by human users (teachers, engineers, etc.). Specific authoring systems [5] must be provided [6] to help designers create their own scenarios at a lower cost. Two main authoring contexts can be identified. In the first, a structured team is in charge of requirements analysis, solution design, and encoding the solution in an EML. In a following step, the EML code can be interpreted in a target LMS integrating an adapted “player”. This type of context is typically found in an industrialization perspective of distance learning, handled by instructional engineering methods [6]. Here, design strategies are based on a stage of requirements extraction, often proceeding from narrative texts written by teachers. Authoring tools proposed to this kind of designer are based on mastering conceptual models that are very close to the targeted modelling languages. In the second context, which we focus on in this paper, the teacher designs the scenario himself: he is potentially led to integrate digital resources and tools as part of the training he provides. Economic constraints do not allow a team of designers or developers to assist each teacher; it therefore becomes necessary to provide authoring tools [5] that allow teachers to express their requirements based on their own business-oriented languages and shared practices.
Two combined goals can thus be reached: to provide a “computable” description that can be translated into an EML like IMS-LD or LDL, and one that can be understood and shared by experts and practitioners who share a common vocabulary, knowledge of the discipline, and pedagogical know-how. This authoring approach [5] aims to give more weight to learning scenario designers’ requirements and to the “business process” dimensions of learning scenario design, which are the subject of many works in Systems Engineering and Software Engineering. We have particularly focused on work concerning Goal-Oriented Requirements Engineering [7], where the elicitation of goals is considered an entry point for the specification of software systems, as in the MAP model of Rolland and Prakash [8]. In this perspective, our purpose is to provide authoring tools that allow teachers-designers belonging to close communities of practice to design their scenarios by expressing the intentions and strategies they adopt. This paper is organized into four sections following this introduction. In Section 2, we describe our context, our goals and the specificity of our typical audience in more detail. Section 3 describes, with an example scenario, our conceptual framework: the ISiS model (Intentions-Strategies-interactional Situations), which we propose to structure the design of learning scenarios. Before concluding, Section 4 describes experiments with tools we have developed on top of the ISiS model, and especially the implementation within ScenEdit, a specific authoring environment dedicated to teachers-designers.
2 Context of Research

The research work presented in this paper was conducted in collaboration between the Laboratoire Informatique de Grenoble and the INRP¹. This collaboration closely associates panels of teachers in charge of co-elaborating and experimenting with the models we want to implement. This work led us to study existing practices of sharing scenarios. In parallel with the EML-based formalization work presented above, some international initiatives aim to propose scenario databases in order to favour sharing and reuse practices between teachers, such as the IDLD [9]. Their goal is to disseminate innovative practices using digital technologies in the field of education. Such databases for teachers-designers, like those proposed by the French Ministry of Education (EduBase and PrimTice), list scenarios indexed with different fields depending on the domain or subject. Their descriptions are very heterogeneous, ranging from practice narrations to more structured formalizations. This diversity led us to question the ability of these representations to be understood and shared among practitioners. Our research is at the intersection of the two approaches previously identified: proposing scenario databases in order to favour sharing practices for the integration of technology by practitioners, and proposing computational interoperable formalisms (like IMS-LD) to describe scenarios. Based on empirical results obtained in previous research conducted with groups of teachers [10], our first hypothesis (H1) is: a structuring formalism is more favourable to reuse than a narrative or a computational formalism, provided that it is in accordance with the vocabulary of the stakeholders concerned.
The research question we address is how to facilitate teachers’ task in designing and implementing learning scenarios using Information and Communication Technology, by providing them with formalisms and tools satisfying criteria of understandability, adaptability and appropriability. In other words, to provide a common formalism which elicits intentions and strategies in order to give a better understanding, and support context adaptation, of learning scenarios within a community of practice. In this context, we aim to provide models, methods and tools allowing teachers-designers belonging to communities of practice to design their scenarios by expressing the intentions and educational strategies they will adopt. The context of our research is the CAUSA project at INRP, and the specific type of designers we focus on are teachers called to integrate digital technologies in an academic context, more precisely in the French secondary educational system (pupils from 11 to 18 years). We organized our work into four phases in order to propose adapted formalisms and tools [11]. After a preliminary phase where we defined the target audience precisely, the first phase consisted of analyzing current practices of sharing and reusing scenarios. It appeared that, for a given scenario, a very precise analysis was required to identify the general objectives and the strategy or pedagogical approach, although these would have been an important selection criterion for teachers. After this work, teachers suggested that the design task could be facilitated by providing libraries of typical strategies, scenarios, or situations of various granularities. Each of

¹ Institut National de la Recherche Pédagogique (French National Institute for Research in Education).
these components was to be illustrated by concrete examples. These results allowed us, in a second phase, to co-elaborate with teachers an intention-oriented model, ISiS, which structures the design of a scenario. In a third phase we experimented with the ISiS model with a pilot group of teachers by means of textual forms and graphical representations. In a fourth phase, integrating the evaluation of this experiment, we tested several tools implementing the ISiS model (paper forms, a diagram designer, mind mapping software, and a first dedicated tool) with audiences not yet involved in our research. The purpose of this phase was to validate our assumptions and to evaluate our model and its first implementation before new developments. We set up an experiment to compare the perceptions of the three types of formalisms we are studying: narrative, computational and structural. This work led us to develop a new version of our graphical editor. We are presently experimenting with our graphical web editor with teachers not yet involved in our research, who will use it for their classes.
3 Proposition of a Conceptual Framework

Instead of proposing an alternative to EMLs, our observations [10, 11] led us to complement them by offering models, methods and tools to support the design and reuse of learning scenarios using digital technologies by non-computer-specialists.

3.1 Theoretical Background

Our research is concerned with teacher-designer activity, and we base our approach on a set of complementary theoretical works on activity theory:
- the organization of activity, proposed by Russian psychologists such as Leontiev [12], defines hierarchical levels (activity, action, operation) which distinguish the intentional, strategic and tactical dimensions of activity;
- the importance of routines or schemata, which represent typical solutions to recurrent problems in specific contexts. These features have been particularly studied by Schank and Abelson in the context of teaching activity [13].

We also take into account recent works in Business Process Engineering and Goal-Oriented Requirements Engineering [7], where the elicitation of goals is considered an entry point for the specification of software systems, as in the MAP model of Rolland and Prakash [8], and set them in our particular context of learning scenario design. In the MAP model, the concepts of goal and intention are considered equivalent. A MAP model is described in these terms: “A map is a process model expressed in a goal driven perspective. It provides a process representation system based on a nondeterministic ordering of goals and strategies. A map is represented as a labeled directed graph with goals as nodes and strategies as edges between goals.” … “A Strategy is an approach, a manner to achieve a goal”. It therefore appeared coherent to us to propose a specific business model for learning scenario design based on intentions and strategies.
After a two-year project in close association with teachers-designers, we progressively co-elaborated a “goal-oriented” business model: ISiS (Intentions-Strategies-Interactional situations).
MAP and ISiS are both models dedicated to the design process in a goal-oriented perspective. MAP is a more generic model for supporting the design process, whereas ISiS is dedicated to a specific learning “business process” and aims to involve the actors themselves in designing the process. To reach that goal, it is necessary to provide users with sufficiently accessible conceptual terms. In our experimental context, we confronted French teachers-designers with the concepts of intentions and strategies. For these teachers, the concepts of “pedagogical intention”, “learning strategy”, and “learning situation” belong to common vocabulary. By linking them to their regular practices, they were able to define two articulated levels: first a “didactical” level dealing with domain-specific knowledge, and second a “pedagogical” level dealing with organizing learning situations. For each level, they were able to define intentions and strategies. The concepts of intention and strategy in MAP and ISiS are quite close. Where MAP considers a strategy as a way of linking two goals, ISiS proposes to sequence two intentions, with the first intention linked to the strategy. Implicitly, the model assumes that the second intention will be invoked after the first strategy has been implemented. Concerning intentions, ISiS proposes to gather two or more intentions of different kinds in the same group. This enables the same strategy to be linked with several intentions, which was an explicit demand of some teachers-designers. In ISiS, alternatives are represented by a specific distribution strategy, which allows one to distinguish several sub-strategies linked to sub-intentions that refine the main one. The concept of variability [15] in MAP may be declined in ISiS in two ways: by choosing a different strategy, or by associating different operational solutions with the same strategy.
ISiS manages a "tactical level" refining the modelling of strategies by linking them to their typical solutions. After evaluating different authoring solutions in learning design [5, 6], we chose to develop a graphical environment, ScenEdit [14], based upon the ISiS model.

3.2 Intentions and Strategy in the Context of Learning Scenarios

We illustrate our model with an example based on a collaborative learning scenario, the LearnElec scenario [16], dedicated to the concept of "the power of a light bulb" in the domain of electricity at secondary school. In this scenario, the teachers' first didactical intention is "to destabilize" a frequently encountered "misconception" of students in electricity, namely that "proximity of the battery has an influence on current intensity". After having defined his intention, the teacher-designer can choose the appropriate strategy to reach the goal. In our example, the didactical intention is implemented with a specific didactical strategy called the "scientific inquiry strategy", composed of four phases: hypothesis elaboration, solution elaboration, hypothesis testing and conclusion, as shown in figure 1.
Fig. 1. An example of intentions and strategies elaborated by teachers in electricity
Model and Tool to Clarify Intentions and Strategies in Learning Scenarios Design
467
Each phase can be performed through various pedagogical modes and can be refined by another intention, according to the type of activity, the availability of computer services, etc. that the teacher wants to use. In our example, the first didactical phase, "hypothesis elaboration", is refined by a pedagogical intention called "increase the ability to work in a collaborative way", as shown in figure 2.
Fig. 2. An example of different levels of intentions and strategies in a scenario
This intention is implemented with a strategy called "elaborating a proposal by making a consensus", composed of two phases: "Make an individual proposal" and "Confront proposals. Obtain a consensus". For each phase, an interactional situation can be defined: "Individual proposal on a MCQ" and "Argued debate on a forum with consensus". During these two phases, the teacher is involved in an activity of group management, symbolized by an interactional situation called "Group management". In the following section, we present the ISiS conceptual model more formally.

3.3 Our Proposal: The ISiS Model

From our first hypothesis (H1), we co-elaborated the ISiS model [11]: a conceptual framework elaborated to structure the design of learning scenarios by teachers and to favour sharing and reuse practices between designers. The ISiS model is based on three complementary hypotheses:
- (H2) the elicitation of intentions and strategies, and linking them to patterns of situations of interaction, facilitates the understanding of the scenario;
- (H3) the identification of the concept of situation of interaction provides an overall description of the organization of a set of activities, without necessarily specifying it in detail or restrictively;
- (H4) the reuse of components (Intentions, Strategies, interactional Situations) or scenarios, in the form of templates or design patterns, allows practitioners to design their scenarios more efficiently.
According to the ISiS model (cf. fig. 3), the organization and planning of a learning unit can be described with a high-level structuring scenario which reflects the designer's intentional and strategic dimensions. A structuring scenario organizes the
scenario into different phases or cases by means of intentions and strategies. Each phase or case can be either recursively refined by a new intention or linked, at a tactical level, to a suitable interactional situation. An interactional situation can itself be described by a lower-level interactional scenario which defines, in an operational way, the precise organization of situations (in terms of activities, interactions, roles, tools, services, provided or produced resources, etc.). Interactional scenarios are the level typically illustrated by EML examples of implementation.
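The recursive structure just described — an intention implemented by a strategy, whose phases are either refined by a new intention or linked to an interactional situation — can be sketched as a small data model. The following Python sketch is purely illustrative (it is not the actual ScenEdit implementation; all class and field names are our own), instantiated with the LearnElec example from Section 3.2:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InteractionalSituation:
    # Tactical (iS) level: a reusable pattern of interactions
    # characterized by roles, tools and resources.
    name: str
    roles: List[str] = field(default_factory=list)
    tools: List[str] = field(default_factory=list)

@dataclass
class Phase:
    # A phase is either recursively refined by a new intention
    # or linked to a suitable interactional situation.
    name: str
    refinement: Optional["Intention"] = None
    situation: Optional[InteractionalSituation] = None

@dataclass
class Strategy:
    # S level: a sequencing or distribution strategy organizing phases/cases.
    name: str
    kind: str  # "sequencing" or "distribution"
    phases: List[Phase] = field(default_factory=list)

@dataclass
class Intention:
    # I level: a designer's didactical or pedagogical intention.
    description: str
    strategy: Optional[Strategy] = None

# LearnElec example: a didactical intention implemented by a four-phase
# scientific inquiry strategy; the first phase is recursively refined by
# a pedagogical intention, another is linked to a typical situation.
debate = InteractionalSituation("Argued debate on a forum with consensus",
                                roles=["learner", "teacher"], tools=["forum"])
scenario = Intention(
    "Destabilize the misconception: battery proximity influences current intensity",
    strategy=Strategy("Scientific inquiry", "sequencing", phases=[
        Phase("Hypothesis elaboration",
              refinement=Intention("Increase the ability to work in a collaborative way")),
        Phase("Solution elaboration"),
        Phase("Hypothesis testing", situation=debate),
        Phase("Conclusion")]))
```

The recursion lives in the `Phase.refinement` field: a refined phase carries a new `Intention`, which may in turn carry its own strategy and phases.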
Fig. 3. An overview of the ISiS model
Figure 3 provides an overview of the ISiS model, which proposes to structure the design of a scenario describing the organization and planned execution of a learning unit.
- the I level (Intention) describes the designer's intentions. In our field, intentions are closely linked to the knowledge context, which defines targeted knowledge items (concepts, notions, competencies, know-how, abilities, conceptions or misconceptions, etc.). The designer's intentions can be, for example, to reinforce a specific competence in electricity, to favour the discovery of a notion, to destabilize a frequent misconception, etc.;
- the S level (Strategy) is related to strategic features. In order to reach the goals related to the intentions formulated at the I level, the designer opts for the strategy he considers
to be the most appropriate. Two main kinds of strategies can be distinguished: sequencing strategies, which organize the arrangement of logical phases (for example, a scientific inquiry strategy can be modelled as a series of four phases), and distribution strategies, which plan different solutions for identified cases (for example, a differentiation strategy takes into account three possible levels of mastery). Strategies can be combined by successive refinements: for example, a sequencing strategy may specify one of the cases of a distribution strategy;
- the iS level (interactional Situation) represents the tactical level, i.e. the proposed solution to implement the formulated intentions and strategies. We consider that, for a new problem, a teacher-designer does not rebuild a new specific solution from scratch. As underlined in works on schemata and routines in teaching activities [14], the teacher bases his planning or his adjustments upon a library of mastered solutions, which are triggered by specific events. In the same way, we assume that a scenario designer selects, from a library of components, situations which are appropriate for his intentions and strategies. Each component, an "interactional situation", is made up of a collection of interactions with a specific set of roles, tools and resources, according to the situational context. The situational context is characterized by a set of variables such as the resources that can be manipulated to support the activities (documents, tools, services), the locations where activities can take place, the planning elements within which activities must be scheduled, the number of learners, and the roles which can be distributed. For example, in order to specify the scenario for the "solution elaboration" phase in a collaborative way and for distant learners, a designer can choose a typical situation called "argued debate on a forum with consensus".
In another context, for example for pupils who have difficulties at school, a more personalized situation can be used, such as "choosing a solution between different possible proposals by using a MCQ tool";
- the interactional scenario (operational level) describes the details of the solution precisely, i.e. the organization and process of each interactional situation. Nowadays, EMLs focus essentially on the description of this operational level by organizing relationships between actors, activities and resources in a given language. The ISiS model proposes to clarify the upper levels (I, S and iS), which are generally not defined precisely by current methods or tools.
4 Implementation of the ISiS Model

4.1 Towards Flexible and Continued Design Processes

The ISiS framework is not strictly a method, as it does not prescribe a specific order in which to combine design steps. ISiS is based on the hypothesis that all dimensions of a scenario (intentions, strategies, situations, activities, resources) must be elicited and interlinked in order to facilitate design, appropriation, sharing and reuse. In our experiments, we analyzed the tasks undertaken by teachers-designers [10]. Several design processes, shown by different studies involving teachers-designers, were considered. Some teachers chose a top-down approach by hierarchically
defining their intentions, strategies, situations, etc., while others preferred to adopt a bottom-up approach by "rebuilding" a scenario from resources or patterns that they wanted to integrate. Consequently, one of our hypotheses is that the design of a learning scenario cannot be modelled as a linear process without significantly reducing designers' creativity. Depending on the type of designer and on the uses within a precise community of practice, several kinds of objects or methods are shared. As a result, resources, pedagogical methods and typical situations could constitute an entry point from which design steps will be combined. From this entry point (for example, typical interactional situations), the designer may alternately and recursively perform design tasks.

On these principles, the ISiS model was implemented successively using different kinds of tools (diagram-designing or mind-mapping software). In a first step, we elaborated paper forms to express the different dimensions of the design (knowledge context, situational context, intentions, strategies, interactional situations, activities, etc.). We also adapted mind-mapping software where each node represents a concept (e.g. strategies, phases, interactional situations) and can be edited with a specific electronic form. These first tools based on the ISiS model were tried out in a secondary school with a group of five teachers in technological subjects; these teachers were associated with the INRP institute. Each teacher-designer had one month to model, using the tools provided, a learning sequence that he had to implement during the school year. All teachers accomplished the required task in the prescribed time, and the sequences produced had a duration varying between two and six hours.
One teacher actually covered the complete process by (1) describing his scenario in paper form, (2) encoding the designed scenario with a specific editor (LAMS), (3) deploying the result automatically to Moodle, a learning management system, and (4) testing the scenario with his pupils. After this first experiment, the teachers were questioned about their design activity. The answers given by the teachers-designers showed the benefits of the model for improving the quality of the scenarios created, for illustrating the importance of the elicitation of intentions and strategies by users themselves, for better understanding the scenarios created by others, and for simplifying the design process by reducing the distance between users' requirements and the effectively implemented system. Finally, the following points can be raised:
- the elicitation of intentions and strategies allowed the teacher-designer to better understand a scenario designed by a peer;
- teachers expressed the need to be provided with reusable components allowing (a) a significant decrease in the design duration and (b) an exploration of solutions proposed by peers, for a renewal of practices;
- the complete implementation on an LMS by one of the teachers was considered to be facilitated by using the ISiS model;
- the provided tools (paper forms and mind-mapping tools) were considered too costly to be integrated into regular professional use.
These first results show the capability of the ISiS model to encourage an efficient authoring approach. The main restriction formulated by users concerns the provision of adapted graphical tools.
4.2 A Step Towards Graphical Tools: ScenEdit

As a solution to this restriction, we have co-elaborated, with panels of users, a specific graphical authoring environment named ScenEdit [14], based on the ISiS model. This environment proposes three workspaces to edit a structuring scenario. Figure 4 shows the main screen of the current web version.
Fig. 4. ScenEdit main screen
The Scenario Edition workspace structures the scenario by logically linking elements previously defined in the Components workspace, or directly defined in the edition window, to compose a graphical representation of the scenario. The Context workspace defines the two different types of context in which a learning unit can be executed: the knowledge context and the situational context. The Components workspace is dedicated to managing the three main components of the ISiS model: (a) Intentions, (b) Strategies and (c) interactional Situations. For each type of component, the author can either create a new element or import and adapt an existing element from a library. The choice of a component depends on the characteristics defined in the Context workspace. For example, an intention is considered as an operation to be conducted by a certain type of actor (previously defined in the situational context) for an item of knowledge (previously defined in the knowledge context). Each type of component is shown with a different symbol: a rounded rectangle for an intention, a rectangle for a strategy, a circle for a phase and a picture for a situation.

The graphical representation shown in figure 4 is a classical hierarchical tree, quite useful for producing a scenario but not very clear for understanding a new one, because of the different levels of nesting. The future graphical representation we are implementing in the online version is a tree where the horizontal dimension represents time evolution and the vertical dimension represents the hierarchy of the ISiS concepts, like the ones shown in figures 1 and 2. As the structured scenario can be encoded as an XML file, different outputs can be produced and several possibilities of transformation are offered: a printable picture of the edition views, or a printable text or form for the teacher. Future work will be to provide an EML-compliant version for editing with another tool or for execution on an LMS.
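As a rough illustration of such an XML encoding, the sketch below serializes a minimal structuring scenario with Python's standard xml.etree.ElementTree. The element and attribute names are invented for illustration only and do not reflect the actual ScenEdit schema:

```python
import xml.etree.ElementTree as ET

# Build a hypothetical XML encoding of a structuring scenario
# (illustrative tag names, not the real ScenEdit format).
scenario = ET.Element("structuringScenario", name="LearnElec")
intention = ET.SubElement(scenario, "intention",
                          text="Destabilize a misconception on current intensity")
strategy = ET.SubElement(intention, "strategy",
                         name="Scientific inquiry", kind="sequencing")
for phase_name in ("Hypothesis elaboration", "Solution elaboration",
                   "Hypothesis testing", "Conclusion"):
    ET.SubElement(strategy, "phase", name=phase_name)

# Serialize; from such a file, different outputs (printable forms,
# EML exports) could then be produced, e.g. via XSLT transformations.
xml_text = ET.tostring(scenario, encoding="unicode")
```

Because the scenario is a plain tree, each target output (picture, printable form, EML export) amounts to one transformation over the same XML document.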
ScenEdit offers patterns of different levels (intentions, strategies, interactional situations), elaborated from best practices found in the literature or within
communities of practice. We have worked with teachers to formalize and design patterns of learning scenarios, pedagogical approaches and recurrent interactional situations. With this environment, users will be able to feed databases by exporting fragments of their own scenarios, in order to share them with others or reuse them further in similar or different contexts.

4.3 Results of Recent Experiments

We have conducted several experiments since the beginning of our research work in order to adopt a user-oriented, or authoring, approach.

Context and Methodology

The experiment took place in November 2008 during a training session on designing scenarios using ICT. The 18 participants were teachers, pedagogical engineers and trainers; their common characteristic was that they were not familiar with learning scenario design techniques and not involved in our research. The complete results of this experiment were presented at a francophone conference on TEL, EIAH 2009, in Le Mans, June 2009 [17]. The experiment consisted in individually confronting each of the 18 subjects with the same scenario, expressed with a formalism chosen among the three types we wanted to compare: narrative, computational and structuring. We formed 3 experimental groups of 6 subjects, the members of each group assessing a particular formalism. We made sure we had a homogeneous representation of each profile within each group. The chosen scenario was the LearnElec scenario [16] presented above. It describes collaborative situations, alternating questionnaires, votes, syntheses and debates. The three different descriptions of the scenario were produced in 2006 and 2007, independently of this experiment. The narrative description was developed by teachers and researchers at the beginning of the project.
The computational description, expressed with the activity diagrams (by actors) proposed by IMS-LD, graphically represents the unfolding of the scenario in plays, actions, partitions, structured activities and basic activities. The structuring description, based on ISiS concepts, proposes a graphical arrangement of intentions implemented through strategies divided into phases, each phase being associated with one or more interactional situations. The IMS-LD and ISiS descriptions were produced by researchers with a high degree of expertise in the field. The results of each formalization were given to the subjects as paper prints. During a 45-minute session, each subject had to read the scenario expressed in one formalism and then evaluate it using an online questionnaire. The latter had two sets of questions, one concerning the notion of pedagogical scenario in general (Q1 to Q5) and the other specifically relating to the given formalism (Q6 to Q9). The questions were either multiple-choice or open-ended, so as to gather precise information from the subjects and compensate for the relatively small sample.

Results and Interpretation

The analysis of the collected data was done as follows: frequency tables for the questions dealing with the concept of pedagogical scenario in general, and cross-tabulation for the questions specific to one of the 3 formalisms, so as to isolate possible variations in the answers. A series of 5 general questions was asked, and each question could give rise to five different types of answer (cf. table 1).
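The frequency-table step of this analysis can be sketched in a few lines of Python. The list of answers below is hypothetical, reconstructed so that its tally matches the Q1 row of Table 1; only the tallying logic is the point here:

```python
from collections import Counter

# The five-point scale used in the questionnaire.
SCALE = ["No answer", "Not important at all", "Less important",
         "Important", "Indispensable"]

# Hypothetical raw answers of the 18 subjects to one general question
# (reconstructed to match the Q1 row of Table 1: 2 x Important, 16 x Indispensable).
answers_q1 = ["Indispensable"] * 16 + ["Important"] * 2

# Tally answers per scale level to produce one row of the frequency table.
counts = Counter(answers_q1)
frequency_row = [counts.get(level, 0) for level in SCALE]
# frequency_row -> [0, 0, 0, 2, 16]
```

The cross-tabulation for Q6 to Q9 would proceed the same way, with one such row per (question, formalism) pair.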
Table 1. Breakdown of the answers to questions Q1 to Q5

General questions on scenarios ("For the comprehension of the scenario, …"):
Q1: the description of the different phases is…
Q2: the precise description of the actors' activities is…
Q3: the explanation of the underlying pedagogical approach is…
Q4: the explanation of the notions, knowledge, competencies, know-how aimed at by the scenario is…
Q5: the explanation of the articulation between aimed knowledge or competencies, and proposed activities is…

        No answer   Not important at all   Less important   Important   Indispensable   Total
Q1          0                0ractical            0              2            16           18
Q2          0                0                    2              6            10           18
Q3          1                0                    1              8             8           18
Q4          0                0                    4              5             9           18
Q5          0                3                    2              4             9           18
Total       1                3                    9             25            52           90
From this first series of questions, we can draw the following lessons:
- as expected, it is essential for the scenario to describe the major phases of the learning situation. However, the precise description of the activities seems essential to only a small majority of respondents (10 out of 18). This could be an indication of the fear of over-scripting [18], which may be detrimental to the effectiveness of the situations to be established;
- the answers to questions Q3 to Q5 show that some elements, absent today from computational formalisms, are of significant importance in the eyes of the respondents: at least two thirds of them consider it important or essential to explain the pedagogical approach, the notions of the program, and the articulation between knowledge and activities.
A second series of questions, Q6 to Q9, related to the given formalism and allowed a first comparison of the three formalisms. We totalled the number of "rank-1 answers" to questions Q6 to Q9, corresponding to a "yes, absolutely" or a "yes, partially" answer to questions about "the capacity to determine the main steps of the scenario", "to give details to explain the scenario to another teacher", "to express the pedagogical approach" and "the links with the knowledge context". The totals score 7 for the narrative formalism, 4 for IMS-LD and 10 for ISiS.

This experiment has provided indications on how our formalism is perceived by practitioners who are not involved in our research work: the original hypotheses proved to be partially valid. First, for a non-trained public, a structuring formalism gives a clearer indication than a narration of the organization of a scenario (H1). Secondly, among the structuring formalisms, preference was given to ISiS, which favours relating the scenario to elements such as intentions,
adopted strategies or the knowledge at stake (H2). Finally, a too precise definition of a scenario is questioned, especially if it impedes the re-use of the scenario in other contexts (H3). As for the last hypothesis (H4), concerning the re-use of components, the first answers have not allowed us to draw clear indications.

Experimentation with the ScenEdit Environment

A new experiment with our graphical online tool ScenEdit was carried out in April 2009, over two days in a French secondary school. The subjects were a group of five teachers in the field of Industrial Sciences and Techniques (electronics, mechanics and physics). Two teachers had worked with us before on the definition of reusable components inside ScenEdit; the three others had never heard about the ISiS model or learning scenario design before this experiment. This study is qualitative and is used to help us improve the model and tools we are developing. We only present here the main results, especially those through which hypothesis H4 was tested. The preliminary analysis of this experiment shows the interest of having reusable components when designing for the teachers' own ordinary work in their classrooms or for collaborative work with other teachers. Table 2 shows their answers as regards collaborative work with other teachers.

Table 2. Breakdown of the answers to questions about re-use for collaborative work between teachers

As regards collaborative work with other teachers, evaluate the fact of having components / patterns…

                                           No answer   totally useless   quite useless   quite useful   absolutely useful   Total
implemented previously by the
designers of ScenEdit                          0              0                0               3                2              5
implemented previously by other teachers       0              0                0               3                2              5
implemented previously by yourself             0              0                0               3                2              5
Total                                          0              0                0               9                6             15
More precisely, the elements provided with ScenEdit (knowledge items, intentions, strategies, interactional situation patterns…) are useful, as can be seen in table 3. To the question "Would you say that the presence of components and patterns is…" (two possible choices), the associated terms were "advantage" (4 answers) and "help" (4 answers). One of the participating teachers said it was an "advantage" and a "constraint", and explained: "at first sight I found the choices were not wide enough, I was a little embarrassed to be unable to put whatever I wanted… and finally it's another advantage of ISiS, thinking of words that everybody can accept and then speaking the same language". He was thus convinced of the value of having a definite number of possibilities in the list, provided the vocabulary chosen is relevant for its users. Some of the comments suggested improvements to the visual representation of the ISiS model: in particular, more precision is required for the temporal dimension, which is not represented in the current simple tree version, as mentioned before.
Table 3. Breakdown of the answers to questions about the presence of suggestions

Evaluate the presence of       No answer   totally useless   quite useless   quite useful   absolutely useful   Total
suggestions for…
knowledge items                    0              0                0               4                1              5
intentions                         0              0                0               4                1              5
strategies                         0              0                0               4                1              5
interactional situations           0              0                0               3                2              5
Total                              0              0                0              15                5             20
Moreover, they pointed out that making the phases and the activities more explicit helped them, as "the scenario can be appropriated more rapidly". Finally, the issue of the complementarity of the formalisms was raised. Practitioners probably prefer having several complementary formalisms at their disposal, each one contributing to precision and to the removal of possible ambiguities existing in the others. This hypothesis could be one of the subjects of further experiments. We are also aware that many factors could deter teachers from designing or reusing scenarios, but this is not the subject of this paper.
5 Conclusion

In this paper, we have presented an overview of the ISiS model, a "goal-oriented" business process model whose purpose is to assist teachers in the design of learning scenarios and to favour sharing and re-use practices. The model, co-elaborated with a panel of users, appears efficient according to our experiments. These experiments with teachers-designers have shown the benefits of the model (1) to improve the quality of the scenarios created, (2) to illustrate the importance of the elicitation of intentions and strategies by users themselves, (3) to better understand the scenarios created by others and (4) to simplify the design process by reducing the distance between users' requirements and the effectively implemented system. Our priority now is to develop a new online version of ScenEdit and to experiment with it more thoroughly, with a wider audience not necessarily familiar with ICT or with scenario design software and methods. This experimentation will essentially aim at consolidating the validation of the visual representations of the scenario that we propose (with the ISiS levels and the timeline on a single tree representation) and at enhancing the system with databases of patterns or components allowing new effective practices of sharing and reuse. With this environment, users will be able to feed databases by exporting fragments of their own scenarios, in order to share them with others or reuse them further in related or different contexts.
References

1. Rawlings, A., van Rosmalen, P., Koper, E.J.R., Rodríguez-Artacho, M.R., Lefrere, P.: Survey of Educational Modelling Languages (EMLs). Publication CEN/ISSS WS/Learning Technologies (2002)
2. Koper, R., Tattersall, C.: Learning Design: A Handbook on Modelling and Delivering Networked Education and Training. Springer, Heidelberg (2005)
3. Martel, C., Vignollet, L., Ferraris, C., David, J.P., Lejeune, A.: Modelling collaborative learning activities on e-learning platforms. In: 6th IEEE ICALT Proceedings, Kerkrade, pp. 707–709 (2006)
4. Koper, R.: Current Research in Learning Design. Educational Technology & Society 9(1), 13–22 (2006)
5. Murray, T., Blessing, S., Ainsworth, S. (eds.): Authoring Tools for Advanced Technology Learning Environments: Toward Cost-Effective Adaptive, Interactive and Intelligent Educational Software, p. 571. Kluwer Academic Publishers, Dordrecht (2003)
6. Botturi, L., Cantoni, L., Lepori, B., Tardini, S.: Fast Prototyping as a Communication Catalyst for E-Learning Design. In: Bullen, M., Janes, D. (eds.) Making the Transition to E-Learning: Strategies and Issues. Hershey (2006)
7. Van Lamsweerde, A.: Goal-Oriented Requirements Engineering: A Guided Tour. In: Fifth IEEE International Symposium on Requirements Engineering, p. 249 (2001)
8. Rolland, C., Prakash, N., Benjamen, A.: A Multi-Model View of Process Modelling. Requirements Engineering 4(4), 169–187 (1999)
9. Lundgren-Cayrol, K., Marino, O., Paquette, G., Léonard, M., De La Teja, I.: Implementation and deployment process of IMS Learning Design: Findings from the Canadian IDLD research project. In: Proc. Conf. ICALT 2006, pp. 581–585 (2006)
10. Emin, V., Pernin, J.-P., Prieur, M., Sanchez, E.: Stratégies d'élaboration, de partage et de réutilisation de scénarios pédagogiques. International Journal of Technologies in Higher Education 4(2), 25–37 (2007)
11. Pernin, J.P., Emin, V., Guéraud, V.: ISiS: an intention-oriented model to help teachers in learning scenarios design. In: Dillenbourg, P., Specht, M. (eds.) EC-TEL 2008. LNCS, vol. 5192, pp. 338–343. Springer, Heidelberg (2008)
12. Leontiev, A.N.: The problem of activity in psychology. In: Wertsch, J. (ed.) The concept of activity in Soviet psychology. M.E. Sharpe, Armonk, New York (1981)
13. Schank, R.C., Abelson, R.: Scripts, plans, goals and understanding. Erlbaum, Hillsdale (1977)
14. Emin, V.: ScenEdit: an authoring environment for designing learning scenarios. Poster, ICALT 2008, IEEE International Conference on Advanced Learning Technologies, Santander (2008)
15. Rolland, C., Prakash, N.: On the Adequate Modelling of Business Process Families. In: Workshop on Business Process Modelling, Development, and Support (BPMDS), Trondheim, Norway (June 2007)
16. Lejeune, A., David, J.P., Martel, C., Michelet, S., Vezian, N.: To set up pedagogical experiments in a virtual lab: methodology and first results. In: International Conference ICL, Villach, Austria (2007)
17. Pernin, J.-P., Emin, V., Loisy, C.: Perception de trois types de formalismes de scénarisation par des enseignants et des formateurs. In: Actes de la conférence EIAH 2009, Le Mans (June 2009)
18. Dillenbourg, P.: Over-scripting CSCL: The risks of blending collaborative learning with instructional design. In: Kirschner, P.A. (ed.) Three worlds of CSCL. Can we support CSCL, pp. 61–91. Open Universiteit Nederland, Heerlen (2002)
Users in the Driver's Seat: A New Approach to Classifying Teaching Methods in a University Repository

Susanne Neumann1, Petra Oberhuemer1, and Rob Koper2

1 University of Vienna, Center for Teaching and Learning, Porzellangasse 33a, 1090 Wien, Austria
2 Open University of the Netherlands, CELSTEC, Valkenburgerweg 177, 6419 AT Heerlen, The Netherlands
{susanne.neumann-heyer,petra.oberhuemer}@univie.ac.at, [email protected]
Abstract. This article argues for a new, user-driven process of developing a classification for teaching methods. First, a previous literature review is summarized that verified the need for a classification of teaching methods. Then, types of classifications are introduced, with their characteristics and typical uses with regard to the maturity of knowledge domains. After briefly reflecting on the maturity of the knowledge domain "teaching methods", former classifications' approaches to mapping this knowledge domain are examined. We argue that previous classifications focused on analyzing the content and did not take user perspectives into account. In the third part of the article, a case study at the University of Vienna is presented, in which twelve representatives of four stakeholder groups were interviewed to determine their needs for organizing teaching-method-related objects in a repository. The interview results, along with considerations on technology and knowledge domain, suggested developing a faceted classification at the University.

Keywords: teaching method, classification, taxonomy, pattern, repository.
1 Incentive and Previous Work

The need for a classification of teaching methods has often been proclaimed, e.g. [1], [2], [3]. A teaching method is defined as a learning-outcome-oriented set of activities to be performed by learners and learning supporters. The current trend to establish federated repositories that store large numbers of digital objects related to learning, from learning materials to lesson plans and ready-to-play units of learning, has confirmed this need. This article revisits the issue of developing a classification for teaching methods, looking at previous approaches that classified items related to learning and teaching, and showing inconsistencies and shortcomings of these approaches in order to identify how the problem of a missing classification can be overcome and how to approach a new development. In a previous literature review, classifications related to learning and teaching were documented and analyzed for their potential to serve as a teaching method classification [4]. Data was collected regarding the origins, theoretical underpinnings, purposes
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 477–491, 2009. © Springer-Verlag Berlin Heidelberg 2009
478
S. Neumann, P. Oberhuemer, and R. Koper
and uses as well as degrees of documentation of these classifications. A two-step analysis was then performed having the first goal to group the classifications according to their topical focus, and having the second goal to identify the quality of these classifications according to taxonomy validation criteria. As a result of the first analysis step, three groups of classifications were identified: narrow focus classifications (placing emphasis on singled-out components of a teaching method such as learning objectives or lecturing styles), holistic focus classifications (placing emphasis on the gestalt of teaching methods, or placing emphasis on an overarching learning theory view on teaching methods), and versatile focus classifications (placing no particular emphasis on any aspect of teaching methods, rather trying to cover a large set of descriptors for the same). As a result of the second analysis step, only a small number of the reviewed classifications were identified as fulfilling more than one of the eight accounted for criteria of taxonomy validation; the most criteria any classification fulfilled was three. The literature review concluded that a classification for teaching methods is still needed as the present classifications do not provide sufficient quality or purposerelated extensiveness. The review further showed that eventual users of the classifications were never involved during development. Suggestions were thus made for new developments of teaching method classifications to incorporate the classification users’ experiences and usage procedures to ensure that the classification reflects their perspectives, their ways of organizing, and their language.
Users in the Driver’s Seat: A New Approach to Classifying Teaching Methods
2 Classification Foundations
As a foundation for the further discussion, some classification-related concepts are now presented. Afterwards, these concepts are brought together with the knowledge domain that teaching methods represent, as well as with the approaches of former classifications of learning and teaching to map this knowledge domain. Classification is defined as the meaningful clustering of experience [5]. Classification work comprises 1) the grouping of related entities and 2) making the relationships between the entities obvious and visible [6]. The term taxonomy has also become popular in the last decade, and Lambe states in this regard that taxonomies classify, describe, and map a knowledge domain [6]. He views the taxonomy as the product of classification work. The terms classification and taxonomy are seen as interchangeable; in this article, the term classification will be preferred. Classification work becomes necessary whenever (a) there is a lot of content in one or more repositories and its accessibility needs to be improved, or (b) stakeholders doing related work within an organization need to be coordinated more effectively to create synergies [6]. The goal of this work is to simplify either the access to, or the management of, a knowledge domain [6]. Classifications have been built in several fields of study. Sciences that have produced widely accepted and used classifications are medicine (the International Classification of Diseases, now in its 10th revision), biology (the taxonomy of plants and animals), and chemistry (the Periodic Table of Elements). The objectivist paradigm states that any entity can be described by its essential properties and then be placed in a category of entities that share the same essential features, and that these categories can be related to other categories, defining the classification scheme [7]. However, reality shows that there remain entities for which no classification could be agreed upon to this day. Among them are smells and viruses [5]. The problem with smells is not that we do not understand how smells work in terms of perception, or what important role smell plays in human and animal life; rather, there is no unit for measuring smell, and there is no good way of talking about smells. In this sense, the basis for developing a classification of smells is lacking [5]. The same is true for viruses, which change frequently and are rather ambiguous; for this reason, no clear classification has been developed for viruses. The next sections will first introduce some classification types, which stem mainly from library science, as this discipline has a long tradition of organizing knowledge items. Then, a reflection on the maturity of the knowledge domain “teaching methods” is presented. To round off this second part of the article, selected classifications included in the preceding literature review (cp. section 1) are revisited in order to understand what classification types were previously used and how these classifications approached the knowledge domain.
2.1 Types of Classifications
Lists. Lists represent the basic building blocks of classifications. Lists group related items together. The following relationships of items could serve as reasons for creating a list: commonality in attributes or purpose, collocation, sequence, chaining, genealogy, or gradients in attributes [6]. When lists get too long, i.e. when they exceed 12-15 items, or get too complicated, they are re-organized into either trees or maps. Lists are commonly used when the knowledge domain is simple and when the collection of items that need to be managed is not very large.
Trees. Trees divide and subdivide the contained classes based on rules of distinction [5].
Trees allow multiple relations between the items in them, for instance, part/whole, cause/effect, starting point/outcome, or process/product relationships, and different relationships may even appear within the same tree [5]. This makes trees versatile and pragmatic [6]. Trees translate well to the folder structures that are commonly used in digital organization. The knowledge for building a tree must be known and decided in advance, so that the important criteria for distinction can be determined [5]. This means that post-coordination of items is not possible in trees. Trees do not work well when the relationship represented at each level is not immediately apparent to users, when too many inconsistent principles of subdivision are applied in the same tree, when different user groups apply alternate organizing principles to the same tree, or when there are too many (more than three) levels of detail [6].
Hierarchies. A hierarchy is a specific type of tree structure. It is inclusive (the top category includes all subordinate groups), its relations are consistent (exactly one type of relation distinguishes all subordinate groups at all levels), subordinate groups inherit attributes from their superordinate groups, and it demands mutual exclusivity (an entity can belong to only one class within the hierarchy – there is no ambiguity in placement). These attributes make hierarchies popular [5]; however, not all things can be neatly arranged this way. Hierarchies work for animals (cp. Blackwelder [8], who
states that perfect hierarchical organization has been achieved for vertebrates), but hierarchies often do not work for manufactured objects or mental concepts [6]. A difference between biology and, for instance, library hierarchies is that in biological taxonomy the animals are classified only at the lowest levels and categories, deepest down in the tree, while in library classifications, books can also be assigned at general levels high up in the hierarchy [6]. This effect appears because knowledge objects (products of the human mind) can be either general or specific, while physical objects can only be specific [6]. Hierarchies are well suited for knowledge representation in domains that are mature, meaning that the nature of the entities and the nature of their meaningful relationships are known [5]. A sign that it is premature to use a hierarchy as the type of classification is that a category “miscellaneous” or “other” is needed, in which items are placed that do not fit the logic of the classification as specified [5]. Hierarchical classifications do not accommodate knowledge domains exhibiting complexity and ambiguity well. This is especially true for entities that cannot easily be observed or analyzed, such as information or knowledge artifacts. Just like trees, hierarchies do not allow competing principles of organization.
Matrices. In matrices, two or three attributes are linked together in order to reveal the presence or absence of entities, or the specific nature of an entity, at the intersection of the attributes [5]. Matrices are also known as typologies in the social sciences (Bailey cited in [6]), or as paradigms in library science [5]. The main features of matrix representations are that they support “sense-making” (quickly getting guidance within a knowledge domain), and that they foster the discovery and creation of new knowledge [6].
For instance, classification along multiple dimensions in matrices allows for comparison, the locating of issues, problems or opportunities, the creation of inventories or checklists, the identification of gaps, and the description of complex phenomena [6]. By comparison, trees subdivide along only one dimension; therefore, trees only allow the location and retrieval of items, and do not support the functions mentioned above. While trees cannot represent alternative points of view effectively, matrices do so very well, for up to three alternative approaches [6]. Matrices work well with a well-defined, cohesive body of content, where the content can be consistently described by the two or three facets that make up the dimensions of the matrix. The best reflection of knowledge in the domain is achieved when the matrix dimensions are set up using a consensual framework with a common vocabulary. In fields where the fundamental relationships of concepts are not well understood, it is difficult to build a matrix that reveals essential knowledge [5]. Above three dimensions, matrices are not appropriate classification structures, mainly because the content can no longer be visually organized, which in turn impedes easy comprehension and navigation [6]. Diverse collections of content are not easily expressed in a matrix due to the lack of common attributes. Matrices also rarely give complete pictures of a phenomenon or knowledge domain [5].
Facets. Facets represent not merely a different type of classification, but entail a completely different approach to classification work. Facets provide a set of perspectives on content, whereby each facet has its own representation (one facet could be a list, while the next facet could be a tree) [6]. Each facet is mutually
exclusive, i.e. the facets are orthogonal to each other. The representation in facets rests on the beliefs that there are always multiple perspectives on the world and on the entities in it, and that even seemingly stable classifications, like hierarchies, are in fact provisional and dynamic [5]. Facets and facet analysis are attributed to Ranganathan, who developed the system decades ago; however, his system of colon classification did not become popular until recently, when digital objects could be saved in multiple places, contrary to the previous organization within the physical world of libraries, where one book had to have exactly one place on a shelf [5]. Facets are the predecessor of the semantic taxonomies used today, and they allow post-coordination [6]. Facets require neither a strong theory as a backbone nor complete knowledge; this makes facets useful for new and emerging fields, or fields that are changing [5]. Faceted classifications are ideal for working with the concept of metadata, because facets provide structured information on a piece of content. Facets work best when the main organization scheme of the facets is transparent and well understood by users [6]. No more than seven facets should be included in a faceted classification; otherwise, users are not able to cognitively comprehend and manipulate the facets [6]. Facets do not work well where the base classification is not well understood or cannot easily be observed or predicted, for instance, when specialist knowledge is presented to general users.
Additional Types. Lambe [6] additionally lists polyhierarchies and system maps as types of classifications. Polyhierarchies are essentially multiple hierarchies linked together at the top level. The linking allows multiple assignment of one entity, thus essentially breaking the stringent rules of hierarchy. System maps are visual displays of either lists or trees, providing a richer context for the knowledge being mapped.
These two types are ignored in the further discussion of this article, as they represent specializations of the other introduced types. Folksonomies are a more recent type of classification that involves the socially exposed personal tagging of objects [9]. Folksonomies often result in high ambiguity in the collective vocabulary as well as low precision, especially when the number of participants is not large and diverse enough, and when the number of content objects being tagged is not very large [6]. We will disregard folksonomies here.
2.2 Reflecting on the Maturity of the Knowledge Domain “Teaching Methods”
As the above descriptions have shown, certain classification types, like hierarchies, work best with knowledge domains that have attained maturity, while others, like lists, work best with small knowledge domains, and facets work well for changing knowledge fields. This section serves to reflect on the presumed state of maturity of the knowledge domain teaching methods. The reflection takes two perspectives: first, we regard the teaching method knowledge domain as an embedded part of the knowledge domain educational science; second, we regard teaching methods as a stand-alone knowledge domain. Please note that the aspects chosen for this reflection do not provide a complete description of the knowledge domains. Teaching methods can be considered as belonging to the educational science knowledge domain. Educational science features multiple learning theories,
pedagogical frameworks, and instructional design models, which exist in parallel and at times endorse competing positions. There appears to be a need to map these different frameworks, models, and vocabularies in order to compare, contrast, and identify relationships between them (cp. recent initiatives described in [10] [11]). Although educational science aims to provide fundamental concepts that translate theoretical assumptions into sound practical implications for teaching, the provision of such concepts has not been achieved [12] [13]. The educational science knowledge domain lacks a common consensual framework and may thus be regarded as complex and ambiguous. Focusing just on the knowledge domain teaching methods, we recognize a range of terms used for “teaching method”. Other terms used to describe concepts similar to teaching methods include, but are not limited to, models of teaching [14], patterns [15], scripts [16], and pedagogical, learning or educational scenarios, e.g. [17]. This diversity in terminology makes finding common ground difficult, as noted by Beetham [18], who mentioned instructors’ lack of a common terminology when talking about their teaching practice. Beyond the diversity in terminology, uncertainty is further increased by the differences between individual teaching situations, which influence the decision whether a teaching method is appropriate or not. For instance, the adequacy of a teaching method, no matter how theoretically backed, depends on the persons interpreting, modifying, and implementing the teaching method, as well as on the learners who participate during implementation. This creates a complex setup of often unknown variables and unpredictable factors, impeding common understanding and interpretation.
Further, practical implications of choosing certain teaching methods over others have only been presented as vague guidelines, for instance, by loosely connecting learners’ knowledge levels and the task to be learned to the types of instructional strategies promoted by the different learning theories [19]. This leaves teaching methods as a knowledge domain in a fuzzy state with an ambiguous character. This short reflection suggests that the knowledge domain “teaching methods”, both as a stand-alone knowledge domain and as part of the educational science knowledge domain, does not yet provide sufficient consensus to support classification types that rely on firm theories and models, such as hierarchies or trees.
2.3 Former Classifications’ Approaches to the Knowledge Domains Educational Science and Teaching Methods
This section gives an additional view of the classifications included in the literature review described in section 1. This time, the focus is placed on the types of classifications used and on how the classifications attempted to structure the knowledge domain. The purposes of building a classification varied widely, from articulating the nature and scope of different learning designs [3], to providing search mechanisms in a repository for learning objects [20], guiding teaching practitioners through decision-making [21], classifying research in instruction and learning [22], establishing connections between theory and practice of teaching [23], and classifying instructional methods [24], to name a few. The types of classifications employed for these purposes are listed below.
Types of Classifications Used. The classifications of the literature review [4] were sorted according to the employed type of classification as introduced in section 2.1 above. Overall, there were 35 counts, which are distributed as shown in Table 1. Of the five main classification types introduced, previous classifications could be attributed to four types. Trees were the most popular classification type used. Of the 20 classifications that represent trees, fourteen featured just a single level of division (the entire tree was represented in the top level). For instance, Ramsden [25] distinguishes three theories of teaching at the top level. This single-level use is striking, as trees allow structuring according to multiple relations, yet only a few trees took advantage of this feature. Two classifications (Fuhrmann & Weck [24], and Currier [26]) went beyond the recommended maximum depth of three levels, making the tree hard to navigate. Matrices were also popular. The most prominent representative of this classification type is likely the revised version of Bloom et al.’s taxonomy of educational objectives, namely the Anderson & Krathwohl taxonomy for learning, teaching, and assessing [27]. Most classifications in this category used a three-dimensional representation, while one classification opted for the unusual number of four dimensions [28].

Table 1. Distribution of classification types in the previous literature review (total count = 35)

Type of Classification   Count
List                         3
Tree                        20
Hierarchy                    0
Matrix                       7
Facet                        5
Lists were not so common; however, it is worth mentioning that all classifications of the type list grouped teaching methods. These lists are Flechsig’s twenty didactic models [23], the GEM teaching method controlled vocabulary1, and the list of teaching methods by Sader et al. [29]. Lists of teaching methods can be found quite often on the Internet (cp. the Glossary of Instructional Strategies2, listing 988 strategies alphabetically), i.e. lists seem to be popular for organizing teaching methods. That this type of classification is chosen especially often for teaching methods suggests that classification development for teaching methods is in its beginning stages, as lists represent the basic building blocks of classifications. Since the knowledge domain of teaching methods cannot be considered small, the condition under which lists are an adequate type of classification, this in turn suggests moving to more sophisticated classification types for teaching methods. None of the classifications used a hierarchical classification structure. One of the reasons for this could be that neither educational science nor teaching methods are
1 http://www.thegateway.org/about/documentation/gem-controlled-vocabularies/vocabulary-teaching-methods
2 http://glossary.plasmalink.com/glossary.html
knowledge domains that are well structured, a prerequisite for choosing this classification type. Another reason could be that these knowledge domains comprise predominantly mental concepts. Mental concepts are not necessarily best organized in a hierarchy, which allows the use of only one organizational principle [6]. Facets were also used to structure the knowledge domains examined here. With facet classifications, the choice of the facets is most important [5]. In the following sub-section, special focus is placed on how previous classifications attempted to choose facets to structure the knowledge domains.
Inconsistencies and Gaps in Previous Classifications. The above reflection on the knowledge domain teaching methods and the related knowledge domain educational science has shown that both domains can be regarded as complex and ambiguous. Despite this, the majority of the reviewed classifications chose the tree as their method of organizing knowledge. The appeal of trees as a classification type is apparent: trees are pragmatic, and users can easily relate to a tree’s organizing principles. However, the knowledge for building a tree must be known in advance [6]. A tree is built using a priori rules and cannot be changed after it has been established. It is not convincing to use a tree classification when the knowledge domain is changing and possesses complexity as well as ambiguity. Matrices, the second most common type of classification used, work best for structuring a knowledge domain when there is a consensual framework with a common vocabulary in place, and when a consistent description of the domain can be achieved with two or three facets. Again, neither of the two knowledge domains shows signs that a consensual framework or vocabulary is in place.
Only for particular elements of the knowledge domain, such as educational objectives, do matrices seem useful, as the knowledge on such elements is confined and thus more likely to be agreed upon. Where classifications had used empirical investigations as their foundation, the structuring focused exclusively on particular elements of teaching, e.g. Brown et al.’s lecturing styles [30] or Anderson & Krathwohl’s increasing levels of learners’ cognitive processes [27]. None of the classifications used empirical investigations for more complex entities such as teaching methods. Beyond empirical investigations, classifications were also built on the authors’ own understanding, accumulated through long experience in the domain of educational science, such as Farnham-Diggory’s paradigms of knowledge and instruction [22] and Squires’ framework for teaching [31]. These classifications feature strong use of specialized expert language, which is not easily accessible to an outside community or to general users. Five classifications used facets, which seem more appropriate for the type of knowledge domain being structured. The choice of facets, however, often appears arbitrary, as it was hardly ever documented. For example, Reeves set up fourteen dimensions of computer-based education [32]. His inclusion of facets in the classification is hard to retrace. Additionally, Reeves’ fourteen dimensions are double the recommended maximum number of seven facets, making his classification hard to comprehend and navigate. A more cognizant choice of facets may be attributed to Reigeluth & Moore [33], who established five dimensions to compare and contrast instructional theories (although the authors did not explain the process of choosing
facets). The dimensions, such as type of learning and interactions for learning, are assumed to be recognizable at least to the intended target audience (readers of the book describing instructional theories). As was shown in the previous literature review, a good number of classification developers altogether failed to make the process of creating the classification transparent. From the explanations given, however, it appears that none of the classifications involved the target users of the classification in the development. Test users sometimes evaluated a classification after it had been set up (e.g. Carey et al. [20] and Currier [26]), but future users were never part of the actual development.
Final Remarks. The classifications analyzed in the literature review structured entities of the knowledge domains educational science and teaching methods. The most frequently used classification type was the tree, a type that uses a priori knowledge and rules, and does not allow multiple representations or post-coordination of items. Because classifications were often developed by experts of the knowledge domains, the resulting classifications feature expert language, which is not suitable for all purposes. Furthermore, the developers focused solely on structuring the content, without involving the future users and their perspectives in the classification development. Last but not least, teaching methods, whose classification is the focus of this article, were explicitly covered in only a small number of the classifications and were often organized in lists, suggesting a need to move to more sophisticated structuring.
Two goals are thus set in reaction to these findings: (1) in order to solve the problem of expert language represented in the classification, a user-driven method should be applied to classification development, and (2) the process of establishing this classification should be made transparent, allowing insight for other initiatives. A new approach to developing a classification for teaching methods may thus be to socially negotiate a classification with its eventual users.
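To make the faceted approach described in section 2.1 concrete, the following minimal Python sketch illustrates post-coordination: any subset of facets can be combined at query time, without a fixed a priori tree. The facet names ("social_form", "learning_outcome", "media") and the teaching-method records are hypothetical illustrations, not data from the literature review or the case study.

```python
# Hypothetical teaching-method records, each described by orthogonal facets.
records = [
    {"title": "Jigsaw seminar",
     "facets": {"social_form": "group", "learning_outcome": "application",
                "media": "face-to-face"}},
    {"title": "Online peer review",
     "facets": {"social_form": "pairs", "learning_outcome": "evaluation",
                "media": "online"}},
    {"title": "Interactive lecture",
     "facets": {"social_form": "plenary", "learning_outcome": "comprehension",
                "media": "face-to-face"}},
]

def select(records, **criteria):
    """Post-coordination: combine any subset of facet values at query time."""
    return [r["title"] for r in records
            if all(r["facets"].get(f) == v for f, v in criteria.items())]

# One facet alone, or several facets combined, yield valid queries;
# no pre-built tree of categories is needed.
print(select(records, media="face-to-face"))
# → ['Jigsaw seminar', 'Interactive lecture']
print(select(records, social_form="group", media="face-to-face"))
# → ['Jigsaw seminar']
```

Each facet here is a flat list of values, but, as noted above, a facet could itself be represented as a tree without changing the query mechanism.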
3 Case Study at the University of Vienna
The results of the previous discussion will now be used to demonstrate how a user-driven classification development is approached in a specific use case. The purpose of including this case study is to make the initial phases of the classification development process transparent. The case study is situated at the University of Vienna, where a classification for teaching methods is currently being developed. The University of Vienna is Austria’s largest university, with currently 72,000 students enrolled and 6,200 scientific personnel employed. At this point, the university is installing a digital asset management system called Phaidra3. The contents of this system will be searchable and visible to anyone, yet only university-affiliated persons (students and employees) can upload into the repository. In addition to learning materials and other content objects, Phaidra is also planned to store teaching method documentations and publications, as well as units of learning. In order to organize these types of assets, a classification is to be developed.
3 Phaidra is an acronym for Permanent Hosting, Archiving and Indexing of Digital Resources and Assets, https://phaidra.univie.ac.at/ (in German).
The following section describes the initial user needs analysis that supplied information on the classification’s purpose and scope, enabling us to decide on a specific type of classification to be developed.
3.1 Method
The underlying methodology for this case study stems from Lambe [6]. We chose this methodology of classification development because it is highly user-oriented and specifically aims at bringing together stakeholders from different parts of an organization such as the University of Vienna. For the analysis portrayed in this article, only the portion of the methodology is presented that is relevant to identifying, together with stakeholders, the purposes that the classification will serve and the classification’s scope. Later stages of the methodology involve the decision on a specific design approach (how users will be involved during the classification’s development) and the actual development of the classification, including series of testing and validation. The type of classification to be developed must be chosen based on three decisive inputs: the nature and maturity of the knowledge domain, the needs of the users in relation to their tasks in the work environment, and the type of technological system in which the classification will be used [6]. The first step to obtain this information was a brainstorming session with the person who initially requested the development of a classification, to determine the key stakeholders of the organization, i.e. who will benefit from the classification, as well as their activities and tasks within the organization. Stakeholders and their activities were arranged in a concept map, along with the resources that play a role in these activities. The concept map, and its subsequently updated versions, served as the basis for communication with the stakeholders.
At the University of Vienna, the following four main stakeholders, who have a presumed interest in the classification of teaching methods, were identified in the initial brainstorming:
1. faculties/departments, along with study program leads
2. university instructors
3. (formally established) subject-specific didactics4, who educate (future) school subject teachers, for instance, in chemistry or biology
4. the Center for Teaching and Learning, which supports university instructors with counseling in the use of teaching methods and the use of technology in teaching.
The concept map constructed in the initial meeting was then discussed with representatives of the main stakeholder groups in individual sessions lasting between 45 and 90 minutes. During the interviews, differences in understanding were recorded directly in the concept map, along with any comments the stakeholders had. Cues regarding the purpose and scope of the needed classification were filtered from these interviews. The adjusted concept maps were later merged to create a coherent view representing all stakeholders.
4 The German equivalent term is “Fachdidaktik”, usually relating to one specific natural science, for which subject-specific teaching methods are researched and then recommended so that the subject can be taught well at schools.
We conducted twelve interviews with stakeholder representatives, distributed across the stakeholder groups as follows: 2 faculty/study program representatives, 5 university instructors, 2 subject-specific didactics representatives (one each for biology and chemistry), and 3 Center for Teaching and Learning representatives.
3.2 Interview Results
Purposes of the Classification. Study program development teams form for the duration of a curriculum development project; once the curriculum is complete, the team ceases to exist. If the documentation produced by these teams could be preserved in the repository, future curriculum development teams could benefit from their knowledge, e.g. regarding how learning outcomes were formulated and how teaching methods were assigned to learning outcomes and modules. Often, the connection between curriculum and teaching methods is not apparent to instructors, who have to translate the curriculum into the specific teaching methods used within courses. A faculty representative noted that instructors would have an easier time if the connections between teaching practice and the curriculum were documented, for instance, by explicitly capturing the experiences of the curriculum development teams and the reasons behind curricular setups. Center for Teaching and Learning representatives stated that a classification of teaching methods could make course creation more efficient, because instructors could then reuse methods that had previously been used for similar courses. New instructors especially would benefit from such an installation, as they often require guidance in teaching method choice and application. The formally established subject-specific didactics at the University offer continuing education courses in biology, chemistry and physics.
School teachers learn about teaching methods in these courses, which must rest on convincing instructional concepts in order to convey credibility and to allow participants to integrate the taught concepts into their own teaching. One goal of this stakeholder group is to document and communicate the successful course concepts they implement. Another is to move away from strict, subject-bound didactics for each single subject towards joint teaching methods that invite cross-subject use. A sensible organization of teaching methods and associated content in a repository would support this goal.

Common Issues and Themes (Scope). Several stakeholders at the University of Vienna produce similar information that could benefit other groups within the University. For instance, instructors who teach similar courses could exchange items related to their teaching, such as course concepts or materials. Study program development teams could communicate the outcomes of discussions on teaching methods, so that other study program development teams and instructors can benefit from them. An emerging theme at the University is also that different subjects wish to exchange teaching-related knowledge. Not only the established subject-specific didactics (biology, chemistry, etc.) have mentioned this; other departments also seek exchange between departments and have established a “didactics research platform” to organize their efforts. The goal of this platform is to cross subject boundaries in order to improve teaching.
488
S. Neumann, P. Oberhuemer, and R. Koper
All stakeholder groups reported that communication about teaching methods is nearly “non-existent”. Even when courses on the same topic are taught in parallel, the instructors of these courses do not necessarily communicate about their teaching approaches. Stakeholders recognized that instructors possess a great deal of implicit knowledge, even if they are not necessarily keen or able to talk about teaching. This knowledge is, however, expressed in diverse documentation, such as instructional concepts prepared for lectures and seminars, course descriptions (which include learning outcomes, teaching methods, and references), and the implementation of teaching methods within the learning management system. Instructors and subject-specific didactics further author publications about teaching method use. All these resources could be stored and organized within the repository to provide systematic access to a wider community – within and external to the University.

3.3 Translating Cues into the Type of Classification to Be Developed

Needs of Stakeholders. Lambe [6] provides guidelines for translating cues from stakeholders into types of purposes and suggested classification types. An overview of these guidelines is shown in Table 2.

Table 2. Interview cues, purpose of classification, and type of classification [6], pp. 137 & 158
Sample cue: “We have a clear workflow that everyone follows.”
Purpose of classification: Structure and organize
Classification type needed: Trees

Sample cue: “We share folders but they are a mess; everyone does their own thing, and we can’t find the information we need.”
Purpose of classification: Establish common ground
Classification type needed: Trees

Sample cue: “Different divisions replicate the same information; they don’t know what exists in other parts of the organization. If we shared we could be more effective.”
Purpose of classification: Span boundaries between groups
Classification type needed: Facets

Sample cue: “This is a new area for us. Our domain is changing too quickly and our specialists don’t agree.”
Purpose of classification: Help in sense-making, or aiding the discovery of risk and opportunity
Classification type needed: “Disposable” frameworks, matrix, maps
Although stakeholders may give conflicting cues that match several categories, recurring cues appearing across stakeholder groups indicate the likely purpose and type of the classification. From the common themes at the University of Vienna we identified the purpose “spanning boundaries between groups”, because many comments focused on helping different groups share their knowledge. Since the purpose of the classification is to span boundaries, developing a faceted classification is recommended [6]. This type of classification is best suited to providing the multiple representations needed to span several boundaries.

Knowledge Domain. The choice of facets is further supported by the fact that teaching methods represent a knowledge domain that lacks consensual frameworks and
exhibits ambiguity. For such a domain, a faceted classification is recommended, as it does not require strong underlying theories or models [6].

Technology. The University of Vienna’s repository Phaidra features a sophisticated metadata system that governs how entities in the repository are retrieved and organized. For this type of technological environment, classifications can be larger and more complex, such as facets [6]. Overall, the information collected during the analysis suggests that the University of Vienna should develop a faceted classification for use in its repository. One problem that might arise from this decision is that the user communities represent different levels of expertise: subject-specific didactics have expert knowledge in educational science, while university instructors, who are often not formally educated in pedagogy, may not use this highly specialized expert language. A thesaurus that maps variant terminologies may therefore be used in the initial phases to accommodate different user groups within the same faceted classification.
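To make the faceted approach and the thesaurus layer concrete, here is a minimal sketch of how a repository item tagged with free-form terms could be indexed under several independent facets. All facet names, controlled terms, and synonym mappings below are invented for illustration; they are not the classification actually developed at the University of Vienna.

```python
# Minimal sketch of a faceted classification with a thesaurus layer.
# Facets, terms, and synonyms are invented examples.

FACETS = {
    "social_form": {"plenary", "group work", "pair work"},
    "subject": {"biology", "chemistry", "physics"},
    "resource_type": {"course concept", "material", "publication"},
}

# Thesaurus: everyday instructor terms -> controlled (facet, term) pairs.
THESAURUS = {
    "lecture": ("social_form", "plenary"),
    "teamwork": ("social_form", "group work"),
    "handout": ("resource_type", "material"),
}

def classify(tags):
    """Map free-form tags onto (facet, term) pairs, going through the
    thesaurus when a tag is not itself a controlled term."""
    result = set()
    for tag in tags:
        for facet, terms in FACETS.items():
            if tag in terms:
                result.add((facet, tag))
        if tag in THESAURUS:
            result.add(THESAURUS[tag])
    return result

print(classify(["lecture", "biology", "handout"]))
```

Because each facet is independent, an item is described by one term per relevant facet rather than by a single position in a tree, which is what lets the same classification serve instructors, didactics experts, and curriculum teams at once.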
4 Summary and Outlook

This article presented a new, user-driven method for developing a classification of teaching methods. This method was chosen because none of the earlier classifications took user perspectives into account during classification development. The user-driven method was then applied within the initial development phase at the University of Vienna, where specific stakeholder needs regarding the classification of teaching methods were identified. The results showed that the needs of the users are far removed from theory-driven frameworks and instead relate to common teaching tasks at the University. We made transparent how a classification type was chosen for development based on user needs, the technology in use, and the characteristics of the knowledge domain. The next steps are to refine the purpose of the classification development. Following that, we will design the approach to the classification’s development. Through iterative trial and error, the classification will then be built, tested, and validated with the stakeholders at the University of Vienna.

Acknowledgments. This article was written in the context of the research and development integrated project PROLIX, which is co-funded by the European Commission in the Sixth Framework Programme “Information Society Technologies”.
References

1. Currier, S., Campbell, L.M., Beetham, H.: Pedagogical Vocabularies Review. Joint Information Systems Committee, Centre for Educational Technology Interoperability Standards (2005)
2. Koper, R., Olivier, B.: Representing the Learning Design of Units of Learning. Educational Technology & Society 7, 97–111 (2004)
3. Oliver, R., Harper, B., Wills, S., Agostinho, S., Hedberg, J.: Describing ICT-based learning designs that promote quality learning outcomes. In: Beetham, H., Sharpe, R. (eds.) Rethinking Pedagogy for a Digital Age: Designing and delivering e-learning, pp. 64–80. Routledge, London (2007)
4. Neumann, S., Koper, R.: Instructional Method Classifications Lack User Language and Orientation (manuscript in preparation) (2009)
5. Kwasnik, B.H.: The Role of Classification in Knowledge Representation and Discovery. Library Trends 48, 22–47 (1999)
6. Lambe, P.: Organising Knowledge: Taxonomies, Knowledge and Organisational Effectiveness. Chandos, Oxford (2007)
7. Lakoff, G.: Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. University of Chicago Press, Chicago (1987)
8. Blackwelder, R.E.: Taxonomy: A Text and Reference Book. Wiley, New York (1967)
9. Vander Wal, T.: Folksonomy Definition and Wikipedia (2005), http://www.vanderwal.net/random/entrysel.php?blog=1750
10. Mayes, T., de Freitas, S.: Stage 2: Review of e-learning theories, frameworks and models. JISC e-Learning Models Desk Study (2005)
11. Conole, G., Littlejohn, A., Falconer, I., Jeffery, A.: Pedagogical review of learning activities and use cases. LADIE project report (2005)
12. Carr, W.: Education Without Theory. British Journal of Educational Studies 54, 136–159 (2006)
13. Levin, J.R., O’Donnell, A.M.: What to do about Educational Research’s Credibility Gaps? Issues in Education 5, 177–229 (1999)
14. Joyce, B., Weil, M., Calhoun, E.: Models of Teaching, 7th edn. Pearson, Boston (2004)
15. Derntl, M.: Patterns for Person-Centered e-Learning. University of Vienna, Wien (2005)
16. Dillenbourg, P.: Over-scripting CSCL: The risks of blending collaborative learning with instructional design. In: Kirschner, P.A. (ed.) Three Worlds of CSCL: Can We Support CSCL, pp. 61–91. Open Universiteit Nederland, Heerlen (2002)
17. Marty, J.-C., Heraud, J.-M., Carron, T., France, L.: Matching the Performed Activity on an Educational Platform with a Recommended Pedagogical Scenario: A Multi-Source Approach. Journal of Interactive Learning Research 18, 267–283 (2007)
18. Beetham, H.: Review: developing e-Learning Models for the JISC Practitioner Communities, Version 2.1 (2004), http://www.jisc.ac.uk/uploaded_documents/Review%20models.doc
19. Ertmer, P.A., Newby, T.J.: Behaviorism, Cognitivism, Constructivism: Comparing Critical Features from an Instructional Design Perspective. Performance Improvement Quarterly 6, 50–72 (1993)
20. Carey, T., Swallow, J., Oldfield, W.: Educational Rationale Metadata for Learning Objects. Canadian Journal of Learning and Technology 28 (2002)
21. Conole, G.: Describing learning activities: Tools and resources to guide practice. In: Beetham, H., Sharpe, R. (eds.) Rethinking Pedagogy for a Digital Age: Designing and delivering e-learning, pp. 81–91. Routledge, London (2007)
22. Farnham-Diggory, S.: Paradigms of Knowledge and Instruction. Review of Educational Research 64, 463–477 (1994)
23. Flechsig, K.-H.: Der Göttinger Katalog Didaktischer Modelle: Theoretische und methodologische Grundlagen. Zentrum für didaktische Studien, Göttingen (1983)
24. Fuhrmann, E., Weck, H.: Forschungsproblem Unterrichtsmethoden. Volk und Wissen, Berlin (1976)
25. Ramsden, P.: Learning to Teach in Higher Education. Routledge, London (1992)
26. Currier, S.: SeSDL Taxonomy Evaluation Report. University of Strathclyde, Glasgow (2001)
27. Anderson, L.W., Krathwohl, D.R.: A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives, Complete edn. Addison Wesley Longman, New York (2001)
28. Kyllonen, P.C., Shute, V.J.: Taxonomy of Learning Skills. Air Force Human Resources Laboratory, Brooks, TX (1988)
29. Sader, M., Clemens-Lodde, B., Keil-Specht, H., Weingarten, A.: Kleine Fibel zum Hochschulunterricht: Überlegungen, Ratschläge, Modelle, 2nd edn. Beck, München (1971)
30. Brown, G.A., Bakhtar, M., Youngman, M.B.: Toward a Typology of Lecturing Styles. British Journal of Educational Psychology 54, 93–100 (1984)
31. Squires, G.: A Framework for Teaching. British Journal of Educational Studies 52, 342–358 (2004)
32. Reeves, T.: Evaluating What Really Matters in Computer-Based Education (1997), http://www.educationau.edu.au/jahia/Jahia/pid/179
33. Reigeluth, C.M., Moore, J.: Cognitive Education and the Cognitive Domain. In: Reigeluth, C.M. (ed.) Instructional-Design Theories and Models: A New Paradigm of Instructional Theory, vol. II, pp. 51–68. Lawrence Erlbaum, Mahwah (1999)
Generating Educational Interactive Stories in Computer Role-Playing Games

Marko Divéky and Mária Bieliková

Institute of Informatics and Software Engineering, Faculty of Informatics and Information Technologies, Slovak University of Technology, Ilkovičova 3, 842 16 Bratislava, Slovakia
[email protected],
[email protected]
Abstract. The aim of interactive storytelling is to tell stories with the use of computers in a new and interactive way that immerses the reader inside the story as the protagonist and enables him to drive its course in any desired direction. Interactive storytelling thus transforms conventional stories from static structures into dynamic and adaptive storyworlds. In this paper, we describe an innovative approach to interactive storytelling that utilizes computer role-playing games, today’s most popular genre of computer games, as the storytelling medium in order to procedurally generate educational interactive stories.

Keywords: Interactive storytelling, story generation, educational stories, computer games, role-playing games.
1 Introduction

Stories and the art of storytelling have played an integral role in our lives ever since the earliest days of language. Historians have found the first records of storytelling in ancient cultures and their languages, in which stories served as the vehicle by which cultural knowledge was communicated from one generation to the next [1]. Even in today’s modern times, such utilitarian use of stories has endured, and their educational potential is utilized in many different areas, e.g., in business to spread knowledge amongst employees in order to help them become more productive and collaborate with one another [2]. Stories are also used in many other ways: in policy, in process, in pedagogy, in critique, and as a foundation and catalyst for change [3]. The reason why storytelling proves to be an effective medium for educating is that our minds are programmed much more for stories than for abstract facts [4]. Many aspects of stories have changed since the very first records of storytelling in ancient times. However, even in today’s modern world, the fundamental idea behind conventional stories remains unaltered. Suppose that the storyteller seeks to communicate some truth or information (a principle) to a desired audience. Instead of just telling the principle, the storyteller translates it into an instantiation (a story), then communicates the story to the audience, which in turn translates the instantiation back into the principle [1], as depicted in Fig. 1.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 492–506, 2009. © Springer-Verlag Berlin Heidelberg 2009
Generating Educational Interactive Stories in Computer Role-Playing Games
493
Fig. 1. The process of telling a conventional story
Interactive storytelling differs from conventional stories in that the process of transforming the principle into the instance (story) is delegated to the computer. It is best described as “a narrative genre on computer where the user is one main character in the story and the other characters and events are automated through a program written by an author. Being a character implies choosing all narrative actions for this character” [5]. In interactive storytelling, the computer transforms the principle into a storyworld, which operates on rules rather than on predefined events (see Fig. 2). A single playing of a storyworld generates a single story. In other words, when a player goes through a storyworld, he produces a linear sequence of events that makes up a story. Different playings of the storyworld can yield many different stories, but all of them share one common principle [1], as depicted in Fig. 2.
Fig. 2. The process of playing through an interactive storyworld
In this paper, we propose an approach that enables the generation of educational interactive stories in computer role-playing games. Researchers in the field of interactive storytelling often disregard computer games as a suitable storytelling medium [1]. Still, some researchers see them as the ideal form for storytelling given their practically unlimited degree of interactivity and visual attractiveness [6]. Among the many diverse genres of computer games available today, role-playing games (RPGs) have the most detailed and involving storylines, making them the most appropriate genre for storytelling [7]. Moreover, computer role-playing games originally derived from tabletop role-playing games and are considered one of today’s most popular genres of computer games [7].
2 Related Work

Despite the large amount of work that has already been done in the field of interactive storytelling, there have been only a few working solutions that found practical use beyond being proof-of-concept demonstrations. The most notable solutions can be divided into the three categories described below:
494
M. Divéky and M. Bieliková
─ Systems that generate complex, but only text-based interactive stories, e.g., Storytron [8] (formerly Erasmatron), developed by Chris Crawford since 1991. Storytron utilizes a drama manager agent to direct the generated stories, which are, however, very cumbersome to define and have only text-based visualization.
─ Systems that generate visually attractive, but not interactive stories, e.g., I-Storytelling [9], developed by Marc Cavazza and Fred Charles at the University of Teesside. Their solution utilizes Hierarchical Task Network (HTN) planning and Heuristic Search Planning (HSP) to generate stories via the autonomous behavior of character agents. Users, however, have very limited or no means of interacting with and directly influencing the generated stories.
─ Systems that generate interactive stories, but only with a single dramatic conflict, e.g., Façade [10], developed by Michael Mateas and Andrew Stern. Façade uses Natural Language Processing (NLP) to parse user-inputted actions and visualizes stories with a custom proprietary 3D graphics engine. Façade solves almost all problems related to interactive storytelling; however, it is able to generate only a single dramatic scene.

Consequently, our goal is to devise a new approach to interactive storytelling that not only builds on present-day techniques and formalisms in a way that eliminates the above-mentioned drawbacks of existing interactive storytelling solutions, but also enables the generation of educational interactive stories in computer role-playing games.
3 Overview of the Proposed Concept

The aim of the proposed concept is to programmatically generate educational interactive stories with computer role-playing games as their medium, thereby combining the dynamic and enthralling storyworlds created by interactive storytelling with the visual appearance, gameplay and popularity of computer role-playing games. The presented concept can be broken down into the following three logical layers:

─ the Character Behavior Layer describes interpersonal relationships and conditional reasoning of characters,
─ the Action Planning Layer realizes planning and replanning of narrative actions,
─ the Visualization Layer utilizes computer role-playing games for story visualization.

All generated interactive stories initiate in the topmost logical layer, the Character Behavior Layer, since all stories are about people, even though references to them are often indirect or symbolic [1]. The behavior of characters results in creating plans and planning actions that move the story forward. The middle layer, the Action Planning Layer, handles all the planning and replanning. All created plans eventually break down into actions that are visualized by the lowest logical layer, the Visualization Layer. All three logical layers operate on easy-to-define rules and data structures, which are described in detail in the following sections.
3.1 Visualization Layer

The Visualization Layer provides graphical visualization of the generated interactive stories to the players. It uses the concept of computer role-playing games as the visualization and storytelling medium. The aspects of computer role-playing games most important for the scope of this paper are described below.

Themes. Games based on the role-playing genre are most often set in a fictional fantasy world closely related to classic mythology, or in a science-fiction world set somewhere in the future. Historical and modern themes are also common [11].

Avatars. In role-playing games, a player controls one in-game character, called the avatar, and uses him as an instrument for interacting with the game world [12].

Character Development. Besides having the option to fully customize the appearance of his in-game character, the player is allowed to choose various attributes, skills, traits and special abilities that his avatar will possess. These are given to players as rewards for overcoming challenges and achieving goals, most commonly for completing quests. Character development plays, together with stories, a key role in today’s computer role-playing games.

Quests. A quest in role-playing games can be defined as a journey across the game world in which the player collects items and talks to non-player characters “in order to overcome challenges and achieve a meaningful goal” [13]. Quests often require the player to find specific items that he needs to correctly use or combine in order to solve a particular task, and/or require the player to choose the correct answer to a certain question from a number of given answers [14]. Upon solving a quest, the player is often presented with a reward, which can take many forms – ranging from a valuable item to a new skill, trait or ability for the player’s avatar. Many quests are optional, allowing for freedom of choice in defining the player’s goals.
Moreover, a set of quests may be mutually exclusive with another set, forcing the player to choose which set of quests he will solve, keeping in mind the possible long-term effects these quests will have on the game world. Some quests can be solved in more than one way and thus bring non-linearity into the game. What is noteworthy and important to realize with regard to the focus of this work is that quests are the fundamental structure by which the player moves the storyline forward in computer role-playing games. In other words, a quest is a conceptual bridge between the open structure of role-playing games and the closed structure of stories, making it an ideal vehicle for interactive storytelling in computer role-playing games. Consequently, it is possible to use computer role-playing games as a medium for interactive storytelling by dynamically generating non-linear quests.

Non-player Characters. Role-playing game worlds are populated by non-player characters (NPCs) that cannot be controlled by the player. Instead, their behavior is scripted by the game designers and executed by the game engine. Players interact with non-player characters through dialogue. Most role-playing games feature branching dialogue (or dialogue trees). As a result, when talking to a non-player character, the player may choose from a list of dialogue options, where each choice often results in a different reaction. Such choices may affect the player’s course of the game, as well as later conversations with non-player characters.
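The branching dialogue described above can be sketched as a tree whose nodes carry NPC lines and whose edges are the player’s dialogue options. The node names and dialogue texts below are invented for illustration; this is not taken from any particular game or from the authors’ system.

```python
# Sketch of a dialogue tree: each node maps to (NPC line, list of
# (player option, next node) pairs); a leaf node has no options.
# Node names and lines are invented for illustration.

DIALOGUE = {
    "greet": ("Can I help you?", [
        ("Ask about the quest", "quest"),
        ("Say goodbye", "bye"),
    ]),
    "quest": ("Bring me thyme syrup.", [
        ("Accept", "bye"),
        ("Refuse", "bye"),
    ]),
    "bye": ("Farewell.", []),
}

def run_dialogue(start, choices):
    """Walk the tree, picking the given option index at each node;
    return the sequence of NPC lines spoken."""
    node, spoken = start, []
    while True:
        text, options = DIALOGUE[node]
        spoken.append(text)
        if not options or not choices:
            return spoken
        _, node = options[choices.pop(0)]

print(run_dialogue("greet", [0, 0]))
```

Each branch taken can also carry side effects on relationships or quest state, which is how dialogue choices feed back into the story generation layers described later.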
Items, Containers and Objects. While playing role-playing games, quests require the player to find, collect and properly use various items scattered throughout the game world. Special items may be equipped on the player’s avatar, improving his abilities, skills or other attributes. All items can be either created from other items, or obtained from containers, i.e., objects that can hold or carry items. The player himself is an example of a container, since he can carry items in his inventory. Non-player characters and objects representing treasure chests are also examples of containers.

Atomic Actions. We have formalized computer role-playing games, from an interactive storytelling point of view, into a set of atomic actions with narrative impact that a player (or a non-player character) is able to commit inside the game world. Each atomic action has the following structure:

─ Name: A string concisely describing the atomic action.
─ Source: The type of an in-game element that can commit the atomic action, e.g., the player, or a non-player character.
─ Target: The type of an in-game element that this atomic action is committed upon, e.g., an object or a container.
─ Parameter: The type of an in-game element that the atomic action operates on. The presence of the parameter is optional.
─ Preconditions: A set of statements regarding the specified source, target and parameter defining the circumstances under which the atomic action can be committed in the game world.
─ Effects: A set of changes to the game world that are a consequence of the atomic action having been committed.

The typical set of atomic actions that describes a common computer role-playing game is shown in Table 1.

Table 1. Atomic actions that describe a typical computer role-playing game
Atomic Action¹ | Source | Target | Description
GIVE (Item) | Character | Container | The character specified as the source gives the item to a targeted container.
TAKE (Item) | Character | Container | The source character takes the item from the target container.
USE (Item) | Character | Element | The atomic action’s source character uses the item on the targeted element.
EQUIP () | Character | Item | The character specified as the source equips the targeted item.
WALK_TO () | Character | Element | The source character walks near the targeted element.
TALK_TO () | Character | Character | The atomic action’s source initiates a dialogue with the targeted character.

¹ The atomic actions are written in a shortened textual form with the following structure: NAME (parameter).
Generating Educational Interactive Stories in Computer Role-Playing Games
497
As an example, we elaborate the second atomic action mentioned in Table 1, which is defined as follows:

Name: TAKE
Source: Character
Target: Container
Parameter: Item
Preconditions: Target has parameter
Effects: Target has parameter (negation of the action’s precondition); Source has parameter
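To make the precondition/effect mechanics concrete, the TAKE action above can be sketched in code over a world state reduced to a set of true propositions. This is our own illustrative reading of the structure, not the authors’ engine; the proposition strings are invented placeholders.

```python
from dataclasses import dataclass

# Sketch of an atomic action over a world state modelled as a set of
# true propositions. An illustrative reading of the paper's structure,
# not the authors' implementation.

@dataclass
class AtomicAction:
    name: str
    preconditions: frozenset   # propositions that must hold
    add_effects: frozenset     # propositions made true
    del_effects: frozenset     # propositions made false (negated)

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        assert self.applicable(state), f"{self.name}: preconditions unmet"
        return (state - self.del_effects) | self.add_effects

# TAKE: the source takes the item (parameter) from the target container.
take = AtomicAction(
    name="TAKE",
    preconditions=frozenset({"target has item"}),
    add_effects=frozenset({"source has item"}),
    del_effects=frozenset({"target has item"}),
)

state = frozenset({"target has item"})
print(sorted(take.apply(state)))  # ['source has item']
```

Modelling effects as add/delete sets is what allows the planning layer described next to chain actions backwards by matching one action’s effects against another’s preconditions.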
3.2 Action Planning Layer

From a top-down perspective, the role of the middle logical layer is to transform character goals set by the Character Behavior Layer into plans consisting of actions that are to be executed and visualized by the bottommost logical layer – the Visualization Layer. From a bottom-up perspective, the Action Planning Layer is responsible for processing actions committed by the player and all non-player characters, as reported back from the Visualization Layer. Since this process happens on-the-fly while the player is playing a computer role-playing game, the layer creates new quests based on the player’s previous actions and seamlessly integrates them into the game, thus dynamically creating the interactive story perceived by the player. The Action Planning Layer utilizes Hierarchical Task Network planning in a similar way as described in [15]. Our algorithm searches for appropriate actions based on recursive matching of their effects with required preconditions. In other words, the planner finds all actions resulting in the required preconditions (see section 0). The Action Planning Layer operates on simple actions, complex actions and action bindings, all of which are described below.

Simple Actions. A simple action refines the usage of exactly one atomic or simple action by specializing its source, target or parameter, or by adding additional preconditions or effects. Unlike atomic actions, simple actions can alter attributes of characters and interpersonal relationships – features of the Character Behavior Layer. Every simple action consists of the following structure:

─ Name: A string concisely describing the simple action.
─ Base action: An atomic or simple action that this simple action refines.
─ Source: The base action’s source type or its subtype (see below).
─ Target: The base action’s target type or its subtype.
─ Parameter: The type of an in-game element that this simple action operates on. The parameter is equal to or a subtype of the base action’s parameter.
─ Preconditions: A set containing preconditions inherited from the base action plus additional preconditions related to this simple action.
─ Effects: A set containing effects inherited from the base action plus additional effects related to this simple action.
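The inheritance of preconditions and effects from the base action can be sketched as set union. The `refine` helper, the dict representation, and the USE/REPAIR proposition strings below are our own illustrative assumptions, not the paper’s formalism.

```python
# Sketch of simple-action refinement: a simple action starts from its
# base action's precondition/effect sets and appends its own. The dict
# representation and the proposition strings are invented for
# illustration.

def refine(base, name, extra_preconditions=(), extra_effects=()):
    """Build a simple action that inherits the base action's
    preconditions and effects and adds additional ones."""
    return {
        "name": name,
        "base": base["name"],
        "preconditions": base["preconditions"] | set(extra_preconditions),
        "effects": base["effects"] | set(extra_effects),
    }

# A stripped-down USE base action (propositions are placeholders).
use = {
    "name": "USE",
    "preconditions": {"source has parameter"},
    "effects": {"parameter used on target"},
}

# REPAIR refines USE: the target must be malfunctioning beforehand,
# and is no longer malfunctioning afterwards.
repair = refine(
    use, "REPAIR",
    extra_preconditions={"target is malfunctioning"},
    extra_effects={"target is not malfunctioning"},
)

print(sorted(repair["preconditions"]))
```

The inherited items correspond to the bracketed entries in the paper’s notation, while the extra entries are the refinement-specific ones.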
Below is an example of a simple action named REPAIR that refines the atomic action USE (mentioned in Table 1) and enables the player to repair a malfunctioning computer with a spare circuit board:

Name: REPAIR
Base action: USE
Source: Player (a character subtype)
Target: Computer (an object subtype, see below)
Parameter: Circuit board (an item subtype)
Preconditions²: [Source has parameter]; Target is “malfunctioning”
Effects²: [Source has parameter]; Target is “malfunctioning” (negated)
The second effect marks the target computer as “not malfunctioning”, since the computer is defined as an object subtype that has a “malfunctioning” property. An object instance can belong to multiple object subtypes, each defining custom properties that can be set on the object instance. Likewise, subtypes of all other core in-game elements (described in section 0), i.e., items, containers and characters, are also supported, with each subtype having its own custom properties.

Complex Actions. A complex action encloses multiple actions in a sequence, with atomic, simple or complex actions as its elements. Every complex action has the following structure:

─ Name: A string concisely describing the complex action.
─ Source: The type of an in-game element that is the source of all enclosed actions.
─ Targets: The types of in-game elements that are targets of the enclosed actions.
─ Parameters: A set of in-game elements that the enclosed actions operate on. Each parameter is identified by a unique name distinguishing it from the others.
─ Enclosed actions: A set containing atomic, simple or other complex actions that this complex action encloses.
─ Preconditions: A filtered³ set containing the preconditions of all enclosed actions plus additional preconditions related to this complex action.
─ Effects: A filtered set containing the effects of all enclosed actions plus additional effects related to this complex action.

Below is an example of a complex action, with which a non-player character unlocks a locked item for the player if the particular non-player character likes the player:
² The preconditions and effects inherited from the base action are enclosed in [square brackets].
³ The sets do not contain preconditions or effects that are created and at the same time annulled during the sequential execution of the actions enclosed by the complex action.
Name: UNLOCK_LOCKED_ITEM_FOR_PLAYER
Source: Non-player character
Targets: Player; Lockable item i (an item subtype)
Parameters: Lockable item i
Enclosed actions⁴:
  Source : TAKE (i) → Player
  Source : UNLOCK () → i
  Source : GIVE (i) → Player
Preconditions: [Player has i]; [i is “locked”]; Source “likes” player
Effects: [Player has i]; [i is “locked”] (negated)
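The filtering from footnote 3 (preconditions already established by an earlier enclosed action, and effects that exist only transiently within the sequence, are dropped) can be sketched with add/delete effect sets. This is an illustrative reconstruction, not the authors’ algorithm, and all proposition strings are invented.

```python
# Sketch of composing enclosed actions into a complex action's net
# preconditions and effects, including footnote 3's filtering of
# propositions created and annulled within the sequence.

def compose(actions):
    """Each enclosed action is (preconditions, adds, deletes) over
    propositions. Returns the complex action's net
    (preconditions, add-effects, delete-effects)."""
    net_pre, created, adds, dels = set(), set(), set(), set()
    for pre, add, dele in actions:
        # Preconditions not produced by an earlier enclosed action
        # must hold before the complex action starts.
        net_pre |= pre - adds
        created |= add
        adds = (adds - dele) | add
        dels = (dels - add) | dele
    # Drop deletions of propositions that existed only transiently.
    dels -= created - adds
    return net_pre, adds, dels

# UNLOCK_LOCKED_ITEM_FOR_PLAYER, roughly: the NPC takes the item from
# the player, unlocks it, and gives it back.
take   = ({"player has i"}, {"npc has i"}, {"player has i"})
unlock = ({"npc has i", "i locked"}, {"i unlocked"}, {"i locked"})
give   = ({"npc has i"}, {"player has i"}, {"npc has i"})

pre, adds, dels = compose([take, unlock, give])
print(pre)   # propositions the plan must establish up front
```

Running this yields net preconditions “player has i” and “i locked”, matching the bracketed preconditions above; the complex action’s own extra precondition (Source “likes” player) would simply be added on top. The NPC’s transient possession of the item is filtered out, as footnote 3 prescribes.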
Action Bindings. The purpose of optional action bindings is to explicitly permit or disallow bindings of atomic, simple, and complex actions to actual in-game entities. Action bindings therefore define what can or cannot be done with all defined story elements, and what exactly the player and all non-player characters can or cannot do. For example, the action bindings given below⁴ denote that the player can steal the item “thyme syrup” from the non-player character representing an evil doctor, and cannot steal it from a non-player character representing a good doctor:

─ [CAN] Player : STEAL ("Thyme syrup") → "Evil Doctor"
─ [CAN’T] Player : STEAL ("Thyme syrup") → "Good Doctor"

3.3 Character Behavior Layer

The Character Behavior Layer is responsible for generating the goals of all in-game characters, i.e. the player and all non-player characters. These goals are created according to the interpersonal relationships among the player and non-player characters by matching their existing values⁵ to a set of behavioral patterns (see below) and picking the resulting goals from the best matching patterns. All generated character goals are afterwards translated into plans by the Action Planning Layer.

Behavioral Patterns. A behavioral pattern defines the circumstances that lead to a change in the behavior of characters from a narrative point of view. The set of all behavioral patterns defines the conditional reasoning of all in-game characters. Each behavioral pattern has the following structure:
4 Each action is written in the form: Source : NAME (parameter(s)) → Target.
5 The storyteller sets the initial values of interpersonal relationships, if necessary. Otherwise, all relationships are initialized to their default values.
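The permit/deny semantics of action bindings could be realized, for instance, as an explicit rule list in which a deny rule always wins. The following sketch uses our own illustrative names (`Binding`, `is_action_allowed`) and adds a default-deny assumption for unlisted combinations that the paper does not state:

```python
# Hypothetical sketch of action bindings as [CAN]/[CAN'T] rules.
# Names and the default-deny policy are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Binding:
    allowed: bool          # True for [CAN], False for [CAN'T]
    source: str            # acting character
    action: str            # action name, e.g. "STEAL"
    parameter: str         # item the action operates on
    target: str            # character or entity acted upon

def is_action_allowed(bindings, source, action, parameter, target):
    """An explicit [CAN'T] takes precedence; unlisted combinations are denied."""
    allowed = False
    for b in bindings:
        if (b.source, b.action, b.parameter, b.target) == (source, action, parameter, target):
            if not b.allowed:
                return False   # explicit deny always wins
            allowed = True
    return allowed

bindings = [
    Binding(True,  "Player", "STEAL", "Thyme syrup", "Evil Doctor"),
    Binding(False, "Player", "STEAL", "Thyme syrup", "Good Doctor"),
]

print(is_action_allowed(bindings, "Player", "STEAL", "Thyme syrup", "Evil Doctor"))  # True
print(is_action_allowed(bindings, "Player", "STEAL", "Thyme syrup", "Good Doctor"))  # False
```

Keeping the deny rules separate from a general permission (rather than merging them) mirrors the paper's pairing of [CAN] and [CAN'T] statements.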
M. Divéky and M. Bieliková
─ Description: An optional text concisely describing the behavioral pattern.
─ Preconditions: A set of in-game elements, including their properties and the relationships among them. Also included are the preconditions of actions that represent goals. Undesirable preconditions may be explicitly excluded.
─ Goals: A set of goals describing the change in the characters' behavior as a consequence of the situation portrayed by the pattern's preconditions. Goals can be represented by actions of any type, or by changes in properties of in-game elements and in relationships among them. Each goal is bound to a character.
─ Effects: A set containing the effects of all identified goals. Undesirable effects may be explicitly excluded from the set.

Below is an example behavioral pattern that describes a situation in which John, the husband of Mary, cures his wife, who is sick with a respiratory infection, with thyme syrup that the player has given him:

Preconditions:
Non-player characters "John" and "Mary"
John is "husband" to Mary, Mary is "wife" to John
Mary is "sick with respiratory infection"
[…preconditions of the two actions that represent goals…]
Goals:
Player : GIVE ("Thyme syrup") → John
John : USE ("Thyme syrup") → Mary
Effects:
[…effects of the two actions that represent goals…] Mary is no longer "sick with respiratory infection"
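The matching step the Character Behavior Layer performs on such patterns can be sketched as a subset test of a pattern's preconditions against the storyworld's current facts. The fact-tuple encoding and all identifiers below are our own illustrative choices, not the paper's data format:

```python
# Illustrative sketch: a behavioral pattern fires when all of its precondition
# facts hold in the storyworld and none of its excluded facts do.

def pattern_matches(pattern, world_facts):
    required = set(pattern["preconditions"])
    excluded = set(pattern.get("excluded", []))   # explicitly excluded preconditions
    return required <= world_facts and not (excluded & world_facts)

pattern = {
    "preconditions": [
        ("role", "John", "husband", "Mary"),
        ("role", "Mary", "wife", "John"),
        ("attr", "Mary", "sick with respiratory infection"),
    ],
    "goals": [
        ("Player", "GIVE", "Thyme syrup", "John"),
        ("John", "USE", "Thyme syrup", "Mary"),
    ],
}

world = {
    ("role", "John", "husband", "Mary"),
    ("role", "Mary", "wife", "John"),
    ("attr", "Mary", "sick with respiratory infection"),
}

if pattern_matches(pattern, world):
    for goal in pattern["goals"]:
        print(goal)   # goals handed on to the Action Planning Layer
```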
This behavioral pattern contains examples of two actual interpersonal relationships, namely "is husband to" and "is wife to," and an attribute, "sick with respiratory infection." Following is a detailed description of all types of interpersonal relationships and attributes that the Character Behavior Layer operates on.

NPC → Character Relationships. Interpersonal relationships oriented from a non-player character to any type of in-game character (i.e. the player or another non-player character) are identified by their unique label. Under normal circumstances, NPC → Character relationships are either absent (the default initial value) or present. However, there are cases when two relationships are mutually exclusive (meaning that the presence of one disallows the presence of the other and vice versa). Such pairs of relationships are merged into relationships with two complementary values. These interpersonal relationships can have only one of their two values present at a time, e.g., if the precondition "Tom likes player" is true at a given time, then the precondition "Tom dislikes player" is false at the same time.
6 Relationships oriented from the player are not considered, because monitoring such relationships and limiting the number of narrative actions the player can commit based on their presence would degrade the player's experience of the generated stories.
7 Certain types of NPC → Character relationships can be set to have a numerical value instead of being either present or absent. Such mathematical relationships require additional apparatus for expressing preconditions that is not described in this paper.
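The complementary-value behaviour of merged relationship pairs can be captured by a small state holder in which setting one value automatically clears the other. The class name and interface below are our own sketch:

```python
# Sketch (hypothetical class) of an NPC -> Character relationship with two
# complementary values: at most one value is present at any time.

class ComplementaryRelationship:
    def __init__(self, value_a, value_b):
        self.values = (value_a, value_b)
        self.current = None            # absent is the default initial state

    def set(self, value):
        assert value in self.values
        self.current = value           # presence of one value excludes the other

    def holds(self, value):
        return self.current == value

likes = ComplementaryRelationship("likes", "dislikes")
likes.set("likes")
print(likes.holds("likes"), likes.holds("dislikes"))   # True False
likes.set("dislikes")
print(likes.holds("likes"), likes.holds("dislikes"))   # False True
```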
Character Roles. Another type of relationship between in-game characters is the character role. Unlike NPC → Character relationships, character roles can also be oriented from the player. Moreover, character roles can be bilateral. Similarly to NPC → Character relationships, character roles are either absent (the default initial value) or present. If a bilateral character role is present, then preconditions testing either end evaluate to true, e.g., "John is husband to Mary" and "Mary is wife to John" are either both true or both false at a given time.

Character Attributes. Every in-game character, whether the player or a non-player character, can have various attributes defined. Such attributes are either unnumbered or numbered. Unnumbered character attributes do not have any numerical value and are either absent (the default initial value) or present. An example of an unnumbered character attribute is "sick with respiratory infection," referenced in the aforementioned behavioral pattern example. Numbered character attributes, unlike unnumbered attributes, are present in every in-game character (i.e. the player and all non-player characters) and store a numerical value (individually for every character and from a custom-defined interval). Examples of numbered attributes found in the majority of computer role-playing games are listed in the left column of Table 2.

Table 2. Examples of various numbered character attributes
Common Attributes      Domain-specific Attributes
─ Agility              ─ C++ proficiency
─ Endurance            ─ Java proficiency
─ Intelligence         ─ Lisp proficiency
─ Strength

In addition, attributes of characters can easily be made domain-specific, e.g., an interactive storyworld created in the domain of teaching programming languages can define the numbered character attributes listed in the right column of Table 2.

3.4 The Story Generation Cycle

The three logical layers described in the preceding sections are cyclically used to generate interactive stories, as depicted in Fig. 3.

Preparation Phase. Before any interactive stories can be generated, the author of the stories, i.e. the storyteller, creates the domain-specific rules and input data on which the proposed storytelling system operates. In other words, the storyteller defines the necessary simple and complex actions, action bindings, types of possible character properties (attributes, relationships and roles between characters) and behavioral patterns, as described in the previous sections.

Initialization Phase. After all necessary custom domain-specific rules have been defined comes the initialization phase, in which the storyteller defines the storyworld in which he wants the generated stories to take place. He therefore creates the desired non-player characters along with their initial attributes, roles and the relationships between them. The storyteller can also optionally scatter objects, containers and items throughout the storyworld as desired.

Story Generation Phase. After the storyteller has defined the initial state of the storyworld, the Character Behavior Layer analyzes the storyworld and chooses one or more goals that best match its current state, i.e. goals having all preconditions met.
Fig. 3. The story generation cycle
Such goals are afterwards processed by the Action Planning Layer, which translates all of the player's and non-player characters' active goals into plans. The planning process is based on the Hierarchical Task Network (HTN) planning formalism, which recursively finds appropriate actions whose effects meet the required conditions. The planning process starts out with the effects of each active goal and finds any complex, simple or atomic actions (in this order) whose effects match the goal's effects. The whole planning process then runs recursively, searching for actions whose effects match the preconditions of the newly found actions. As shown in Fig. 4, the resulting plan for every active goal is a tree-like graph that contains multiple paths of complex, simple or atomic actions, which result in accomplishing the particular goal when followed in the storyworld.
8 Since the planner matches the required preconditions with the effects of candidate actions, all actions depicted in Fig. 4 have their preconditions placed below them and their effects above.
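The backward-chaining idea can be approximated in a few lines. The sketch below is a deliberately simplified stand-in for the HTN formalism described above, not the authors' planner: it returns a single action per open condition rather than the full multi-path plan, and the toy action definitions loosely echo the Fig. 4 example:

```python
# Simplified backward-chaining sketch: start from the required conditions and
# recursively find actions whose effects establish them, then plan for those
# actions' preconditions in turn. All names are illustrative.

def plan_for(conditions, actions, state, depth=0, max_depth=10):
    """Return a tree of (action, subplan) pairs establishing the conditions."""
    if depth > max_depth:
        return None
    tree = []
    for cond in conditions:
        if cond in state:              # condition already holds in the storyworld
            continue
        for act in actions:            # complex, simple, atomic -- in list order
            if cond in act["effects"]:
                subtree = plan_for(act["preconditions"], actions, state,
                                   depth + 1, max_depth)
                if subtree is not None:
                    tree.append((act["name"], subtree))
                    break
        else:
            return None                # no action can establish this condition
    return tree

actions = [
    {"name": "REPROGRAM", "preconditions": ["player knows programming"],
     "effects": ["mainframe works"]},
    {"name": "READ_HANDBOOK", "preconditions": [],
     "effects": ["player knows programming"]},
]

plan = plan_for(["mainframe works"], actions, state=set())
print(plan)   # [('REPROGRAM', [('READ_HANDBOOK', [])])]
```

The tree returned here corresponds to one branch of the multi-path plan in Fig. 4; a fuller planner would collect every action whose effects match, yielding the alternative paths the paper describes.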
Fig. 4. An example plan consisting of six simple actions that are interconnected by common preconditions (PRE) and effects (EFF), with the goal being the plan's root. The plan is constructed from top to bottom, but carried out from bottom to top.
After plans for the player and all non-player characters are constructed, a subset of the player's available planned paths is chosen and delegated to the Visualization Layer, along with any planned actions that each non-player character should commit. The subset of the player's plan is chosen according to the player's user model, which contains records of all actions that he has previously successfully committed in other storyworlds. Thanks to these records, it is possible to prefer planned paths containing actions similar to those the player has previously successfully committed (positive personalization), or to instead prefer paths containing actions he has not yet committed or has previously failed to commit successfully (negative personalization). If the player's user model contains no records, then the subset of the player's planned paths is selected randomly.
9 The plan is from a storyworld in an educational domain related to the history of computing and programming basics, in which the player is to repair a malfunctioning mainframe computer, either by reprogramming it (see Fig. 6) or by repairing it with a spare circuit board, which he can either steal or get from John, a non-player character, upon becoming trustworthy by successfully answering a mainframe-related question or by reading a mainframe handbook.
In the example plan shown in Fig. 4, assuming positive personalization and a user model stating that the player had previously committed the simple action REPROGRAM, the path with the simple action REPROGRAM is chosen and delegated to the Visualization Layer, as shown in Fig. 5.
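The path selection just described can be sketched as scoring each planned path by its overlap with the user model's committed actions. This is our own hedged reading of the mechanism; identifiers are illustrative, and the random fallback for an empty user model is replaced here by simply taking the first path:

```python
# Illustrative path selection based on the player's user model:
# positive personalization prefers paths whose actions the player has already
# committed successfully; negative personalization prefers the opposite.

def choose_path(paths, committed_actions, positive=True):
    if not committed_actions:
        return paths[0]                # no records: pick any (here: the first)

    def overlap(path):
        return len(set(path) & committed_actions)

    key = overlap if positive else (lambda p: -overlap(p))
    return max(paths, key=key)

# Paths loosely modeled on the Fig. 4 example plan:
paths = [
    ["READ_HANDBOOK", "ANSWER_QUESTION", "GET_BOARD", "REPAIR"],
    ["STEAL_BOARD", "REPAIR"],
    ["REPROGRAM"],
]
print(choose_path(paths, {"REPROGRAM"}, positive=True))   # ['REPROGRAM']
```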
Fig. 5. An example of a chosen path, colored grey (the rest of the player’s plan is omitted)
The Visualization Layer receives planned actions for both the player and non-player characters (NPCs) from the Action Planning Layer. The actions destined for NPCs are committed by the particular NPCs, whereas the actions planned for the player are presented to him as multiple options via the game's graphical interface. The player commits, either successfully or unsuccessfully, his desired action (which may not have been planned, but must have its preconditions met), which is afterwards signaled back to the Action Planning Layer, along with actions that were, either successfully or unsuccessfully, committed by any non-player characters.

All committed actions are processed by the Action Planning Layer, which matches them against the player's and non-player characters' existing plans. These plans remain either unaffected, or are replanned according to the new state of the storyworld, i.e. new planned paths are again chosen based on the player's user model, as described in the preceding paragraphs.

Any alterations to the state of the storyworld, such as changes in characters' attributes, roles or relationships, are signaled to the Character Behavior Layer, which collects all changes to the storyworld from the Action Planning Layer and determines whether any of the existing goals are affected (in terms of having been accomplished or not being accomplishable anymore), or whether the preconditions of any new goals are met. The current set of both the player's and the non-player characters' goals is updated, and any changes are delegated back to the Action Planning Layer.

The hereby-described process is repeated in a cyclic manner until the player has successfully accomplished all of his goals, in which case the story ends. An ending also occurs if the player is unable to accomplish all of his existing goals and there are no more possible paths left to do so.
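The cyclic interplay of the three layers could be outlined as follows. This is a schematic toy, not the authors' implementation: the Action Planning and Visualization Layers are collapsed into a single hypothetical `commit` callback, and goal activation reuses the pattern-matching idea from the Character Behavior Layer:

```python
# Schematic toy of the story generation cycle: activate goals from behavioral
# patterns, commit them (changing the storyworld), and repeat until no goals
# remain. All names and data shapes are illustrative.

def story_generation_cycle(patterns, world, commit):
    story = []                          # sequence of committed goal-actions
    while True:
        # Character Behavior Layer: goals whose preconditions currently hold
        goals = [g for p in patterns if p["preconditions"] <= world
                 for g in p["goals"] if g not in story]
        if not goals:
            return story                # no goals left: the story ends
        for goal in goals:              # Planning + Visualization, collapsed here
            world = commit(world, goal)
            story.append(goal)

patterns = [
    {"preconditions": {("attr", "Mary", "sick")},
     "goals": [("John", "USE", "Thyme syrup", "Mary")]},
]

def commit(world, goal):                # toy effect: the USE action cures Mary
    return (world - {("attr", "Mary", "sick")}) | {("done", goal)}

print(story_generation_cycle(patterns, {("attr", "Mary", "sick")}, commit))
# -> [('John', 'USE', 'Thyme syrup', 'Mary')]
```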
4 Conclusions and Future Work

We have described an innovative approach to interactive storytelling that uses techniques common in this field in such a unique way that, in contrast to all existing
solutions, operates on simple-to-define rules and makes it possible to programmatically generate interactive stories in computer role-playing games. We have implemented a prototype system based on the approach described in this paper. The domain that we selected for prototyping is education related to the history of computing and programming basics. The prototype system reads from an input file a set of rules and data structures (described in detail in [16]) defining a storyworld, from which an educational interactive story is generated and presented to the player through a computer role-playing game (see Fig. 6). The story progresses on-the-fly by reacting to the narrative actions that the player has committed.
Fig. 6. Screenshots from an educational interactive story generated by the prototype system. The top-left picture shows the player while reprogramming a mainframe computer by constructing valid program code from multiple choices of possible commands. The bottom-right picture depicts an example environment in which the generated interactive stories take place.
Our future work consists of an empirical evaluation of the prototype system, in which players will fill out questionnaires regarding various narrative and educational aspects of the generated interactive stories.
Acknowledgments. This work was partially supported by the Cultural and Educational Grant Agency of the Slovak Republic, grant No. KEGA 3/5187/07, and by the Scientific Grant Agency of the Slovak Republic, grant No. VG1/0508/09.
References

1. Crawford, C.: Chris Crawford on Interactive Storytelling. New Riders Games (2004)
2. Denning, S.: The Springboard: How Storytelling Ignites Action in Knowledge-Era Organizations. Butterworth-Heinemann (2000)
3. Sandercock, L.: Out of the Closet: The Importance of Stories and Storytelling in Planning Practice. Planning Theory & Practice 4(1), 11–28 (2003)
4. Schank, R.: Tell Me a Story: Narrative and Intelligence. Northwestern University Press, Evanston (1995)
5. Szilas, N.: The Future of Interactive Drama. In: Proceedings of the Second Australasian Conference on Interactive Entertainment, pp. 193–199. Creativity & Cognition Studios Press, Sydney (2005)
6. Murray, J.H.: From Game-Story to Cyberdrama. In: First Person: New Media as Story, Performance and Game, pp. 2–11. MIT Press, Cambridge (2004)
7. Rollings, A., Adams, E.: Andrew Rollings and Ernest Adams on Game Design. New Riders Games (2003)
8. Storytron, http://www.storytron.com/
9. Cavazza, M., Charles, F., Mead, S.J.: Sex, Lies, and Video Games: An Interactive Storytelling Prototype. In: Proceedings of the 2002 AAAI Spring Symposium, pp. 13–17. AAAI Press, Menlo Park (2002)
10. Façade, http://www.interactivestory.net/
11. Barton, M.: Dungeons and Desktops: The History of Computer Role-playing Games. A.K. Peters Ltd., Wellesley (2008)
12. Tychsen, A.: Role Playing Games – Comparative Analysis Across Two Media Platforms. In: Proceedings of the Third Australasian Conference on Interactive Entertainment, pp. 75–82. Murdoch University, Perth (2006)
13. Howard, J.: Quests: Design, Theory, and History in Games and Narratives. A.K. Peters Ltd., Wellesley (2008)
14. Bieliková, M., Divéky, M., Jurnečka, P., Kajan, R., Omelina, Ľ.: Automatic Generation of Adaptive, Educational and Multimedia Computer Games. Signal, Image and Video Processing 2(4), 371–384 (2008)
15. Cavazza, M., Charles, F., Mead, S.J.: Planning Characters' Behaviour in Interactive Storytelling. Journal of Visualization and Computer Animation 13(2), 121–131 (2002)
16. Divéky, M., Bieliková, M.: An Approach to Interactive Storytelling and Its Application to Computer Role-playing Games. In: Návrat, P., Chudá, D. (eds.) Znalosti 2009: Proceedings of the Eighth Annual Conference, pp. 59–70. STU, Bratislava (2009)
CAMera for PLE

Hans-Christian Schmitz, Maren Scheffel, Martin Friedrich, Marco Jahn, Katja Niemann, and Martin Wolpers

Fraunhofer Institute for Applied Information Technology (FIT), Schloss Birlinghoven, 53754 Sankt Augustin, Germany
{hans-christian.schmitz,maren.scheffel,martin.friedrich, marco.jahn,katja.niemann,martin.wolpers}@fit.fraunhofer.de
Abstract. Successful self-regulated learning in a personalized learning environment (PLE) requires self-monitoring by the learner and reflection on learning behaviour. We introduce a tool called CAMera for monitoring and reporting on learning behaviour and thus for supporting learning reflection. The tool collects usage metadata from diverse application programs, stores these metadata as Contextualized Attention Metadata (CAM) and makes them accessible to the learner for recapitulating her learning activities. Usage metadata can be captured both locally on the user's computer and remotely from a server. We introduce two ways of exploiting CAM, namely the analysis of email messages stored locally on a user's computer and the derivation of patterns and trends in the usage of the MACE system for architectural learning.

Keywords: self-regulated learning, personalized learning environments, monitoring, usage metadata, learning reflection, social networks, Zeitgeist.
1 Introduction

The core idea of this paper is that self-regulated learning is especially promising with regard to positive learning outcomes, that it demands self-monitoring by the learner, and that the learner therefore has to be supported in monitoring and reflecting on her learning activities. We present a tool called CAMera for monitoring and reporting on user actions, thus fostering reflection on the learning process.

The outline of the paper is as follows: in section 2, we argue that self-monitoring is an integral part of self-regulated learning. In computer-based learning environments, self-monitoring can be supported by a monitoring tool that records a user's actions within the learning environment, generates reports on her computer-related activities and helps to recapitulate her learning paths. In section 3, we outline the design of the CAMera monitoring tool and present two example components: one for the observation and analysis of email exchange and one for the observation and analysis of interactions with the MACE system.
1 The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 231396 (ROLE project) and from the European Community's eContent+ Programme under grant agreement ECP-2005-EDU-038098 (MACE project).
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 507–520, 2009. © Springer-Verlag Berlin Heidelberg 2009
H.-C. Schmitz et al.
2 Self-monitoring of Learning Activities

Self-regulated learning is a promising way to successfully achieve learning goals and is thus highly suitable for both solo and collaborative learning processes. It will be shown that self-monitoring plays an essential role in successfully learning in a self-regulated way and that computer-based self-regulated learning demands personal learning environments. Within such personalized learning environments, self-monitoring has to be supported by recording the interaction of a learner with the actually used tools and services, in order for her to later analyze and evaluate her learning processes.

2.1 Self-regulated Learning and Monitoring

The term self-regulated learning denotes a learning process where the learner herself decides on what to learn, when and how. Self-regulatedness is a gradable feature that does not exclude guidance by a teacher, as long as this guidance does not question the autonomy of the learner. Self-regulated learners are able to meta-cognitively assess and strategically plan, monitor and evaluate their learning activities.

Over the years, self-regulated learning has been a focus of research within educational psychology. Torrano and González ([1]) give an account of current and future directions of self-regulated learning: the self-regulated learner can be characterized as a person who actively participates in learning – on a meta-cognitive, motivational and behavioural level –, motivates herself and makes use of strategies in order to achieve the desired results. According to Pintrich's learner model, the process of self-regulation comprises four phases: planning, self-monitoring, control and evaluation ([8]). These phases, in turn, are composed of a cognitive, a motivational/affective, a behavioural and a contextual area. Zimmerman ([9]) also accentuates the need for feedback, especially the self-oriented type, when learning in a self-regulated way.
His loop of self-regulated learning comprises forethought, performance and self-reflection ([11]). By meta-cognitively assessing, analyzing and evaluating her behaviour, a learner can adjust her learning processes and consequently achieve better results. Thus, self-reflection is not only useful but an essential part of self-regulated learning.

In general, research agrees that self-regulated learning, including self-monitoring and self-evaluation, supports the successful acquisition of academic (e.g. [12]) and non-academic skills (e.g. [11]) as well as a better understanding of the things studied. This is supported by experimental evidence: when girls learning how to throw darts were split into groups of students with present or absent self-evaluation (among other groups), those students who did not self-evaluate showed a tendency to attribute poor training outcomes to a lack of ability or insufficient effort ([11]). Those students, however, who did self-evaluate attributed poor outcomes to improper strategy use and practice. These results imply that self-evaluation and -monitoring lead to higher levels of self-efficacy and motivation to learn.
2 According to them, the five most relevant publications on self-regulated learning are: [2], [3], [4], [5] and [6]. See also [7].
3 For a detailed and extensive account of feedback and self-regulated learning see [10].
Other experiments were concerned with primary school children who had to solve the Tower of Hanoi problem with three and four discs ([13]). After setting up different training conditions (children watching themselves trying to solve the problem, children watching another child solving the problem inefficiently, and children watching another child solving the problem efficiently), the results show that those children watching themselves perform better than the others. With four other training conditions (video of self solving the problem inefficiently, video of self solving the problem inefficiently in a prescribed way, video of self solving the problem efficiently in a prescribed way, and video of another child solving the problem in the most efficient way), the results again show that watching oneself produces better results than watching others. It also turns out to be much more efficient to watch non-prescribed, spontaneous, actual performance, as that group achieves the best results.

Self-monitoring is also a motivator ([14]). When dividing students acquiring mathematical skills into different groups, either self-evaluating their learning processes or being externally evaluated, the prediction is that self-monitoring is more effective than external monitoring. This hypothesis is supported by the results of Schunk ([14]). It can be concluded that self-monitoring fosters the students' motivation to learn. Monitoring their behaviour helps them to become aware of their actions and regulate them accordingly.

There are two criteria that are important when recording and self-monitoring one's behaviour: regularity and proximity. Only if these criteria are fulfilled do students get a thorough impression of their actions and are able to react according to the goals they set.
Tracing features of studying are thus most useful when applied during the action ([15]): by making use of computers to trace the learner's behaviour, the learner is free in her actions and not disturbed or interfered with by other recording measures, such as think-alouds, nor can she forget to mention things, as might happen in post-test questionnaires. Gress et al. show that the possibility of real-time analysis of everything a learner does and produces while studying (chats, documents, file system interaction) makes it easier for her to understand learning processes ([16]). They also stress that this holds true for solo as well as collaborative processes. Making real-time feedback available to the learner enhances the learning process, as she can recapitulate what she has done.

To conclude: self-monitoring is an essential part of self-regulated learning. Environments for self-regulated learning have to provide the learner with means to monitor and evaluate her learning activities, whether they are connected to solo or collaborative learning processes. Such environments therefore have to record the learner's activities and make the recordings accessible for analysis and recapitulation.

2.2 Personalized Learning Environments

Within a personalized learning environment (PLE), the learner can control all learning processes. She can choose from a vast number of services and use them how she thinks is best for her, thus facilitating the process of self-regulated learning. Apart from the personal space provided by a PLE, the social context of learning is also covered by enabling connections between several personal spaces, thus supporting collaborative learning ([19]). A PLE is a prerequisite for computer-based
4 For definitions and characterizations of PLEs see [17] and [18].
self-regulated learning, as it is able to record the learner's activities within the environment and to analyze these activities according to the individual learner's needs and choices regarding the used tools as well as learning strategies. (We presume that conclusions on actual learning behaviour can be drawn from observations of PLE interactions, at least by the learner herself, who has sufficient background knowledge to retrospectively interpret her computer-related actions in the light of her learning goals and activities.)

An important aspect of computer-based PLEs is the option for a learner not only to choose from a given number of services but to bring her own tools and services into the environment and even share them with other users. A learner should be able to use the tools and services she is familiar with from her everyday interaction with the computer, making the PLE a resource for studying and learning as well as for working and spare-time use. With the PLE thus being both a task and an information environment, it is suited for academic and non-academic lifelong learning.

The question of which tools and services are actually being used in PLEs is not easily answered. For browser statistics, several sources can be consulted: according to w3schools.com ([20]), the usage of browsers depends significantly on the user group. Technically affine people tend not to use the Microsoft Internet Explorer, but alternative browsers like Mozilla Firefox. A second source is the access statistics of Wikipedia ([21]): here, the most often used browser is the Internet Explorer (66.82%), followed by Mozilla Firefox (22.05%), Safari (8.23%), Google Chrome (1.23%) and Opera (0.70%). Fingerprint ([22]) offers usage statistics for e-mail clients: it shows that Outlook is used most often (36%), followed by Mozilla Thunderbird (2.4%).
Additionally, it is pointed out that many users do not install e-mail clients on their computers but use web interfaces like the ones from Hotmail (33%) or Yahoo! Mail (14%).

It is relatively simple to capture information about the usage of browsers and e-mail clients, but not so easy to gather information about the usage of other tools a user is executing. The statistics from Wakoopa provide such information ([23]). Wakoopa is a software application that runs locally on a user's computer, tracks the usage of all other applications and sends this information to a server. Even if only specific user groups install such a tracker, the statistics still offer an overview of the tools used. The most used instant messaging applications under Windows are the Windows Live Messenger, Skype and the Yahoo! Messenger. The most used office applications under Windows are Microsoft Office Word, Microsoft Office Excel, Microsoft Office PowerPoint, Adobe Reader and OpenOffice.org ([23]). Other frequently used software applications tracked by Wakoopa comprise, e.g., World of Warcraft, Adobe Photoshop, iTunes, Microsoft Visual Studio, Windows Media Player and the VLC media player.

The Centre for Learning & Performance Technologies annually compiles lists of the most used learning tools and services: one for learners and one for learning professionals ([24]). Contribution is open to anyone via an online spreadsheet; 54 learners and 35 learning professionals had participated by 19 April 2009. The top five tools for learners are Google Search, YouTube, Firefox, Twitter and, sharing the fifth position, Wikipedia and Delicious. For learning professionals, Twitter, Delicious, Google Reader, Skype and PowerPoint make the top five. Most of those are
accessible via the internet, available for free, and commonly used when working or using the computer for leisure. It can thus be concluded that easy access and familiarity are important requirements for tools to be used frequently while learning. Such statistics do not necessarily reflect the actual usage of tools and services while learning, though they do give an impression of what learners are generally interested in. For a thorough behaviour analysis, however, the tracking of learning processes needs to be done in a personalized learning environment, as not only the fact that a tool has been used should be recorded, but also the actions conducted with that tool.

At this point, in addition to the self-regulated learner's choice of tools and services, the next critical aspect of personalization comes into play: the adaptation of the PLE's behaviour to a learner's preferences, goals, background, knowledge, experience, etc. ([25]). For a system to appropriately react to the learner's needs, the observation and collection of user behaviour data is required. This is where the aforementioned self-recording and self-monitoring tie in with personal learning environments. When the learner's behaviour is being recorded, it can later be reproduced and analyzed so that the learner can adjust her further learning processes accordingly. As this is only possible within a personalized environment, it is evident that self-regulation demands personalization.

2.3 Stock-Taking

Self-regulated learning takes place when the learner, by strategically planning and meta-cognitively assessing, is in control of the actions involved in the learning process. A personalized learning environment (PLE) supports the needs of a self-regulated learner. For a PLE to be able to react to a learner's preferences and goals, and for a self-regulated learner to successfully achieve these goals, it is – if not essential, then at least – highly desirable to get feedback about her behaviour.
This feedback can be obtained by self-monitoring and self-evaluation while learning. As the two significant features of self-monitoring are regularity and proximity, a software tool can trace the required data while learner and PLE interact. The tracing tool has to comply with several requirements in order to be of use to the learner.

Firstly, and maybe most importantly, the learner must not be disturbed in her studying process, which forces the tool to run in the background, silently recording and analyzing all actions. Secondly, the recordings have to contain extensive observations, taking into account all tools and applications that are actually being used by the learner. Thirdly, the data being recorded must not be too finely grained, in order for the learner to actually deduce something from the recordings of her behaviour. The actions have to be recorded, and the recordings have to be presented, in such a way that the learner can understand them and recapitulate her course of actions. To this end, the recorded actions must be meaningful with regard to the learning process. As an example, the opening of a document is an action that can be recapitulated and put into the context of other actions by a learner. The recording of a single keystroke or a sequence of keystrokes, on the contrary, will not give the learner insightful information on her learning activities.

The following section presents the realization and implementation of a tracing tool in compliance with these requirements.
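The granularity requirement can be illustrated with a simplified usage-metadata record of the kind such a tracing tool might store. This is our own illustrative stand-in, not the actual CAM schema; field names are assumptions:

```python
# Illustrative usage-metadata record at the granularity argued for above:
# meaningful, recapitulable actions (open, save, send, query), not keystrokes.
# This is a simplified stand-in, not the actual CAM schema.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UsageEvent:
    timestamp: datetime
    application: str     # e.g. "Word", "Firefox", "Thunderbird"
    action: str          # observable action, no defeasible interpretation
    item: str            # document name, message id, or query string

events = [
    UsageEvent(datetime.now(timezone.utc), "Word", "open", "thesis-draft.doc"),
    UsageEvent(datetime.now(timezone.utc), "Firefox", "query", "self-regulated learning"),
]

# A minimal "report" the learner could use to recapitulate her actions:
for e in events:
    print(f"{e.timestamp:%H:%M} {e.application}: {e.action} {e.item}")
```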
512
H.-C. Schmitz et al.
3 CAMera and Usage Metadata Analysis

3.1 CAMera

We call the tracing tool for supporting self-monitoring in personalized learning environments CAMera – “CAM” because its design is based on the Contextualized Attention Metadata (CAM) schema for representing user actions ([26], [27]); “camera” because, like a camera, it is basically a recording tool. The architecture of the tool is largely determined by the requirements mentioned in the previous section: the tool must continuously collect usage metadata, transfer these metadata into a well-defined format, store them and hold them ready for further analysis and the on-demand generation of usage reports. The tool must not disturb the user in her actual work and thus must not make use of obtrusive sensors such as eye-trackers. The usage reports given by CAMera have to be reliable; therefore, the reports must not be based on defeasible interpretations of the user’s actions. Suppose, for example, that a user opens an email-message and replies to it. It is highly probable that she read the message, or at least parts of it. It is, however, also possible that she opened the message and replied to it by accident, without reading it. Therefore, the CAMera tool must not store that the user read the message. Instead, the actions recorded by the tool have to be on a level of granularity low enough not to demand defeasible interpretation. Nevertheless, the observations must not be too fine-grained, as the representations of actions observed and reported by the tool have to be meaningful to the user. The solution to this problem is to record the interaction of users with application programs and the file system: the tool records when text documents are opened, modified and stored with word processors, when data objects are moved or deleted, when emails are sent, chat-messages are uttered, queries are posted to search engines, and so on. Such actions can be tracked without further interpretation.
At the same time, representations of such actions are immediately interpretable. Since usage metadata are captured from application programs, the CAMera tool firstly consists of a set of metadata collectors that collect usage metadata from application programs and transfer these metadata into the CAM schema. In most cases, it suffices to transfer existing log data into CAM; in other cases, a metadata collector has to be implemented as a proper monitor component that generates usage metadata instead of just collecting them from existing log-files. At present, we possess metadata collectors for the Thunderbird email-client, the Skype chat-messenger, the Firefox browser and Microsoft Outlook. Furthermore, we have adapted the User Activity Logger developed at L3S (Leibniz Universität Hannover) for recording accesses to the file system. The ALOCOM Framework provides us with usage metadata collectors for MS PowerPoint and MS Word ([28], [29]). Finally, we exploit recordings of Flash Meetings ([30]). Hence, we are provided with a number of metadata collectors that we can make use of and experiment with, although the set of collectors is still to be extended. We aim at providing collectors for all of the most-used tools mentioned in the previous
Footnote: Metadata are data about data; usage metadata are data about actions rather than data in the narrower sense. One reason to nevertheless call these data metadata is that they have been called metadata in the literature. (We do not see the need to change this tradition.) Moreover, such metadata can be used to describe the actual usage of data objects (this is not the focus of this paper; see [27] instead). As such, they are data about data.
CAMera for PLE
513
section. Also, it will be necessary to adapt existing collectors to new versions of their source applications. It will be possible for the user to switch data collectors on and off and thus to control from which applications usage data are collected. The metadata collectors mentioned so far capture usage metadata from application programs that run locally on a user’s computer. Usage metadata can also be captured from remote servers with which the user interacts; in Section 3.3, we give an example of the collection of CAM-instances from server data. Secondly, the CAMera tool consists of a database in which the CAM-instances are stored. We are performing experiments with different kinds of databases, both relational databases and native XML databases, in particular the eXist database ([31]). The databases provide us with interfaces for generating usage reports. Thirdly, CAMera consists of analysis applications for the evaluation of CAM-instances, for instance in order to detect the network of people a user communicated with or the most heavily used objects over a certain time span.

The aim of the CAMera tool is to support self-monitoring for improving the outcomes of both solo and collaborative learning processes. In the next two sub-sections we introduce two applications that serve, or can serve, as CAMera components and that, as we assume, support this goal. The first application exclusively monitors and analyzes a user’s email-exchange. Email usage metadata and the analyses of these data can be accessed via the CAMera interface. The component runs locally on the user’s computer; hence, the metadata generated are under her control. The second application is a Zeitgeist application that monitors and analyzes the interaction of several users with the MACE system for architectural learning ([32]). It consists of a set of web services; usage metadata are generated and stored remotely within a network, so that metadata of different users can be aggregated and processed together.
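For illustration, a metadata collector of the simplest kind – one that merely rewrites an existing application log entry into a CAM-like instance – might look as follows. The field names used here (application, event, item) are assumptions made for this sketch; the actual CAM schema is defined in [26] and [27], and the log-line format is invented.

```python
from datetime import datetime

def to_cam_instance(app, action, obj, timestamp):
    """Wrap one observed action in a CAM-like record (illustrative fields)."""
    return {
        "application": app,
        "event": {"type": action, "datetime": timestamp.isoformat()},
        "item": {"id": obj},
    }

# Suppose an email-client log line looks like "2009-04-19 13:04:08 open msg-4711".
line = "2009-04-19 13:04:08 open msg-4711"
date, time, action, obj = line.split()
cam = to_cam_instance("thunderbird", action, obj,
                      datetime.fromisoformat(f"{date} {time}"))
print(cam["event"]["type"])  # open
```

A real collector would additionally validate the entry against the schema and write the instance to the database rather than keep it in memory.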
3.2 Email Analyzer

The email-component of the CAMera tool consists of two collectors for recording email-exchange, which can be applied together or separately, and an analyzer for generating and representing social networks. The first collector analyzes email-messages that are stored locally on the user’s computer in mbox format or that can be retrieved from an IMAP server. For each message, the collector generates and stores a CAM-instance. In the course of the analysis – carried out with Java Mail ([33]) – it extracts the sender, the receivers, the subject line and the message body. From the body, keywords are extracted which then serve as a shallow content representation. Currently, keyword extraction is carried out with the yahoo! term extractor ([34]) and tagthe.net ([35]). The user chooses one or both keyword extractors and, if she uses both extractors together, specifies whether the intersection or the union of the two sets of keywords is stored. The collector does not need to be permanently active: it can run either continuously or on demand for the analysis of previously stored messages. Moreover, the user can decide which messages are analyzed by specifying a time interval or by explicitly freeing or blocking email-folders.
Footnote: Ideally, the email-analysis runs only locally on the user’s computer. The usage of the yahoo! term extractor and the tagthe.net service for keyword extraction demands data transfer to external services; this can only be a preliminary solution.
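The user-selectable combination of the two extractors’ keyword sets can be sketched as follows; the keyword sets shown are made-up examples, not actual extractor output.

```python
def combine_keywords(kw_a, kw_b, mode="union"):
    """Combine two extractors' keyword sets by intersection or union."""
    a, b = set(kw_a), set(kw_b)
    return a & b if mode == "intersection" else a | b

# Illustrative outputs of the two keyword extractors for one message body:
yahoo_terms = {"self-regulated learning", "metadata", "monitoring"}
tagthe_tags = {"metadata", "learning", "monitoring"}

print(sorted(combine_keywords(yahoo_terms, tagthe_tags, "intersection")))
# ['metadata', 'monitoring']
```

Intersection yields a more precise but smaller content representation, union a more complete but noisier one; this is exactly the trade-off the user decides when configuring the collector.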
The second collector is based on the plug-in Adapted Dragontalk ([36]), which permanently records the interaction of a user with a Mozilla tool such as the Thunderbird email-client or the Firefox browser. In our case, it records all events involving Thunderbird, for instance the opening of an email-message, the creation of a new folder or the moving of a message to a particular folder. The original Adapted Dragontalk plug-in generates usage metadata and writes these data into simple text files. We adapted the plug-in so that a CAM-instance is generated for each event and then stored in a database (adapted Adapted Dragontalk).

Before a user’s social network can be created, the email-analyzer has to deal with the fact that computer users can have more than one email-address and more than one alias for these addresses. (An example of an alias-address-pair is “Jane Q. Public <[email protected]>”.) A user’s alias-address-pairs have to be assigned unambiguously to this individual user or her ID, respectively. To this end, we adapt the approach of Bird et al. ([38]) and compute the similarity of two alias-address-pairs according to the Levenshtein distance ([39]). If the distance is below a certain threshold, we assume that the two alias-address-pairs belong to the same user. Email-messages that have been sent to or from different email-addresses can now be assigned to the same person.

The email-analyzer evaluates email-related CAM-instances in order to represent a social network. Every person that occurs as sender or recipient of a message is represented by a node within the network. Two nodes are connected by an edge iff the respective persons are involved in the same message (as sender or recipient). The more email-messages two persons are jointly involved in, the stronger the edge between the respective nodes. Figure 2 shows the CAMera tool displaying a user’s social network. The network represents connections to those persons with whom the user has exchanged at least ten messages within a selected time interval.

The email-analyzer provides the user with an interface for browsing and manipulating the network: by marking a person’s node, a list of the email-messages in which that person has been involved is generated and displayed together with the keywords of these messages. Furthermore, time intervals can be specified on a time line; the keywords are then weighted according to their frequencies within these intervals and displayed larger or smaller accordingly. By clicking on a keyword, the list of messages is reduced to the messages that contain the selected keyword. The network itself can be manipulated in three different ways: firstly, by naming keywords, one can highlight the nodes and edges that have been established due to messages containing the keywords.
Secondly, a user can specify a time interval and reduce the network to the messages exchanged within this interval. This makes it possible to follow the dynamics of the network over time. Thirdly, a user can set the minimal number of messages that must have been exchanged for a person, and the edge to this person, to appear in the network. By default, a person appears in the network if she was involved in at least one message; by setting a higher minimal number, sporadic contacts can be filtered out in order to make the network representation more concise. The email-analyzer gives a user insight into the structure of her social network: it depicts the persons with whom the user has been in contact and the topics of her email-exchange. It thereby gives an account of a specific type of communication
Footnote: Adapted Dragontalk (L3S, Leibniz Universität Hannover) is a further development of Dragontalk, which was developed at DFKI Kaiserslautern ([37]).
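The alias-unification step described above can be sketched as follows. The Levenshtein implementation is a textbook dynamic-programming version, and the threshold and alias-address pairs are illustrative values for this sketch, not the tuned values used in the actual analyzer.

```python
def levenshtein(s, t):
    """Edit distance between s and t (insert/delete/substitute, cost 1 each)."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

def same_person(pair_a, pair_b, threshold=5):
    """Treat two alias-address pairs as one user if their distance is small."""
    return levenshtein(pair_a.lower(), pair_b.lower()) < threshold

print(same_person("jane q. public <jane@example.org>",
                  "Jane Public <jane@example.org>"))  # True
```

With the two alias-address pairs identified, messages sent to or from either address can be attributed to the same node of the social network.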
Fig. 2. Representation of a social network within the CAMera tool
behaviour, and it supports the user in reflecting on her communications. According to Viégas et al. ([40]), users are fascinated by the possibility of evaluating the social networks entailed in their email-conversations. (Thus, the email-analyzer arouses interest even without serving an immediate purpose.) Since communication is an integral part of collaborative learning (see Section 2), we assume that monitoring communication behaviour also contributes to the reflection of collaborative learning processes.

3.3 MACE Zeitgeist

The second application we introduce here is a Zeitgeist application that is implemented as part of the MACE system. The MACE system ([32], [41]) sets up a federation of architectural learning repositories: large amounts of architectural content from distributed sources are integrated and made accessible to architects, architecture students and lecturers. The contents are enriched with various types of metadata, among them Learning Object Metadata (LOM) and CAM. Examples of the user actions that are captured within the system are search actions (with the respective search keywords as related data), downloads of metadata on learning resources and downloads of the learning resources themselves, modifications of metadata, etc.
Footnote: The MACE system is intertwined with the ALOE system ([42]). ALOE is a web-based social media sharing platform that allows for contributing, sharing and accessing arbitrary types of digital resources. Users can upload and download resources; they can tag, rate and comment on resources; they can create collections and add arbitrary kinds of metadata; and they can join and initiate groups, among other actions. The ALOE system provides observations of user activities related to the MACE system, which are stored in the MACE usage metadata store. (See [43] and [44] for more information about ALOE and the system architecture.)
The Zeitgeist application is a set of web services that together provide an overview of activities within the MACE system. It gives users the possibility to reconstruct their learning paths by retracing which learning resources they accessed, how they found them and which topics have been of interest to them. This information can be used to make learning paths more explicit and to foster the learners’ reflection on their activities. Figure 3 shows the Zeitgeist dashboard that gives an overview of a user’s MACE-related activities: the Usage Summary (top box) shows the user’s activities for January 2009, when she viewed the metadata of 136 learning resources, downloaded 84, bookmarked 60 and tagged 34. Further details on the objects that have been accessed can be viewed by following the links labelled “Details”. The Usage History (middle box) shows the activities of the user per week, indicating when she viewed metadata, downloaded resources, and tagged and bookmarked them. With simple statistics like these, the user can recapitulate when she was looking for learning resources and when she found suitable ones. According to the graph, she constantly viewed resources during the week; downloading and tagging, however, significantly increased after Thursday. Presumably, she started by searching for relevant data at the beginning of the week, and by Thursday she had found what she was looking for; therefore, she downloaded these objects and tagged them. The Daily Content History (bottom box) lists the resources the user accessed most recently. According to the example given in Figure 3, the user viewed the metadata of “Villa dall’Ava” at 13:04:08 and downloaded the learning resource “Notre Dame du Haut” at 13:03:47. The respective titles of these data objects are linked to the objects themselves. The Zeitgeist dashboard depicted in Figure 3 is a web-based interface.
The Zeitgeist data, however, can also be requested by the local CAMera tool and thus – although this is not yet implemented – be presented through the actual CAMera interface. That is, MACE Zeitgeist can become a remote component of CAMera. It
Fig. 3. MACE Zeitgeist dashboard
provides an individual user with an overview of her MACE-related activities. It can also aggregate and analyze usage metadata of different MACE users and thus present an overview of all MACE-related activities and of general trends in MACE usage. This gives the individual learner the opportunity to compare her usage with the behaviour of the mass of MACE users: she can follow trends or, on the contrary, refrain from trends and find new ways of exploring contents. An additional advantage of collecting metadata from different users is that users can be compared with regard to their usage profiles. A very simple usage profile can be defined as the set of objects that have been accessed in a certain time interval; the similarity of two user profiles then correlates with the cardinality of their intersection. Therefore, the Zeitgeist component not only provides data for reflecting on one’s own learning behaviour but can also determine and point to similar learners, who might be good cooperation partners. This is a clear advantage over a locally running component that observes and analyzes only a single user’s browsing behaviour. (With the Adapted Dragontalk plug-in, we already have a respective metadata collector.) The local component can collect CAM-instances about the individual user’s interaction with the MACE system. However, it cannot easily integrate other kinds of metadata that are provided with MACE (e.g. LOM), nor can it account for the activities of other MACE users.

The Zeitgeist component provides the learner with an overview of her MACE-related learning paths. It lets her remember how she came to engage with her current issues. It helps her to maintain an overview of her activities and the development of her interests. Thus, the Zeitgeist component supports her self-monitoring.
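The profile comparison just described can be sketched as follows. The object identifiers are invented for this example, and the Jaccard normalisation is an optional refinement added here for illustration, not a feature claimed for the Zeitgeist component.

```python
def profile_similarity(profile_a, profile_b):
    """Similarity as the cardinality of the intersection, as described above."""
    return len(profile_a & profile_b)

def jaccard(profile_a, profile_b):
    """Optional refinement: normalise the intersection by the union."""
    union = profile_a | profile_b
    return len(profile_a & profile_b) / len(union) if union else 0.0

# Hypothetical usage profiles: sets of learning-resource IDs accessed in a month.
user1 = {"villa-dall-ava", "notre-dame-du-haut", "farnsworth-house"}
user2 = {"notre-dame-du-haut", "farnsworth-house", "villa-savoye"}

print(profile_similarity(user1, user2))  # 2
print(jaccard(user1, user2))             # 0.5
```

Users whose profiles overlap strongly could then be suggested to each other as potential cooperation partners.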
4 Conclusions

We have argued that self-regulated learning is especially promising with regard to learning outcomes and should therefore be supported. Self-monitoring is an essential part of self-regulated learning; hence, support for self-regulated learning can consist in support for self-monitoring. We introduced CAMera as a tool for monitoring and reporting on a learner’s computer-related activities. In particular, we introduced an email-analyzer as an internal CAMera component and a Zeitgeist application that can become a remote component. The CAMera tool is still under development and is, moreover, necessarily continuous work in progress: it monitors user interaction with application programs and remote services, and as application programs change and new tools and services are developed, metadata collectors have to be exchanged and added. In addition, new tools will most probably require new usage metadata analyses; as a consequence, new analysis components have to be implemented. We therefore designed CAMera as an open system to which new components can easily be added. CAMera can also function without observing a learner’s entire computer usage behaviour; it may just monitor the interaction with a few selected applications. So far, we have only informally evaluated CAMera and its components by making them accessible to colleagues. The feedback we received was very positive: the colleagues like to play with the components; they are interested in the reports and analyses; they state that they understand their own behaviour better. We still have to prove by a formal evaluation that the usage of the CAMera tool in fact, not only in principle, supports
self-regulated learning effectively. Ideally, we will show that the usage of this tool leads to better learning outcomes. To this end, test-beds with large groups of self-regulated learners are needed. We are currently designing such test-beds within the European research project ROLE (Responsive Open Learning Environments, [45]). Evaluations within these test-beds will be performed in the near future.
References

1. Torrano, F., González, M.C.: Self-regulated learning: current and future directions. Electronic Journal of Research in Educational Psychology 2(1), 1–34 (2004), http://www.investigacion-psicopedagogica.org/revista/articulos/3/english/Art_3_27.pdf (accessed: April 19, 2009)
2. Zimmerman, B.J., Schunk, D.H. (eds.): Self-regulated learning and academic achievement: Theory, research and practice. Springer, New York (1989)
3. Schunk, D.H., Zimmerman, B.J.: Self-regulation of learning and performance: Issues and educational applications. Erlbaum, Hillsdale (1994)
4. Schunk, D.H., Zimmerman, B.J.: Self-regulated learning: From teaching to self-reflective practice. Guilford, New York (1998)
5. Boekaerts, M., Pintrich, P.R., Zeidner, M.: Handbook of self-regulation. Academic Press, San Diego (2000)
6. Zimmerman, B.J., Schunk, D.H. (eds.): Self-regulated learning and academic achievement: Theoretical perspectives. Erlbaum, Hillsdale (2001)
7. Madrell, J.: Literature Review of Self-Regulated Learning (2008), http://designedtoinspire.com/drupal/node/600 (accessed: April 19, 2009)
8. Pintrich, P.R.: The role of goal orientation in self-regulated learning. In: Boekaerts, M., Pintrich, P.R., Zeidner, M. (eds.) Handbook of self-regulation, pp. 451–502. Academic Press, San Diego (2000)
9. Zimmerman, B.J.: Self-regulated learning and academic achievement: An overview. Educational Psychologist 25(1), 3–17 (1990)
10. Butler, D.L., Winne, P.H.: Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research 65(3), 245–281 (1995)
11. Kitsantas, A.: Self-monitoring and attribution influences on self-regulated learning of motoric skills. Paper presented at the annual meeting of the American Educational Research Association (1997)
12. Nota, L., Soresi, S., Zimmerman, B.J.: Self-regulation and academic achievement and resilience: A longitudinal study. International Journal of Educational Research 41(3), 198–215 (2004)
13. Fireman, G., Kose, G., Solomon, M.J.: Self-observation and learning: The effect of watching oneself on problem solving performance. Cognitive Development 18(3), 339–354 (2003)
14. Schunk, D.H.: Self-monitoring as a motivator during instruction with elementary school students. Paper presented at the annual meeting of the American Educational Research Association (1997)
15. Winne, P.H., Jamieson-Noel, D.: Exploring students’ calibration of self reports about study tactics and achievement. Contemporary Educational Psychology 27(4), 551–572 (2002)
16. Gress, C.L., Fior, M., Hadwin, A.F., Winne, P.H.: Measurement and assessment in computer-supported collaborative learning. Computers in Human Behavior (in press, corrected proof)
17. Wilson, S., Liber, O., Beauvoir, P., Milligan, C., Johnson, M., Sharples, P.: Personal Learning Environments: Challenging the dominant design of educational systems. In: Proceedings of the First Joint International Workshop on Professional Learning, Competence Development and Knowledge Management (LOKMOL 2006 and L3NCD 2006), pp. 67–76 (2006)
18. Chatti, M.A.: Requirements of a PLE Framework. Blogspot (2008), http://mohamedaminechatti.blogspot.com/2008/02/requirements-of-ple-framework.html (accessed: April 19, 2009)
19. Chatti, M.A., Jarke, M., Frosch-Wilke, D.: The future of e-learning: a shift to knowledge networking and social software. International Journal of Knowledge and Learning 3(4/5), 404–420 (2007)
20. w3schools.com – Browser statistics, http://www.w3schools.com/browsers/browsers_stats.asp (accessed: April 19, 2009)
21. Wikipedia – Usage share of web browsers, http://en.wikipedia.org/wiki/Usage_share_of_web_browsers (accessed: April 19, 2009)
22. Fingerprint – Email client statistics, http://fingerprintapp.com/email-client-stats (accessed: April 19, 2009)
23. Wakoopa, http://wakoopa.com/ (accessed: April 19, 2009)
24. Centre for Learning and Performance Technologies: Top Tools for Learning (2009), http://www.c4lpt.co.uk/recommended/ (accessed: April 19, 2009)
25. Brusilovsky, P.: Adaptive Hypermedia. User Modeling and User-Adapted Interaction 11(1-2), 87–110 (2001)
26. Wolpers, M., Najjar, J., Verbert, K., Duval, E.: Tracking Actual Usage: the Attention Metadata Approach. Educational Technology & Society 10(3), 106–121 (2007)
27. Schmitz, H.C., Kirschenmann, U., Niemann, K., Wolpers, M.: Contextualized Attention Metadata. In: Roda, C. (ed.) Human Attention in Digital Environments. CUP, Cambridge (2009)
28. Verbert, K., Jovanovic, J., Gasevic, D., Duval, E.: Repurposing Learning Object Components. In: Proceedings of the OTM 2005 Workshop on Ontologies, Semantics and E-Learning, Agia Napa, Cyprus (2005)
29. Ariadne ALOCOM Tools, http://www.ariadne-eu.org/index.php?option=com_content&task=view&id=65&Itemid=96 (accessed: April 19, 2009)
30. The Flashmeeting Project, http://flashmeeting.open.ac.uk (accessed: April 19, 2009)
31. eXist Open Source Native XML Database, http://exist.sourceforge.net (accessed: April 19, 2009)
32. MACE – Metadata for Architectural Contents in Europe, http://portal.mace-project.eu (accessed: April 19, 2009)
33. JavaMail API, http://java.sun.com/products/javamail (accessed: April 19, 2009)
34. Yahoo! Developer Network: Term Extraction Documentation, http://developer.yahoo.com/search/content/V1/termExtraction.html (accessed: April 19, 2009)
35. tagthe.net, http://tagthe.net (accessed: April 19, 2009)
36. Adapted Dragontalk, http://www.l3s.de/~chernov/pas/Documentation/Dragontalk/thunderbird_documentation (accessed: April 19, 2009)
37. Epos – Evolving Personal to Organizational Knowledge Spaces,
http://www3.dfki.uni-kl.de/epos (accessed: April 19, 2009)
38. Bird, C., Gourley, A., Devanbu, P.T., Gertz, M., Swaminathan, A.: Mining email social networks. In: Proceedings of the International Workshop on Mining Software Repositories, Shanghai (2006)
39. Navarro, G.: A guided tour to approximate string matching. ACM Computing Surveys 33(1), 31–88 (2001)
40. Viégas, F.B., Golder, S., Donath, J.: Visualizing email content: portraying relationships from conversational histories. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Montreal, pp. 979–988 (2006)
41. Stefaner, M., Dalla Vecchia, E., Condotta, M., Wolpers, M., Specht, M., Apelt, S., Duval, E.: MACE – Enriching Architectural Learning Objects for Experience Multiplication. In: Duval, E., Klamma, R., Wolpers, M. (eds.) EC-TEL 2007. LNCS, vol. 4753, pp. 322–336. Springer, Heidelberg (2007)
42. ALOE Project, http://aloe-project.de (accessed: April 19, 2009)
43. Memmel, M., Schirru, R.: Sharing digital resources and metadata for open and flexible knowledge management systems. In: Tochtermann, K., Maurer, H. (eds.) Proceedings of the 7th International Conference on Knowledge Management (I-KNOW), pp. 41–48. Journal of Universal Computer Science (2007)
44. Memmel, M., Schirru, R.: ALOE white paper. Technical report, DFKI GmbH (2008)
45. ROLE – Responsive Open Learning Environments, http://www.role-project.eu (accessed: April 19, 2009)
Implementation and Evaluation of a Tool for Setting Goals in Self-regulated Learning with Web Resources

Philipp Scholl1, Bastian F. Benz2, Doreen Böhnstedt1, Christoph Rensing1, Bernhard Schmitz2, and Ralf Steinmetz1
1 Multimedia Communications Lab (KOM), Technische Universität Darmstadt, Merckstr. 25, 64283 Darmstadt, Germany
2 Pädagogische Psychologie, Technische Universität Darmstadt, Alexanderstr. 10, 64283 Darmstadt, Germany
{scholl,boehnstedt,rensing,ralf.steinmetz}@KOM.tu-darmstadt.de, {benz,schmitz}@Psychologie.tu-darmstadt.de
Abstract. Learning effectively and efficiently with web resources demands distinct competencies in self-organization and self-motivation. According to the theory of Self-Regulated Learning, learning processes can be facilitated and supported by effective goal-management. Corresponding to these theoretical principles, a goal-management tool has been implemented in an interdisciplinary project. It allows learners to set goals for internet research and to assign relevant web resources to them. An evaluation study is presented that focuses on short-term learning episodes, and selected results are shown that reinforce the benefits of our approach.

Keywords: Goal-Setting, Learning with Web Resources, Self-Regulated Learning, Evaluation.
1 Introduction

The importance of the World Wide Web as a major source of information for knowledge acquisition is growing steadily. With the web browser as the gateway, both specifically designed learning materials (e.g. contained in Web Based Trainings) and web resources that have not been designed with the intention of providing learning materials (e.g. weblog posts, wiki articles or community pages) but contain valuable information are available at a large scale. The paradigm of using these resources as learning materials is also known as Resource-Based Learning. While it is often used in the context of lesson-style teaching, we focus on a rather informal, self-directed way of learning. Major challenges for learners when learning in a self-directed way consist of stating their information needs, formulating search queries, estimating the relevance of found resources, filtering irrelevant resources and keeping track of the state of the research, i.e. monitoring progress. These processes require a high level of learner competence in self-organization and self-motivation, as deep information research is not trivial. Additionally, challenges arise from peculiarities of the internet’s structure: information may be outdated, incomplete or targeted towards another audience, and web resources cannot always be retrieved. Thus, even if relevant information is

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 521–534, 2009. © Springer-Verlag Berlin Heidelberg 2009
found, it often has only transient use for learners, as it is usually not archived or persisted appropriately (see [7]). Hence, planning, organizing, setting goals and monitoring the involved processes may ease the learners’ difficulties and prevent informational disorientation [10]. In this paper, we present an evaluation study of our goal-management tool, which has been specifically designed to address some of these challenges. Section 2 presents a basic overview of the theory of Self-Regulated Learning, which adequately describes this self-directed process of learning with web resources; further, we explicate the term Scaffolds, which denotes support for this process. In Section 3, we describe the design and implementation of a tool that enables learners to set learning goals prior to internet research and to assign relevant web resources to these goals. This tool has been implemented as an extension to the web browser Firefox, as web browsers are the gateway to most information on the web. Section 4 revisits the results of a previous study, and Section 5 presents a new study and evaluation of this tool with selected results. Section 6 concludes with a short summary and further steps.
2 Self-regulated Learning and Scaffolds

Self-directed, resource-based learning with web resources is a process that is quite demanding for learners: they have to plan, monitor and reflect on their learning process in order to reduce disorientation and enhance the quality of their learning achievements. In the following, we present particularities of this kind of learning and possibilities to support it using Scaffolds.

2.1 Self-regulated Learning

It has been shown that supporting learners in conducting the tasks mentioned above can improve the learning experience and the outcome [8] (e.g. by providing training or by supporting learners in writing a learning diary). For learning scenarios using web resources, i.e. hypertext documents, supporting self-regulated learning has been shown to improve learners’ understanding and conceptual knowledge of a topic [1]. Central to the theory of Self-Regulated Learning is the notion that learning is a process that is self-directed and needs regulation on the learner’s side. According to Boekaerts [3], three different systems have to be regulated in order to learn in a self-directed way. In the cognitive system, which performs task-processing strategies, the learner chooses a strategy that he deems effective and efficient; for example, a learner who is researching information on the internet has to think about search query words that are likely to lead to success, i.e. to relevant result resources. In his motivational system, the learner regulates his volitional and motivational state, so that he will, for example, start a learning episode, overcome procrastination or better cope with obstacles. Finally, in the metacognitive system, the learner sets learning goals, devises plans and strategies for executing the actual learning process, monitors the progress of his actions, re-adjusts them if necessary and reflects on his learning process, eventually leading to the formation of strategies to enhance his learning processes.
¹ http://www.mozilla.com/en-US/firefox/ [online: 2009/04/16].
Implementation and Evaluation of a Tool for Setting Goals
523
Schmitz and Wiese [8] partition the learning process into three phases: before learning, during learning and after learning. These phases may be combined with the three systems to be regulated [9]. As we focus on metacognitive processes in this paper, we will subsequently only consider processes that are executed in the metacognitive system. According to the theory of Self-Regulated Learning, learners profit from different metacognitive processes performed in each respective phase (see Table 1): Before learning (pre-actional phase), the learner performs goal-setting and planning, whereas while learning (actional phase), the progress and course of actions are monitored and – if necessary – adapted to possibly changed circumstances. Finally, after having learned (post-actional phase), reflection processes are executed in order to optimize future learning processes.

Table 1. Overview of phases and respective metacognitive processes according to [2]
Phase          Metacognitive processes
Pre-actional   Goal-setting and planning
Actional       Monitoring, adapting to changed circumstances (regulating)
Post-actional  Reflecting, adapting goals and plans for next learning episode (modifying)
Further, [2] map the processes described above to learning episodes of different granularity. For example, an elementary task like researching information on the internet is a rather fine-granular learning episode. For executing an efficient search process, the learner has to set his desired research goals, plan and monitor his process and finally evaluate whether his learning goal has been met – all within minutes. A learner working on a bigger project (e.g. homework, a paper or a thesis), however, usually plans his approach and monitors and evaluates his process over several weeks. Still, a project will consist of several smaller, possibly related, learning episodes that are executed in the context of the project. In our evaluation, we focus on a short-term learning episode of 45 minutes.

2.2 Scaffolds

Vygotsky [12] introduces the term scaffolding as "guidance provided in a learning setting to assist students with attaining levels of understanding impossible for them to achieve without external support". Thus, scaffolds can be seen as learning aids that help learners to execute high-quality learning processes in order to achieve better learning results. In the long term, scaffolds should be designed to advance competencies, so that learners do not remain dependent on the scaffolds. According to Friedrich et al. [5], scaffolds can be implemented both directly and indirectly. Direct scaffolds communicate instructions (so-called prompts) that ask the learner to carry out a certain learning action; for example, a prompt to set learning goals before starting to learn is such a direct scaffold. Indirect scaffolds can be implemented through the design of a learning environment, so that the learner has the possibility to use certain supporting functionalities if required. For example, providing a goal-setting functionality in a program without a dedicated prompt can be seen as an indirect scaffold.
524
P. Scholl et al.
The theory of Self-Regulated Learning postulates specific processes that contribute towards a high-quality learning process. The concept of scaffolding defines and describes different possibilities to realize learner support. Combining both approaches, learning processes can be assisted and supported according to the presented theoretical principles.
3 The Goal-Management Tool

In this section, we derive the concept of a goal-management tool for internet research from the presented theoretical principles and present its implementation. Learners can enter goals, organize them into goal hierarchies (setting super- and sub-goals), move them via drag & drop and attach found resources relevant to the respective goals. Each goal can have an arbitrary number of sub-goals and resources, but exactly one super-goal, organizing everything in a tree structure – analogous to the directory structure of a common file system.

3.1 Conceptualization

The goal-management tool is based on the partition of the learning process into the three phases before learning, while learning and after learning. We focus on the metacognitive processes of goal-setting, planning, monitoring, regulation and finally reflection and modification of the learning process. The scaffolds that support those processes are implemented indirectly, which means that the learner is not instructed to take direct action, but may choose to use the functionality if he sees the need to. Before beginning the internet research, the learner chooses a goal-directed approach and plans his course of actions in the learning process. For example, if a learner has the task of researching information on the topic "Classical antiquity", he may begin to structure his approach with the goals "I need to get a general idea about ancient Rome" and "I need an overview of ancient Greece". Each goal can be further subdivided into specific sub-goals, e.g. the goal on ancient Rome may contain the sub-goals "Roman Republic" and "First Triumvirate". This way, the learner organizes his research goals into a goal hierarchy. Thus, the tool supports the processes of goal-setting and planning. During the learning process, the learner may attach found information in web resources to the set goals and rate its relevance for the respective goal.
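The tree structure just described might be sketched roughly as follows. This is a minimal illustration: the class and method names are our assumptions for exposition, not the tool's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the goal hierarchy: each goal has exactly one super-goal (except
// the root) and any number of sub-goals and attached web resources,
// analogous to a file-system directory tree.
class Goal {
    final String title;
    Goal parent;                                          // exactly one super-goal (null for the root)
    final List<Goal> subGoals = new ArrayList<>();
    final List<String> resourceUrls = new ArrayList<>();  // attached web resources

    Goal(String title) { this.title = title; }

    Goal addSubGoal(String title) {                       // goal-setting: extend the hierarchy
        Goal child = new Goal(title);
        child.parent = this;
        subGoals.add(child);
        return child;
    }

    void moveTo(Goal newParent) {                         // drag & drop: re-parent a goal
        if (parent != null) parent.subGoals.remove(this);
        parent = newParent;
        newParent.subGoals.add(this);
    }
}

public class GoalTreeDemo {
    public static void main(String[] args) {
        // The "Classical antiquity" example from the text above
        Goal root = new Goal("Classical antiquity");
        Goal rome = root.addSubGoal("Ancient Rome");
        rome.addSubGoal("Roman Republic");
        Goal triumvirate = rome.addSubGoal("First Triumvirate");
        triumvirate.resourceUrls.add("http://en.wikipedia.org/wiki/First_Triumvirate");
        System.out.println(rome.subGoals.size());         // prints 2
    }
}
```

Re-parenting via `moveTo` preserves the single-super-goal invariant, which is what makes the drag & drop reorganization described above well-defined.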
Monitoring the learning process is supported by multiple scaffolds, e.g. setting the progress of finishing a certain goal and displaying the goal hierarchy in combination with the already found web resources. Both stimulate the learner to contemplate where in the learning process he is right now, which goals he has already achieved and which goals are still open. In order not to lose focus on the goal he is currently pursuing, the learner can activate one goal at a time. This goal is displayed prominently, providing a reminder not to go astray and counteracting the well-known "lost-in-hyperspace" phenomenon (experiencing disorientation due to information overload and aimlessly following hyperlinks). Further, all goals and found resources can be displayed as a knowledge network and as an overview displaying all goals and resources. This enables the learner to reflect on already found information and the current course of action. If the learner becomes aware of an inefficient approach, he may alter his research behaviour according to his current situation – for example by defining new goals, re-structuring his goal hierarchy or focussing
on other goals that are more promising at the moment. Hence, during the research, the processes of monitoring and regulation are supported. After learning, the learner has the choice between different alternatives of visualizing all goals and resources – basically the three visualizations already described: the goal hierarchy, the knowledge network and the complete overview. The theory of Self-Regulated Learning, however, differentiates between the monitoring and regulation processes mentioned above and the processes of reflection and modification, as the latter occur after the research has been finished. Here the visualizations enable learners to reflect on the finished learning episode, both from the view of the results and of the taken approach. Further, if the learner decides to optimize his approach based on his reflection processes, modification processes are executed.

3.2 Implementation and Data Model

Research and learning using web resources mostly take place in the web browser, as most web resources are represented as HTML mark-up. The browser is a virtual window to the internet, downloading and rendering web resources and displaying them to the learner. Therefore, the tool has been implemented as an add-on to the popular open source web browser Firefox.
Fig. 1. The sidebar displaying the tree of goals and resources. The goal "plebs and peasants" is currently activated. At the bottom the buttons for displaying the knowledge network and the overview are located.
Fig. 2. An exemplary goal hierarchy displayed as a knowledge network. Resources with the same tag "Rome" are marked. A resource's detailed description (snippet) is shown in a tooltip.
For portability and extensibility reasons, the core functionality has been realized as a Java applet. Data transmission with Firefox and the web resources is performed via an ECMAScript interface that both orchestrates the data flow and forwards user interaction within Firefox or the web resource to the applet. The graphical user interface and data storage have been implemented in Java. Applets were chosen as the technology because they allow integration in HTML as well as in XUL (the Firefox-specific XML dialect for creating graphical user interfaces).

Because we focus on short-term learning episodes, we confine the properties of goals to a title, a description (which may serve to outline a course of actions or additional information) and the level of progress (with the stages "not started", "25%", "50%", "75%" and "finished"). This level of progress can be set by the learner to keep an overview of his open and finished goals. Further, goals can be tagged (i.e. attaching freely chosen key words) for organization and display in the knowledge network. For longer learning episodes (which are not covered here), additional, mostly temporal, properties are planned, e.g. planned start, planned duration etc.

Web resources are inserted into goals by use of the "import" functionality, similar to the process of bookmarking in Firefox. Like goals, resources have a title, a description, a relevance rating and tags. As a learner's information need is often quite specific, just bookmarking a whole web resource is often not enough. Instead, extracting only the relevant part of the information is more targeted towards the actual learning goal. Thus, the selected fragment (called snippet) of an imported web resource is stored in the description; learners can access
that relevant information later without having to access the original web page. Rating the relevance of a resource or its snippet with the stages "not rated", "not relevant", "a little relevant" and "relevant" is possible as well. On starting up the web browser, the goal-management tool is displayed in the sidebar. Its user interface shows an overview of the current goal hierarchy and resources (see Fig. 1). Alternative representations of goals and resources may be used, e.g. a display of the goal hierarchy as a knowledge network (see Fig. 2 and [4]). While the learner is browsing web resources, they can be imported into the goal tree at the current selection. Both goals and resources may be edited and reorganized later on.
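The data model described above (progress stages, relevance ratings, tags and snippets) could be sketched along these lines. The type and field names are hypothetical illustrations of the described properties, not the tool's published code.

```java
import java.util.ArrayList;
import java.util.List;

// The five progress stages a learner can set on a goal
enum Progress { NOT_STARTED, P25, P50, P75, FINISHED }

// The four relevance stages a learner can assign to a resource or snippet
enum Relevance { NOT_RATED, NOT_RELEVANT, A_LITTLE_RELEVANT, RELEVANT }

class WebResource {
    String title;
    String snippet;                                       // selected fragment, stored in the description
    Relevance relevance = Relevance.NOT_RATED;
    List<String> tags = new ArrayList<>();                // freely chosen key words
}

class ResearchGoal {
    String title;
    String description;                                   // may outline a course of actions
    Progress progress = Progress.NOT_STARTED;
    List<String> tags = new ArrayList<>();
    List<WebResource> resources = new ArrayList<>();

    // "Import" functionality: persist a snippet so it can be read later
    // without re-visiting the original page
    WebResource importResource(String title, String snippet, Relevance rating) {
        WebResource r = new WebResource();
        r.title = title;
        r.snippet = snippet;
        r.relevance = rating;
        resources.add(r);
        return r;
    }
}

public class DataModelDemo {
    public static void main(String[] args) {
        ResearchGoal g = new ResearchGoal();
        g.title = "Roman Republic";
        g.importResource("Plebs and peasants", "…snippet text…", Relevance.RELEVANT);
        g.progress = Progress.P25;
        System.out.println(g.resources.size());           // prints 1
    }
}
```

Storing the snippet in the resource itself, rather than only a bookmark URL, is what lets learners review the relevant fragment offline in the post-actional phase.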
4 The Previous Evaluation

In 2008, we performed an evaluation focussing on the research questions of what difference learning online with different tools makes and how explicit prompts can be given in order to initiate goal-setting, planning and reflection processes [11]. We asked 64 participants (mainly psychology bachelor students) to answer a knowledge test on the topic "Classical Antiquity" (which we expected the participants to have little prior knowledge about) both before and after learning with Wikipedia for 45 minutes. We formed four different treatment groups: one group having pen and paper available as a means to persist findings, one group using the built-in bookmarking functionality of Firefox and two groups using our goal-management tool. The latter groups differed in the given instructions: one group just used the tool without any instructions, while the other group was directly scaffolded to set goals, monitor their progress and finally reflect on their learning processes. In conclusion, we found that scaffolds do influence learning processes. Still, we encountered several issues with the study design. First, we tried to emulate "realistic" environments for the learners, i.e. forming a control group learning with the bookmark functionality and a pen and paper group. Therefore, the groups were not comparable in some ways, and we think that this influenced the learning outcomes. For example, the pen and paper group did not have to learn using a new tool and could quickly outline information and set relations between content items, which was not possible for the other groups. Additionally, the bookmarks group was missing the possibility to save web resource snippets, thus participants had to bookmark the whole page – which many participants considered futile, and consequently they did not use this functionality. Eventually, the groups using the goal-management tool were only briefly trained in using it before learning.
This means that computer competence and experience in using comparable tools had a strong influence on the way students were able to handle the tool.
5 The Study and Evaluation

In our second study, we optimized our study design and chose a somewhat different scope. First, we provided sufficient training in using the goal-management tool and altered the evaluation and control groups in some respects in order to make them more comparable.
Additionally, the following research questions were of interest:

• What are the differences between learners that organize their found web resources with folders (the control group) and learners that set goals prior to learning (the treatment groups)?
• What are the differences between learners that are free to use functionality supporting their metacognitive processes (the control group and the first treatment group, receiving indirect scaffolds) and learners that are explicitly instructed to execute metacognitive processes (the treatment group prompted by direct scaffolds)? Thus, what are the benefits of providing direct scaffolds?
5.1 Evaluation Design

104 students (mostly students of Psychology (74.5%) and Education (13.2%), more than 90% being in their first to seventh semester and between 19 and 28 years of age) were recruited for our study. Due to the field of study, the majority of the participants were women (72.6%), and 88.7% spoke German as their first language. The participants were randomly allocated to three groups: The Control Group (CG, n=34) used a stripped-down goal-management tool that did not offer the goal-setting functionality. "Goals" were labelled "Folders" and could not be activated or attributed progress. Still, the CG was able to put resources and snippets thereof into a folder and access the different displays of the collected data. The First Treatment Group (TG1, n=35) used the goal-management tool with the complete functionality but was not given instructions on how to organize the research. Hence, this group received indirect scaffolds as described in Section 2.2. The Second Treatment Group (TG2, n=35) used the same tool with integrated metacognitive prompts aimed to activate and support the metacognitive processes "defining relevant goals", "keeping the active goal in mind", "finding relevant pages", "importing relevant information", "assigning relevant information to the relevant goal" and "learning relevant information". For example, before beginning the research (i.e. actional) phase, the learners were instructed to set goals for the research. Further, during the search, instructions to reflect on whether the found information was relevant for the currently followed goal were given (see Fig. 3). Five minutes before the end of the evaluation, this group was instructed to reflect on their results. The overall study was performed in two sessions for each participant. The first session was exclusively for training with the respective tool variant, and the second was the research task.
The first session was always held the day before the research task and gave the participants the possibility to get to know the handling of the tool variant they would use in the research task. First, they watched an introduction to the respective tool, showing common tasks and its functionality. Then, the participants were presented with a small research task on a topic they were familiar with, where they could apply the functionality of their tool variant. Further, demographic data and data about the participants' self-conceptions of their computer competence (estimates of their familiarity in using computers and knowledge about relevant computer- and internet-related concepts) and their skills of self-regulated web search (i.e. the competencies to plan and structure their learning processes, based on items of a standardized questionnaire according to [13]) were collected.
Fig. 3. Example of a prompt, requesting the learner to reflect whether the imported web resource is relevant for the current research goal
The second session was designed to be approximately 1.5 hours in length. Participants were given a first achievement test (multiple-choice) about "Classical Antiquity" – a topic that is well covered in Wikipedia and that, as we knew from the previous study, students do not have a lot of detailed prior knowledge about. An example of such a question is "Which event led to the end of the Roman Kingdom?" After each question, the participants were asked to state how certain they were in answering it (in four steps, from "I guessed" to "I know and I am sure"). There were ten different versions of the test, which differed in the order in which the questions were provided. Participants were given the hint that they would receive exactly the same test again after the learning episode. Each participant received feedback on his individual test performance. For the first five minutes of the learning episode, ten questions which had been answered either incorrectly or with uncertainty were provided. This enabled competent learners to identify knowledge gaps from the achievement test and to re-formulate these into research goals in order to finally answer them correctly. During the research, participants were given updates about the time left. Eventually, the achievement test was administered to the participants a second time. Finally, the participants were asked to answer some questions about their learning and their experiences during the web search, their emotions according to PANAS [6] (a standardized questionnaire aiming at measuring positive and negative emotions), and their usage of the goal-management tool and its functionality. Between all the phases of this second session, data about the current motivation and self-efficacy were collected.
Besides the questionnaires, further data was collected: all participants' actions during research were recorded using screen-capturing software, and on the client side, the click path – a list of all sequentially opened URLs – was stored, including the timestamp of each access. In each session, psychometric tests were administered. Further, all actions the learners performed in the goal hierarchy were logged, so that we could reconstruct the process later.
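The client-side click path described above could be captured with a structure along the following lines. A minimal sketch under stated assumptions: the class and method names are illustrative, not the study's actual logging code.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the client-side click path: each sequentially opened URL is
// stored with its access timestamp so the session can be reconstructed later.
class ClickPath {
    static final class Entry {
        final long timestampMillis;
        final String url;
        Entry(long t, String u) { timestampMillis = t; url = u; }
    }

    private final List<Entry> entries = new ArrayList<>();

    void record(long timestampMillis, String url) {
        entries.add(new Entry(timestampMillis, url));
    }

    // Number of page openings within a time window, e.g. a pre-actional or
    // post-actional slice of the 45-minute research episode
    int countBetween(long fromMillis, long toMillis) {
        int n = 0;
        for (Entry e : entries)
            if (e.timestampMillis >= fromMillis && e.timestampMillis < toMillis) n++;
        return n;
    }
}

public class ClickPathDemo {
    public static void main(String[] args) {
        ClickPath cp = new ClickPath();
        cp.record(1000, "http://de.wikipedia.org/wiki/Antike");
        cp.record(2000, "http://de.wikipedia.org/wiki/Rom");
        cp.record(9000, "http://de.wikipedia.org/wiki/Athen");
        System.out.println(cp.countBetween(0, 5000));     // prints 2
    }
}
```

Slicing the log by time window is what allows per-phase counts (e.g. "links followed in the post-actional phase") to be derived later from one sequential record.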
5.2 Results of Evaluation

For evaluating this study, we needed a topic for the students to research that they were not familiar with; thus we chose "Classical Antiquity". In order to estimate their prior knowledge of this topic, the students were asked to state how much they knew about Roman Antiquity (83% stated they had only a "rather marginal" or "little" background, whereas only 2% claimed to have a "very good" knowledge of this subject) and Greek Antiquity (only 1% of the participants claimed a "very good" knowledge, in contrast to 86% stating a "rather marginal" or "little" background). Due to the topical nature of the given tasks, goals were usually set in a topic-oriented way; process-oriented goals (e.g. "I need to get an overview of …") were rarely set. The results presented below are all based on the log files and the questionnaires.

5.2.1 Selected Group Differences

To analyze the differences between all three groups, including differences within specific phases of action, we conducted one-way ANOVAs (analyses of variance between groups, comparing group means with each other) with the quantitative log data as the dependent variables. Table 2 presents some selected significant results. These show that, as presumed, in the pre-actional phase the three groups differ in terms of the number of goals/folders created and edited, links followed, as well as the number of imported, viewed and edited resources. Further, the number of viewed resources and links followed in the post-actional phase varied between groups. A difference between groups over all phases was encountered for moved goals/folders. In general, these results indicate different approaches to web search for learners of different groups. Some learners seem to have searched in a very structured manner by first defining their search goals instead of immediately browsing and persisting resources.
These learners also seem to have reduced distracting activities at the end of the learning phase in order to prepare for the post-test.

Table 2. Significant Group Differences based on Participants' Actions
Category                 Phase of action  ANOVA²
Creation of Goal/Folder  Pre-actional     F(2, 102)=7.729, p<.01, r=.36
Editing Goals/Folder     Pre-actional     F(2, 102)=3.801, p<.05, r=.26
Moving Goals             All              F(2, 102)=3.600, p<.05, r=.26
Following new Link       Pre-actional     F(2, 102)=6.280, p<.01, r=.33
Following new Link       Post-actional    F(2, 102)=6.885, p<.01, r=.34
Import Resource          Pre-actional     F(2, 102)=5.106, p<.01, r=.30
View Resource            Post-actional    F(2, 102)=3.827, p<.05, r=.26
Editing Resource         Pre-actional     F(2, 102)=3.105, p<.05, r=.24
As these results show only the presence of significant group differences, we further investigated specific differences between the respective groups defined in our research questions and contrasted them.²
² F = F-value, p = level of significance, r = correlation coefficient (effect size).
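The effect sizes r reported alongside the F and t statistics appear to follow the standard conversions (eta from a one-way ANOVA F, r from a t value). A small sketch of this arithmetic, our own illustration rather than part of the study's analysis code:

```java
import java.util.Locale;

public class EffectSize {
    // eta = sqrt(df1*F / (df1*F + df2)) for a one-way ANOVA
    static double rFromF(double f, int df1, int df2) {
        return Math.sqrt(df1 * f / (df1 * f + df2));
    }

    // r = sqrt(t^2 / (t^2 + df)) for a t-test
    static double rFromT(double t, int df) {
        return Math.sqrt(t * t / (t * t + df));
    }

    public static void main(String[] args) {
        // First row of Table 2: F(2, 102) = 7.729 yields r of about .36
        System.out.printf(Locale.ROOT, "%.2f%n", rFromF(7.729, 2, 102)); // prints 0.36
    }
}
```

Applying these formulas to the reported F(2, 102) and t(102) values reproduces the tabulated r values to two decimals.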
In order to analyze our first research question, we contrasted the two treatment groups (TG1+TG2) that were provided with the goal-setting function with the control group (CG) that used folders. As hypothesized, TG1 and TG2 set significantly more goals, specifically in the first phase before learning (t(102)=-2.068, p<.05 (1-tailed), r=.20), and opened fewer new web pages in the browser during the pre-actional and post-actional phases (t(102)=2.018, p<.05 (1-tailed), r=.20 resp. t(102)=2.887, p<.01 (1-tailed), r=.27), spending more time on the processes of planning and reflection. This means that they first organized their course of actions before starting to learn. Additionally, they restructured their goal hierarchy more often while planning (t(102)=-2.783, p<.01 (1-tailed), r=.27), which we take to be the result of a detailed planning process. Further, the treatment groups updated their goals and performed searches in Wikipedia during the actional phase more often than the control group (t(102)=-2.768, p<.01 (1-tailed), r=.26; t(102)=-1.790, p<.05 (1-tailed), r=.17), showing that they monitored their progress and, based on the current state, altered the data they had already researched. We think this may be due to a more goal-oriented approach, identifying and re-evaluating knowledge gaps and acting on these new or changed information needs. Finally, the treatment groups more often re-visited the collected relevant resources after learning (t(102)=-1.964, p<.05 (1-tailed), r=.27), distilling the relevant information and memorizing it for the post-test. To analyze our second research question, we contrasted TG2, which had received direct support during learning, with TG1 and CG, which were only indirectly supported.
TG2 set more goals, especially in the pre-actional phase (p<.001), whereas later they actually set fewer goals (t(102)=4.296, p<.01 (1-tailed), r=.31), meaning they took more time to plan their course of action, approaching the research task in a more goal-directed way and performing the research more efficiently. Another figure supporting this is that TG2 opened fewer web resources while researching (t(102)=-1.792, p<.05 (1-tailed), r=.17), having previously identified their knowledge gaps and looking specifically for relevant resources. Participants in TG2 reorganized their goals more often, regulating the current state (t(102)=2.253, p<.05 (1-tailed), r=.22), and opened significantly fewer new pages before (t(102)=-3.866, p<.01 (1-tailed), r=.36) and after learning (t(102)=-3.415, p<.01 (1-tailed), r=.32), meaning they acted more efficiently and kept closer to their set goals. Further, after having learned, they reflected more often on the found relevant resources (t(102)=2.200, p<.05 (1-tailed), r=.21). Participants using the tool with metacognitive prompts (TG2) used the goal activation functionality far more frequently than the group without prompts (t(69)=3.463, p<.001). This means that learners in TG2 monitored their progress significantly more often than TG1. In conclusion, these results show that using our tool for setting goals affects the way learners approach research using web resources: they execute more metacognitive processes, plan in a more detailed way, monitor their progress better, react to changed circumstances, and more often reflect on their learning outcomes and found web resources. In a group comparison, we could not find significant differences in terms of performance (i.e. more correctly answered questions). We think this is due to the short scope of this evaluation and to the fact that we did not include third variables (e.g. certainty when answering questions or the relevance of found resources) in this evaluation.
5.2.2 Selected Correlations between the Variables

To investigate further dependencies between variables, we calculated several correlations, accounting for different patterns within different groups and phases of action. A selection of significant correlations is presented in Table 3.

Table 3. Selected Significant Correlations, *: p<.05, **: p<.01
Group     Variable 1           Variable 2             Correlation r (1-tailed)
CG        Computer Competence  helpful in e-learning  .364*
CG        Computer Competence  would use it           .445**
CG        Computer Competence  snippets useful        .472**
All       Goals created        Computer Competence    .356**
TG1+TG2   Goals created        Search Competence      .292*; .304*
CG+TG2    Goals created        PANAS "active"         .325*; -.331*
All       Opened page          Positive emotions      -.256**
TG2       Opened page          Negative emotions      .436**
In the Control Group (CG), the higher a participant rated his own computer competence, the more he thought e-learning with web resources would benefit from using the tool, the better he liked the goal-management tool and the more valuable he estimated the snippet functionality for e-learning. In the two treatment groups, computer competence was not correlated with those variables. This might indicate that participants of the CG with high computer competence implicitly knew how to use the stripped-down version of the tool. Participants of the other groups, however, were supported in setting goals, monitoring them and reflecting on the learning process; giving them that much support might have neutralized the influence of computer competence on organizing the research process. Further, the creation of goals correlated with computer competence in all groups, meaning participants describing themselves as competent in using computers set more goals. Moreover, participants of the treatment groups set more goals the more confident they were of their ability to perform a good web search. These results indicate that the ability to use technology is a major predictor of efficient use. Curiously, there were clear correlations between the emotion of being "active" and the number of goals/folders created: for the CG it was positive, meaning that participants in this group felt more active when setting more goals, whereas for TG2 it was negative – the more goals a participant of this group set, the less active he felt. This might indicate that strong direct support, despite all its positive impact, might cause learners to feel less active. Being provided with more freedom, however, might cause a feeling of activeness in terms of being in charge of one's own actions. Finally, the more web resources were opened, the fewer positive emotions the participants in all groups had and the less activated the participants felt.
Additionally, for TG2, negative emotions (PANAS) were correlated with the number of opened resources. This means that aimlessly browsing web resources for information (thus browsing a lot of different web resources, eventually becoming "lost in hyperspace") affects the emotions of learners negatively. Still, participants in the Control Group did not have
negative emotions when browsing more pages. This might be due to the fact that learners who did not set search goals were less likely to experience their browsing of many resources as ineffective, and accordingly they experienced fewer negative emotions.
6 Conclusions and Further Steps

In this paper, we presented a goal-management tool that is based on the theoretical principles of Self-Regulated Learning. We introduced the term scaffolds for functionality supporting metacognitive processes during learning episodes. The implemented functionality was well received by the participants of our study: nearly all of them (91%) saw the need, when learning with web resources, to be able to store only small, relevant snippets of a web resource. We evaluated the goal-management tool in a study. The results show that using our tool for setting goals affects the way learners approach research using web resources: they execute more metacognitive processes, plan in a more detailed way, monitor their progress better, react to changed circumstances, and more often reflect on their learning outcomes and found web resources. Although we did not find significant performance differences between the groups, we expect results in further evaluations when taking into account other variables, like the certainty of a learner answering a question or the relevance of the found resources. Further, we only present results that are based on quantitative evaluation of the log files capturing the actions the participants executed and of the questionnaires. Additional evaluations based on qualitative evaluation of screen captures, further questionnaire data and relevance analysis of web resources will follow. The presented study has been designed to evaluate short-term learning episodes. However, we eventually aim to focus on long-term learning episodes, supporting learning over a longer time span, e.g. several months. Additionally, as learning is often embedded in a social context (be it learning groups, an organizational setting or just communities of interest), we will provide the means for learners to share and communicate their collected information within their community. This will be achieved in a knowledge network as described in [4].
The role that goals and the presented scaffolds will play in this community-driven application is still a field that demands further research.
References

1. Azevedo, R., Cromley, J.G., Seibert, D.: Does adaptive scaffolding facilitate students' ability to regulate their learning with hypermedia? Contemporary Educational Psychology 29, 344–370 (2004)
2. Benz, B.F., Polushkina, S., Schmitz, B., Bruder, R.: Developing Learning Software for the Self-Regulated Learning of Mathematics. In: Nunes, M.B., McPherson, M. (eds.) IADIS Multi Conference on Computer Science and Information Systems, IADIS International Conference e-Learning, pp. 200–204. IADIS Press (2007)
3. Boekaerts, M.: Self-regulated learning: Where we are today. International Journal of Educational Research 31, 445–457 (1999)
534
P. Scholl et al.
4. Böhnstedt, D., Scholl, P., Benz, B.F., et al.: Einsatz persönlicher Wissensnetze im Ressourcen-basierten Lernen (en: Application of Personal Knowledge Networks in Resource-based Learning). In: Tagungsband Die 6. e-Learning Fachtagung Informatik der Gesellschaft für Informatik (2008)
5. Friedrich, H.F., Mandl, H.: Lern- und Denkstrategien – ein Problemaufriß (en: Strategies of Learning and Thinking – a Problem Statement). In: Mandl, H., Friedrich, H.F. (eds.) Lern- und Denkstrategien. Analyse und Intervention, pp. 3–54. Hogrefe, Göttingen (1992)
6. Krohne, H.W., Egloff, B., Kohlmann, C.W., et al.: Untersuchungen mit einer deutschen Version der Positive and Negative Affect Schedule (PANAS) (en: Studies with a German Version of the Positive and Negative Affect Schedule (PANAS)). Diagnostica 42, 139–156 (1996)
7. Nejdl, W., Paiu, R.: I know I stored it somewhere – Contextual Information and Ranking on our Desktop. In: 8th International Workshop of the EU DELOS Network of Excellence on Future Digital Library Management Systems (2005)
8. Schmitz, B., Wiese, B.S.: New Perspectives for the Evaluation of Training Sessions in Self-Regulated Learning: Time-Series Analyses of Diary Data. Contemporary Educational Psychology 31, 64–96 (2006)
9. Scholl, P., Benz, B.F., Mann, D., Rensing, C., Steinmetz, R., Schmitz, B.: Scaffolding von selbstreguliertem Lernen in einer Rechercheumgebung für internetbasierte Ressourcen (en: Scaffolding Self-Regulated Learning in a Research Environment for Web-based Resources). In: Proceedings der Pre-Conference Workshops der 5. e-Learning Fachtagung Informatik DeLFI 2007, pp. 43–50. Logos Verlag, Berlin (2007)
10. Scholl, P., Mann, D., Rensing, C., Steinmetz, R.: Support of Acquisition and Organization of Knowledge Artifacts in Informal Learning Contexts. In: EDEN Annual Conference 2007, Naples, p. 16 (2007)
11. Scholl, P., Benz, B., Böhnstedt, D., Rensing, C., Steinmetz, R., Schmitz, B.: Einsatz und Evaluation eines Zielmanagement-Werkzeugs bei der selbstregulierten Internet-Recherche (en: Application and Evaluation of a Goal-Management Tool in Self-Regulated Internet Research). In: Seehusen, S., Lucke, U., Fischer, S. (eds.) DeLFI 2008: 6. e-Learning Fachtagung Informatik, pp. 125–136 (2008)
12. Vygotsky, L.S.: Mind in Society: The Development of Higher Psychological Processes. Harvard University Press, Cambridge (1978)
13. Wild, K.-P., Schiefele, U.: Lernstrategien im Studium. Ergebnisse zur Faktorenstruktur und Reliabilität eines neuen Fragebogens (en: Learning Strategies in Studies. Results on the Factor Structure and Reliability of a New Questionnaire). Zeitschrift für Differentielle und Diagnostische Psychologie 15, 185–200 (1994)
The Impact of Prompting in Technology-Enhanced Learning as Moderated by Students' Motivation and Metacognitive Skills

Pantelis M. Papadopoulos, Stavros N. Demetriadis, and Ioannis G. Stamelos

Aristotle University of Thessaloniki, Informatics Department, PO Box 114, 54621 Thessaloniki, Greece
{pmpapad,sdemetri,stamelos}@csd.auth.gr
Abstract. This work explores the role of students' motivation and metacognitive skills as moderating factors that influence the impact of an instructional method in the ill-structured domain of Software Project Management (SPM). In order to teach aspects of the SPM domain, we developed a web environment for case-based learning and additionally implemented a questioning strategy to help students focus on important parts of the case material. The paper presents the results from three studies revealing how students' motivation and metacognitive awareness influenced their engagement in the cognitively challenging situations induced by the method. The implication for instructors and designers is that implementing a promising method to help students efficiently process the complex material in an ill-structured domain might not always lead to the desired learning outcomes. Students' motivation and metacognitive skills should also be addressed in order to maximize the potential benefits of instruction.

Keywords: Ill-structured domains, Question prompts, Case-based learning, Technology-enhanced learning, Motivation, Metacognitive skills.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 535–548, 2009. © Springer-Verlag Berlin Heidelberg 2009

1 Introduction

Teaching in an ill-structured domain poses additional instructional difficulties as compared to well-structured domains [1, 2, 3]. According to Spiro et al. [4], in an ill-structured domain: (a) each case or example of knowledge application typically involves the simultaneous interactive involvement of multiple, wide-application conceptual structures, each of which is individually complex, and (b) the pattern of conceptual incidence and interaction varies substantially across cases nominally of the same type (p. 60). Students, therefore, need to study several domain cases in order to understand how contextual factors in various situations affect the successful (or unsuccessful) application of knowledge.

Many researchers argue that all domains involving the application of knowledge to unconstrained, real-world situations are substantially ill-structured [e.g., 4, 5]. This notion applies widely in Computer Science Education, the domain we are primarily interested in. For example, while the teaching of programming utilizes well-structured schemata characterized by abstractness, developing a software project for use in the
real world is a complex, ill-structured task affected by concepts such as cost, sustainability, and technical infrastructure.

To efficiently introduce our students to the intricacies of ill-structured domains, we developed eCASE, a web environment for case-based learning, where instructors can upload and organize the presentation of complex case material. As our intention was to further improve the learning experience, we also embedded in the system a questioning strategy to help students focus on important aspects of the presented cases. This, in turn, generated interesting research questions, and we proceeded to test the effectiveness of the instructional technique (i.e., the questioning strategy) in three different situations. The first study explored the impact of the question prompts in three conditions of individual learning (non-prompted, prompted to write answers, prompted to simply think of answers). The second study engaged students in collaborative learning, exploring the potential of peer interaction. The third study involved postgraduate students (the students in the two other studies were undergraduates), focusing on students' attitudes in a condition where providing written answers to the prompts was optional.

While the research scope of each of the three studies was different, two factors appeared repeatedly in the analysis of students' activity. Quantitative and qualitative data from these studies compose a picture revealing that students' motivation and metacognitive awareness had a major impact on the learning strategy the students applied and, consequently, on their performance. In the following, we present: (a) the theoretical background of our approach, (b) the research design and results of the three studies, and (c) a discussion of the role of student motivation and meta-cognitive skills as moderating variables that should be considered by instructors and instructional designers.
2 Theoretical Background

2.1 Question Prompts as Student Scaffolds

Scaffolds are instructional interventions that aim to help students develop a deeper understanding of material that might not be within their immediate grasp [6]. One widely implemented form of scaffolding is the use of question prompts: sets of questions used to guide the learning activity. Research indicates that questioning strategies can be highly beneficial for students, supporting important cognitive functions such as stimulating prior knowledge, enhancing comprehension, and facilitating problem-solving processes [7, 8]. Question prompts have been used in technology-enhanced learning environments to help direct students towards learning-appropriate goals (e.g., focusing student attention and modelling the kinds of questions students should be learning to ask [6, 9]).

2.2 Scaffolding in Case-Based Learning

Case-based learning is often cited as a successful teaching method for ill-structured domains [7]. When practicing case-based learning (CBL), two instructionally challenging issues always need careful interventions [10]. First, how to help students avoid misconceptions by not oversimplifying the material. Students need to work
through several cases to develop deeper domain-specific knowledge (such as domain concepts, rules, and principles) [11]. Second, how to support students in applying their knowledge to new problem situations, which may differ significantly from those encountered in the instructional setting. Kolodner [11] argues that "people…do not always remember the right cases on which to base their reasoning and argumentation". Consequently, when learning in an ill-structured domain, instructional interventions can be valuable in helping students understand the domain more deeply and recall relevant cases.

We argue that embedding question prompts in case material can be such a productive intervention, as these questions help students focus their attention on important contextual issues. In order to construct a method-specific (rather than domain-specific) questioning strategy, we stipulate that the question prompts should trigger the cognitive processes relevant to generating the context of a situation. According to Kokinov [12], there are at least three such processes: perception, memory recall, and reasoning. It might be beneficial, therefore, for learners who study complex case-based material to be prompted to (a) identify and focus on important events in the situation (perception), (b) relate these events and their impact to what is already known from other similar situations (memory recall), and (c) reach useful conclusions (reasoning) based also on the results of the two previous steps. We expect that this "observe-recall-conclude" questioning scheme can improve students' understanding of the domain and their ability to recall the cases they study.

2.3 Motivation and Meta-cognition

"Motivation" refers to factors of the learning situation that make students activate their cognitive processes to accomplish the objectives of the activity. According to the ARCS model, four critical factors are related to motivation [13, 14]:

− Attention. A learning activity should gain and keep the student's attention.
− Relevance. The student should clearly understand how completing the learning activity will help her achieve her personal goals.
− Confidence. The student's perceived likelihood of successfully completing the activity.
− Satisfaction. The reward satisfaction the student receives for completing the activity.

Meta-cognition, respectively, refers to learners' knowledge concerning their own cognitive processes [15]. Meta-cognitive skills include one's capability to actively monitor and consequently regulate one's cognitive processes, usually in relation to some concrete learning objective. Solving well- and ill-structured problems requires different cognitive and metacognitive skills [1, 2, 3]. While domain knowledge and justification ability are needed in both cases, the ambiguous nature of ill-structured problems further requires students to possess skills related to the regulation of cognition, including planning, monitoring, and re-evaluation of goals [2, 3]. Additionally, many researchers claim that affective variables, including attitudes, values, motivation, and emotions, can influence how students define problem goals and problem space, thus affecting their performance in ill-structured domains [1, 16, 17]. However, more empirical evidence is required to verify the differences in the role of metacognitive and emotional variables in the two types of problems [e.g., 2, 18, respectively].
3 Common Characteristics of the Three Studies

In the following, we present in juxtaposition the three studies we conducted, investigating the effectiveness of our questioning scheme under different conditions. To be concise, we first present the common characteristics of the three studies and then continue with the goals and findings of each study, focusing on both students' performance and attitudes, as depicted through pre- and post-tests and interviews.

3.1 Material

Our interest is mainly Computer Science education, and for this reason we chose Software Project Management as the domain of instruction: a domain of considerable complexity that demands knowledge transfer in job-related situations. SPM is hard to teach, and learning relies largely on past experiences and on project successes and failures. Difficulties in this domain stem from the fact that software processes are not well-defined, their product is intangible and often hard to measure, and large software projects differ in various ways from other projects [19]. In addition, many aspects of SPM are not adequately formalized and involve subjective quantification, e.g. risk prioritization. As a consequence, software managers recall and use their knowledge about projects they have managed (or are aware of) in the past, and base their decisions on management patterns and anti-patterns. It is worth mentioning that this field has been ranked first among 40 computer science topics whose instruction needs to be intensified in academia because of demands in professional contexts [20].

For the purpose of our research, we developed eCASE, a web-based environment for case-based learning in the SPM domain. Studying in the environment involves solving ill-structured problems, presented to students as "scenarios". A scenario is a problem-case anchoring student learning in realistic and complex situations in the field.
After presenting the problem, a scenario poses to students some critical open-ended questions (scenario questions), engaging them in decision-taking processes, as if they were field professionals. Before answering the scenario questions, the learners are guided to study supporting material in the form of "advice-cases". An advice-case is a comprehensive case presenting some useful experience in the field, to help students analyze the demands and risks of Software Project Management. Each scenario was accompanied by a number of related advice-cases, which were selected and adapted from authentic SPM cases reported in the literature [e.g., 21]. Advice-cases are organized in smaller parts ("case-frames"), each one presenting a domain theme, that is, some meaningful and self-contained aspect of the whole case. For example, an advice-case could be organized in three case-frames, under the titles "The role of end-users", "Changing requirements" and "Executive support and commitment", which are considered important themes in the SPM domain. The aforementioned observe-recall-conclude questioning scheme typically appears in each case-frame, prompting students to reflect on the material they have just read and provide answers to the three following questions:

1. What concrete events (facts, decisions, etc.) imply possible problems during project development?
2. In what other cases do you recall having encountered similar project development problems?
3. What are some useful implications for the successful development of a project?
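The content hierarchy described above (scenarios with open-ended scenario questions, accompanied by advice-cases split into case-frames, each carrying the observe-recall-conclude prompts) can be sketched as a small data model. All class and field names below are illustrative assumptions, not the actual eCASE implementation:

```python
from dataclasses import dataclass, field

# The fixed observe-recall-conclude prompt scheme attached to every case-frame.
PROMPTS = (
    "What concrete events (facts, decisions, etc.) imply possible problems "
    "during project development?",
    "In what other cases do you recall having encountered similar project "
    "development problems?",
    "What are some useful implications for the successful development of a project?",
)

@dataclass
class CaseFrame:
    """A self-contained domain theme within an advice-case."""
    title: str               # e.g. "The role of end-users"
    text: str
    prompts: tuple = PROMPTS # observe-recall-conclude questions

@dataclass
class AdviceCase:
    """A comprehensive field case, organized into case-frames."""
    title: str
    frames: list = field(default_factory=list)

@dataclass
class Scenario:
    """An ill-structured problem anchoring the learning activity."""
    title: str
    scenario_questions: list = field(default_factory=list)  # open-ended
    advice_cases: list = field(default_factory=list)

# Hypothetical content instance, for illustration only.
scenario = Scenario(
    title="An ongoing software project in trouble",
    scenario_questions=["Which management decisions would you revisit, and why?"],
    advice_cases=[AdviceCase(
        title="A failed ERP introduction",
        frames=[CaseFrame(title="Changing requirements", text="...")],
    )],
)
```

The sketch makes the gating unit explicit: prompts hang off case-frames (the granularity at which the questioning scheme recurs), while scenario questions hang off the scenario itself.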
3.2 Pre- and Post-Testing

The pre-test was a prior domain knowledge instrument that included a set of 6 open-ended question items relevant to domain conceptual knowledge (e.g., "What role can/should the end-users play in the development of a software project?"). The post-test comprised two sections focusing on: (a) acquired domain-specific conceptual knowledge, and (b) students' potential for knowledge transfer in a new problem situation. The first section included three domain conceptual knowledge questions. The answers to these questions were not to be found as such in the study material, but rather had to be constructed by taking into account information presented in various cases. The second section presented a dialogue-formatted scenario. In this scenario, various stakeholders (company CEO, CFO, clients, technicians, etc.) were discussing managerial issues of an ongoing software project in an everyday professional context. Students had to identify elements in the scenario that might be indicators of inefficient management and suggest resourceful alternatives.

3.3 Procedure

The studies were conducted in five phases: pre-test, familiarization, study, post-test, and interview. In the pre-test phase, students completed the prior domain knowledge instrument. During the familiarization phase, students logged in to the environment (from wherever and whenever they wanted) and worked individually on a relatively simple scenario prepared for them, accompanied by two short advice-cases. Students had to read the material in the advice-cases and, based on that, provide answers to the scenario's open-ended questions. They were allowed one week to complete the activity, and no question prompts were presented while studying the advice-cases. Hence, the familiarization phase was common for all the students in the three studies. Next, the students continued with the study phase, which was different for each group.
This phase lasted one week, and the students had to work online on 3 complex scenarios (the same for all groups) that addressed more domain themes and were accompanied by 5 longer advice-cases organized in 20 case-frames in total. After the study phase, students took a written post-test in class. After the post-test, the students were interviewed to record their comments and attitudes regarding the activity.

3.4 Data Analysis

To avoid any biases, students' paper sheets for the pre- and post-test were mixed and blindly assessed by two raters. The raters used a 0–10 scale and followed detailed predefined instructions on how to assess each specific item. The deviation between the scores from the two raters was not to exceed the 10% level (one grade on the assessment scale); otherwise, the raters had to discuss the issue and reach a consensus. Eventually, each student received 3 scores: (a) a score for the pre-test, (b) a score for answering the domain-specific conceptual knowledge questions ("conceptual" score) of the post-test, and (c) a score for the post-test scenario analysis ("transfer" score). These scores were calculated as the mean value of the respective pre-test, conceptual, and transfer scores provided by the two raters. As a measure of inter-rater reliability, we calculated the intraclass correlation coefficient (ICC) for the three scores.
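The double-rating procedure can be illustrated with a short computation: averaging the two raters' scores, flagging pairs that violate the 10% deviation rule, and a two-way absolute-agreement ICC (Shrout-Fleiss ICC(2,1), one common choice; the paper does not state which ICC variant was used). The ratings below are invented, as the paper reports no raw data:

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random, absolute-agreement, single-rater ICC(2,1) after
    Shrout & Fleiss. `ratings` is an (n_subjects, n_raters) array."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    ss_err = ((x - x.mean(axis=1, keepdims=True)
                 - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# Invented pre-test ratings from two raters on the 0-10 scale.
rater_a = [1.0, 2.0, 1.5, 3.0, 0.5, 2.5]
rater_b = [1.0, 2.5, 1.5, 2.5, 1.0, 2.0]
ratings = np.column_stack([rater_a, rater_b])

# Final score per student: mean of the two raters.
final = ratings.mean(axis=1)

# The deviation rule: flag pairs differing by more than one grade (10% of 0-10).
needs_consensus = np.abs(ratings[:, 0] - ratings[:, 1]) > 1.0

print(round(icc_2_1(ratings), 2))  # prints 0.87
```

With perfectly agreeing raters the function returns 1.0; values around .83–.90, as reported in the studies, indicate high inter-rater reliability.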
For all statistical analyses, a significance level of .05 was chosen. To validate the use of the parametric tests in the first and second studies, we examined the respective test assumptions; the results showed that none of the assumptions were violated. The interviews lasted about 10 minutes per student and were semi-structured and audio-recorded. The interview transcripts were used for content analysis.
4 First Study: Writing vs. Thinking

4.1 Research Scope

The first study aimed to investigate (a) the effectiveness of the prompting technique when students studied in a self-paced mode, and (b) whether the way the students answered the questions (providing answers in written format vs. simply thinking of the answers) had an impact on the learning outcomes. When designing a technology-enhanced learning environment, the choice between asking students to explicitly write the answers and simply prompting them to think of answers is not an easy one to make. Writing has been used as an effective tool for constructive learning [22], supporting students in developing critical thinking and increasing their analysis, inference, and evaluation skills [23]. However, a questioning strategy that requires written answers to each and every open-ended question prompt may significantly increase students' cognitive load, thus posing a threat to their motivation for meaningful engagement. Some researchers suggest that students should answer question prompts in writing to avoid superficial engagement [e.g., 24], while others think that periodically asking learners to reflect on the question prompts should be sufficient [e.g., 25].

4.2 Participants

The study employed 59 undergraduate Computer Science students who volunteered to participate. The students were randomly assigned to three conditions: Non-Prompted (NP) (n = 20), Writing Condition (WC) (n = 19), and Thinking Condition (TC) (n = 20). Students who successfully completed all the phases of the study were given a bonus grade for the course. Students were domain novices (this was a prerequisite for participation in all the studies) and had never before been formally engaged in a case-based learning activity.

4.3 Treatment

The students in the NP group studied the advice-cases without the question prompts and were able to answer the scenario questions after just navigating through the advice-cases.
The students in the WC group had to provide written answers to the question prompts that appeared after each case-frame. Only then were they permitted by the system to answer the scenario questions. Finally, the TC group studied the advice-cases with the question prompts, but these students were only asked to reflect on the material and spend some time thinking of possible answers to the prompts. Similarly to the NP group, the students in the TC group were able to answer the scenario questions after just navigating through the accompanying advice-cases.
4.4 Results

Inter-rater reliability was high for the pre-test (ICC = .90), the conceptual (ICC = .88), and the transfer (ICC = .84) scores. Table 1 presents students' performance in the pre-test and the two measures of the post-test. One-way analysis of variance (ANOVA) results indicated that students were domain novices, scoring very low in the pre-test, and that the three conditions were comparable regarding students' prior knowledge (F(2,56) = 1.48, p = .23). The results of the multivariate analysis of covariance (MANCOVA), using the pre-test score as covariate, revealed a significant main effect for the prompting condition regarding the two dependent variables of the post-test (Wilks' Lambda: F(4,108) = 2.38, p = .04). Univariate tests for each of the post-test measures showed significant main effects for the prompting condition (conceptual: F(2,55) = 3.68, p = .03; transfer: F(2,55) = 3.53, p = .03). Post hoc tests showed that the students in the writing condition outperformed the non-prompted (conceptual: p = .02; transfer: p = .01) and thinking condition students (conceptual: p = .02; transfer: p = .04) in both measures of the post-test. In contrast, no significant differences were found between the non-prompted and the thinking condition groups for the conceptual or the transfer scores.

Table 1. Students' Performance in the First Study
            Non-Prompted (n = 20)   Writing Condition (n = 19)   Thinking Condition (n = 20)
            M (SD)                  M (SD)                       M (SD)
Pre-test    1.43 (0.87)             1.85 (1.04)                  1.41 (0.77)
Conceptual  7.78 (1.01)             8.56 (0.74)                  7.76 (1.24)
Transfer    7.75 (1.44)             8.84 (1.06)                  8.00 (1.45)
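The initial group-comparability check can be reproduced in outline with a one-way ANOVA. The score lists below are invented stand-ins (the paper reports only group means and standard deviations), and the full analysis additionally used a MANCOVA with the pre-test as covariate, which is not shown here:

```python
from scipy import stats

# Invented pre-test scores for the three conditions, roughly matching the
# reported group means (NP: M = 1.43, WC: M = 1.85, TC: M = 1.41).
np_scores = [1.0, 2.5, 0.5, 1.5, 2.0, 1.0, 1.5, 2.0, 0.5, 1.5]
wc_scores = [2.0, 1.5, 2.5, 1.0, 3.0, 1.5, 2.0, 2.5, 1.0, 1.5]
tc_scores = [1.5, 1.0, 2.0, 0.5, 1.5, 2.5, 1.0, 1.5, 1.0, 2.0]

# One-way ANOVA: is prior knowledge comparable across the three conditions?
f_stat, p_value = stats.f_oneway(np_scores, wc_scores, tc_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.2f}")

# A p-value above .05 would indicate no significant difference,
# i.e. the groups are comparable on prior knowledge.
```

This mirrors the reported check F(2,56) = 1.48, p = .23 in structure only; the degrees of freedom differ because the stand-in samples are smaller than the real groups.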
Typically, students in the WC group gave short and precise answers to the observe-recall-conclude scheme. The following is one student's set of answers to the question prompts of a case-frame addressing the "The role of end users" theme:

Observe: Senior management decided to exclude the end users completely from the development process to avoid additional delays.
Recall: In cases where the end users were excluded, the designers had trouble defining the system requirements accurately and fitting the project to the end users' needs, and that caused additional cost and delays.
Conclude: The end users are a valuable source of information for the designers and should be part of the development process – at least a sample of them positioned in key places.

When interviewed, students in all three groups stated that the environment was easy to use and the SPM cases were intriguing. The majority of the WC students said that answering the question prompts helped them to better understand the case material, but they also unanimously agreed that answering all question prompts was a tiresome process. In addition, 12 (out of the 20) TC students stated that after a while they were skipping the question prompts completely. They argued that answering the prompts was not helpful or essential to complete the activity, and that repeatedly
reflecting on the same observe-recall-conclude questions was rather tedious. In practice, this attitude shifted these students into the non-prompted condition. However, there were no differences between the two sub-groups of the TC group (those thinking of answers and those simply skipping the prompts), and both achieved significantly lower scores than the WC group.
5 Second Study: Peer Interaction

5.1 Research Scope

The second study (presented in detail in [26]) investigated how the context-oriented prompting technique could be combined with a collaborative learning method to scaffold students in ill-structured domains. Peer interaction is often considered an effective scaffolding method [27]; research, however, has consistently revealed that freely collaborating students may lack the competence to engage in fruitful learning interactions without external support and guidance [e.g., 28]. As a remedy, many researchers explore the potential of scripted collaboration [e.g., 29]. A collaboration script is a teacher-provided didactic scenario designed to engage a team of students in essential knowledge-generating interactions, and "scripted collaboration" is the practice of actually implementing a collaboration script to have students work within the scaffolding framework provided by the teacher. A computer-supported collaboration script is, accordingly, a computerized representation of a collaboration script [e.g., 30]. Script implementation is subject to students' appropriation process, meaning that students are expected to "filter" and adjust the script to their own context at runtime. Dillenbourg [31] underlines this distinction, suggesting that one should distinguish between the ideal script (the activity as prescribed by the teacher), the mental script (the representation that the group builds from the teacher's prescription), and the actual script (the task and interactions that students actually engage in), in order to conceptualize the different script perspectives of teachers and students.

5.2 Participants

The first and the second study were conducted at the same time. Hence, the same Non-Prompted group was used as a control for both studies. Initially, a total of 77 students volunteered. Next, students were asked whether they would like to work collaboratively or not.
Finally, 18 students formed 9 dyads (CSCL group), while the remaining 59 students were randomly assigned to the three conditions described in the first study. All the students were domain novices without formal experience in case-based learning, and were given a course bonus grade after completing all the study phases.

5.3 Treatment

As described earlier, the NP group studied the advice-cases without the question prompts and was able to answer the scenario questions after just navigating through the advice-cases.
For the CSCL group, in contrast, the observe-recall-conclude questioning scheme appeared once at the end of each advice-case, and student dyads had to follow a specific collaboration script in order to answer the questions and complete the study of the advice-case. The script had three steps, guiding students through a peer review process. In step 1, each student in a dyad had to answer the questions individually. After both students had answered the questions, their answers became available to each other. In step 2, the students individually reviewed each other's answers and identified issues of agreement/disagreement. In step 3, the students had to collaborate, discuss their reviews, and agree on a common final answer, including argumentation for their choice to present or dismiss issues that appeared in their individual answers. To make collaboration easier, the students were allowed to use the medium of their choice during the discussion (eCASE, face-to-face meetings, phone calls, email, etc.). The script ended when one of the students in a dyad submitted the final common answer in the environment. The same script was applied when answering the scenario questions. The students, as always, had to complete the study of the advice-cases in order to answer the scenario questions. The collaborating students had to self-organize their activity, in order to communicate and maintain an efficient pace in submitting their answers.

5.4 Results

Inter-rater reliability was high for the pre-test (ICC = .90), the conceptual (ICC = .88), and the transfer (ICC = .85) scores. Table 2 presents the pre- and post-test scores of the two groups. T-test results indicated that students were domain novices, scoring very low in the pre-test, and that the two conditions were comparable regarding students' prior knowledge (t(36) = 0.16, p = .86).
MANCOVA results showed that the main effect of collaboration did not reach statistical significance for the two measures of the post-test (Wilks' Lambda: F(2,34) = 0.53, p = .59).

Table 2. Students' Performance in the Second Study
            Non-Prompted (n = 20)   CSCL (n = 19)
            M (SD)                  M (SD)
Pre-test    1.43 (0.87)             1.39 (0.57)
Conceptual  7.78 (1.01)             7.59 (1.33)
Transfer    7.75 (1.44)             8.11 (1.18)
Analyzing the collaboration of the CSCL dyads revealed four different actual collaboration scripts:

• Ideal script. Three dyads worked exactly as the ideal script prescribed. Students participated equally throughout the steps of the script, and they had a meaningful discussion about the formation of the final answer.
• Moderate interaction. The actual script of two other dyads resembled the ideal script, but with a significant decrease in the interaction between students. For example, there was usually a brief discussion about the issues of the final answer, but only one of the students was responsible for the final answer each time.
• Weak interaction. Another two dyads demonstrated a pattern with almost no interaction. In these dyads, communication was usually one-sided. After both students submitted their individual answers, one of them was solely responsible for the formation and submission of the final answer, also considering comments sent by the other student concerning the two individual answers.
• No interaction. Lastly, two other dyads worked in a totally individual mode, as one of the students was usually completely non-participating after submitting the individual answer, while the other student had to write and submit the final answer without any feedback from his or her partner.

Additionally, the inspection of students' answers in the environment revealed that in some cases students were submitting superficial individual answers, only to make the system promote them to the next step of the script. This pattern of collaboration clearly violates the ideal script, as the instructions were to meaningfully answer the questions and contribute to the team's effort through interaction and collaboration. The small number of dyads involved precludes a quantitative comparison of these four collaboration patterns. It seems reasonable to assume that improved engagement and collaboration might also have improved the students' level of learning, although this remains to be examined.

Interviews also revealed misconceptions about the ideal script that led students to unpredicted behaviors distant from the script's goal. For example, the examination of the answers of one dyad showed that the first student was initially submitting comprehensive, good answers, while the other student was answering poorly and briefly. During the week, the answers of the first student became significantly shorter, and adequate analysis was often missing.
When asked about this, the second student said that she had asked her partner to give shorter individual answers in order to eliminate the differences between their answers and to be able to submit a common final answer with more ease. In this student’s mind, agreement between partners was conceived as a script requirement, the general goal being to submit a common answer, not necessarily a more complete one.
6 Third Study: Engaging Postgraduates

6.1 Research Scope

The third study engaged postgraduates and focused on the possibly different strategies that these students might implement when learning in the environment. In general, postgraduates are considered more advanced learners, and research on the expertise reversal effect [32] suggests that instructional design techniques that are effective for beginners may not be effective for more experienced learners. In the previous two studies, results showed higher performance for the WC group, although the prompt-induced workload was an issue for many students. In this exploratory study, we wanted to investigate whether postgraduates would demonstrate a higher level of metacognition in the examined setting and whether their performance and attitudes would differ from those of the undergraduates. The different profile of the students who participated in this study, and the provision of a stronger motive for achieving higher performance, enabled us to analyze the role of motivation and metacognitive skills in the effectiveness of our prompting method.
The Impact of Prompting in Technology-Enhanced Learning
545
6.2 Participants

The study employed 19 students with a diploma in Economics attending an interdisciplinary postgraduate program in Informatics and Business Management. Students were domain novices, but they had considerable experience in learning with cases, as this method was widely applied in their undergraduate courses (e.g., Marketing). Additionally, the course grade bonus awarded to them was calculated based on their performance in the post-test (and not simply on their successful participation, as in the previous studies). This, we believe, gave postgraduates an additional motive for meaningful engagement.

6.3 Treatment

The students studied individually and the question prompts appeared in each case-frame of the advice-cases. As always, the students had to answer the scenario-questions in writing. However, providing written answers to the question prompts was mandatory only for the advice-cases of the first scenario. Afterwards, submitting written answers to the prompts became optional. We opted for this arrangement in order to investigate whether postgraduates would continue answering the prompts or skip this step. The postgraduates started studying in the written answers condition and had the choice to shift to the just reflecting condition (or even to the non-prompted condition).

6.4 Results

Inter-rater reliability was high for the pre-test (ICC = .89), the conceptual (ICC = .83), and the transfer (ICC = .84) scores. Students were novices in the SPM domain, scoring very low on the pre-test (M = 2.34, SD = 1.08). In contrast, they achieved very high scores on the conceptual (M = 9.04, SD = 0.65) and the transfer (M = 9.16, SD = 1.46) measures of the post-test, surpassing the performance of every group in the previous studies. Analyzing students’ activity during the study revealed that the postgraduates’ attitude was significantly different from that of the other students.
Although all students had a positive opinion about the learning environment and the activity as a whole, postgraduates developed a more positive attitude toward the question prompts, accepting the additional workload that these questions induced. Specifically, 13 out of 19 postgraduates submitted written answers to all the prompts in the activity, while the remaining 6 students answered between 58% and 83% of them. In their interviews, students claimed that writing the answers to the prompts also helped them submit more comprehensive answers to the scenario-questions that followed. According to them, that was the main reason why they continued to provide written answers throughout the whole activity. When asked why they preferred to write down their answers instead of merely thinking them through, students stated that the procedure of writing the answers helped them in two ways. First, by writing their answers they were able to better organize their thoughts regarding the case material and to understand how the different domain concepts were related. Second, the submission of written answers in the environment made them easily available for reference later, when answering the scenario-questions.
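The studies report inter-rater reliability as intra-class correlation coefficients (e.g., ICC = .89 for the pre-test) without stating which ICC variant was computed. As an illustrative sketch only (not the authors' procedure), the simplest one-way random-effects variant, ICC(1,1) in Shrout and Fleiss's notation, can be computed from a table of rater scores; the `scores` layout below (one row per rated answer, one column per rater) is our assumption:

```python
def icc_one_way(scores):
    """One-way random-effects intra-class correlation, ICC(1,1).

    scores: list of rows, one row per rated answer, one column per rater.
    """
    n = len(scores)       # number of rated targets (student answers)
    k = len(scores[0])    # number of raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    # Between-targets mean square: variance attributable to the targets
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    # Within-target mean square: disagreement among raters
    msw = sum((x - m) ** 2
              for row, m in zip(scores, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two raters in perfect agreement over three answers yield ICC = 1.0:
print(icc_one_way([[1, 1], [2, 2], [3, 3]]))  # -> 1.0
```

Values above roughly .75 are conventionally read as good agreement, which is how the reported .83–.89 figures should be understood.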
7 Discussion and Conclusions

Despite the different research scopes of the three presented studies, the comparison of their characteristics and results highlights how students’ engagement in a learning activity can be moderated by their motivation and metacognitive awareness. Figure 1 presents the performance of the five groups that participated in the studies.
Fig. 1. Students’ performance in the five groups of the three presented studies
First, one should note that the WC students (first study) were compelled to provide written answers and outperformed the other two groups (TC and NP). This implies that the instructional method (questioning strategy) was indeed effective and that the additional cognitive load induced by the prompts was germane to the learning task. Remarkably, the highly motivated and meta-cognitively skillful postgraduates (third study) freely chose to follow a similarly maximal engagement learning strategy (writing the answers) that eventually helped them to benefit most from the prompting technique. Postgraduates had considerable experience in case-based learning, acknowledged the role of the prompts in producing better answers to the scenario-questions, and expressed a very positive opinion about writing the answers. By contrast, less motivated and meta-cognitively more naive students (the TC group in the first study) followed a learning strategy (skipping the prompts) that diminished the learning benefits of the method. Similarly, several collaborating students (second study) adopted a low-interaction version of the scripted activity, thus weakening the impact of the questioning technique through peer interaction (presumably a key learning mechanism in the collaborative activity). TC and CSCL students followed a low engagement strategy, either because of a conscious choice to minimize the workload or because they failed to understand how they would benefit from the proposed method of studying and processing the learning material. The implication for instructors and designers is that implementing an effective and cognitively challenging method to engage students in deeper content processing in
ill-structured domains does not guarantee the same level of learning outcomes in all situations. Students appropriate the instructional method through their own “lenses” of motivation and metacognition. This process may have a beneficial or detrimental effect on learning, depending on the engagement strategy the students choose to follow. Our experience so far indicates that a powerful didactical intervention should also aim to increase students’ engagement by addressing motivational and metacognitive issues (and not necessarily by forcing students to adopt high engagement strategies). As a key issue for further investigation, we suggest that learning environments for ill-structured domains should also provide opportunities for the learners to reflect on and self-assess their learning strategy. For example, prompting students to analyze the possible benefits and shortcomings of the strategy they implemented (possibly by contrasting it with a successful one) might be highly beneficial for redirecting the strategy when necessary and for becoming meta-cognitively more experienced.
References

1. Jonassen, D.H.: Instructional design models for well-structured and ill-structured problem-solving learning outcomes. Educational Technology: Research and Development 45, 65–94 (1997)
2. Shin, N., Jonassen, D.H., McGee, S.: Predictors of well-structured and ill-structured problem solving in an astronomy simulation. Journal of Research in Science Teaching 40(1), 6–33 (2003)
3. Voss, J.F., Post, T.A.: On the solving of ill-structured problems. In: Chi, M.H., Glaser, R., Farr, M.J. (eds.) The nature of expertise, pp. 261–285. Lawrence Erlbaum Associates, Hillsdale (1988)
4. Spiro, R.J., Feltovich, P.J., Jacobson, M.J., Coulson, R.L.: Cognitive flexibility, constructivism, and hypertext: Random access instruction for advanced knowledge acquisition in ill-structured domains. In: Duffy, T., Jonassen, D. (eds.) Constructivism and the technology of instruction, pp. 57–75. Erlbaum, Hillsdale (1992)
5. Chi, M.T.H., Feltovich, P.J., Glaser, R.: Categorization and representation of physics problems by experts and novices. Cognitive Science 5, 121–152 (1981)
6. Azevedo, R., Hadwin, A.F.: Scaffolding self-regulated learning and metacognition – Implications for the design of computer-based scaffolds. Instructional Science 33, 367–379 (2005)
7. Ge, X.: Scaffolding students’ problem-solving processes on an ill-structured task using question prompts and peer interactions. Unpublished Doctoral Thesis (2001), http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-75/index.html
8. Craig, S.D., Sullins, J., Witherspoon, A., Gholson, B.: The Deep-Level-Reasoning-Question Effect: The Role of Dialogue and Deep-Level-Reasoning Questions During Vicarious Learning. Cognition and Instruction 24(4), 565–591 (2006)
9. Hmelo, C., Day, R.: Contextualized questioning to scaffold learning from simulations. Computers & Education 32, 151–164 (1999)
10. Feltovich, P.J., Spiro, R.J., Coulson, R.L., Feltovich, J.: Collaboration within and among minds: Mastering complexity, individually and in groups. In: Koschmann, T. (ed.) CSCL: Theory and practice of an emerging paradigm, pp. 25–44. Lawrence Erlbaum Associates, Mahwah (1996)
11. Kolodner, J.: Case-Based Reasoning. Morgan Kaufmann Publishers Inc., San Mateo (1993)
12. Kokinov, B.: Dynamics and Automaticity of Context: A Cognitive Modeling Approach. In: Bouquet, P., Serafini, L., Brezillon, P., Benerecetti, M., Castellani, F. (eds.) CONTEXT 1999. LNCS (LNAI), vol. 1688, p. 200. Springer, Heidelberg (1999)
13. Keller, J.M.: Development and use of the ARCS model of instructional design. Journal of Instructional Development 10(3), 2–10 (1987)
14. Keller, J.M., Kopp, T.W.: Application of the ARCS model to motivational design. In: Reigeluth, C.M. (ed.) Instructional Theories in Action: Lessons Illustrating Selected Theories, pp. 289–320. Lawrence Erlbaum Publishers, New York (1987)
15. Flavell, J.: Metacognitive aspects of problem solving. In: Resnick, B. (ed.) The nature of intelligence. Erlbaum, Hillsdale (1976)
16. Jehng, S.D., Johnson, S.D., Anderson, R.C.: Schooling and students’ logical beliefs about learning. Contemporary Educational Psychology 18, 45–56 (1993)
17. Tyler, S.W., Voss, J.F.: Attitude and knowledge effects in prose processing. Journal of Verbal Learning and Verbal Behavior 21, 524–538 (1982)
18. Brabeck, M.M., Wood, P.K.: Cross-sectional and longitudinal evidence for differences between well-structured and ill-structured problem-solving abilities. In: Commons, M.L., Armon, C., Kohlberg, L., Richards, F.A., Grotzer, T.A., Sinnott, J.D. (eds.) Adult development. Models and methods in the study of adolescent and adult thought, vol. 2, pp. 133–146. Praeger, New York (1990)
19. Sommerville, I.: Software Engineering, 7th edn. Addison-Wesley, Reading (2004)
20. Kitchenham, B., Budgen, D., Brereton, P., Woodall, P.: An investigation of software engineering curricula. Journal of Systems and Software 74(3), 325–335 (2005)
21. Ewusi-Mensah, K.: Software Development Failures. MIT Press, Cambridge (2003)
22. Tynjälä, P.: Writing as a tool for constructive learning: Students’ learning experiences during an experiment. Higher Education 36, 209–230 (1998)
23. Quitadamo, I.J., Kurtz, M.J.: Learning to Improve: Using Writing to Increase Critical Thinking Performance in General Education Biology. CBE-Life Sciences Education 6, 140–154 (2007)
24. Greene, B.A., Land, S.M.: A qualitative analysis of scaffolding use in a resource-based learning environment involving the World Wide Web. Journal of Educational Computing Research 23(2), 151–179 (2000)
25. Clark, R.C., Mayer, R.E.: e-Learning and the Science of Instruction. Pfeiffer, San Francisco (2003)
26. Papadopoulos, P.M., Demetriadis, S.N., Stamelos, I.G.: Analyzing the Role of Students’ Self-Organization in a Case of Scripted Collaboration. In: O’Malley, C., Suthers, D., Reimann, P., Dimitracopoulou, A. (eds.) Computer Supported Collaborative Learning Practices: CSCL 2009 Conference Proceedings, pp. 487–496. International Society of the Learning Sciences (2009)
27. Berge, Z.L., Collins, M.: Computer-mediated scholarly discussion groups. Computers & Education 24, 183–189 (1995)
28. Liu, C., Tsai, C.: An analysis of peer interaction patterns as discoursed by on-line small group problem-solving activity. Computers & Education 50, 627–639 (2008)
29. Dillenbourg, P., Tchounikine, P.: Flexibility in macro-scripts for computer-supported collaborative learning. Journal of Computer Assisted Learning 23(1), 1–13 (2007)
30. Kollar, I., Fischer, F., Hesse, F.W.: Collaboration Scripts – A Conceptual Analysis. Educational Psychology Review 18, 159–185 (2006)
31. Dillenbourg, P. (ed.): Framework for Integrated Learning. Deliverable D23-05-01-F of Kaleidoscope Network of Excellence (2004), http://hal.archives-ouvertes.fr/docs/00/19/01/07/PDF/Dillenbourg-Kaleidoscope-2004.pdf
32. Kalyuga, S.: Prior knowledge principle in multimedia learning. In: Mayer, R.E. (ed.) The Cambridge handbook of multimedia learning, pp. 325–337. Cambridge University Press, New York (2005)
Creating a Natural Environment for Synergy of Disciplines
Evgenia Sendova1, Pavel Boytchev2, Eliza Stefanova2, Nikolina Nikolova2, and Eugenia Kovatcheva2
1 Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, 8, Acad. G. Bontchev, 1113 Sofia, Bulgaria
2 Faculty of Mathematics and Informatics, St. Kl. Ohridski University of Sofia, 5, James Bourchier Blvd., 1164 Sofia, Bulgaria
[email protected], {boytchev,eliza,nnikolova,epk}@fmi.uni-sofia.bg
Abstract. The paper presents the authors’ experience in stimulating the synergy of disciplines via active learning methods, the emphasis being on project-based learning. The promotion of this method is demonstrated in the context of teacher training courses and the development of a set of IT textbooks. Numerous examples are presented showing that the synergy of various disciplines arises quite naturally in the context of studying IT. The project samples developed by teachers are inspired by ideas in the textbooks and are accomplished by means of specially designed computer applications. The importance of working on projects tuned to the learner’s interest as a decisive motivation factor is emphasized. In addition, the authors show that the bouquet of projects becomes more colorful with every new issue of the courses, thanks to the learners’ creativity and collaborative knowledge building.

Keywords: Project-based learning, learner’s motivation, creativity, collaborative knowledge building.
1 Promoting Synergy among Disciplines in Teacher Education

To meet the needs of contemporary society, synergy between various frontiers of education is crucial. It is not easy to step outside of an individual disciplinary box, learn the language of another field and, if necessary, alter one’s perception. Still, all this becomes essential in creating collaborative approaches when working on professional projects in science, industry and art. The fields of medical informatics, bioinformatics, bioengineering, the design of micro-engineering machines by means of new materials, and computer-generated art and music provide just a few examples. In order to prepare young people to integrate knowledge from different fields, we have to expose them to such an experience as early as their school years. For that we need teachers who themselves do this in a natural way and share their experience with the students. The idea of promoting an interdisciplinary approach at school is not a new one. It implies a unified perspective on thinking – one that helps to knit together many areas of the curriculum without compromising the integrity and distinctness of each area [1].

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 549–555, 2009. © Springer-Verlag Berlin Heidelberg 2009
550
E. Sendova et al.
But such an approach has been applied in Bulgaria in a natural way only in isolated educational experiments. One of them was designed by the Research Group on Education (RGE), which brought together Bulgarian scientists, educators and software developers with the ambition of facing the challenges of the information age [2]. The experiment was based on two main principles – the integration of disciplines and learning by doing. Its success depended to a great extent on the collaboration among the teachers, including joint teaching. The experiment ran for 12 years (up to 1991) in 29 schools. The policy makers of today feel very proud of reshaping the Bulgarian school – producing multiple sets of textbooks for each grade every year, introducing a new subject (IT) in grades 5–7, and providing a sufficient number of computers for each school. But the essential questions for us are: Do we use technology so as to stimulate thoughtful analysis, expressing oneself in creative ways, and seeing and making connections among various fields? How do the teachers teach by means of technology? How could we define an innovative teacher? Do the teachers get appropriate education and support for their new role as facilitators in the learning process and as research partners of their students? In our capacity as people involved in the development of computer environments, teaching platforms and materials, as well as in the design of ICT-enhanced teaching strategies, we will share an aspect of our work related to approaches for encouraging teachers’ creativity in a multidisciplinary context.
2 Specifics of Our Environment for Natural Synergy of Disciplines

The environment we try to create for the natural integration of disciplines during our teacher education courses is based on the specific I*Teach (Innovative Teacher) methodology. It deals with developing so-called ICT-enhanced skills, defined as a synergy between technical and soft skills – transferable skills in the lifelong learning society. Putting the emphasis on such skills in the context of ICT education has been addressed within the frame of the Leonardo da Vinci I*Teach project [3]. The I*Teach methodology [4-6] is based on active learning methods – the student is at the center of the learning process, and the teacher is a guide and a partner in project work based on didactic scenarios encouraging the learner’s creativity.
Fig. 1. A typical I*Teach map of a project scenario
We shall demonstrate how our approach of implementing the I*Teach methodology in a set of ICT textbooks and teacher’s handbooks allows teachers and students to enhance the underlying ideas in their field of interest. It is typical for the structure of the textbooks that a common thread links the tasks in a lesson, the lessons in a common ICT theme, and the ICT themes in the whole textbook [7]. The unifying theme of the final book of the series is coding, which runs as a common thread through the whole content. Each lesson deals with ideas and tools for solving problems considered as milestones towards a final goal (Fig. 1). The grand finale is a project (Decoding the Past) requiring the students to put together all the subject knowledge and skills acquired during the school year, to work creatively in teams, and then to present their results (Fig. 2a). For this purpose they are expected to decode a message and create computer models of ancient Greek vessels, to figure out their function (Fig. 2b), and thus to help a local museum restore them. And, as in all real-life projects, the multidisciplinary elements in the project scenario are interwoven in a natural way.
Fig. 2. From an abstract I*Teach Scenario to a concrete Grand finale!
Let us note that the soft skills expected to be developed when working on this project include teamwork (planning, task distribution, communication skills, conflict resolution), information skills (looking for relevant information, critical thinking), and presentation skills (preparing written and oral presentations of the milestones and the final product). Furthermore, the project output is expected to be “put on the table”, i.e., to have a finalized appearance and to be sharable. The textbooks are designed so as to foster the creativity of both teachers and students. The teachers are encouraged (in their handbooks and during the training courses) to develop variations on the project theme, taking into account their own expertise and their students’ interests. As for the students, their creativity is stimulated by offering them freedom of choice: 1) of a path towards a specific milestone; 2) of a tool representing their ideas; and 3) of a manner of presenting the results.
3 Multidisciplinary Projects

3.1 Preparing the Ground

During the last three years we have applied the I*Teach methodology in a series of courses with in-service teachers. What proved to be a very valuable idea was to start with an informal introduction of the participants addressing questions of the kind: In which field do you feel an expert and how do you know that? Who was your teacher? What else would you like to learn well? This would give valuable feedback not only to us but to all of them in terms of interests, background and expertise. Then we would offer a rather general theme leaving a lot of room for interpretation and reflection (Be My Guests, School Out of Doors, Which Way Now?, The Art of Communication). Next, we would group the teachers in teams according to their interests. Usually the teams included experts in more than one field – a good ground for a multidisciplinary project. Then the teachers would start working on a real-life project formulated by their team after a discussion. The projects were usually rather complex and required decomposition into subtasks. The team members would solve problems of various types based on their own experience, consulting inside and outside the team when needed. ICT was just an element of the environment and thus was used when necessary. One important point is that the teacher trainers and the participants are open to all incoming ideas. If we expect teachers to accept the ideas of their students without fear, we should let them experience a similar phenomenon in the role of learners ready to take various paths towards the final goal. Of course, at the beginning of the course not all the participants were ready for such an approach, but looking at the results of the team work they reconsidered their initial attitude.
3.2 Looking Around through Mathematical Glasses

Our experience shows that technology enables learners to approach mathematics with special enthusiasm when they work on projects tuned to their interests. For instance, a very important mathematical concept – the tessellation of the plane – can be perfectly demonstrated by numerous works of Escher and then explored in a computer environment. The idea has been implemented in our IT textbook for 6th-graders within a topic on the integration of activities. The students are expected to apply their technical and mathematical skills in tessellating the plane with the shape of a clown’s face (Fig. 3) and then to demonstrate their artistic imagination by generating their own tessellations
Fig. 3. A tessellation challenge in an IT textbook for 6th-graders
after Escher [8]. At first glance this is a project in fine arts, but in order to paint the picture the students have to decompose the project into small subtasks: to find the minimal element tessellating the plane, to figure out which geometric transformations to apply, and finally to choose a relevant computer environment and use it to accomplish the goal. We also used the scenario approach during the in-service teacher education, the difference being that we gave the participants the freedom to choose the final goal of the project. It was natural that they decided to work on projects involving the design of objects closely related to their professional orientation and/or hobbies – electrical light sources, hats, jewelry (Fig. 4). For this purpose they integrated not only various IT environments (Paint, Comenius Logo, Elica [9], [10] applications) but also knowledge in mathematics (symmetry, rotation in 2D and 3D, fractals), informatics (procedures with parameters, recursion), and art. The scenario of their work followed the I*Teach model – setting a final goal whose path is traced by milestones, e.g., to create a computer model of a rotational solid, to use the necessary mathematical information (in the case of the hats, the surface of the solid, so as to calculate the material needed), and to make a decoration by means of a Logo procedure or a graphics editor.
Fig. 4. Computer models of hats, jewellery, and light sources
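The hat projects above needed the surface area of a rotational solid in order to calculate the material required. The paper does not show this computation; the following is an illustrative sketch under our own assumptions (a solid obtained by rotating a profile curve y = f(x) around the x-axis, with a simple conical hat as the worked example), approximating the standard surface-of-revolution integral numerically:

```python
import math

def surface_of_revolution(f, a, b, n=10000):
    """Approximate the lateral surface area of the solid obtained by
    rotating y = f(x), a <= x <= b, around the x-axis:
        S = 2*pi * integral_a^b f(x) * sqrt(1 + f'(x)^2) dx
    using the midpoint rule and a central-difference derivative."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        dfx = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6  # f'(x), approximately
        total += f(x) * math.sqrt(1.0 + dfx * dfx) * h
    return 2.0 * math.pi * total

# A conical hat of base radius r and height hgt is the rotation of
# f(x) = r*x/hgt; its exact lateral surface is pi*r*l, l being the slant.
r, hgt = 9.0, 12.0                 # slant l = sqrt(81 + 144) = 15
area = surface_of_revolution(lambda x: r * x / hgt, 0.0, hgt)
print(round(area, 3), round(math.pi * r * 15.0, 3))  # the two agree
```

For a linear profile both the midpoint rule and the central difference are exact, so the numerical result matches the closed form up to floating-point rounding; for curved hat profiles the same routine applies unchanged.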
Our overall impression was that, although uneasy at the beginning, the teachers were inspired by the freedom we gave them concerning the theme of their projects. They came up with a lot of original ideas for various multidisciplinary projects embracing not only mathematics and art but also ecology, geography and even physical education. A very rewarding experience for us was the invitation sent by the principal of a school in a small town in Bulgaria which organized a science festival. The most interesting part was accomplished by teams of teachers in English, history, technology, literature and geography together with their students. The initiator of all that was a teacher who, after being exposed to the I*Teach methodology, managed to disseminate it among her colleagues and to stimulate them to work in teams on multidisciplinary projects reflecting their areas of competence and interest. The most important conditions for using the multidisciplinary approach turned out to be (as expressed by most of the teachers after the courses):
- the use of an active learner-centered approach
- the formulation of the problem as a motivating challenge for the learner
- the tuning of the theme of the project in harmony with the learner’s interests
- the team work (sharing and building knowledge)
- the use of ICT tools for bridging knowledge from various fields.
4 Conclusions

We have never seen anybody improve their skills in working on projects by any means other than engaging in a project. Therefore, we cannot teach the art of working on multidisciplinary projects without engaging ourselves in such work. With this in mind, we encourage the teachers to express their creativity, knowledge and interests in a project chosen by them; we act as research partners of their teams and demonstrate how we attack the problems that occur. Looking back at the challenges they have overcome, and feeling proud of the results they and their peers have achieved, these teachers enter the schools with newly gained self-confidence, ready to teach the way they have been taught. And if we manage to convince them that school is not only a preparation for life but life itself, we have achieved our goal. Although good examples of teachers’ creativity are not found in every school, our endeavor is to spread their achievements through journals and conferences for teachers, and on the basis of such achievements to enrich in-service and pre-service teacher training. And we are not alone in this endeavor [11]. The main lesson for us as teacher educators can be summarized as follows: if we hope for a real positive change in education, we should bring today’s and tomorrow’s teachers into situations in which they stop thinking about the future in terms of tests, exams or teaching pupils only. We should rather enable them to experience what they are doing as intellectually exciting and joyful in its own right.
References

1. Boytchev, P.: Overview of Research Logo System. In: Brna, P., Dicheva, D. (eds.) Proceedings of the Eighth International PEG Conference, Sozopol, Bulgaria (1997)
2. Sendova, E.: Identifying Computer Environments and Educational Strategies to Support Creativity and Exploratory Learning. In: Davies, G. (ed.) Teleteaching 1998 – Distance Learning, Training and Education, Proceedings of the XV IFIP World Computer Congress, Vienna, Austria and Budapest, Hungary, August 31 – September 4, p. 889 (1998)
3. Innovative Teacher project site, http://i-teach.fmi.uni-sofia.bg (retrieved on March 5, 2009)
4. Stefanova, E., Sendova, E., Van Deepen, N., Forcheri, P., Dodero, G., Miranowicz, M., Brut, M., et al.: Innovative Teacher – Methodological Handbook on ICT-enhanced skills. Faleza-Office 2000, Sofia (2007)
5. Stefanova, E., Sendova, E., Nikolova, I., Nikolova, N.: When I*Teach means I*Learn: developing and implementing an innovative methodology for building ICT-enhanced skills. In: Benzie, D., Iding, M. (eds.) Informatics, Mathematics, and ICT: a ‘golden triangle’, IMICT 2007 Proceedings, CCIS. Northeastern University, Boston (2007)
6. Sendova, E., Stefanova, E., Nikolova, N., Kovatcheva, E.: Like a school (of fish) in water (or ICT-Enhanced Skills in Action). In: Mittermeir, R.T., Sysło, M.M. (eds.) ISSEP 2008. LNCS, vol. 5090, pp. 99–109. Springer, Heidelberg (2008)
7. Sendova, E., Stefanova, E., Boytchev, P., Nikolova, N., Kovatcheva, E.: IT education – challenging the limitations instead of limiting the challenges. In: Proceedings of CIIT 2008, Bitola, Macedonia (2008)
8. Seymour, D., Britton, J.: Introduction to Tessellations. Dale Seymour Publications (1989)
9. Elica site, http://www.elica.net (retrieved on March 5, 2009)
10. DALEST project site, http://www.ucy.ac.cy/dalest/ (retrieved on March 16, 2009)
11. Galabova, G.: Method of the Project for Interactive Training in Mathematical Didactics. In: Gagatsis, A., Grozdev, G. (eds.) Proceedings of the 6th Mediterranean Conference on Mathematics Education, Plovdiv, Bulgaria, April 22-26, p. 375 (2009)
Informing the Design of Intelligent Support for ELE by Communication Capacity Tapering Manolis Mavrikis and Sergio Gutierrez-Santos London Knowledge Lab {m.mavrikis,sergut}@lkl.ac.uk
Abstract. This paper presents a method for the design of intelligent support for Technology Enhanced Learning (TEL) systems. In particular it deals with challenges that arise from the need to elicit precise, concise, and operationalised knowledge from ‘experts’ as a means of informing the design of intelligent components of TEL systems. We emphasise that theory development and design of such systems should rely on a process, which we refer to as bandwidth and freedom ‘tapering’. We present the application of the methodology and a case study from our work with an exploratory environment. We then discuss the generality of our method and some pragmatic constraints which may be useful in similar research.
1 Introduction

Developing adaptive and personalised support for learning environments is an interdisciplinary endeavour that involves several processes. As with the development of any intelligent system, one of the most difficult and expensive processes is knowledge elicitation. Particularly in the field of education, identifying experts and eliciting knowledge involves additional challenges which stem from the very nature of the domain. The act of teaching or helping students to learn is full of uncertainty and complex factors to which experienced teachers often respond instinctively. This makes it difficult for individuals to be introspective and to operationalise their tacit knowledge to the extent that it can be useful for the design of intelligent components of a TEL system. Even when domain experts are available, their expertise is often applicable only to some aspects of the design of environments that have a transformative role (i.e. seek to change the educational environment). For example, in the case of exploratory learning environments (ELEs), experts may be able to provide particularly useful knowledge when designing the ELE. However, its innovative nature and the freedom to explore make it difficult to model the domain, and limit the validity of the expert’s experience and knowledge. Until the environment is implemented, evaluated and used by students, it is problematic to predict in advance the difficulties that will emerge as students learn through their interaction. Therefore, the design of components that can provide intelligent support requires, at the very least, an iterative process, especially when one considers the high cost of their development [1]. This process ideally starts with interviews with domain experts
The authors would like to acknowledge the rest of the members of the MiGen team and financial support from the TLRP (e-Learning Phase-II, RES-139-25-0381).
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 556–571, 2009. © Springer-Verlag Berlin Heidelberg 2009
Informing the Design of Intelligent Support for ELE
(e.g. teachers and researchers in the field of education), iterates through several stages in which students interact with the system, and ends with the system fully implemented to be used by the students. However, interviewing domain experts and even observing students working with such an environment is not enough, since the introduction of intelligent components that can provide support has the potential to further transform the interaction, to the extent that previous observations can be rendered useless.

The aforementioned challenges can be tackled using appropriate research, design and development methodologies. Of particular relevance to our work are user-centred approaches, which are becoming the norm within software engineering. A unified and promising approach is contextual design [2], which recognises that in-depth understanding of user behaviour requires observing and analysing situations in their actual context. Our overall methodology for designing and developing intelligent support for TEL systems is inspired by approaches that recommend iterative design and development [3,4] as well as iterative formative evaluations [5]. Moreover, we are strongly influenced by the methods of design experiments or studies [6] which, through iterative, situated, and theory-based research, attempt to understand as well as improve educational processes [7]. A particularly relevant methodology from the field of applied Artificial Intelligence in Education (AIEd) is the Persistent Collaboration Methodology (PCM) [8], which is inspired by action research (see [9]) and advocates incremental research and development that can also contribute to theories of teaching and learning. PCM recognises not only the need for an iterative approach and the transformative nature of interventions in education, but also the need for a three-way persistent collaboration involving teachers and their students, researchers and technologists.
But just recognising the need for iterative design and research is not enough. To study the phenomena of interest in their actual context, there needs to be a transition in the means of communication from those naturally used by humans to those that are available to the computer. The computer is limited in the amount of information that it can obtain from the student, and is also limited in the amount and types of feedback that it can provide. The limitations are both technical (e.g. natural language processing and generation) and pragmatic (e.g. humans behave differently with computers, i.e. they listen and read with different interest or attention). In some cases (e.g. in well-structured, well-researched and well-understood domains) this transition can be made abruptly, i.e. knowledge can be elicited from experts and implemented directly as intelligent support in the system. In some cases, however, this fails to appreciate the difference in language and behaviour between the two situations (a problem already noted in [10,11,12]). This realisation has led to the development of a method known as wizard-of-Oz, in which unimplemented parts of a system are emulated by a hidden human operator. The method has been extensively used for system design and evaluation in the human-computer interaction (HCI) field [13,14,15]. Its importance has been understood especially in the design of dialogue systems [12,16], to the point that tools to facilitate the preparation and execution of wizard-of-Oz sessions have recently been devised [17].

In the last ten years the method has gained popularity for the design of Intelligent Tutoring Systems (ITS). However, there are many cases in the literature in which the process is not explicitly documented. This makes it more difficult to evaluate its usefulness and to employ it to its full potential. We argue that, for the design of TEL systems, it is crucial
M. Mavrikis and S. Gutierrez-Santos
to pay special attention to the available modalities of the situation as well as the speed and freedom of the messages that the operator can use to provide support to the student. We will refer to the combination of these concepts as the communication capacity of a situation, and describe it in detail in Section 2. In this paper we present our methodology for designing intelligent support, which is particularly suited to ELEs but is of relevance to all kinds of TEL systems. We examine issues that, to the best of our knowledge, have not been considered explicitly, and explain how our approach can serve in informing the design of such systems. In particular, we describe the need for a gradual evolution of the communication capacity, integrated with the design and theory-development process, while taking into account the different contexts in which the system will be used [2] (e.g. in the classroom, at home, etc.). We emphasise the need to start from face-to-face interactions and to gradually introduce constraints and structure by ‘engineering didactical situations’ [18]. We refer to this process as ‘tapering’ the communication capacity of didactical situations and we explain it in detail in Section 2. In Section 3, through a case study, we demonstrate how our method helps in investigating the transformed situation — something that would be impossible otherwise — and in deriving and evaluating pedagogical strategies. In Section 4 we discuss the generality of our approach and some pragmatic considerations which, we believe, other researchers and designers will find useful. Finally, Section 5 brings the paper to its conclusion and outlines our immediate lines of future work.
2 Iterative Communication Capacity Tapering

Our methodology can be conceived as a spiral, a cyclical process in which every iteration moves nearer to the centre (Figure 1). Every cycle goes through the four stages of action research (planning, acting, observation, reflection) which, for clarity and following some of the conventions used in [8], we refer to as planning and design, implementation, conducting studies, and analysis. After each cycle, the communication capacity of the situation is reduced. This spiral process bears obvious similarities to the spiral model of software design [4]. However, it is important to note that our spiral moves inwards, not outwards. In the traditional spiral model, a bigger radius after each iteration represents more or better functionality. Conversely, the shrinking radius of our spiral represents a reduction of the communication capacity. Additionally, the four stages in our methodology are more closely related to the action research and PCM stages [8] than to those in [4].

The planning and design stage comprises a combination of tasks: the design of the learning activities, outlining the relationship between the affordances of the environment and the learning activity, and conceptualising and planning (to the extent possible) the support that may be requested and/or provided. The implementation stage involves the implementation (or integration with the ELE) of any additional functionality that has been designed in the former stage. Typical examples are tools for the computer-mediated sessions (e.g. a good example of remote consoles for the rapid creation and dispatch of wizard-of-Oz messages can be seen in [19]), tools for session recording, etc. After the implementation, a series of studies is conducted with the students involved
Fig. 1. Spiral design and development by communication capacity tapering (each cycle passes through planning & design, implementation, conducting studies, and analysis; the shrinking radius represents the communication capacity)
in sessions with a ‘facilitator’1. The facilitator can be a teacher or, in our experience, a member of the research team, as also discussed in [12]. The studies are recorded and a researcher carefully observes (or takes part in) the interaction between the student, the facilitator and the environment. This is the most important stage, as it usually uncovers many tacit assumptions, particularly when subjects do not respond as expected or show unexpected behaviours. The researchers can afterwards reflect on all these issues (reflection and analysis), with the help of their own notes, video and voice recordings from the sessions, etc. In accordance with PCM, analysis and reflection sessions should include other stakeholders or other qualified individuals (e.g. teachers or experts in domains similar to the one targeted by the TEL system) who can provide insightful comments, review the data and help in identifying explicitly the distinctive landmarks that characterise the actions of the student. This process can be repeated several times until an understanding is achieved of how to move to the next cycle with reduced communication capacity.

2.1 Interaction Bandwidth Tapering

We use the term ‘interaction bandwidth’ to express the modalities available between the student and the facilitator and the speed at which information can be transferred during a situation. This depends mostly on two factors: the different modalities in which a message can be transmitted, and the intrinsic speed of each modality (e.g. it is faster to say an observation than to type it).
1 We use the term facilitator to avoid assumptions that would be introduced by other terms: ‘wizard’ implies that students are not aware of a human’s presence; ‘expert’ is inappropriate since, as discussed in the Introduction, for certain domains there may be no available experts; and ‘tutor’ or ‘teacher’ evoke connotations about the subject domain.
In face-to-face communication, there are many different ways to communicate with the student. The facilitator can speak orally, but can also point to entities on the screen, take control of the actions (e.g. moving the mouse for the student) to demonstrate a point, and draw inferences from facial expressions and gaze (e.g. focus of attention, emotions like boredom or excitement). This rich communication is far from what can be achieved by a modern computer-based system. Therefore, in the effort to create a system that supports the student, the interaction bandwidth has to be gradually reduced; in other words, the facilitator has to progressively use communication channels that can provide the same sort of interaction as will be possible with the intelligent system. We refer to this process as bandwidth tapering.
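The notion of interaction bandwidth can be made concrete with a toy quantification (our own illustrative encoding; the paper uses the notion qualitatively, and the modality names and relative speeds below are assumptions):

```python
# Each situation is a set of modalities, each with an assumed relative speed.
SPEED = {          # illustrative relative speeds, not measured values
    "speech": 10, "gesture": 8, "facial-expression": 6,
    "remote-pointing": 4, "chat": 3, "scripted-messages": 1,
}

def bandwidth(modalities):
    """Total relative speed of the modalities available in a situation."""
    return sum(SPEED[m] for m in modalities)

face_to_face = {"speech", "gesture", "facial-expression"}
mediated     = {"remote-pointing", "chat"}
system       = {"scripted-messages"}

# Tapering: each stage's bandwidth is strictly lower than the previous one.
assert bandwidth(face_to_face) > bandwidth(mediated) > bandwidth(system)
print([bandwidth(s) for s in (face_to_face, mediated, system)])  # [24, 7, 1]
```

The point of the sketch is only that tapering imposes a strictly decreasing sequence of situations, each restricted to channels the eventual system could plausibly replicate.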
Fig. 2. Interaction bandwidth tapering (from face-to-face, through computer-mediated sessions, to the system proper)
Figure 2 illustrates this process. From its beginning (in which the facilitator interacts face-to-face with the student) to the final stages (where the student interacts only with the system), there is a gradual reduction of bandwidth. This can be implemented through a series of computer-mediated (wizard-of-Oz) sessions.

2.2 Message Freedom Tapering

There is another crucial factor when investigating the communication between the student and the TEL system: the freedom to choose what message to feed back to the student. This becomes important in the case of exploratory environments, in which the student has greater freedom to act than in other systems, while the system retains the usual limitations on the kind of feedback it can provide. The issue is especially important in the case of textual communication, which is the most common approach to providing support. Despite the great advances in the NLP field, natural language generation is still far from being a mature technology that can be used easily for the provision of intelligent support during the learning process. Therefore, most systems rely on templates of pre-generated messages for their interventions. The design of these templates is crucial for the correct deployment of effective intelligent support and plays a central role in our methodology. From the unlimited flexibility available in face-to-face communication, to the limited choice of templates or pre-generated messages that the final system can use, there must be a gradual and structured reduction. This message freedom tapering is illustrated in Figure 3.
Fig. 3. Message freedom tapering (from oral communication, through chat, script plus free messages, and script alone, to the implemented system)
During the first stages of design, in which the facilitator communicates with the student orally, she has total freedom to choose what to say. As the process progresses, this freedom is gradually reduced. At some points, the limitations are imposed by technical constraints (e.g. it takes more time to communicate through chat than orally); in other cases, the reduction is part of the design, as the facilitator tries to restrict herself to a script of pre-generated interventions.

2.3 Overall Description

The process starts with face-to-face sessions, in which a facilitator interacts directly with a student and the system. A researcher plays the role of observer and the whole session is recorded. In some cases, more than one student can be involved in the study. Having more than one student has several positive effects: it makes the situation more comfortable for the students, and the interaction between them makes them verbalise their thoughts (i.e. explain to the other student what they want to do), giving important cues to the researcher. This helps in understanding students’ interaction with the system and the conceptual difficulties they face. During this stage, the researcher can provide support directly by opening a dialogue with the student(s).

Once the researcher has a fair understanding of the interaction between the student and the whole environment, the gradual tapering of the communication capacity of the situation should begin. The first step involves physically separating the facilitator and the student, using some kind of remote desktop system to allow them to communicate. Being away from the student, the facilitator loses many cues about what is happening in the student’s mind, and she must infer them from the actions she can see on the screen; in other words, some inputs (e.g. the student’s face, tone of voice or gestures) must be restricted, and only what will be available to the computer to analyse should be kept.
In this way the facilitator and even the students become immersed in the situation [12]. At this stage, it is possible to employ think-aloud methods [20] and to ask both student and facilitator to verbalise their cognitive processes. The advantages and disadvantages of this approach have been discussed extensively [20], but of particular concern is the cognitive overload on the facilitator, who has to react and, at the same time, record the rationale behind her decisions. To avoid this, our experience suggests that it is worth engaging the facilitator in a setting in which she instructs the operator of the computer-mediated tools (usually the researcher) about what to do or what to say. This forces explicit reflection on her thoughts, which helps the researcher understand the kind of support needed.
In moving away from the student, the facilitator also loses means of providing feedback. For a start, she can no longer give feedback through body language, facial expressions, eye direction, etc. Other modalities have to be replaced by less fluent alternatives: direct finger-pointing at the screen evolves into remote cursor handling or area selection, voice communication evolves into chat, etc. The alternatives are always limited, but closer to the means available to the computer. Constrained by these limitations, the researcher becomes aware of the kind of feedback that the system will be able to provide, and of how to modify messages to be more effective in the new medium.

Gradually through this iterative process, the interventions crystallise into a script. The script contains information about what needs to be said and when it has to be said. The feedback can be verbal (e.g. written text) or not (e.g. making a particular part of the screen blink), and the triggering condition will probably be a combination of explicit and probabilistic rules. This script is followed by the researcher as closely as possible during each iteration of the design process. The amount of freedom is gradually reduced until the script can start to be implemented by the system. By iteratively developing new components and restricting the freedom of choice, the role of the facilitator evolves into that of a hidden operator.
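A script entry of the kind just described pairs a triggering condition over the interaction state with a verbal or non-verbal intervention. The following is a hypothetical sketch of that structure; all field names, rules, thresholds and message texts are illustrative assumptions, not taken from any actual system:

```python
import random

# A script: triggering conditions (a mix of explicit and probabilistic
# rules) paired with verbal ("text") or non-verbal ("highlight") feedback.
SCRIPT = [
    {   # explicit rule: student places many tiles but uses no variables
        "when": lambda s: s["variables_used"] == 0 and s["tiles_placed"] > 5,
        "feedback": ("text", "Could your rule still work for a bigger pattern?"),
    },
    {   # probabilistic rule: long inactivity *probably* signals an impasse
        "when": lambda s: s["idle_seconds"] > 60 and random.random() < 0.8,
        "feedback": ("highlight", "expression-panel"),   # e.g. blink a region
    },
]

def next_intervention(state):
    """Return the first intervention whose trigger fires, else None."""
    for entry in SCRIPT:
        if entry["when"](state):
            return entry["feedback"]
    return None

state = {"variables_used": 0, "tiles_placed": 9, "idle_seconds": 10}
print(next_intervention(state))
# ('text', 'Could your rule still work for a bigger pattern?')
```

In the methodology the script is first executed by the hidden operator; encoding it in a machine-checkable form like this is one way the operator's freedom can eventually be handed over to the system.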
3 Application of the ICCT Methodology: A Case Study

We are certain that several projects implicitly follow aspects that can be framed within the ICCT methodology. For example, the process followed in [21] towards the development of an adaptive collaborative experimentation environment reduces a wizard’s freedom by employing a script. Similarly, in order to derive a predictive model of students’ affective characteristics, the authors of [22] follow several of the steps of ICCT. In [23] data from tutorial dialogues are collected by conducting studies and successively replacing simulated components of the tutoring system with actual implementations. However, as discussed in the Introduction, to the best of our knowledge most work so far explicitly describes neither the process nor the importance of interaction bandwidth and freedom tapering. In order to demonstrate how the ICCT methodology can serve in informing the design of intelligent support in TEL systems, we present below a case study derived from a project in which both authors are currently involved. This particular project exemplifies almost all steps of the ICCT methodology and demonstrates its potential even in situations where the development of intelligent support has to co-evolve with the design and development of the actual environment.

3.1 MiGen: Intelligent Support for Mathematical Generalisation

The MiGen project revolves around the development of intelligent support for a mathematical microworld that helps students in secondary education with the learning of mathematical generalisation. The rationale of the project, its aims and the environment are described in detail elsewhere [24]. For the benefit of the reader and the purposes of the current discussion, it suffices to say that the project focuses on activities that ask students to identify relationships that underpin figural patterns (like the one in Figures 4 and 5). Similar activities are often found in the UK National Curriculum and have the potential
to emphasise the structural aspect of patterning rather than the purely numerical. As explained in more detail in [24,25], this is a key difficulty that students face. The microworld (called eXpresser) allows students to build constructions and expressions for the figural pattern tasks and to reach expressions (rules) that describe the relationships that underpin the patterns. In order to do that, students can use entities that behave like variables and enable the description of the relationships that students perceive. Patterns are created when a base shape is repeated. In the case of Figure 4, the base shape (shown in A) is repeated 3 times, that is, as many times as the value of a student-created variable named ‘reds’ (shown in B). The base shape is placed every two squares across (C) and zero places down (D). The pattern is painted correctly and can be animated in the ‘General world’ (right), where variables take different values (e.g. in Figure 4 the value of ‘reds’ is 7). Students then express rules that describe the total number of tiles (E).
Fig. 4. Constructing and describing with rules a pattern in the eXpresser
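The kind of construction just described — a base shape repeated as many times as the value of a variable, with a rule for the total number of tiles — can be modelled in a few lines. The following is our own illustrative sketch (not eXpresser's actual data structures), assuming a 3-tile base shape:

```python
# Model a pattern as the set of grid cells covered by translated copies
# of a base shape; the number of copies is bound to a student variable.

def pattern_tiles(base_tiles, reps, dx, dy, origin=(0, 0)):
    """Cells covered by the base shape translated reps times by (dx, dy)."""
    x0, y0 = origin
    return {(x0 + i * dx + x, y0 + i * dy + y)
            for i in range(reps)
            for (x, y) in base_tiles}

base = {(0, 0), (1, 0), (0, 1)}     # an illustrative 3-tile base shape
reds = 3                            # the student-created variable

# "Placed every two squares across and zero places down":
tiles = pattern_tiles(base, reps=reds, dx=2, dy=0)

# A rule for the total number of tiles, expressed in the variable:
rule = lambda reds: 3 * reds
print(len(tiles), rule(reds))       # 9 9

# The 'General world' animates the construction for other variable values:
for n in (3, 7):
    assert len(pattern_tiles(base, n, 2, 0)) == rule(n)
```

The final loop is the essence of generality in this setting: the rule holds for every value the variable takes, not just the value used while constructing.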
3.2 Pedagogical Strategies in MiGen

The application of the ICCT methodology in MiGen aims at informing the design of intelligent support, and particularly the pedagogical strategies and interventions that can be followed in order to provide support to students and information to teachers during the use of the system in the classroom. Although the mathematics education literature, consultation with the teacher group and our previous research provided suggestions on how to proceed with the design of strategies for the intelligent support, when this part of the work started the domain of the microworld’s validity, the interaction it afforded and particularly the pedagogic strategies that would be effective were unknown. The ICCT process allowed the identification, investigation and evaluation of several pedagogic strategies. Throughout the rest of the paper we will employ as an example only one strategy, which pertains to all stages of ICCT and was introduced through members of the teacher group who had particular experience in dynamic geometry. The strategy, which is referred to as ‘messing-up’, challenges students to construct models that are impervious to changing values of the various parameters of their construction
[26]. In geometry this provides an incentive for creating constructions using the properties of objects rather than ‘drawing’ geometric figures. It also helps students understand the variants and invariants of their constructions. The strategy seemed applicable in eXpresser from the outset. Figure 5 shows how changing the number of red tiles breaks a construction that is not entirely general.
Fig. 5. ‘Messing-up’ in eXpresser. In (a), a pattern has been created with 3 red (dark) tiles surrounded by green (light) tiles. When the number of red tiles increases from 3 to 5, the pattern should be as shown in (b). However, the construction in (c) does not look correct —it is messed up— since the number of green tiles surrounding the original 3 red tiles was specified in absolute terms, so did not change with the number of red tiles.
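The messing-up check itself can be sketched as a test of whether a construction survives a change in the variable's value. This is our own illustration, not MiGen's implementation; the relationship "greens = 2 × reds + 2" is an assumed stand-in for the intended general pattern:

```python
# 'Messing-up': change the variable's value and test whether the
# construction still matches the intended general pattern.

def intended(reds):
    # assumed correct relationship: greens expressed in terms of reds
    return {"reds": reds, "greens": 2 * reds + 2}

def construction(reds, greens_expr):
    return {"reds": reds, "greens": greens_expr(reds)}

def messes_up(greens_expr, new_values=(5, 10)):
    """True if the construction breaks for some new value of 'reds'."""
    return any(construction(v, greens_expr) != intended(v)
               for v in new_values)

general  = lambda reds: 2 * reds + 2   # greens bound to the variable
absolute = lambda reds: 8              # greens fixed for the original reds=3

print(messes_up(general))   # False: impervious to changing values
print(messes_up(absolute))  # True: 'messed up' when reds changes
```

The `absolute` construction coincides with the intended pattern at reds=3 (8 greens) but diverges as soon as the value changes, which is exactly what panel (c) of Figure 5 shows.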
As already mentioned, there are other strategies to help students (e.g. making them aware of the rhythm of their actions [24], providing challenges that demonstrate the lack of generality of their approach, providing them with counter-examples, etc.). Describing these in detail is outside the scope of this paper; the interested reader is referred to [24].

3.3 ICCT in MiGen

The overall design and development process in MiGen involves collaboration among researchers with varied expertise, including technical, educational and design-related expertise. We will not elaborate here on the social configuration of the project. For the purposes of this paper, we will refer to an educational team (comprising primarily mathematics educators) and a technical team (comprising mostly computer scientists and engineers), both directly involved in the project, and an external group referred to as the ‘teacher group’ (comprising teachers and teacher educators)2. In order to inform the design of the intelligent components, researchers from the educational team observed students interacting with the system in different phases of the spiral process. Analysing the think-aloud protocols and screen recordings of the sessions enabled the adjustment of existing theories of the domain of mathematical generalisation to our particular domain. It also facilitated the iterative development of an interaction model, which comprises state transition diagrams, and the identification and evaluation of pedagogic strategies that can be followed in each state. As part of the reflection phase, members of the teacher group are presented with scenarios derived from students’ work in previous studies and are asked to comment on the pedagogic strategies followed in them. This facilitated the knowledge elicitation process
2 Of course, some members of the team have interdisciplinary expertise and therefore this separation is only indicative. In fact, the actual social configuration of the project is more complex and combines aspects of the “two-legged” and the “ladder” models described in [27].
from experts, but also the investigation and subsequent evaluation of the strategies that the intelligent system, rather than a human, can follow to provide support. In each stage of the spiral process, the whole team kept reflecting carefully on the communication capacity of the situation and incrementally prioritised the components that needed to be implemented to facilitate the next cycle of computer-mediated sessions. We provide more details below on the specific stages of the ICCT process.

Face-to-Face Studies. In MiGen, the ICCT process started with several one-to-one, face-to-face sessions in which a facilitator (a teacher from the teacher group or a researcher from the educational team) was helping the student interact with the environment and solve the given tasks, while another researcher observed the situation, mostly keeping notes. Students (particularly the older and higher-level ones) were asked to think aloud. The interaction between student and facilitator was, of course, recorded, together with the screen and the student’s interaction with the microworld. Since thinking aloud is more difficult for younger students and sometimes interferes with the task, the team also conducted small-scale studies in which two or three students collaborated with each other. This was particularly important at the early stages of the research, since little was known about how students would take to the activities, how they would perceive the microworld, and whether the pedagogic strategies the literature proposed would be applicable. These studies validated the importance of the initial set of strategies that the literature and the teacher group suggested. For example, in relation to the ‘messing-up’ strategy, the studies showed its potential in creating a culture where students take responsibility for distinguishing between patterns that can and cannot be messed up [25]. However, such a situation introduces a lot of noise.
During the collaborative sessions in particular, not only was it difficult to see how individual students perceived the feedback from the didactical situation, but it was also hard to control and evaluate the effects of the teachers’ actions. In the terms of Section 2, this is a situation with high communication capacity: not only are student and teacher quite free to adapt to the communication requirements of the situation, but there is also high interaction bandwidth. As expected, it was not easy to generalise all of the strategies to what we envisaged the system would be able to do. Although we could easily imagine how to transfer the effective messing-up strategy to a strategy for the intelligent system, these studies clearly demonstrated the importance of also drawing students’ attention to parts of their constructions through language and gestures. The project’s challenge, however, was to identify ways of helping the students through the computer. This would clearly be a transformative situation, which could be investigated only with a simulation or a prototype. Our only means of doing that was through computer-mediated sessions with a reduced interaction bandwidth.

Reducing the Interaction Bandwidth. When enough data had been collected from the aforementioned process, the bandwidth of communication between facilitator and students was reduced, following the process described in the ICCT methodology. For example, in the first phase students were allowed to talk with the facilitator through web-conferencing and application-sharing software.3
3 Elluminate (http://www.elluminate.com/) was used for this process.
For MiGen, this was the first reduction of communication capacity. It allowed the researchers of the educational team to start considering the actions of the student through the same bandwidth that would be available to the system, yet provided the flexibility to talk to the students and a rationale for the students to verbalise their thinking since, at least at some critical stages (e.g. when they reached an impasse), they had to talk to the researcher. This allowed a low-cost evaluation of the strategies according to the analysis of the previous iterations, and the development of initial scripts for subsequent iterations. In addition, the difficulties encountered and the missing tools required to facilitate the interaction between student and researcher provided requirements for components that could be used to support the next iteration of computer-mediated sessions. For example, preliminary sessions highlighted the importance of drawing students’ attention to particular parts of their constructions, especially when students were deriving general expressions that required reflection on different components of a pattern. This was difficult in this modality: gestures were substituted by rapid mouse movements around the object of attention, verbose explanations of the location of objects, and unnecessary requests for confirmation of whether or not the student had seen what the facilitator was ‘pointing’ at. In relation to the messing-up strategy, these sessions helped demonstrate its effectiveness in providing a rationale for the students to create a general pattern. The situation allowed scripting some of the actions of the facilitator, as well as runtime discussions (with the observing researcher) of the rationale behind any decisions made. On the other hand, the high interaction bandwidth (voice) was starting to hinder the investigation of strategies particularly suited to the intelligent system.
It was therefore necessary to reduce the interaction bandwidth further. It is worth emphasising that this gradual reduction was required: as in all projects, time, budget and other pragmatic constraints make such intermediate steps indispensable, particularly in order to draw requirements for subsequent iterations that would otherwise be difficult and time-consuming to design.

Reducing the Interaction Bandwidth Further. Having derived enough data, the team felt comfortable reducing the communication bandwidth further, transforming the didactical situation closer to the envisaged final system. In the next phase, therefore, one researcher was out of sight and relied only on a remote view of the student’s actions with the microworld and a chat interface that allowed sending messages to, and receiving short replies from, the student. However, the previous sessions had raised the critical requirement of equipping the facilitator with tools to draw students’ attention to aspects of their constructions. In order to avoid the risk and cost of implementing on-screen annotation before knowing exactly what was needed, we employed classroom management software. Such software exceeded our requirements, as it provides not only monitoring and control of the students’ computer (something that we could do simply with remote control software4) but also tools suited for educational use (such as locking the keyboard and mouse during the
4 In pilot sessions we found VNC (http://www.realvnc.com/) particularly useful.
lesson) and, in particular, on-screen annotation.5 The setup required three computers in total: one for the student and two for the remote facilitator (one with the classroom management and on-screen annotation software, and another with the chat interface, enabling fast switching between the two modalities). The team found it important to engage students in a role-playing game. A researcher was always beside the student, reminding them to think aloud but also helping them in case of technical problems. The students were asked to consider the researcher as their teacher, who could not always help them in the classroom; therefore, if they wanted help, they had to ask for it from the out-of-sight operator. Although we were aware of the differences in students’ perception, language and frequency of help requests when asking a human for help as opposed to a computer, we were constrained by the fact that the on-screen annotations were still manual and looked quite human compared to the actions of a computer. Nevertheless, the studies provided useful insights for the project in relation to the requirements for annotation and the pedagogic strategies. In particular, this set of studies highlighted that written feedback (especially when unsolicited) is often ignored, and at other times difficult to understand. This is a common difficulty with all intelligent systems, but it seems exacerbated in our case because of the young age of the project’s target group, and because the innovative environment requires developing a particular discourse with the students over a series of sessions. It was thus decided, after consultation with members of the educational team and the teacher group, to use non-verbal feedback as much as possible in our next iteration, but also to take special care, during students’ earlier interactions with the system, to introduce them to the appropriate language.
In relation to our example with the messing-up strategy, the studies indicated the lack of a robust incentive for students to construct something that cannot be messed up. We hypothesised that this was because of the human presence and the fact that remote messing-up, especially without any associated language, was rather artificial: the facilitator had to access the pattern's property list and change an attribute manually. As far as the students were concerned, in the facilitator's absence there would be no need for general constructions; they were satisfied with their 'drawings'. As the studies progressed, this raised the requirement for on-screen annotation and the ability to mess up in a less conspicuous way. Emulating Intelligent Support. All the aforementioned sessions facilitated the iterative development of a script for the hidden operator, and provided the means to derive more information on the effectiveness of such strategies and on specific ways of implementing them. They also contributed to designing, in consultation with the education team and the teacher group, the UI components with which we envisage the final system will provide support. We continued the tapering process by reducing, at every step, the freedom of the hidden operator. Meanwhile, the technical team provided more UI components. For example, the chat could give way to the actual prompts, allowing their iterative design, improvement and evaluation. In relation to the messing-up strategy, the user
We used Vision, a classroom sharing tool equipped with Netop's Pointer plugin (http://www.codework.com/vision/pointer.php). We would like to thank Vision's distributor in the UK, CodeWork, and Kam Yousaf in particular, for their assistance in using the software.
M. Mavrikis and S. Gutierrez-Santos
interface permitted changing the attributes of any pattern, thus causing a more believable messing-up and providing a further incentive for the students to construct patterns that the intelligent system (in fact, the hidden operator) cannot mess up. In parallel, in every session, once enough information had been collected, the freedom of the hidden operator was restricted. Initially, a draft state transition diagram was provided, together with possible actions for each state. As the studies progressed, some of the actions could be pre-designed, allowing their evaluation and enabling the collection of more useful interaction data that could be analysed to inform not just the design of the pedagogical strategies but also the actual algorithms and AI techniques that could be employed to adapt and personalise the feedback that the system would provide. A description of this is outside the scope of this paper.
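A draft state transition diagram of this kind can be thought of as a lookup from interaction states to pre-designed operator actions. The sketch below is purely illustrative: the state names, triggers and prompts are hypothetical placeholders, not the project's actual script.

```python
# Hypothetical sketch of a hidden-operator script as a state/action table.
# All states, triggers and prompts below are invented for illustration.

OPERATOR_SCRIPT = {
    # state: (trigger condition, pre-designed actions available in that state)
    "idle": ("no student action for a while",
             ["prompt: 'What could you try next?'"]),
    "pattern_built": ("student finishes a pattern",
                      ["mess-up: change an attribute of the pattern",
                       "annotate: highlight the changed part on screen"]),
    "messed_up": ("pattern broken by a mess-up",
                  ["prompt: 'Can you build it so it cannot be messed up?'"]),
}

def allowed_actions(state):
    """Return the pre-designed actions the operator may choose in a state."""
    trigger, actions = OPERATOR_SCRIPT[state]
    return actions
```

Restricting the operator's freedom then amounts to shrinking the action lists attached to each state as the iterations progress.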
4 Discussion

The Iterative Communication Capacity Tapering methodology is divided into different stages, in which the communication capacity is gradually reduced. The stages, however, are not clear cut. Depending on the project's needs, more time has to be spent on some stages than on others (e.g. in domains that are not well understood, far more effort is required in the earlier stages), and several factors have to be taken into account. The ICCT methodology is particularly useful as a means of indirect knowledge elicitation, and is thus very suitable when working with students, or with adults without a technological background. For instance, the case study demonstrates that it was very useful when trying to learn from students in the 11–12 age range, who still find it significantly difficult to verbalise their thoughts and make them explicit. Another important factor is the nature of the system to be supported. Some systems are easier to support than others, depending on their domain or characteristics. It is evident that it is more challenging to support an exploratory learning environment, in which the students have a lot of freedom and the number of possible courses of action (even the 'sensible' or 'correct' ones) is huge. Additionally, systems that aim at sustaining the learning of 'tangible' domains (e.g. ELEs for physics, chemistry, etc.) might not be as challenging, because it is easier to provide a simulation and predict the difficulties that students may face. In microworlds or other environments for mathematics, however, it is often difficult to instantiate a concept with a tangible representation. Such environments are therefore truly transformative, as they can potentially change even the possible mathematics that can be learned [28,29,24], and it becomes more labour-intensive to understand students' interactions. In those cases in which difficult domains and early ages coincide, the ICCT methodology can be of greater help.
In the paper, we did not discuss separately cases in which some kind of environment already exists, so that intelligent support has to be designed a posteriori, versus cases where the support is designed together with the core environment. It is worth highlighting, however, that in the latter the first stages of the ICCT take longer, since everything is developed in parallel and the pedagogical strategies and interventions co-evolve as the system changes. The final stages, on the other hand, probably take less time, as the various components and their influence on each other's design start to stabilise.
It is important to note that the ICCT methodology offers a dual benefit, contributing both to research and to design. In the first stages, interdisciplinary research has more weight, and researchers learn about the domain, the system, the needs for support, and unexpected behaviours on the part of students (what Twidale called the 'cognitive microscope' [30]). In the final stages, design becomes more important, as the focus turns to implementing a fruitful system that takes advantage of what has been learnt from the literature and from the first stages of the design.
5 Conclusions and Future Work

This paper presents a methodology for knowledge elicitation and the design of intelligent support for TEL systems, especially those based on exploratory environments. The methodology builds on previous work on contextual and iterative design and on wizard-of-oz techniques. It is based on the gradual tapering of communication capacity along its two dimensions of interaction bandwidth and freedom of message choice. After each iteration of the four-stage cycle of planning & design, implementation, conducting studies, and analysis, the communication capacity of the whole system is reduced. The student is supported by a set of mechanisms that gradually evolve from those used by a human to those available to the final intelligent TEL system. In particular, the tapering of the interaction bandwidth and of the message freedom ensures that the final system will provide adequate support, appropriate to its communication capabilities. The final stages of the process place more emphasis on design and on the gradual development and evaluation of the intelligent support components. We presented our experience through the case study of designing intelligent support for a microworld that supports students' mathematical generalisation. A particular consideration that became apparent through this case study is the efficiency of the method, especially as it was employed while the microworld was under iterative development. The method can be time-consuming, and when the core system changes some of its results can be rendered useless. However, the structured nature of the method offers a dual benefit. On the one hand, it facilitates conducting research that contributes to the growth of theories of teaching and learning. On the other hand, it enables the iterative design, development and evaluation of a system. In the future we will attempt to streamline the process by identifying ways to make it more efficient.
A first step is to develop generic tools that can enable the communication capacity tapering in different contexts.
References

1. Murray, T.: Authoring intelligent tutoring systems: An analysis of the state of the art. International Journal of Artificial Intelligence in Education 10, 98–129 (1999)
2. Beyer, H., Holtzblatt, K.: Contextual Design: A Customer-Centered Approach to Systems Designs. Morgan Kaufmann Series in Interactive Technologies (1997)
3. Sharples, M., Jeffery, N., du Boulay, B., Teather, D., Teather, B.: Socio-cognitive engineering. In: European Conference on Human Centred Processes (1999)
4. Boehm, B.: A spiral model of software development and enhancement. SIGSOFT Softw. Eng. Notes 11(4), 14–24 (1986)
5. Johnson, L.W., Beal, C.: Iterative evaluation of a large-scale, intelligent game for language learning. In: Proceedings of the International Conference on Artificial Intelligence in Education, pp. 290–297 (2005)
6. Cobb, P., Confrey, J., diSessa, A., Lehrer, R., Schauble, L.: Design experiments in educational research. Educational Researcher 32(1), 9–13 (2003)
7. diSessa, A.A., Cobb, P.: Ontological innovation and the role of theory in design experiments. Journal of the Learning Sciences 13(1), 77–103 (2004)
8. Conlon, T., Pain, H.: Persistent collaboration: a methodology for applied AIED. International Journal of Artificial Intelligence in Education 7, 219–252 (1996)
9. Cohen, L., Manion, L., Morrison, K.: Research Methods in Education, 5th edn. Taylor & Francis Ltd., Abingdon (2000)
10. Rizzo, P., Lee, H., Shaw, E., Johnson, L.W., Wang, N., Mayer, R.E.: A semi-automated wizard of oz interface for modeling tutorial strategies. In: Ardissono, L., Brna, P., Mitrović, A. (eds.) UM 2005. LNCS (LNAI), vol. 3538, pp. 174–178. Springer, Heidelberg (2005)
11. Dahlbäck, N., Jönsson, A., Ahrenberg, L.: Wizard of oz studies: why and how. In: IUI 1993: Proceedings of the 1st International Conference on Intelligent User Interfaces, pp. 193–200. ACM Press, New York (1993)
12. Maulsby, D., Greenberg, S., Mander, R.: Prototyping an intelligent agent through wizard of oz. In: CHI 1993: Proceedings of the INTERACT 1993 and CHI 1993 Conference on Human Factors in Computing Systems, pp. 277–284. ACM, New York (1993)
13. Gould, J.D., Lewis, C.: Designing for usability: key principles and what designers think. Commun. ACM 28(3), 300–311 (1985)
14. Wilson, J., Rosenberg, D.: Rapid prototyping for user interface design. In: Helander, M. (ed.) Handbook of Human-Computer Interaction, pp. 859–875. Elsevier Science Publishers, Amsterdam (1988)
15. Chignell, M.H.: A taxonomy of user interface terminology. SIGCHI Bull. 21(4), 27 (1990)
16. Bernsen, N.O., Dybkjaer, H., Dybkjaer, L.: Designing Interactive Speech Systems: From First Ideas to User Testing. Springer, Heidelberg (1998)
17. Benzmüller, C., Fiedler, A., Gabsdil, M., Horacek, H., Kruijff-Korbayová, I., Pinkal, M., Siekmann, J., Tsovaltzi, D., Vo, B.Q., Wolska, M.: A wizard of oz experiment for tutorial dialogues in mathematics. In: Aleven, V., Peinstein, C. (eds.) Workshop on Tutorial Dialogue Systems: With a View Toward the Classroom (July 2003)
18. Brousseau, G.: Theory of Didactical Situations in Mathematics. Springer, Heidelberg (1997)
19. Wang, N., Johnson, L.W., Rizzo, P., Shaw, E., Mayer, R.E.: Experimental evaluation of polite interaction tactics for pedagogical agents. In: IUI 2005: Proceedings of the 10th International Conference on Intelligent User Interfaces, pp. 12–19. ACM, New York (2005)
20. Ericsson, K.A., Simon, H.A.: Protocol Analysis: Verbal Reports as Data. MIT Press, Cambridge (1993)
21. Tsovaltzi, D., Rummel, N., Pinkwart, N., Scheuer, O., Harrer, A., Braun, I., McLaren, B.M.: CoChemEx: Supporting conceptual chemistry learning via computer-mediated collaboration scripts. In: Proceedings of the Third European Conference on Technology Enhanced Learning (EC-TEL 2008) (September 2008)
22. Porayska-Pomsta, K., Mavrikis, M., Pain, H.: Diagnosing and acting on student affect: the tutor's perspective. User Modeling and User-Adapted Interaction 18(1), 125–173 (2008)
23. Fiedler, A., Gabsdil, M.: Supporting progressive refinement of wizard-of-oz experiments. In: Workshop on Empirical Methods for Tutorial Dialogue Systems, ITS 2002, pp. 62–69 (2002)
24. Noss, R., Hoyles, C., Mavrikis, M., Geraniou, E., Gutierrez-Santos, S., Pearce, D.: Broadening the sense of 'dynamic': a microworld to support students' mathematical generalisation. Special Issue of The International Journal on Mathematics Education: Transforming Mathematics Education through the Use of Dynamic Mathematics Technologies 41(5) (2009)
25. Geraniou, E., Mavrikis, M., Hoyles, C., Noss, R.: A constructionist approach to mathematical generalisation. Proceedings of the British Society for Research into Learning Mathematics 28(2) (2008)
26. Healy, L., Hoelzl, R., Hoyles, C., Noss, R.: Messing up. Micromath 10, 14–17 (1994)
27. diSessa, A.A., Azevedo, F.S., Parnafes, O.: Issues in component computing: A synthetic review. Interactive Learning Environments 12(1-2), 109–159 (2004)
28. Papert, S.: Afterword: After how comes what. In: Sawyer, R.K. (ed.) The Cambridge Handbook of the Learning Sciences, pp. 581–586. Cambridge University Press, Cambridge (2006)
29. Noss, R., Hoyles, C.: Windows on Mathematical Meanings: Learning Cultures and Computers. Kluwer, Dordrecht (1996)
30. Twidale, M.: Redressing the balance: the advantages of informal evaluation techniques for intelligent learning environments. Journal of Artificial Intelligence in Education 4, 155–178 (1993)
Automatic Analysis Assistant for Studies of Computer-Supported Human Interactions*

Christophe Courtin and Stéphane Talbot

Équipe Systèmes Communicants, Université de Savoie, Campus de Savoie Technolac, 73376 Le Bourget du Lac cedex
{Christophe.Courtin,Stephane.Talbot}@univ-savoie.fr
Abstract. This paper presents a system architecture to bridge the gap between users' computing activity in collaborative platforms and the analysis of this activity, which is carried out by researchers in the human and social sciences. This research work aims to highlight the capacity of a computer-supported observation station, based on a theoretical model called a TBS (Trace-Based System), to assist researchers automatically in their analysis activity at a high abstraction level. We present the modules of a prototype observation station called CARTE (Collection, activity Analysis and Regulation based on Traces Enriched) which enable interoperability between the collaborative platforms, where the users produce raw traces, and the analysis environments, where the researchers study traces at a very high abstraction level.

Keywords: trace, automatic analysis, observation, learning activity.
1 Introduction

The introduction of technology into collaborative learning activities has aroused the interest of researchers in the human and social sciences, who study the various forms of cognitive and social activities concerned [1]. These studies are so complex that it is necessary to assist them, as technology makes possible. Indeed, the analysis of human interaction with a computer system remains a human activity at a high abstraction level, in which technology can assist researchers by helping them achieve their tasks. In parallel to these research works in the human and social sciences, there are others in computer science concerning the design of computer-supported collaborative learning systems, which we label observation based on activity traces. The interest of studying traces for learning lies in the relationship of the actors concerned (i.e. learners and teachers) to the software system. Indeed, we have shown that we could exploit activity traces, by means of a regulation model [2], in order to modify the system's processing so that it tends towards the user's own model of this system, which we call the use model [3].
This work is part of the cluster ISLE project, supported by the "Region Rhone-Alpes" in France.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 572–583, 2009. © Springer-Verlag Berlin Heidelberg 2009
2 Study Context

The study we present in this article aims to determine the significance of computer science research results in the field of the observation of collaborative learning activities for automatically assisting human and social science researchers in understanding the analysis processes of socio-cognitive interactions. It is now necessary to define precisely the context of this study. The socio-cognitive study of computer-supported human interactions is based on software traces (the object of study, e.g. raw data from a log file, data from an instrumentation of the software tools [4]) and on computerized traces (e.g. video/audio transcriptions). Research practices in the human and social sciences include, among others, the creation and sharing by research communities of corpora of traces useful for understanding the activities performed. Those of computer science include collection from various sources (e.g. log files, video), representation at different abstraction levels, and help in the analysis of multi-modal and multi-user interaction traces. Trace processing is carried out in an independent system, called an "observation system", which is designed from a theoretical model of observation that we shall not re-describe in this article due to lack of space [5], [6]. In general, we define a trace as a sequence of temporally-situated "observeds". The concept of "observed" represents a datum relative to the observation activity. For the interpretation of a trace in a given context, we associate a semantics to it by means of a use model, and we use the term MTrace (trace + model). In the context of the socio-cognitive study of computer-assisted human interactions, the researchers use three types of trace: 1. Interaction traces stemming from the software tools which are used for the collaborative activities; 2.
Calculated (or created) traces, which correspond to a computer translation of traces stemming from non-computer sources (e.g. transcriptions of video); 3. Enriched traces, i.e. traces stemming from transformations of interaction or calculated traces, or of traces already enriched, by means of use models of the software tools from which they arise. The final research question we address in this article is: how can an independent observation station provide automatic assistance to researchers in the human and social sciences who carry out analyses of multi-modal and multi-user interaction traces using a specific computing environment? In this article, we will focus on the computing aspects of the question, such as the interoperability between the systems where the activities take place (e.g. production, communication) and the analysis system, which will be described in the following paragraphs. The other computing questions which derive from our study concern, among others, the capacity of an observation station to produce automatic analyses which make sense for the human observers. The observation of multi-modal and multi-user interactions in a context of computer-mediated collaborative learning is an intrinsically multidisciplinary activity. We humbly acknowledge the limits of computer science when complex activities are carried out, compared to the greater analysis capacity of the human being. The results
that we expect from our research work are the design of computer models which aim to stretch these limits. In this article, we first set out the context of trace analysis, which we split into three areas (or families of software tools): the collaborative platforms, the observation stations and the analysis environments. The following chapter deals with the trace structuration which is used in each family to make specific analyses. Then, we present our trace-centered approach for allowing interoperability between the various analysis systems. From this, we present the bases of the design of an automatic analysis assistant for researchers in the human and social sciences. This study leads us to a conclusion and to perspectives for this research work in a multi-disciplinary project, which starts in the coming months.
3 Trace Analysis

We have underlined the difficulty of carrying out, with a computing environment, automatic analyses of traces of multi-modal and multi-user interactions which make sense for researchers in the human and social sciences. Conversely, the latter are confronted with analyses of such complexity that assistance is necessary, in particular for tasks involving very many parameters or for time-consuming tasks. We identify three families of software tools to carry out these analyses (see Fig. 1). The first family corresponds to the collaborative platforms, which contain the tools of production, communication, cooperation and regulation according to the four dimensions of the clover model of [7], augmented by the regulation of [8]. These software tools intrinsically produce traces which, according to the situation, can be directly exploited for observations, generally in recorded mode. We will study the case of the DREW (Dialogical Reasoning Educational Web) system [1], presented hereafter, which allows the internal exploitation of its own traces. The second family corresponds to the analysis environments, which include software tools dedicated to assisting observers in the management, synchronization, visualization and analysis of activity traces. In these environments, most of the aforesaid actions are carried out manually on the traces. In our study context, namely the analysis of human interactions with a computer system, we will study the TATIANA (Trace Analysis Tool for Interaction ANAlysts) system [9], also presented hereafter, which fits the aforesaid description. The third family, which is at the heart of our study, concerns the observation stations, i.e.
environments which fit the specification of the theoretical observation model which we call a TBS (Trace-Based System) [5], [6], to whose definition we have contributed together with other actors in the Technology Enhanced Learning and Teaching (TELT) field. Today, we have a prototype called CARTE (Collection, activity Analysis and Regulation based on Traces Enriched) [2] which fits this specification, and we will present the modules which answer the problem presented at the beginning of this article.

3.1 Collaborative Platforms

Collaborative platforms represent the digital workspace environments which place at users' disposal software tools to carry out collectively the tasks of the field of application. We will limit ourselves here to tools dedicated to collaborative learning.
[Fig. 1 depicts the three families of tools and their connections: the analysis environments (TATIANA, with its display format and replayer), the observation station (the TBS, with its Query/Notify interface, trace database and rules, trace manager, analyzer, administration, and signals and sequences in the CARTE format, linked by collection, analysis and retroaction paths), and the collaborative platforms (DREW's chat and other instrumented tools, each producing traces in its own format).]

Fig. 1. The observation station
In the context of our research, we carried out experiments during a practical class in an English course (foreign language) at the university [4]. The students had to define remotely, in pairs, English vocabulary stemming from a text. The system was composed of a production tool called "Jibiki" (a specific collaborative text editor to create multilingual dictionaries), created for a research project in linguistics and
whose working depends on trace exploitation [10], of a communication tool called "CoffeeRoom" (a chat room in which communication spaces are represented by tables), of a group structuration tool, and of an activity monitoring (awareness) tool. The activity traces were gathered by a collection API (Application Programming Interface) of an observation station, by means of an instrumentation of the aforesaid software tools (see Tools 1 and 2 in Fig. 1). The semantic level of the traces, which were analysed automatically by the observation station thanks to predefined tool use models, fits the analysis objectives of the observers concerned, i.e. the users of the system (learners and teachers). The chosen architecture, namely the externalisation of trace processing to an observation station, allows us to plan a posteriori other, more detailed exploitations of the trace base thus constituted. We will present below the Query/Notify API of the CARTE observation station for the exploitation of this base with other analysis objectives. The analysis of human interactions with a computer system frequently consists of replaying the activities which have been carried out with software tools or described from other sources (e.g. video transcriptions). The DREW system was designed with this idea in mind. DREW is a computer-supported collaborative learning platform [1] which allows the researcher to manage computer-mediated interaction traces stemming from the use of collaborative tools (e.g. DREW's chat or DREW's shared text editor). DREW can read its own traces (sequences of events in XML format) and reproduce the activity, by means of an internal replayer, within the software tools (e.g. DREW's shared text editor) which were initially used [11]. Furthermore, the DREW replayer can be synchronized with and controlled by an external replayer, such as that of the TATIANA analysis tool.
Thus, the researchers can add indicators in the DREW replayer, for example a color for each participant to differentiate the contributions. This coupling between an analysis environment and the DREW collaborative platform allows the manual creation of epistemic indicators (high-level traces) from behavioral observeds (lower-level traces). The DREW system meets in a satisfactory way the expectations of researchers in the human and social sciences for analyses based on the principle of replaying in specific tools.

3.2 Analysis Environments

Analysis environments supply software tools to assist researchers in the management, synchronization, visualization and analysis of traces in order to create new ones which make sense for them. Below we briefly present the TATIANA environment, which assists researchers in the analysis of traces stemming from contexts that are complex because they are multimedia, multi-modal (requiring the synchronization of several trace sources) and multi-user [9]. As explained above, researchers can use TATIANA to create "replayables" ("display format") and analyses (using a language for defining filters/rules) [12]. A "replayable" can be displayed in a tabular form (an array with several columns such as Time / User / Message / Tool) or a graphical one (using style sheets). The filters, based on XQuery scripts [13], are used to convert the data into an XML format (with a description of their structure), then into the format that can be understood by TATIANA ("display format", or pivot format, or sequence of events). The TATIANA replayer reads a file of traces and interprets each event as an additional client would have done during the observation. The traces are replayed in the
analysis sense of the term, but also in the action sense of the term, i.e. in the tools of the collaborative environment (e.g. DREW's chat tool). TATIANA has therefore been designed to interact with the replayers of collaborative platforms. An import script is used to select one or several sources in an input file (e.g. a discussion in a chat, a video transcription). It is worth noting that the tasks relative to analysis (e.g. categorization, annotation, argumentation), carried out manually, may be complicated and repetitive. The automation of some of these tasks devolved to the researchers would require interoperability with an observation station in order to carry out transformations of traces thanks to predefined use models of the tools. We will present below the API which allows this coupling with the CARTE observation station.

3.3 Observation Stations

The third family concerns the observation stations, i.e. environments dedicated to the collection of traces stemming from various sources, their structuring according to a generic format, their transformation according to predefined analysis objectives, and their re-injection towards analysis tools, or towards the tools from which they arise (the retroaction principle of CARTE [2]), to make possible the regulation of the activity. In our study context, an observation station is a system, based on a theoretical observation model, which provides observers (e.g. researchers) with observation services in order to facilitate their analysis tasks. According to the CARTE system's architecture, the analysis and the tools to exploit the traces are separate from the tools used to carry out the activities of the application domain, i.e. all the functionalities relative to the collection, structuration and analysis activities are integrated into the observation station.
In our study context, the researchers (observers) can use the CARTE observation station to keep track of interactions whose treatment must aim to facilitate their tasks in their analysis system. CARTE allows the real-time collection of traces in order to make analyses either in parallel, with different use models, or successively (and thus cumulatively), in order to increase the abstraction levels of the resulting traces. Thus, certain repetitive elementary analysis tasks which are carried out with TATIANA can be automated with CARTE, as well as certain more complex tasks stemming from the researchers' experience. Furthermore, we have observed that the TATIANA replayer allows the manual integration of awareness data to increase the understanding of the traces. By means of the retroaction mechanism, the CARTE system can automatically supply external awareness tools, which could be those of TATIANA. The analysts themselves therefore first have to specify, in the observation station, with a rule-based use model of the tools, the awareness information they wish to receive during the analysis with TATIANA. We have noted that trace analysis can be carried out at various abstraction levels: from primary traces (e.g. raw data from a log file, data from instrumentation [4]) in the collaborative platforms, from temporally-situated behavioral "observeds" in the observation stations, and from epistemic indicators (elaborated by the researchers themselves) in the analysis environments. To determine the levels of analysis, according to [14], sociologists consider three criteria: the temporal context engaged in the observation, and the volume and nature of the units of observation. Then there is the question of the selection of the analysis material and of the sources from which it comes. Certain types of analysis, which consist in climbing the scale of
abstraction, involve interaction traces stemming from the collaborative platforms, traces that are calculated (or created) by means of use models in an observation station, and traces that are enriched by the techniques of categorization, argumentation and annotation in the analysis environments. In the following part, we present the trace formats of the various levels, which are exploited in the transformations in order to lead to relevant analyses.
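The abstraction-raising transformations described in this section can be illustrated with a minimal sketch, not CARTE's actual code: low-level interaction signals are grouped into a higher-level calculated sequence, here a hypothetical "dialogue" episode detected from chat timestamps (the signal fields and the 30-second threshold are assumptions for the example).

```python
# Illustrative sketch: raising the abstraction level of a trace by grouping
# low-level signals into named sequences. Signals are dicts with a "date"
# field (seconds); any two signals less than `gap` seconds apart are treated
# as part of the same "dialogue" sequence.

def detect_dialogues(signals, gap=30):
    """Group chronologically ordered signals into 'dialogue' sequences."""
    sequences, current = [], []

    def flush():
        if current:
            sequences.append({"type": "dialogue",
                              "begin": current[0]["date"],
                              "end": current[-1]["date"],
                              "signals": list(current)})
            current.clear()

    for sig in signals:
        # A long silence closes the current dialogue and starts a new one.
        if current and sig["date"] - current[-1]["date"] >= gap:
            flush()
        current.append(sig)
    flush()
    return sequences
```

Successive passes of this kind, each with its own use model, would yield the cumulative increase of abstraction levels described above.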
4 Traces Structuration

With the theoretical observation model, traces are placed at the heart of the trace-based system. In this approach, we add an abstraction level between the actions to be carried out and the various applications. Thus, if a communication tool (e.g. a structured chat room) is replaced by another one which fits the same specification, the collected raw traces (which we call "signals") will be impacted, but not the enriched traces (which we call "sequences"), because of their high abstraction level. Therefore, the trace format used in the CARTE system has to be as generic as possible. We consider that trace information is divided into two parts: the first is common to all the traces (e.g. source, date, etc.) and the second is specific to the various tools used (e.g. a sentence in a collaborative text editor, a table name in a structured chat room). The former is represented by the metadata and the latter by a list of parameters (see Fig. 2). Both of them are described below. With the trace manager of CARTE, we are able to manage two kinds of activity traces:
− signals, which correspond to time pinpoints and elementary elements (e.g. a user action, a state modification of the system, and so on);
− sequences, which split into a chronological succession of signals or sub-sequences. Obviously, a sequence also has a duration, and normally it should make sense for understanding what has happened with the tools.
Signals contain:
− a source (the person or the tool which has generated the signal);
− a tool (the tool or the tool instance in which the event has taken place);
− a date (the timestamp which says when the event happened);
− an event id (the list of possible events will of course depend on the tools we use: connection, disconnection, message emission, etc.);
− a textual description; and
− a list of parameters (which contains the variable parts of signals).
In this list we should find everything needed to understand what has happened, for example who and what are involved in the event. Obviously, this part will also vary with events and tools. Sequences are more complex than signals. They are composed of signals or sub-sequences and, like signals, have parameters. In our implementation, each sequence is stored with:
− a start date (timestamp – the beginning of the sequence);
− an end date (timestamp – the end of the sequence);
Automatic Analysis Assistant
579
− a type id, which characterizes the kind of sequence;
− a source (the person or the tool which has recognized the sequence – generally the analyzer);
− a textual description; and
− a list of parameters (which gives all significant details of the memorized episode).
Fig. 2. UML model for traces. A TraceElement carries the common metadata (source, description) and a list of Parameters (value); Signal (date, event, tool) and Sequence (beginDate, endDate, type) specialize TraceElement.
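For illustration, the model of Fig. 2 could be rendered as the following Java sketch. The class and field names are taken from the figure; everything else (visibility, collection types) is our own assumption, not the actual CARTE implementation:

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

// Common part of every trace element (the "metadata" of Fig. 2).
abstract class TraceElement {
    String source;       // person or tool that produced the element
    String description;  // free-text description
    final List<String> parameters = new ArrayList<>(); // tool-specific variable parts
}

// A signal: an elementary, timestamped event.
class Signal extends TraceElement {
    Date date;      // when the event happened
    String event;   // event id (connection, disconnection, talk, ...)
    String tool;    // tool (or tool instance) in which the event took place
}

// A sequence: a chronological succession of signals or sub-sequences.
class Sequence extends TraceElement {
    Date beginDate;
    Date endDate;
    String type;    // type id characterizing the kind of sequence
    final List<TraceElement> elements = new ArrayList<>(); // signals or sub-sequences
}
```

The composition of Sequence out of further TraceElements mirrors the text above: a sequence may nest both signals and sub-sequences.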
We consider that whatever trace format we obtain, it can be transformed into the CARTE trace format (which is based on the XML standard) thanks to its flexibility (i.e. its parameters). From a technical point of view, specific scripts import and export traces (e.g. from the XML format of DREW) according to the objectives of the interoperability (i.e. collection, exploitation, interrogation, notification) between the observation station and the external tools. We present below the corresponding APIs (Application Programming Interfaces) of the CARTE observation station.
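As an illustration only, a "talk" signal in such an XML-based format might look as follows; the element names are our own invention, not the actual CARTE schema:

```xml
<signal>
  <source>CoffeeRoom</source>
  <tool>chat</tool>
  <date>2009-03-12T10:15:32</date>
  <event>talk</event>
  <description>Arthur sends a message</description>
  <parameters>
    <parameter>Arthur</parameter>          <!-- user name of the sender -->
    <parameter>table-1</parameter>         <!-- communication space -->
    <parameter>Hello everyone</parameter>  <!-- content of the message -->
  </parameters>
</signal>
```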
5 Observation Station API

5.1 Collection API

The collection API is aimed at simplifying the process of collecting traces. It provides application developers with tools intended to help them instrument their applications in the implementation phase (or adapt them afterwards). Furthermore, if tools are available to transform and adapt application traces from other formats (log files, XML files, and so on), we expect that more and more applications will offer the possibility of generating traces. As a consequence, we
580
C. Courtin and S. Talbot
should quite easily be able to create tools to exploit these traces and to provide users and groups with added value. In our prototype, CARTE, a service (the collector) gives access to the collection API. Two methods, sendSignal and sendSequence, have been defined; they allow tools to send signals or sequences to the TBS (Trace-Based System). Because we use a J2EE server for our system, the API is accessible as a web service, as Java RMI, or as CORBA objects (depending on how the J2EE server is configured). In order to simplify the use of our system, we also provide different versions of these methods. In the simplest ones, just one parameter has to be given: the signal or the sequence to record. In a second version, the signals or the sequences can be described using their inner parts: time_stamp, source, tool, event, description, parameters, and so on. Last but not least, a special version (the one the analyzer actually uses) includes a supplementary parameter which associates a session ID (identifier) with the traces gathered and generated during the same analysis. This ID can afterwards be reused, for example, to visualize the results of a given analysis or to replay it.

5.2 Trace Exploitation API

In the first version of our system, the analyzer was tightly coupled with the trace manager. This had some drawbacks – for example, it was impossible to run different analyses simultaneously or to use different analyzers. Using the same approach as for the collection, we defined the trace exploitation API in order to normalize the use and the interrogation of traces. We have identified two kinds of usage:
− interrogation needs – we should be able to query the trace base for interesting traces;
− notification needs – some tools (the on-line analyzers, for example) should be notified when new traces (signals or sequences) are produced and recorded in the trace base.
So we have defined an API for each of these two usages: the interrogation API and the notification API.

5.3 Interrogation API

This API is used to make requests on the trace base. It is a "pull-oriented" API: the tools use it to ask the TBS to search for particular trace elements (signals or sequences) and send them back. As with the collection API, the interrogation API is still very simple: the service provided defines only two methods, searchSignal and searchSequence. The names are quite self-explanatory: the first one is used to search for signals and the second one for sequences. Each method exists in different variants. In the first one, a "pattern", i.e. a partially instantiated signal (or sequence), is used for the search: the type and the parameters of the signals (or sequences) can be defined or left free. The TBS then returns the list of
the signals (or sequences) which match the pattern. For example, we could search for all the signals generated by a chat room in which the first parameter is equal to "Arthur". In the second variant, the patterns are not applied directly to the traces but are defined and interpreted using some "use model" (which, of course, must have been previously defined). The interpretation of the traces is then used to find out which ones should be returned. For example, we could search for all the sequences where one parameter can be interpreted (using a particular "use model") as a "user name" and has "Arthur" as its value. In this last form, the pattern format is very similar to the one used within the analyzer.

5.4 Notification API

This API is "push-oriented": the tools (analyzers, visualization tools, …) act as trace consumers and are registered with the TBS, which acts as a trace producer and notifies them when new signals or sequences arrive. The notification service of the observation station defines two methods, addSignalListener and addSequenceListener, which are used to register signal and sequence consumers respectively. On the analysis environment side, the tools have to implement either the SignalListener or the SequenceListener interface, which define respectively the methods sendSignal and sendSequence that the TBS uses to forward the signals and sequences it receives. As with the interrogation API, the listeners (the trace consumers) can specify, using a "pattern", which kind of traces they want to be notified of. As seen before, this pattern can be matched against the traces directly or by means of a particular "use model". Our main goal with these APIs is to obtain more modularity in our system, in particular to untie the "trace manager" from the analyzer. The advantages of this approach are numerous.
For example, if needed, we could replace the current analyzer by another one (a statistical analyzer or a case-based one, if more suitable) or even run different analyzers side by side. These APIs also greatly simplify the connection of external third-party tools such as activity trackers or visualizers and improve the extensibility of the system: more sophisticated filters can easily be implemented on top of the simple ones described above and used to gradually increase the complexity of the filters and requests the tools use to interact with the TBS.
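Taken together, the collection, interrogation and notification services described in this section could be declared along these lines. This is a minimal Java sketch with traces simplified to plain strings; the real CARTE methods operate on full signal/sequence objects, exist in richer variants, and also support pattern matching via use models:

```java
import java.util.ArrayList;
import java.util.List;

// Consumer-side callback interfaces (notification API).
interface SignalListener   { void sendSignal(String signal); }
interface SequenceListener { void sendSequence(String sequence); }

// A minimal in-memory TBS exposing the three services.
// Traces are plain strings here for brevity.
class TraceBasedSystem {
    private final List<String> signals = new ArrayList<>();
    private final List<SignalListener> listeners = new ArrayList<>();

    // Collection API: record a signal and push it to registered consumers.
    void sendSignal(String signal) {
        signals.add(signal);
        for (SignalListener l : listeners) l.sendSignal(signal);
    }

    // Interrogation API (pull): return all recorded signals matching the
    // pattern (here, naive substring matching stands in for pattern search).
    List<String> searchSignal(String pattern) {
        List<String> result = new ArrayList<>();
        for (String s : signals) if (s.contains(pattern)) result.add(s);
        return result;
    }

    // Notification API (push): register a signal consumer.
    void addSignalListener(SignalListener listener) { listeners.add(listener); }
}
```

The point of the sketch is the decoupling: an analyzer only depends on the SignalListener interface and the two service methods, so it can be swapped out or duplicated without touching the trace manager.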
6 Utilization of the API

As a proof of concept, we have used the API with the tools we employ with our system, CARTE, in the experiments we have carried out at the university [4]. All the tools used during these experiments – the group structuring tool, the dictionary elaboration tool ("JIBIKI" [10]) and the chat tool (the "CoffeeRoom") – have been instrumented to send traces to the TBS (using the collection API). For example, in the chat facility, each time a user performs an action the chat server generates a specific signal: for each kind of action a participant can perform, we have defined a specific event (and a family of signals).
− On connection/disconnection, the chat tool sends a signal which has the corresponding event id (connection or disconnection) and one parameter (the user name of the participant who has connected/disconnected).
− When a participant sends a new message, the server sends a "talk" signal with three parameters (the user name of the sender, the communication space where the message has been sent and the content of the message).
− When a chat communication space is created or deleted (in our chat tool, the "CoffeeRoom", users can create or delete communication spaces, represented by tables), two specific signals are created and sent to the TBS. These signals have two parameters: a user name and a name identifying the communication space created or deleted.

The other two tools used for our experiments have been instrumented similarly. During an experiment, we are thus able to record all the significant actions performed by any of the participants (i.e. the students and the teacher). We defined the query/notify API more recently, so for the moment we only use it to separate the analyzer from the TBS. The first steps in this direction were made a few months ago and the separation is now effective. The first consequence is that the same analyzer can now be used to perform on-line or off-line analyses (for off-line analyses we just have to replay the selected traces). We are now changing the analyzer so that it can run different analyses in parallel. Another outcome is that, normally, any analyzer which implements our API can be plugged into the TBS. Next we will similarly separate the internal visualization tools from the TBS. To go one step further, we also plan, in a new project, to use the API to connect our system with DREW [1] and TATIANA [9]. DREW will then be seen as a new collaborative platform able to use CARTE to store its traces.
In the same way, we hope to be able to use TATIANA on these traces as a new analysis tool; not an automated one like CARTE's current analyzer, but a learning-expert-oriented one. New experiments will then be carried out, which we hope will demonstrate firstly that our approach is realistic and secondly that combining automatic analyses with human ones yields better results.
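The chat instrumentation described in this section amounts to building one signal per user action, with a fixed event id and a per-event parameter list. A small illustrative Java sketch (the helper and event-id names are our own, not taken from the CoffeeRoom source):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical helpers mirroring how the CoffeeRoom chat server could
// assemble the signals described above: first element is the event id,
// the rest are the event's parameters.
class ChatInstrumentation {

    static List<String> connectionSignal(String user) {
        // one parameter: the user who connected
        return Arrays.asList("connection", user);
    }

    static List<String> talkSignal(String user, String space, String message) {
        // three parameters: sender, communication space, message content
        return Arrays.asList("talk", user, space, message);
    }

    static List<String> tableCreatedSignal(String user, String table) {
        // two parameters: user and the communication space (table) created
        return Arrays.asList("table-created", user, table);
    }
}
```

Each such list would then be handed to the collection API (sendSignal) so that the TBS records it and notifies any registered consumers.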
7 Conclusion and Perspectives

The work presented in this paper is a preliminary stage of a multi-disciplinary project. We have set out the observation area as the association of three families of tools (1. the collaborative platforms; 2. the analysis environments; 3. the observation stations), providing the researchers (observers) with traces at various abstraction levels. We have described a specific module of the CARTE observation station which is able to interoperate with raw traces from the DREW collaborative platform and enriched traces from the TATIANA analysis environment, in order to automatically assist researchers in the human and social sciences in their analysis tasks. In the near future, we plan to place at researchers' disposal modules, included in the CARTE observation station, implementing functionalities that facilitate the automation of certain tasks of the analysis process in any analysis environment. The final objective of this research is the creation and sharing of corpora of traces by communities of researchers who try to understand the activities performed.
References

1. Corbel, A., Girardot, J.-J., Lund, K.: A Method for Capitalizing upon and Synthesizing Analyses of Human Interactions. In: van Diggelen, W., Scarano, V. (eds.) Workshop Proceedings "Exploring the Potentials of Networked-Computing Support for Face-to-Face Collaborative Learning", 1st European Conference on Technology Enhanced Learning (EC-TEL 2006), Crete, Greece, pp. 38–47 (2006)
2. Courtin, C.: CARTE: An Observation Station to Regulate Activity in a Learning Context. In: IADIS International Conference on Cognition and Exploratory Learning in Digital Age (CELDA 2008), Freiburg, Germany, pp. 191–197 (2008)
3. Courtin, C., Talbot, S.: Trace Analysis in Instrumented Collaborative Learning Environments. In: 6th IEEE International Conference on Advanced Learning Technologies (ICALT 2006), Kerkrade, The Netherlands, pp. 1036–1038 (2006)
4. Talbot, S., Courtin, C.: Trace Analysis in Instrumented Learning Groupware: An Experiment in a Practical Class at the University. In: 7th IASTED International Conference on Web-Based Education (WBE 2008), Innsbruck, Austria, pp. 418–422 (2008)
5. Settouti, L.: Systèmes à base de trace pour l'apprentissage humain [Trace-based systems for human learning]. In: Actes du colloque TICE 2006 (Technologies de l'Information et de la Communication dans l'Enseignement Supérieur et l'Entreprise), Toulouse, France (2006)
6. Settouti, L., Prié, Y., Marty, J.-C., Mille, A.: A Trace-Based System for Technology-Enhanced Learning Systems Personalisation. In: 9th IEEE International Conference on Advanced Learning Technologies (ICALT 2009), Riga, Latvia (2009)
7. Ellis, C., Wainer, J.: A Conceptual Model of Groupware. In: Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work (CSCW 1994), Chapel Hill, NC, USA, pp. 79–88 (1994)
8. Ferraris, C., Martel, C.: Regulation in Groupware: The Example of a Collaborative Drawing Tool for Young Children. In: 6th International Workshop on Groupware (CRIWG 2000), Madeira, Portugal, pp. 119–127 (2000)
9. Dyke, G., Lund, K., Girardot, J.-J.: TATIANA: un logiciel pour l'analyse des interactions humaines médiatisées par ordinateur, de la spécification à l'implémentation [TATIANA: a software tool for the analysis of computer-mediated human interactions, from specification to implementation]. Research report G2I-EMSE 2008-400-005 (2008)
10. Mangeot, M., Chalvin, A.: Dictionary Building with the Jibiki Platform: The GDEF Case. In: 5th Language Resources and Evaluation Conference (LREC 2006), Genoa, Italy, pp. 1666–1669 (2006)
11. Corbel, A., Girardot, J.-J., Jaillon, P.: Drew: A Dialogical Reasoning Tool. In: 1st International Conference on Information and Communication Technologies in Education (ICTE 2002), Badajoz, Spain (2002)
12. Dyke, G., Girardot, J.-J., Lund, K., Corbel, A.: Analysing Face-to-Face Computer-Mediated Interactions. In: 12th Biennial International Conference (EARLI 2007), Budapest, Hungary (2007)
13. W3C: XQuery 1.0: An XML Query Language, http://www.w3.org/TR/xquery/
14. Drulhe, M.: Orientations épistémiques et niveaux d'analyse en sociologie [Epistemic orientations and levels of analysis in sociology]. SociologieS, Théories et recherches (2008), http://sociologies.revues.org/index2123.html
Real Walking in Virtual Learning Environments: Beyond the Advantage of Naturalness

Matthias Heintz

Fraunhofer Institute for Experimental Software Engineering, Fraunhofer-Platz 1, 67663 Kaiserslautern, Germany
[email protected]
Abstract. Real walking is often used for navigation through virtual information spaces because of its naturalness (e.g. [1]). This paper shows another advantage. We present a within-subjects controlled experiment in the area of document retrieval. It compares two concepts of navigation: mouse and tracking. The latter was chosen for its naturalness and its ability to create proprioception. Our idea is that this helps users to orient themselves and recall positions, which should result in better retrieval of previously visited information. The experiment shows a benefit in accuracy of finding content with tracking, which suggests that proprioception improves users' memory capacity. This finding can serve as decision support for choosing input devices when designing immersive virtual learning environments. It can therefore either help to build a base for a new interaction model for learning environments, or broaden the theoretical framework of Chen et al. [2] to include immersive environments.

Keywords: navigation, immersion, input devices, virtual learning environment.
1 Introduction

Many virtual environments use real walking as input for navigation through a virtual information space, because it is the most natural way to navigate (e.g. [1] (learning environment) and [3, 4] (non-learning virtual environments)). It is considered natural because we learn to walk early in our lives and use it every day. This is a benefit from the point of view of the developer who creates the virtual environment: the user will be able to navigate right away without having to learn a navigation metaphor or how to work with the input device used. But besides these advantages there is another, less obvious one. Real walking is also natural from the user's inner point of view: navigation by natural walking also feels like real navigation, not like standing still and magically moving through the (virtual) world by just moving one's thumb (to push the button on a gamepad). This is also the reason why walking-in-place is not as natural as real walking [4]: the movements and perceptions made by the user are only similar, not equal. With walking for navigation it is possible to take advantage of the additional kinesthetic feedback the user gains from real movement. "[...] proprioception and kinaesthetic sense [...] [is] the sense that allows us to know our body position and the movement of our limbs." [5] It causes "[...] the awareness of parts of our body's

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 584–595, 2009. © Springer-Verlag Berlin Heidelberg 2009
position with respect to itself or to the environment [...]" [6] "[...] via feedback produced within the user's own muscles and joints [...]" [7]. The movements made when using traditional input devices for navigation (e.g. mouse and keyboard, gamepad, joystick) are negligibly small. Thus proprioceptive feedback is scarce and can hardly aid the user. This work addresses the research question whether proprioception can help the user of an immersive virtual environment to retrieve information he has seen before faster and with fewer errors. A possible advantage of real walking for navigation in virtual learning environments, which is not explicitly measured in the conducted experiment, is the following: with proprioception it is possible to plan and carry out movements without the need for visual feedback, and as this happens "naturally" it does not cause a high cognitive load. Also outside the scope of this work is the fact that real movement causes a stronger feeling of presence than navigating by means of input devices [4].
2 State of the Art

As said in the introduction, real walking has been used in several virtual environments for its naturalness. Kuan and San [1] developed a virtual learning environment to teach basic physics concepts by presenting a game of ballooning. They evaluated several navigation methods and found real walking the most natural one. But because real walking depends on the physical space at hand, they combined natural walking for short distances with gaze-directed steering (as second-best method) for larger distances. LaViola et al. [3] "[…] believe that foot and torso movements may inherently be more natural for some navigation tasks." To test this assumption they created several interaction concepts for hands-free navigation through an immersive virtual environment. Their informal observation shows that real walking is appropriate for navigation and reduces the cognitive load of navigating through complex virtual environments. Usoh et al. [4] compared real walking with walking-in-place and push-button-fly (along the floor plane). Their study showed that real walking as a mode of locomotion is significantly better in ease (simplicity, straightforwardness, naturalness) than the other two alternatives. There have been previous attempts to use proprioception to aid the user in navigation tasks. Ängeslevä et al. [8] developed "body mnemonics", an interface design concept for interaction with portable devices. As their screen estate is usually very small, it can be hard to navigate through menus on the screen. To overcome this problem, the selection of files or options is done by moving the portable device to parts of the user's body. This movement is interpreted and the assigned action (e.g. open the image folder or call home) is carried out. The name "body mnemonics" suggests that the body and its movements can aid the user's memory. Strachan et al. [9] used a similar approach to create "BodySpace".
This concept is used for interaction with a music player by moving it to different parts of the body. Unfortunately, neither of them conducted a user study to compare their input method with standard on-screen menus. So it is unclear for which users their ideas would really work, and nothing can be said about the usefulness and usability of their approaches.
The last two approaches used proprioception because screen space was small and thus limited. But it has also been used before for large screens, like the one in the application described in this paper. Mine et al. [10] integrated into their application a menu which was built so that proprioception could be used to find it. Irrespective of the user's position in the virtual world, the three menu items were always above the user, just outside his field of view. Thus the menu did not occlude any screen estate while not needed, but when the user wanted to use it, it could easily be reached by grabbing above and pulling it into sight. A study showed that three different menu items side by side (above and left, above and center, or above and right) can be distinguished by the user. Tan et al. [11] implemented an application to research whether proprioception helps in learning information. They compared the amount of learned elements from a list of words in two conditions: displayed on a single monitor versus separated across three different ones (building a large screen), where the words were shown alternating. The conducted user study demonstrated a 56% increase in memory for the information presented distributed. Thus combining information with location and movement helps the user to recall it. But proprioception is not only applicable in virtual environments. In the area of learning it can also be used to enhance traditional classroom education, which is then called kinesthetic learning. For example, the authors of [12] have described this "[…] process where students learn by actively carrying out physical activities rather than by passively listening to lectures." In their paper "A Collection of Kinesthetic Learning Activities for a Course on Distributed Computing", they describe five examples of how this kind of learning can be applied. As learning in the classroom was not in the focus of the original work, this kind of kinesthetic learning has not been investigated further here.
3 Application

The application used for the controlled experiment is an immersive virtual environment for document retrieval (see Figure 1). But as it does not matter to which kind of content (in this case virtual documents) the user navigates, the general findings of the conducted experiment can be transferred to any immersive virtual environment.
Fig. 1. User navigating through the document space by walking in front of the screen
3.1 Premises

To create the immersive virtual environment, a room with a PowerWall was used. The PowerWall is a 2.9 m by 2.3 m area illuminated by rear projection. Two projectors create a stereoscopic image through circular polarization; thus the user has to wear spectacles with filters to see the three-dimensional image. Two types of input devices are used: the standard mouse and keyboard, and an Ascension "Flock of Birds" magnetic tracking system [13]. To be able to track the walking user in the entire physical space available (3 m by 4.5 m), the extended-range transmitter for the "Flock of Birds" tracking system is used. We utilized two sensors of the tracking system to track the user's head and one hand. This allowed for natural navigation and interaction while creating proprioceptive feedback from the user's movements. The mouse was used as the common input device to compare the tracking system against.

3.2 Idea

As already described in the motivation and state-of-the-art sections, there have been other applications [e.g. 1, 8 and 11] which use human movement for navigation and for remembering positions. With the application described in this paper we want to use proprioception to ease document storage and retrieval for users. In the scope of our work we concentrated on the retrieval part, but we already had the storage process (which was not implemented yet) in mind. The nine documents were positioned in a 3 by 3 grid by the application without any interaction of the user (Figure 2 shows two documents and the user in the nine-field interaction grid in which all the documents are arranged; the field labeled "F" on the left is for interacting with the "following document").
Fig. 2. Drawing of the interaction grid with the user and two documents displayed
The idea behind this application is to let the user compare different documents in an immersive virtual environment by just walking to them. Therefore he has one document "attached" to him (called the "following document"), which is in his field of view regardless of where he is standing inside the real room. The documents to
compare the "following document" with (called "floating documents") were distributed in the virtual environment (arranged in a 3 by 3 grid). To be able to measure the influence of proprioception, the application has been implemented to work with two different interaction devices. They were chosen with the goal of differing as much as possible in the proprioceptive feedback produced. One of them is the common mouse, which is operated with relatively small movements of forearm and hand. Position and orientation of the rest of the body are irrelevant for the navigation. This does not produce much usable proprioceptive feedback to link to the document's position, which the user has to remember correctly to retrieve the document successfully later on. Additionally, the navigation movements are repetitive. To move forwards the user has to scroll several times with the scrolling wheel of the mouse. Distinguishing between distances "scrolled" (both virtually and really) is very hard if they are close together (e.g., did I scroll five or seven times?). And even worse, the movement to reach the documents on the right-hand side (3, 6 and 9) is identical for each of them, apart from navigating to the right plane by scrolling the mouse wheel first (e.g. see Figure 2: when interacting with a mouse, the movement to reach the documents in cells 3 and 6 is identical; the only difference is that the user has to scroll ahead one row [from 1,2,3 to 4,5,6] first if he wants to select the document in cell 6 instead of 3). The other input device used is tracking of real walking. This incorporates the whole body and its movements, as every change in the real world is directly mapped to the virtual environment. Therefore nearly the complete proprioception of the user's body is relevant for his current position. That means it is possible for the user to link his positions and movements in the real room to the position of a document in the virtual environment.
Thus, besides the visual feedback, he has a second channel of information to locate and find the information he is looking for when retrieving a document. Additionally, the movement to reach each document is unique (e.g. see Figure 2: to reach the document in cell 3 is a step to the left; to get to cell 6 the user has to go one step diagonally [at an angle different from the one to reach the document in cell 9]).

3.3 Implementation

Mouse Input. To enable the user to navigate in three dimensions with the mouse as a two-dimensional input device, the scrolling wheel has been used. To navigate the mouse pointer inside the visual plane (x, y) the mouse is moved on the desk as known from standard two-dimensional graphical user interfaces. To navigate on the additional third axis (z) the user has to scroll with the mouse wheel in the desired direction. Turning the wheel away from the user makes him move forward, turning it in the other direction makes him move backwards in the virtual environment.

Tracking Input. Two tracker sensors have been used: one as head tracker for navigation and the second as hand tracker for interaction. The head tracker data was (with some calculations to eliminate tracking errors) directly applied to the virtual camera. Thus walking around in the real world caused the virtual view to change accordingly, so the user had the impression of walking around in the virtual environment as well. The information from the hand tracker was used to implement interaction metaphors based on postures. To select a document the user moves and stands in front of
it. This causes a "focus frame" to appear to let the user know which document has the focus at the moment. If he angles his forearm, the focused document is chosen and displayed larger and closer to the user. For scrolling through the document the posture of bending the hand to the right or left is used, which imitates the movement of thumbing through a book. To deselect a document the user has to move his hand (tracker) to his shoulder (as if he would "throw it away" over his shoulder).

Output. The application is written in C++ and creates the visualization with OpenGL. The documents are created from images of real documents used as textures. Scrolling through the document pages is visualized by sliding them from one side to the other. Figure 3 shows a screenshot of the application with the "following document" on the left and the "floating documents" on the right-hand side.
Fig. 3. Screenshot of the application
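The two navigation mappings of Sect. 3.3 can be summarized in a small sketch (ours, not the original C++ code; the class names and the scroll step size are assumptions): mouse motion drives the visual plane (x, y), the scroll wheel drives depth (z), whereas tracking copies the real head position directly onto the camera.

```java
// Illustrative sketch of the two navigation mappings.
class Camera {
    double x, y, z;
}

class MouseNavigation {
    static final double SCROLL_STEP = 0.5; // virtual distance per wheel click (assumed)

    // Mouse motion moves within the visual plane (x, y);
    // wheel clicks move along the depth axis (z).
    static void apply(Camera cam, double dx, double dy, int wheelClicks) {
        cam.x += dx;
        cam.y += dy;
        cam.z += wheelClicks * SCROLL_STEP; // scrolling away from the user = forward
    }
}

class TrackingNavigation {
    // Real walking: the head-tracker position is mapped directly onto the
    // camera, so every real-world movement is reproduced in the virtual world.
    static void apply(Camera cam, double headX, double headY, double headZ) {
        cam.x = headX; cam.y = headY; cam.z = headZ;
    }
}
```

The sketch also makes the paper's point visible: with the mouse, identical click sequences produce identical relative displacements (hence little proprioceptive distinction between targets), while tracking ties each camera position to a unique body position in the room.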
4 Evaluation

4.1 Experiment Design and Procedure

This evaluation should answer the question of which advantage the use of proprioception can have for document retrieval. Therefore two interaction concepts are compared. The first one uses the mouse as input device for navigation. This creates little proprioception, because the movements made are small and involve only one forearm and hand. Additionally, the real movements are repetitive for larger movements in the virtual environment, which makes it hard to distinguish between two different positions. The second concept uses tracking of real walking as input for navigation. This creates much kinesthetic feedback because the whole body is moved and involved. Factors of influence on the result other than the input device, and therefore the proprioception created, have been excluded by applying the same settings each time and just switching the input device. We used a within-subjects approach, so every person used both input devices. This was done to avoid the possibility that experience in using the mouse (e.g. by
experienced computer users) or the subjects' general ability to remember information and positions distorts the result, as could happen in a between-subjects approach. The task for the users was to memorize the content and position of nine documents containing three pages each. To lower the amount of information to learn to a feasible level, only document titles and pictures had to be remembered. At the beginning of each run the subjects had twenty minutes to get familiar with the application and to memorize the content. After this learning phase they had to search for three document titles and six pictures presented in the "following document". To eliminate the possibility that the information learned in the first run had an influence on the results of the second run (with the other input device), the documents were interchanged. The two document sets were chosen to be equal in title complexity and in number and size of pictures, to allow for comparability of runs one and two. The experiment was conducted with 16 persons (most of them pupils and students; ages ranging from 17 to 57). The independent factor was the kind of interaction device used (mouse or tracking). The participants were divided into two groups of 8 each. In doing so it was checked that the two groups were equally distributed in age and experience in computer usage. Both started with document set 1, but the first group used mouse input first and the second group tracking. For the second run the document set was changed to set 2 (see Figure 4); the first group then used tracking and the second one mouse input to navigate. This change of the set and of the order of input devices was done to prevent undesired sources of variation from a previous test or from differences in the difficulty of remembering a document set. As objective result data, the time needed and the errors (documents opened which were not the right one) that occurred during document retrieval have been measured.
For subjective answers, a questionnaire was completed after the second run.
Fig. 4. Experimental Design
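The counterbalanced design described above (both groups start with document set 1, with the order of input devices reversed between groups) can be sketched as follows; the group and run labels are illustrative and not taken from any experiment software:

```python
def crossover_design(groups=("group 1", "group 2")):
    """Return the run plan of the within-subjects experiment:
    both groups start with document set 1, and the order of the
    two input devices is reversed between the groups."""
    devices = ("mouse", "tracking")
    plan = {}
    for i, group in enumerate(groups):
        order = devices if i == 0 else devices[::-1]
        plan[group] = [("run 1", "set 1", order[0]),
                       ("run 2", "set 2", order[1])]
    return plan

plan = crossover_design()
# group 1: run 1 = set 1 / mouse,    run 2 = set 2 / tracking
# group 2: run 1 = set 1 / tracking, run 2 = set 2 / mouse
```

This crossover structure is what lets device effects be separated from document-set and ordering effects.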
4.2 Hypotheses

From the findings of earlier research in the field of proprioception, we derived the following three hypotheses:
− Every user will benefit from the tracking of his real movements by making fewer errors during document retrieval, because he can (unconsciously) use the proprioceptive feedback from walking as an additional aid to recall the position of the searched document.
Real Walking in Virtual Learning Environments
591
− Because navigation by tracking real movement is more natural, it will lead to faster retrieval of information than mouse input.
− As tracking is more intuitive, users with less experience in the use of computers will benefit even more than experienced computer users.

4.3 Results

To check the first hypothesis, the numbers of errors of all subjects were added up (see Table 1). With tracking of real walking, the total number of errors is nearly halved compared to mouse input. In more detail this is not true for every user, but for nine subjects (56.25%) using the mouse instead of tracking resulted in two or more additional errors. Four subjects (25%) had similar results (a difference of one error or less). Only three (18.75%) got a negative result from using tracking instead of mouse input (more than one additional error).

Table 1. Total number of errors for all document retrievals
                 mouse input   tracking
document title        19          11
pictures             114          58
total                133          69
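The per-subject categorization used above (a difference of two or more errors counts as a clear benefit for one device, one or less as similar) can be sketched as follows; the sample error counts are hypothetical, since the paper reports only the totals:

```python
def categorize(mouse_errors, tracking_errors):
    """Apply the thresholds from the text: a difference of two or more
    errors is a clear benefit for one device; one or less is similar."""
    diff = mouse_errors - tracking_errors
    if diff >= 2:
        return "benefits from tracking"
    if diff <= -2:
        return "benefits from mouse"
    return "similar"

# Hypothetical per-subject (mouse, tracking) error counts, for illustration:
subjects = [(9, 4), (5, 5), (2, 6)]
labels = [categorize(m, t) for m, t in subjects]
# → ['benefits from tracking', 'similar', 'benefits from mouse']
```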
To check the second hypothesis, the average time for a single retrieval was calculated (see Table 2). The users needed more time when using tracking than with mouse input, which contradicts the hypothesis. A reason for this divergence could be that all subjects were so familiar with using a mouse that it is "natural" for them as well. The smaller movements necessary when navigating with a mouse can then be accomplished faster than walking a few steps.

Table 2. Average time to find content (in seconds)
                 mouse input   tracking
document title      23.52        25.77
pictures            25.92        27.27
total               49.44        53.06
To check the third hypothesis, the users had to be categorized into less and more experienced computer users. For this, their answers to three questions of the questionnaire were used. First, their personal rating of their experience in working with a computer served as a rough estimate. As such self-assessment can be wrong, we additionally analyzed how long each subject had been using computers and how much time he or she spends working with a computer every day. All subjects (nine) who had been working with computers for more than ten years and used one for more than three hours every day were put into the group of experienced computer users. All others (seven), who had been working with computers for 5–10 years at most and used one for less than three hours a day, formed the group of less experienced (called inexperienced) computer users.
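The grouping rule described above can be sketched as a small classifier; the function name is invented, and the thresholds merely restate the criteria from the text:

```python
def experience_group(years_of_use, hours_per_day):
    """Classification rule from the text: more than ten years of
    computer use AND more than three hours per day => experienced."""
    if years_of_use > 10 and hours_per_day > 3:
        return "experienced"
    return "inexperienced"

assert experience_group(12, 4) == "experienced"
assert experience_group(8, 5) == "inexperienced"   # too few years
assert experience_group(15, 2) == "inexperienced"  # too few hours per day
```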
As Tables 3 and 4 show, less experienced users benefit from tracking both in time and in number of errors, whereas experienced ones benefit only in number of errors. Comparing experienced with less experienced computer users, the inexperienced benefit even more than the experienced ones.

Table 3. Comparison of experienced with inexperienced computer users - average number of errors
                               mouse input   tracking
experienced computer users         3.44         2.56
inexperienced computer users      14.57         6.57
Table 4. Comparison of experienced with inexperienced computer users - average time needed (in seconds)
                               mouse input   tracking
experienced computer users       132.00       186.11
inexperienced computer users     347.00       290.71
It can be assumed that the additional time needed with tracking by the experienced users is due to their familiarity with mouse input. The use of tracking is new and unusual; although they had time to get familiar with it, they might not have become familiar enough to outperform the mouse (in terms of time).

4.4 Threats to Validity

Although the findings might be very helpful, some points have to be considered when examining the results.
− Number of documents: Walking for virtual navigation needs a corresponding amount of real space. The presented tracking concept is therefore only applicable to a small number of documents; larger sets would need additional interaction metaphors.
− Number of subjects: Because only 16 people were tested, the results might depend very much on the performance of each individual subject.
− Composition of the sample: Most of the subjects were pupils and students. The findings might therefore not be applicable to all user groups.
− Short period of getting accustomed to navigation with tracking: The results for real walking (especially the time needed) might have been even better if the subjects had had more time to get familiar with tracking as input for interaction.
− Categorization of experienced and less experienced computer users based on their own estimation:
To avoid a wrong segmentation, we did not rely solely on the subjects' answer to the question about their computer literacy; we also used "objective" values describing the duration of computer usage. But more time spent using a computer does not necessarily mean more experience (e.g. less experienced users need more time to accomplish a task than more experienced users). So we could still only estimate the subjects' experience in working with a computer, because we did not conduct a test to measure it.
5 Conclusion

In this paper we answer the question whether the user profits from the proprioception created by real walking for navigation. We did this by comparing two different interaction concepts in an immersive virtual environment: the first uses standard mouse input for navigation, the second tracks the walking user. The proprioceptive feedback created by these two methods of input differs greatly (due to the amount of movement involved), so their comparison makes it possible to determine the impact of proprioception. Our experiment showed that most of the subjects made fewer errors when using real movement; the proprioception therefore helped them to retrieve information. They were not faster in general, as had been expected given the naturalness of real walking. But the results showed a difference depending on the subjects' experience in using a computer: the less experienced ones benefit from tracking more than the experienced users (see Figure 5). These advantages in information retrieval make a good case for using the tracking of real walking as input for navigation in immersive virtual environments, a case that goes beyond its obvious "naturalness". Applied to the area of learning, the findings of this experiment suggest that advantages found in traditional kinesthetic learning, e.g. by Tan et al. [12], could also be exploited in virtual learning or e-learning environments.
Fig. 5. Research question, experimental comparison and results
The fact that proprioception can be used for planning and carrying out navigation has several advantages for learning environments. As it requires little direct attention from the user, he can use his cognitive capacity to examine and learn the displayed content. Because navigation can be done without relying on visual feedback, the user is able to make sense of the visualization while moving, e.g. watching and following meaningful connections between elements. The user's motivation for learning and his learning process can additionally be supported by the feeling of presence (immersion) (e.g. [14]). The more immersed the user is, the less distraction from objects or actions in the real world occurs; the learner can thus concentrate better on the content, which eases learning. The advantages of higher immersion for learning have also been researched and shown by Sowndararajan et al. [15]. The findings of the conducted experiment can also serve as decision support for choosing input devices when designing an immersive virtual learning environment. They can thus either help to build a basis for a new interaction model for learning environments, or be used to broaden the theoretical framework of Chen et al. [2] to include immersive environments.
References
1. Kuan, W.L., San, C.Y.: Constructivist physics learning in an immersive, multi-user hot air balloon simulation program (iHABS). In: SIGGRAPH 2003: ACM SIGGRAPH 2003 Educators Program. ACM, New York (2003)
2. Chen, C.J., Toh, S.C., Wan, M.F.: The theoretical framework for designing desktop virtual reality-based learning environments. Journal of Interactive Learning Research 15(2), 147–167 (2004)
3. LaViola, J.J., Feliz, D.A., Keefe, D.F., Zeleznik, R.C.: Hands-free multi-scale navigation in virtual environments. In: I3D 2001: Proceedings of the 2001 Symposium on Interactive 3D Graphics, pp. 9–15. ACM, New York (2001)
4. Usoh, M., Arthur, K., Whitton, M.C., Bastos, R., Steed, A., Slater, M., Brooks Jr., F.P.: Walking > walking-in-place > flying, in virtual environments. In: SIGGRAPH 1999: Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, pp. 359–364. ACM Press/Addison-Wesley Publishing Co., New York (1999)
5. Larssen, A.T., Robertson, T., Edwards, J.: The feel dimension of technology interaction: exploring tangibles through movement and touch. In: TEI 2007: Proceedings of the 1st International Conference on Tangible and Embedded Interaction, pp. 271–278. ACM, New York (2007)
6. Tan, D.S., Pausch, R., Stefanucci, J.K., Proffitt, D.R.: Kinesthetic cues aid spatial memory. In: CHI 2002: CHI 2002 Extended Abstracts on Human Factors in Computing Systems, pp. 806–807. ACM, New York (2002)
7. Balakrishnan, R., Hinckley, K.: The role of kinesthetic reference frames in two-handed input performance. In: UIST 1999: Proceedings of the 12th Annual ACM Symposium on User Interface Software and Technology, pp. 171–178. ACM, New York (1999)
8. Ängeslevä, J., O'Modhrain, S., Oakley, I., Hughes, S.: Body mnemonics. In: Physical Interaction (PI03) – Workshop on Real World User Interfaces, a workshop at the Mobile HCI Conference 2003, Udine (2003)
9. Strachan, S., Murray-Smith, R., O'Modhrain, S.: Bodyspace: inferring body pose for natural control of a music player. In: CHI 2007: CHI 2007 Extended Abstracts on Human Factors in Computing Systems, pp. 2001–2006. ACM, New York (2007)
10. Mine, M.R., Brooks, F.P., Sequin, C.H.: Moving objects in space: exploiting proprioception in virtual-environment interaction. In: SIGGRAPH 1997: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, pp. 19–26. ACM Press/Addison-Wesley Publishing Co., New York (1997)
11. Tan, D.S., Stefanucci, J.K., Proffitt, D.R., Pausch, R.: The Infocockpit: providing location and place to aid human memory. In: Workshop on Perceptive User Interfaces, Orlando, pp. 1–4 (2001)
12. Sivilotti, P.A.G., Pike, S.M.: A collection of kinesthetic learning activities for a course on distributed computing: ACM SIGACT News distributed computing column 26. SIGACT News 38(2), 56–74. ACM, New York (2007)
13. Ascension Technology Corporation, http://ascension-tech.com/realtime/RTflockofBIRDS.php
14. Limniou, M., Roberts, D., Papadopoulos, N.: Full immersive virtual environment CAVE™ in chemistry education. Computers & Education 51(2), 584–593. Elsevier Science Ltd., Oxford (2008)
15. Sowndararajan, A., Wang, R., Bowman, D.A.: Quantifying the benefits of immersion for procedural training. In: Proceedings of the 2008 Workshop on Immersive Projection Technologies/Emerging Display Technologies, IPT/EDT 2008, Los Angeles, California, August 9–10, pp. 1–4. ACM, New York (2008)
Guiding Learners in Learning Management Systems through Recommendations Olga C. Santos and Jesus G. Boticario aDeNu Research Group, Artificial Intelligence Department, UNED, Calle Juan del Rosal, 16, Madrid 28040, Spain {ocsantos,jgb}@dia.uned.es http://adenu.ia.uned.es
Abstract. In order to support inclusive eLearning scenarios in a personalized way, we propose to use recommender systems to guide learners through their interactions in learning management systems. We have identified several issues to be considered when building a knowledge-based recommender system and propose a user-centered methodology to design and evaluate a recommender system that can be integrated via web services with existing learning management systems to offer adaptive capabilities. We report some results from a formative evaluation carried out with users receiving recommendations in the dotLRN open source eLearning platform. Keywords: Recommender systems, User experience, Recommendations, Accessibility, Learning Management Systems.
1 Introduction

Information and Communication Technologies have been considered from the beginning as a way to remove geographical and temporal barriers. Moreover, accessibility barriers can also be eliminated if this technology is properly applied. In this context, a proper usage of technology provides even more opportunities to enhance learning. Addressing individual needs in the learning process is difficult to achieve in face-to-face learning scenarios. However, in current eLearning scenarios, learners can access a virtual course space which provides contents and services (e.g. communication tools) through a web-based interface. These interfaces have evolved from simple web pages to complex systems that facilitate the management of learning. Learning management systems (LMS) are broadly used in many institutions, and efforts are being made to integrate them with the institutions' current technological infrastructure. The interactions performed by the users in these systems can be gathered. These data, combined with information obtained explicitly from the users, can be used to build user profiles. As a result, these profiles can provide the information needed to describe the needs and expectations of the users of the system. With this information, personalized responses that address these individual needs can be offered to the users in an LMS.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 596–601, 2009. © Springer-Verlag Berlin Heidelberg 2009
This situation presents several challenges to the research community on technology enhanced learning (TEL):
• What are the needs of the users in LMS?
• How can users be supported in their needs when using LMS?
• Is there a way to evaluate whether users are properly supported in the LMS?
• Can these services be supported in terms of web services architectures?
Our research follows the idea of combining design-time and runtime adaptations. According to this approach, adaptations should be applied along the full life cycle of eLearning, making pervasive use of standards to support users in the process. The underlying idea is that adaptation is not a feature that can simply be plugged into a learning environment, but a process that influences the full life cycle of learning, which consists of four steps in which the user (and not the system) is the focus.
2 Learning Scenarios and Recommender Systems

The first approaches to supporting learners with technology used Intelligent Tutoring Systems (ITS) [1]. These systems provide direct customized instruction or feedback to students while they perform a task, without the intervention of human beings. They contain a description of the knowledge or behaviors that represent expertise in the subject-matter domain the ITS is teaching, which is used to detect the misconceptions and knowledge gaps of the learners as they work in the system, in order to offer them appropriate support. Conceptually, ITS are domain independent, although in practice most ITS have been designed for very specific domains and the knowledge is wired into the system; hence, any change to the domain requires a development process. Moreover, ITS do not consider the interactions of users within more collaborative learning scenarios, which are supported by a wide diversity of communication tools. Current eLearning scenarios are supported by LMS. A common feature of these systems is the dispersion of the available information and the variety of communication channels to consider. Some information can be structured in terms of learning resources within well-designed units of learning, but there may be additional materials provided ad hoc by the tutor or even by a student. There may also be interactions with the communication tools that contain relevant information for the users. For instance, a discussion in a forum thread may clarify what learners are expected to do in a particular activity of the course. In the context of this large space of information and communication sources that learners are supposed to deal with, we propose the application of recommender systems to guide learners through inclusive eLearning scenarios.
eLearning scenarios share the same objective as recommenders for e-commerce applications, but there are particularities that make it impossible to directly apply existing solutions from those systems [2, 3]. We have carried out several attempts to involve users in the process of building a knowledge-based recommender aimed at plugging recommendation strategies into standards-based LMS to extend their functionality with adaptive navigation support. A recommendations model was proposed, and a prototype of a knowledge-based recommender
was implemented and integrated into a well-known open source standards-based LMS called dotLRN. This knowledge-based recommender aims at generating suitable recommendations by reasoning about which elements of the domain meet the current user's needs and context. The recommender system model describes i) what should be recommended (different recommendation types have been identified and can be offered, which relate to the actions that can be done on the LMS objects, such as sending a forum message, working on a particular objective or sharing an opinion), ii) when a recommendation should be provided (considering the user and course context, the conditions of application and the timeout restrictions), iii) how a recommendation should be displayed (considering accessibility and usability criteria) and iv) why a recommendation has been produced (in terms of the category the recommendation applies to, the technique used to generate it, and the source that originated it). Details on the model are provided elsewhere [4]. A snapshot of the system is included next.
Fig. 1. Recommender System integrated in dotLRN LMS
The figure shows a course space in dotLRN where, in addition to communication services such as forums and a SCORM player for the learning materials, a new portlet has been added that provides two recommendations for the current user.
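The four facets of the recommendations model (what, when, how, why) could be encoded roughly as follows; the field names and the condition-matching logic are illustrative assumptions, not the identifiers or rules of the actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    # The four facets of the model described above; the names are
    # illustrative, not those used in the real recommender.
    what: str                                   # action on an LMS object
    when: dict = field(default_factory=dict)    # context conditions, timeout
    how: dict = field(default_factory=dict)     # display (accessibility/usability)
    why: dict = field(default_factory=dict)     # category, technique, source

def applicable(rec, user_context):
    """A recommendation applies when all of its context conditions hold."""
    return all(user_context.get(k) == v for k, v in rec.when.items())

rec = Recommendation(
    what="fill in the learning style questionnaire",
    when={"questionnaire_done": False},
    why={"source": "course design", "technique": "knowledge-based rule"},
)
assert applicable(rec, {"questionnaire_done": False})
assert not applicable(rec, {"questionnaire_done": True})
```

Separating the "when" conditions from the "what" action is what lets the same rule base drive recommendations in different course contexts.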
3 Experiences with Users

In order to understand the users' perception of the recommender, we carried out formative evaluations in two summer courses. The first course, entitled "Accessibility and disability in the University: a development based on ICT", was organized by UNED in July 2008. The participants in this course were mainly accessibility experts and people with disabilities. The second course, entitled "Services of the Web: applications towards the frontiers of knowledge", was organized by the UIMP in August 2008. The participants in this course had experience in using web-based applications for learning and teaching.
A total of 29 users took part in the experience, two of them with disabilities. Anonymity was assured, since the participants were randomly given fictitious logins for the experience. The users were given access for one hour to an instance of dotLRN hosted at our institution, into which we had integrated, via web services, the dynamic support provided by the recommender system. An accessible SCORM-based course was offered to the users, as well as several platform services such as questionnaires, a file storage area, forums, a calendar, frequently asked questions, a chat room, a blog, statistics and recommendations. Taking the proposed model as our design framework, we defined thirteen recommendations to be given to the users in different course situations. Once a recommendation had been followed by a user, it was no longer displayed to him or her. When the users entered the system, they were recommended to read the help section on platform usage and to fill in the learning style questionnaire, so that the contents could be adapted to their learning style. Moreover, they were also recommended to go to the course contents. To promote collaboration, once in the course space they were suggested to introduce themselves in the course forum. Experts from the aDeNu group observed the users as they interacted with the system for one hour. Afterwards, the users were given a questionnaire to evaluate their satisfaction. The data log was further analyzed and compared with the results of the observations and the responses to the questionnaire. From the questionnaires, user satisfaction was evaluated positively. From the observations during the pilots, no usability or accessibility problems were detected. 55% of the users were able to finish the required tasks in the given time. Analyzing the logs, we found that among users who followed the recommendations, the proportion who carried out the tasks was higher than among users who did not follow the recommendations.
Details on the experience are given in [4].
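The behaviour that a recommendation disappears once it has been followed can be sketched as a simple filter; the recommendation identifiers are invented for illustration:

```python
def pending(recommendations, followed_ids):
    """Return only the recommendations the user has not yet followed,
    mirroring the behaviour described above (followed ones disappear)."""
    return [r for r in recommendations if r["id"] not in followed_ids]

recs = [{"id": "read-help"},
        {"id": "learning-style-questionnaire"},
        {"id": "introduce-yourself-in-forum"}]
shown = pending(recs, followed_ids={"read-help"})
# → the two recommendations the user has not followed yet
```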
4 Ongoing Work

In this paper we have presented an approach to guide learners through an LMS via recommendations and have reported on a formative evaluation carried out with users. For that experience, we defined several recommendations that could be provided in an LMS. The recommendations did not have a strong psycho-educational background, but were aimed at testing the users' attitude towards the recommender system within the LMS. In order to define recommendations following this model that feed the knowledge of the recommender and address the real needs of learners and tutors in eLearning scenarios, we have been researching appropriate user-centered design methods that help us to obtain, from psycho-educational experts, samples of psycho-educationally sound recommendations. The proposed methodology is described elsewhere [5]. Our current approach is to first understand the eLearning domain and elicit relevant recommendations from its users. In a second step, we will apply artificial intelligence techniques to automate the process of generating recommendations [6]. Moreover, we are also researching a methodological approach to evaluate the recommender. In the literature, one approach towards the evaluation of adaptive systems is to decompose the adaptation process and evaluate the system in a "piecewise" manner. In this approach, adaptation is "broken down" into its constituents, and
each of these constituents is evaluated separately where necessary and feasible. The constituents into which adaptation is decomposed are typically termed "layers" and the resulting approach "layered evaluation" [7]. This approach can be used to evaluate the effectiveness of the system and the advantages of the adaptation it provides [8]. We propose to add an extra layer on top of the existing layered approaches to evaluation. The two most cited in the literature propose 2 and 5 layers, respectively. The 2-layer evaluation process defined by [9] consists of 1) evaluation of user modeling and 2) evaluation of the adaptation decision making. The 5-layer evaluation process defined by [10] consists of 1) collection of input data, 2) interpretation of the data collected, 3) modeling of the current state of the "world", 4) deciding upon adaptation and 5) applying adaptation decisions. The latter is a refinement of the former. For the eLearning domain, we propose an extra layer to evaluate design-time issues. The purpose of this additional layer is to cover issues that relate to psycho-educational considerations. From our experience, this is a critical issue, since adaptive systems in education will only be successful in practice when teachers can easily deal with them. In any case, the aim of these layers is to focus the evaluation on different directions, as identified in [6]: 1) the design of the user interface of the required tools, 2) the process to design/generate the recommendations, 3) the process to select the appropriate recommendations, and 4) the analysis of the users' interactions. The new layer (evaluation of design-time issues) includes the evaluation of the design of the user interface of the required tools and of the process to design the recommendations. The second layer (evaluation of user modeling) covers the analysis of the users' interactions, and the third layer covers the process to select the appropriate recommendations.
Several principles are to be taken into account in the evaluation process: i) accessibility, ii) usability, iii) learnability, and iv) standards compliance. Accessibility issues and usability heuristics are to be focused on learnability and integrated into the layered evaluation. The latter provides different layers reflecting the various stages and aspects of the adaptation, starting from low-level input data acquisition or user monitoring and going up to high-level assessment of the behavioral complexity of the users. This approach provides a series of advantages over those that focus on the overall user performance and satisfaction, such as useful insight into the success or failure of each adaptation stage separately, facilitation of improvements, generalization of evaluation results, and re-use of successful practices.
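One hypothetical way to encode the proposed extension, with the design-time layer added on top of the two layers of [9] and mapped to the four evaluation directions of [6]; the layer and direction names paraphrase the text and are not identifiers from any published framework:

```python
# Hypothetical encoding of the extended layered evaluation described above.
layers = {
    "design-time issues (new layer)": [
        "design of the user interface of the required tools",
        "process to design/generate the recommendations",
    ],
    "user modeling": [
        "analysis of the users' interactions",
    ],
    "adaptation decision making": [
        "process to select the appropriate recommendations",
    ],
}

# Each of the four evaluation directions is covered by exactly one layer.
covered = [d for directions in layers.values() for d in directions]
assert len(layers) == 3 and len(covered) == 4
```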
Acknowledgements
The work presented here is framed in the context of the projects carried out by the aDeNu research group, in particular EU4ALL (IST-2006-034478), funded by the European Commission, and A2UN@, funded by the Spanish Government.
References
1. Psotka, J., Massey, L.D., Mutter, S.A.: Intelligent tutoring systems: lessons learned. Lawrence Erlbaum Associates, Mahwah (1988)
2. Drachsler, H., Hummel, H.G.K., Koper, R.: Personal recommender systems for learners in lifelong learning: requirements, techniques and model. International Journal of Learning Technology (2007)
3. Tang, T., McCalla, G.: Smart recommendation for an evolving e-learning system. In: Workshop on Technologies for Electronic Documents for Supporting Learning, Proceedings of the 11th International Conference on Artificial Intelligence in Education, Sydney, Australia, July 20–24, pp. 699–710 (2003)
4. Santos, O.C., Boticario, J.G.: Users' experience with a recommender system in an open source standard-based learning management system. In: Proceedings of the 4th Symposium of the WG HCI&UE of the Austrian Computer Society on Usability & HCI for Education and Work, USAB 2008 (2008b)
5. Santos, O.C., Martin, L., del Campo, E., Saneiro, M., Mazzone, E., Boticario, J.G., Petrie, H.: User-centered design methods for validating a recommendations model to enrich learning management systems with adaptive navigation support. In: Weibelzahl, S., Masthoff, J., Paramythis, A., van Velsen, L. (eds.) Proceedings of the Sixth Workshop on User-Centred Design and Evaluation of Adaptive Systems, held in conjunction with the International Conference on User Modeling, Adaptation, and Personalization (UMAP 2009), Trento, Italy, June 26, pp. 64–67 (2009)
6. Santos, O.C., Boticario, J.G.: Building a knowledge-based recommender for inclusive eLearning scenarios. In: Proceedings of the International Conference on Artificial Intelligence in Education, AIED 2009 (2009) (in press)
7. Paramythis, A., Totter, A., Stephanidis, C.: A modular approach to the evaluation of adaptive user interfaces. In: Weibelzahl, S., Chin, D., Weber, G. (eds.) Empirical Evaluation of Adaptive Systems. Proceedings of a workshop held at the Eighth International Conference on User Modeling, Sonthofen, Germany, July 13, pp. 9–24. Pedagogical University of Freiburg, Freiburg (2001)
8. Karagiannidis, C., Sampson, D.G.: Layered evaluation of adaptive applications and services. In: Brusilovsky, P., Stock, O., Strapparava, C. (eds.) AH 2000. LNCS, vol. 1892, p. 343. Springer, Heidelberg (2000)
9. Brusilovsky, P., Karagiannidis, C., Sampson, D.: Layered evaluation of adaptive learning systems. International Journal of Continuing Engineering Education and Lifelong Learning, Special Issue on Adaptivity in Web and Mobile Learning Services 14(4/5), 402–421 (2004)
10. Paramythis, A., Weibelzahl, S.: A decomposition model for the layered evaluation of interactive adaptive systems. In: Ardissono, L., Brna, P., Mitrovic, A. (eds.) User Modeling 2005, pp. 438–442. Springer, Heidelberg (2005)
Supervising Distant Simulation-Based Practical Work: Environment and Experimentation Viviane Guéraud1, Anne Lejeune1, Jean-Michel Adam1, Michel Dubois2, and Nadine Mandran1 1
Laboratoire d’Informatique de Grenoble (LIG), CNRS, Université de Grenoble, France 2 Laboratoire Interuniversitaire de Psychologie, Université de Grenoble, France
Abstract. In this paper we present research targeting distant simulation-based practical work in various scientific domains. For the past six years, we have continuously tried to improve the FORMID environment tools that we have designed and developed for building, running and observing such learning situations. This paper focuses on FORMID-Observer, the FORMID tool intended to provide teachers with semantic information about the learners' progress. We present an analysis of teachers' observation practices during a recent session involving a secondary school group of learners in a practical work session in electricity. Through the experiment's results, we aim to show how teachers' diagnosis of learners' domain knowledge benefits both from the general principles of the FORMID-Authoring tool and from the particular features of FORMID-Observer. Keywords: Supervision, Distant monitoring, Semantic data visualization, Teacher interface, Tutoring system, Virtual learning environment, Simulation, Computer supported learning, Distance learning.
1 Motivation and Background

There is an increasing use of e-learning systems providing teaching material via the Web. What happens in a virtual classroom where learning activities are automatically managed by e-learning environments? What kind of awareness does a teacher need to understand learners' progression throughout the learning process? These questions are not new [1, 2], but the answers vary depending on the learning situation. Collecting the pertinent data to know exactly what happens during a learning session depends on the learning context. Some widely used course management systems like WebCT/Blackboard, Moodle or Dokeos [3] usually provide general information about the students' activity. These data are content independent; they give the teacher an idea of the effort made by each student, but do not really inform about the quality of learning. Several research projects have developed tools that allow teachers to keep track of their learners' interactions with the environment [4, 5, 6, 7], and of their learners' communication activity in distance learning [8, 9, 10, 11]. Learning data can be collected by various means such as computer-interaction traces, videos, voice

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 602–608, 2009. © Springer-Verlag Berlin Heidelberg 2009
Supervising Distant Simulation-Based Practical Work
603
recordings, etc. The size of the collected data is also a crucial factor of awareness. However, making sense of these data remains the most difficult task. Our work is centered on learning situations based on formalized computer-readable learning scenarios [12]. The FORMID environment [13, 14], which we have designed, developed and continuously improved, allows setting up practical work sessions engaging a group of learners to interact individually with a simulation or a micro-world. Our approach consists of tracking very fine-grained information about learners’ simulation-based activity, and showing the gathered data synchronously to the teacher. A specific interface called FORMID-Observer aims to make teachers aware of learners’ progression throughout a virtual practical work session, with an underlying approach very close to [15]. In this paper, we address the question of FORMID-Observer’s utility. In the context of a typical FORMID-managed practical work in electricity involving secondary school students, we investigate whether (1) teachers can understand what distant learners are doing; and (2) teachers can perceive which student or sub-group of students fails or succeeds in a particular exercise, and why, even though such observations are not trivial [2].
2 Theoretical Aspects about Monitoring a Learning Session Monitoring learner progress in the e-learning context requires a consideration of the mental representation which a teacher generates for the purpose of diagnosing learner development. The cognitive process involved in the perception of on-screen data is intimately connected with the monitoring of learning. The monitoring activity may therefore be conceived as the cognitive mechanisms determining correlations between data drawn from the system and data based on teachers’ acquired knowledge. The meaning of the situation is therefore not established a priori, but is rather constructed from the lowest-level data interacting with a variety of cognitive processes. Monitoring a learning session does not merely consist in generating a representation of available on-screen data, but also involves forming a representation of what the on-screen information signifies in terms of learning [16]. In processing surface information, a teacher gradually forms several representations that vary in terms of duration and richness [17]. Except in exceptional circumstances [18], the surface code has a very brief duration and is quickly superseded by a level of representation reflecting meaning rather than the available surface data [19]. In monitoring a learning session, teachers establish connections between different types of on-screen data according to their acquired knowledge of class supervision [20, 21]. The teacher is therefore engaged in several distinct processes. An initial, coherent internal integration of the data provided and selected by the teacher on the interface helps to define connections between different signifying surface elements. Though important, this initial integration is not sufficient. A second integration appeals to long-term memorized knowledge of the developmental factors required for learning in a particular area.
The teacher supplements the available data with information drawn from his/her experience, to form an improved mental representation of learner achievements and difficulties. A third integration occurs between indications drawn from the system and the activated knowledge, known as a situational or mental model [22, 23].
604
V. Guéraud et al.
The issues relating to a teacher’s use of FORMID-Observer encompass the following: how effectively is the information shown by FORMID-Observer integrated, and what kinds of situational model are devised by teachers?
3 FORMID-Observer: A Flexible Environment for Teachers The FORMID environment (Fig. 1) is composed of three distinct tools: (1) FORMID-Author for describing computer-supported practical work sessions, (2) FORMID-Learner for engaging learners in such sessions and storing learning traces in a database, (3) FORMID-Observer, on which this paper focuses, for making teachers aware of the class progression throughout a session, thanks to the database. We are dealing with individual active learning situations in which the learner must solve a problem while interacting with a simulation. In FORMID sessions problems are called exercises. Each exercise is described by a pedagogical scenario structured in steps; each step includes: − the goal to be achieved by each learner on the simulation: i.e. a condition on the final simulation state to be evaluated (as correct or not) and traced each time the learner requests an end-of-step validation; − a set of specific situations on the simulation, each revealing a typical error (or, conversely, a good behaviour/reasoning) to be automatically detected and traced during the step: i.e. a condition on the simulation state whose value has to be observed by frequent automatic inspections.
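This step structure can be sketched as a small data structure. The following is a hypothetical illustration in Python, assuming the simulation exposes its state as a dictionary; the names and fields are our own, not the actual FORMID authoring format:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

SimulationState = Dict[str, object]  # e.g. {"switch": "on", "lamp": "lit"}

@dataclass
class Situation:
    """A condition on the simulation state, polled during a step."""
    label: str
    condition: Callable[[SimulationState], bool]
    kind: str  # "error" or "good_behaviour"

@dataclass
class Step:
    name: str
    # Goal: evaluated each time the learner requests an end-of-step validation.
    goal: Callable[[SimulationState], bool]
    # Specific situations: detected by frequent automatic inspections.
    situations: List[Situation] = field(default_factory=list)

@dataclass
class Exercise:
    title: str
    steps: List[Step]

# One step of a hypothetical electricity exercise:
step = Step(
    name="Light the lamp without burning it out",
    goal=lambda s: s.get("lamp") == "lit",
    situations=[
        Situation("lamp burnt out",
                  lambda s: s.get("lamp") == "burnt",
                  kind="error"),
    ],
)
exercise = Exercise("Series circuit", steps=[step])
```

At run time, an execution tool would evaluate `goal` on each validation request and poll each situation’s `condition` against the current simulation state.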
Fig. 1. FORMID Environment (diagram: FORMID-Author produces the scenario that pilots the simulation; FORMID-Learner, with which learners interact, stores traces in a learners’ database; FORMID-Observer interprets and displays these traces to tutors)
Functionalities to assist the distant monitoring of learners involved in active learning situations are classified by [24] into perception, support and management of the monitoring activity. Our work focuses here on the teachers’ perception of learners’ activities. To favour a good perception, FORMID provides teachers with the following features: − When defining a scenario, the teacher specifies what the execution tool will control during the learner’s progression, according to his pedagogical approach, the targeted learner group and his knowledge of past teaching practices; these high-level indicators are later displayed by FORMID-Observer; − FORMID-Observer provides the teacher with three levels of perception [13, 14], so he can choose which view he needs to achieve his supervision at a given time, according to his monitoring strategy; − At each level, for each displayed event, detailed information on the related simulation state can be displayed, providing the teacher with additional insight into what is good or bad in the simulation state when the event occurred; − Semantic learning traces resulting from the execution are based on (1) each learner’s successive end-of-step validation requests and their value (correct or not); (2) each learner’s specific situations detected during the step and their value (error or good behaviour/reasoning). These elements are automatically registered at run-time by FORMID-Learner and encapsulated with learner information (who) and a time-stamp. Based on the scenario which structures a session, these learning traces are synchronously displayed in FORMID-Observer.
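As a sketch, such an encapsulated trace record might look as follows. This is a hypothetical illustration in Python; the field names are our own, not the actual FORMID trace schema:

```python
import json
import time

def make_trace_event(learner_id, step, kind, label, value, timestamp=None):
    """Encapsulate a detected event with learner information (who) and a
    time-stamp. `kind` is "validation" (value: "correct"/"incorrect")
    or "situation" (value: "error"/"good_behaviour")."""
    return {
        "learner": learner_id,
        "step": step,
        "kind": kind,
        "label": label,
        "value": value,
        "timestamp": timestamp if timestamp is not None else time.time(),
    }

# An end-of-step validation request judged incorrect, 120.5 s into the session:
event = make_trace_event("learner-07", "step-2", "validation",
                         "end-of-step request", "incorrect", timestamp=120.5)
print(json.dumps(event))
```

An observer tool could then group such records by the scenario’s exercise/step structure in order to display them synchronously.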
4 Teachers’ Supervision with FORMID: Analysis and Results We have been working for some time with four secondary school physics teachers engaged with the French National Institute of Research in Pedagogy (I.N.R.P.). The experiment we discuss here involved these teachers (2 males, 2 females, average age: 52) observing a practical work in electricity intended for learners at the same school level as those they usually teach. The preparation steps of our experiment were as follows. (1) Session Design: We helped the teachers design the scenarios related to the learning session they wanted to set up and observe; the resulting FORMID session was structured into 3 exercises using electric circuit simulations and its duration was estimated at 90 minutes. (2) Session Execution: fifteen secondary school learners, unknown to the teachers taking part in the experiment, ran the FORMID session. FORMID-Learner automatically registered traces of their individual learning processes as explained in Section 3. (3) Session Observation: First, a 20-minute training course introduced the teachers to the spirit and features of FORMID-Observer. After that, the teachers individually “replayed” the 90-minute session with FORMID-Observer. They were asked to comment aloud on all their actions and thoughts when using FORMID-Observer. The technical means we used for our experiment were: (1) Observation tracing: all of each teacher’s interactions with FORMID-Observer were traced and automatically registered (see Fig. 1); (2) Verbal comments recording (verbatim): we used an external system to record the verbal comments expressed by teachers
when interacting with FORMID-Observer. Teachers were advised to be as expressive as they could be about their own analysis of learners’ progress. The question we tried to answer was: “How effectively is the information shown by FORMID-Observer integrated, and what kinds of situational model are devised by teachers?” To access the teachers’ mental representations of the learners’ actions, behaviours and knowledge, we analysed their verbatim comments recorded during the experiment. We observed that using FORMID-Observer allowed teachers to describe a learner’s or a group’s doings (79%) /“Dubois has replaced the lamp, but has forgotten to set the switch off; he didn’t modify the resistance again, so the lamp burnt out again!”/. Other verbatim comments are commentaries about a learner’s or a group’s work and concern the method employed to solve the problem. They show the representations that a teacher has of a learner’s or a group’s doings (38%) /“Among those who go forward by guesswork, there are those who have a good intuition: they see how to modify the resistance in order to find the right value, and there are those who do nonsense! It’s easy to see that”/. The teachers also expressed the related domain knowledge they diagnosed as being mobilized by a learner or a group of learners solving the problem (31%) /“They have a wrong reasoning about the tension: they are thinking in the same way as for the previous circuit. On the other hand they have a good reasoning about the intensity.”/. Note that the sum of the percentages exceeds 100% because some verbatim comments relate to more than one of the previous categories. Some verbatim comments are both a description of doings and a comment on the method /“OK, basically I see that they have all already first set the switch on to see how the basic circuit works.”/; others are both a comment and a diagnosis /“This one has worked hard and now he knows how to solve the problem for the other series circuits.”/.
They may also belong to all three categories /“Perrin is modifying and directly setting the resistance to 30 ohms. This is an interesting attempt. Missed! He validated too early, but it was because he has understood how it works; he didn’t do any calculation but he has understood. Congratulations Perrin!”/. The verbatim analysis seems to show that using FORMID-Observer allowed the involved teachers to describe what a learner or the group was doing, which method was employed to solve the problem, and also the related domain knowledge they diagnosed as being mobilized by the learners. The analysis also seems to indicate that the teachers were proceeding in steps: (1) description, (2) comment and (3) diagnosis.
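Because each verbatim comment can carry several labels, the category percentages legitimately sum to more than 100%. A toy computation (on an invented mini-dataset, not the study’s corpus) illustrates this multi-label counting:

```python
from collections import Counter

# Four invented verbatim comments, each tagged with one or more categories.
verbatims = [
    {"description"},
    {"description", "method"},
    {"method", "diagnosis"},
    {"description", "method", "diagnosis"},
]

# Count how many verbatims carry each label, then express as percentages.
counts = Counter(label for v in verbatims for label in v)
percentages = {label: 100 * n / len(verbatims) for label, n in counts.items()}

print(percentages)                 # description: 75.0, method: 75.0, diagnosis: 50.0
print(sum(percentages.values()))   # 200.0 — exceeds 100% due to multi-labelling
```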
5 Conclusion and Perspectives This paper presents an observation tool called FORMID-Observer, which is part of a complete Web-based environment for authoring [25], running and synchronously or asynchronously observing distant practical work learning sessions. A session consists of a set of exercises in which learners individually try to solve problems by interacting with a simulation. Indicators of the learning process depend on simulation states, which are continuously evaluated during a session and chosen beforehand at the design stage. The combination of the scenario structuring a session and this kind of indicator makes sense of the recorded learning traces. Thus the
four teachers involved in this experiment could be aware of how learners tried to solve the problems and what domain knowledge they were mobilizing. Another study [26], based on observation tracing and on an a priori interview, shows that their use of the three FORMID-Observer levels was clearly dependent on the supervision strategy they claimed to use without an observation tool. Other studies are in progress. Beyond their interesting results, these case studies with four teachers allowed us to elaborate and finalize an appropriate methodology for studying, tracing and analysing the use of FORMID-Observer. A larger experiment can now be carried out to generalize these results.
References [1] Zinn, C., Scheuer, O.: Getting to know your student in distance-learning contexts. In: Nejdl, W., Tochtermann, K. (eds.) EC-TEL 2006. LNCS, vol. 4227, pp. 437–451. Springer, Heidelberg (2006) [2] Hijón-Neira, R., Velázquez-Iturbide, J.Á.: How to Improve Assessment of Learning and Performance through Interactive Visualization. In: ICALT 2008 proceedings, pp. 472–476 (2008) [3] Goldberg, M.W.: Student participation and progress tracking for Web-Based Courses using WebCT. In: Proc. of the 2nd North American Web Conference, Fredericton, NB, Canada (1996) [4] Mazza, R., Dimitrova, V.: CourseVis: Externalising Student Information to Facilitate Instructors in Distance Learning. In: Proc. AIED 2003, pp. 279–286. IOS Press, Amsterdam (2003) [5] Scheuer, O., Zinn, C.: How did the e-learning session go? The Student Inspector. In: Proceedings of AIED 2007, Marina Del Rey, Ca., USA, pp. 487–494. IOS Press, Amsterdam (2007) [6] Razzaq, L., Heffernan, N., Feng, M., Pardos, Z.: Developing Fine-Grained Transfer Models in the ASSISTment System. Journal of Technology, Instruction, Cognition, and Learning 5(3), 289–304 (2007) [7] Feng, M., Heffernan, N.: Towards Live Informing and Automatic Analyzing of Student Learning: Reporting in ASSISTment System. Journal of Interactive Learning Research 18(2), 207–230 (2007) [8] Bratitsis, T., Dimitracopoulou, A.: Data Recording and Usage Interaction Analysis in Asynchronous Discussions: The D.I.A.S System. In: AIED 2005 proceedings, pp. 17–24 (2005) [9] Harrer, A., Ziebarth, S., Giemza, A., Hoppe, U.: A framework to support monitoring and moderation of e-discussions with heterogeneous discussion tools. In: ICALT 2008 proceedings, pp. 41–45 (2008) [10] May, M., George, S., Prévôt, P.: Students’ Tracking Data: An Approach for Efficiently Tracking Computer Mediated Communications in Distance Learning. In: ICALT 2008, pp. 783–787 (2008) [11] Donath, J., Karahalios, K., Viégas, F.: Visualizing conversation.
Journal of Computer Mediated Communication 4(4) (1999) [12] Adam, J.M., Lejeune, A., Michelet, S., David, J.P., Martel, C.: Setting up on-line learning experiments: the LearningLab platform. In: Proceedings of ICALT 2008, pp. 761–763. IEEE Computer Society, Los Alamitos (2008)
[13] Guéraud, V., Cagnat, J.M.: Automatic semantic activity monitoring of distance learners guided by pedagogical scenarios. In: Nejdl, W., Tochtermann, K. (eds.) EC-TEL 2006. LNCS, vol. 4227, pp. 476–481. Springer, Heidelberg (2006) [14] Guéraud, V., Lejeune, A., Adam, J.M., Dubois, M., Mandran, N.: Flexible Environment for Supervising Simulation-Based Learning Situations. In: AIED 2009, Brighton (UK) (July 2009) [15] Ben-Naim, D., Marcus, N., Bain, M.: Visualization and Analysis of Student Interactions in an Adaptive Exploratory Learning Environment. In: International Workshop on Intelligent Support for Exploratory Environments, EC-TEL 2008, CEUR-WS.org, vol. 381 (2008) [16] Graesser, A.C., Millis, K.K., Zwaan, R.A.: Discourse comprehension. Annual Review of Psychology 48, 163–189 (1997) [17] Noordman, L.G.M., Vonk, W.: Memory-based processing in understanding causal information. Discourse Processes 28, 191–221 (1998) [18] Kintsch, W., Bates, E.: Recognition memory for statements from a classroom lecture. Journal of Experimental Psychology: Human Learning and Memory 3, 150–159 (1977) [19] Sachs, J.S.: Recognition memory for syntactic and semantic aspects of connected discourse. Perception and Psychophysics 2, 437–442 (1967) [20] Frank, S.L., Koppen, M., Noordman, L.G.M., Vonk, W.: Modeling knowledge-based inferences in story comprehension. Cognitive Science 27, 875–910 (2003) [21] Frank, S.L., Koppen, M., Noordman, L.G.M., Vonk, W.: Modeling Multiple Levels of Text Representation. In: Schmalhofer, F., Perfetti, C.A. (eds.) Higher level language processes in the brain: inference and comprehension processes, pp. 133–157. Erlbaum, Mahwah (2005) [22] Bransford, J.D., Barclay, J.R., Franks, J.J.: Sentence memory: a constructive versus interpretive approach. Cognitive Psychology 3, 193–209 (1972) [23] Glenberg, A.M., Meyer, M., Lindem, K.: Mental models contribute to foregrounding during text comprehension.
Journal of Memory and Language 26, 69–83 (1987) [24] Després, C.: Synchronous Tutoring in Distance Learning. In: Hoppe, U., Verdejo, F., Kay, J. (eds.) Proc. AIED 2003, pp. 271–278. IOS Press, Amsterdam (2003) [25] Cortés, G., Guéraud, V.: Experimentation of an authoring tool for pedagogical simulations. In: Proceedings of International Conference CALISCE 1998, Göteborg, Sweden, pp. 39–44 (1998) [26] Guéraud, V., Adam, J.-M., Lejeune, A., Dubois, M., Mandran, N.: Teachers need support too: FORMID-Observer, a flexible environment for supervising simulation-based learning situations. In: Workshop ISEE, AIED 2009, Brighton (UK) (July 2009)
Designing Failure to Encourage Success: Productive Failure in a Multi-user Virtual Environment to Solve Complex Problems Shannon Kennedy-Clark Centre for Research on Computer Supported Learning and Cognition, Faculty of Education and Social Work, University of Sydney, Sydney, Australia
[email protected] http://coco.edfac.usyd.edu.au/
Abstract. The purpose of this research project is to gain an understanding of the initial stage of a productive failure treatment. The research focuses on how learners solve complex or ill-defined problems in Virtual Singapura, a multi-user virtual environment. The research uses a mixed-method approach that employs conversation analysis, questionnaires and pre-, mid- and post-tests. Complex problems, by their very nature, are difficult for learners to connect with, and this project will focus on the initial cycle of a productive failure treatment in order to develop a series of design considerations that teachers can implement in an immersive learning environment to help students develop the strategies necessary to engage with complex problems across domains of knowledge. The project aims to inform theory on productive failure, learner processes and learning in immersive environments. Keywords: Productive failure, education, multi-user virtual environments, complex problems.
1 Introduction The ability to solve a diverse range of complex problems is a requirement of many learning and workplace situations. However, learners are often taught how to solve a complex problem through the use of highly structured or scaffolded activities and are not provided with opportunities to engage in processes such as defining problems, creating hypotheses and testing these hypotheses [1]. The restriction created by the use of these structures, and the lack of opportunities to engage in ill-defined problem-solving activities, can impair a learner’s development of a complex problem-solving toolkit, as the scaffolds constrain the learner to a narrow problem-solving scope. A robust body of literature indicates that students have difficulties both visualising and solving complex problems in domains of inquiry (e.g. science, physics and history). Accordingly, there have been numerous computer-supported learning projects aimed at enabling students to grasp often diffuse and complicated concepts such as weather patterns and astronomy [2-6]. U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 609–614, 2009. © Springer-Verlag Berlin Heidelberg 2009
However, many of these interventions have focused on the use of highly scaffolded or structured treatments that guide a learner through a series of activities. What is being proposed in this paper is that a move away from scaffolding in the initial encounter with a problem may result in better learning. Research indicates that making mistakes and failing to arrive at the correct answer can encourage learners to reflect on their learning process and to access domains of knowledge and previous experiences, thus encouraging a deeper level of engagement and critical thinking [7-10]. This paper will present the benefits of using Virtual Singapura (VS), a multi-user virtual environment (MUVE), as a platform for learners to engage with complex or ill-defined problems. The research is part of a larger research project, the first of its kind in Australia, and will focus on the initial cycle of a Productive Failure (PF) treatment in a MUVE.
2 Productive Failure – Reaching an Impasse PF is a learning strategy designed to enhance or facilitate the transfer of knowledge from one domain of activity to another. Recent research utilising PF in complex environments has resulted in significant findings that support the use of a PF treatment [see work by Kapur and Jacobson, Pathak et al., Jacobson et al.]. As a simple analogy, PF can be viewed as an hourglass wherein students are able to explore an ill-defined problem domain with no structure in the initial activity, before being exposed to a structured activity and then re-exposed to an unstructured activity (see Fig. 1 below). This presents students with an opportunity to reach an impasse in the activity. Instructors often shy away from allowing students to reach an impasse; however, research by VanLehn et al. [10] indicates that allowing students to reach an impasse may encourage them to think more critically about a situation and that reaching an impasse can encourage learning. As Kolodner [9] indicates, we may be novices in one domain, but we can bring a multitude of past experiences into this domain that we can apply to attempt to solve the new problem. Kapur [7, 11] further argues that through using unstructured learning activities a student may develop more flexible and adaptive learning in the long run based on their initial failures. Hence, the underlying premise of PF suggests that the lack of structure in the initial activity is the key to successful problem solving in subsequent structured and unstructured activities.
3 Multi-user Virtual Environments as Learning Tools MUVEs are becoming widely recognised for their benefits as learning environments. A MUVE is a persistent virtual environment that is usually accessed online via a downloadable software platform such as Active Worlds, or hosted entirely online. There is a developing body of literature around MUVEs, and with this comes a crystallisation of the key criteria that distinguish a MUVE from other forms of online learning activities. The five main criteria are a) an avatar that represents the participant, b) a 3D virtual environment, c) the ability to interact with artefacts and agents,
d) participants can communicate with other participants and, in some instances, with intelligent agents, and e) a ‘real world’ context that is created to provide an authentic experience that a student may not be able to encounter in a classroom environment [12-20]. MUVEs such as Quest Atlantis [21] and VS [22] provide students with an opportunity to visualise and engage with complex learning systems in a setting that is motivating and engaging. Nevertheless, all games and educational MUVEs have limitations, and educators need to be aware of these limitations in order to maximise the benefits of the experience for the learner. In this research project, the cycles of feedback and iteration may address some of the pedagogical and design issues that concern VS and other MUVE environments.
4 Virtual Singapura – Solving Complex Problems in a MUVE VS was developed in Singapore as part of a collaborative project between researchers at the Singapore Learning Sciences Laboratory (National Institute of Education) and faculty in Computer Engineering and in Art, Design, and Media at Nanyang Technological University. The story or scenario of VS lends itself to a trial of PF, and it presents a rich problem-solving environment. VS is set in 19th century Singapore and is based on historical information about the cholera epidemic of 1873–74. The students are transported back in time to help the Governor of Singapore, Sir Andrew Clarke, and the citizens of the city try to solve the problem of what is causing the illnesses. Students are encouraged to develop appropriate scientific inquiry skills such as defining the scope of the problem, identifying research variables, establishing and testing hypotheses, and presenting findings. In order to create an authentic learning experience, 19th century artefacts about Singapore have been included in the environment. These artefacts include historical 3D buildings, agents that represent the different ethnic groups in Singapore at the time, such as Chinese, Malay, Indian and westerners, and historic period photographs. The PF treatment activities will be adapted to suit the specific learning outcomes of the Australian NSW High School Science Curriculum.
5 Research by Design The research is part of a larger research project that uses a Design-Based Research (DBR) framework. While DBR is often associated with the learning sciences, a field known for its utilisation of technology in education, the focus of a DBR approach is on pedagogy and learning theories rather than on the development of technological tools and artefacts. Whilst technology is often an important feature of the research, the learner is still the focus. DBR is often seen as a series of approaches, rather than a single approach, aimed at the development of new theories and practices in naturalistic settings [23, 24].
6 Participants The participants in this study will be drawn from an Australian high school. The participants will be studying science in years 7–9 (12–15 years of age) to develop scientific inquiry skills; the trial is scheduled for December 2009. Pilot studies will be held in August 2009 with pre-service teachers undertaking a Master’s degree at the University of Sydney.
7 Data Collection A mixed-method approach to data collection will be used. The intervention will have three phases. The first phase of the intervention will expose students to an unstructured activity. The second phase will expose participants to a structured activity. The third phase will expose participants to another unstructured activity. Pre, mid and post-tests will be used. Verbal communication analysis of the initial unstructured activity will be used and the data will be coded on two levels – firstly, for convergence of ideas and secondly, for linguistic features [25, 26]. Screen capture software will be used to ascertain what aspects of the environment the learners are focusing on [25, 27, 28]. A broad analysis of this data can express whether students are claiming, predicting, eliciting, creating and acting and will be coded to see how students collaborate when trying to solve or engage with the problem domain.
8 Final Considerations One final point to reflect on is that the aim of this research is not to produce a definitive theory on PF, but rather to complement and add to the small body of work that is currently available, and to hopefully provide further data to substantiate the rationale underpinning the use of a PF strategy in complex problem-solving activities. Current research projects indicate that there is unquestionable potential in the use of MUVEs in learning environments; this research into a PF treatment in VS will add to this growing body of work, and may provide another avenue which instructors can use to enable learners to develop problem-solving strategies that move beyond the bounds of a traditional classroom environment.
References 1. Zydney, J.M.: Eighth-Grade Students Defining Complex Problems: The Effectiveness of Scaffolding in a Multimedia Program. Journal of Educational Multimedia and Hypermedia 14(1), 61–90 (2005) 2. Bodemer, D., et al.: Supporting learning with interactive multimedia through active integration of representations. Instructional Science 33, 73–95 (2005) 3. Barnett, M., et al.: Using Virtual Reality Computer Models to Support Student Understanding of Astronomical Concepts. The Journal of Computers in Mathematics and Science Teaching 24(4), 333–356 (2005)
4. Kim, P., Olaciregui, C.: The effects of a concept map-based information display in an electronic portfolio system on information processing and retention in a fifth-grade science class covering the Earth’s atmosphere. British Journal of Educational Technology 39(4), 700–714 (2008) 5. Lowe, R.: Interrogation of a dynamic visualisation during learning. Journal of Learning and Instruction 14, 257–274 (2004) 6. Puntambekar, S., Goldstein, J.: Effect of Visual Representation of the Conceptual Structure of the Domain on Science Learning and Navigation in a Hypertext Environment. Journal of Educational Multimedia and Hypermedia 16(4), 429–459 (2007) 7. Kapur, M.: Productive Failure. Cognition and Instruction 26(3), 379–424 (2008) 8. Hmelo, C.E., Holton, D.L., Kolodner, J.L.: Designing to Learn about Complex Systems. The Journal of the Learning Sciences 9(3), 247–298 (2000) 9. Kolodner, J.L.: Case-Based Reasoning. In: Sawyer, K. (ed.) The Cambridge Handbook of the Learning Sciences, pp. 225–242. Cambridge University Press, Cambridge (2006) 10. VanLehn, K., et al.: Why Do Only Some Events Cause Learning During Human Tutoring? Cognition and Instruction 21(3), 209–249 (2003) 11. Kapur, M.: Productive Failure. In: International Conference of the Learning Sciences, Bloomington, Indiana (2006) 12. Dickey, M.D.: 3D Virtual Worlds: An Emerging Technology For Traditional And Distance Learning. In: Convergence of Learning and Technology, Ohio Learning Network, Easton (2003) 13. Squire, K.D., et al.: Electromagnetism Supercharged! Learning Physics with digital simulation games. In: International Conference of the Learning Sciences, Los Angeles, CA (2004) 14. Dieterle, E., Clarke, J., Pagani, M. (eds.): Multi-User Virtual Environments for Teaching and Learning. Encyclopedia of Multimedia. Idea Group, Inc., Hershey (in press) 15. Ketelhut, D.J., et al.: A multi-user virtual environment for building higher order inquiry skills in science.
American Educational Research Association, San Francisco (2006) 16. Nelson, B.: Exploring the Use of Individual, Reflective Guidance In an Educational Multi-User Environment. Journal of Science Education and Technology 16(1), 83–97 (2007) 17. Rieber, L.P.: Seriously Considering Play: Designing interactive learning environments based on the blending of microworlds, simulations, and games. Educational Technology, Research and Development 44(2), 43–58 (1996) 18. Shaffer, D.W., Gee, J.P.: Epistemic Games as education for innovation. In: Underwood, J.D.M., Dockrell, J. (eds.) Learning through Digital Technologies. British Journal of Educational Psychology Monograph Series, Leicester, pp. 71–82 (2007) 19. Steinkuehler, C.A.: Massively Multiplayer Online Video Gaming as Participation in a Discourse. Mind, Culture, and Activity 13(1), 38–52 (2006) 20. Taylor, T.L.: Multiple Pleasures: Women and Online Gaming. Convergence: The International Journal of Research into New Technologies 9, 21–46 (2003) 21. Barab, S.A., et al.: Making Learning Fun: Quest Atlantis, A Game Without Guns. Educational Technology, Research and Development 53(1), 86–107 (2005) 22. Jacobson, M.J., et al.: An Intelligent Agent Augmented Multi-User Virtual Environment for Learning Science Inquiry: Preliminary Research Findings. In: 2008 American Educational Research Association Conference, New York (2008) 23. Barab, S.A., Squire, K.: Design-Based Research: Putting a Stake in the Ground. Journal of the Learning Sciences 13(1), 1–14 (2004) 24. The Design-Based Research Collective: Design-based research: An emerging paradigm for educational inquiry. Educational Researcher 32(1), 5–8 (2003)
614
S. Kennedy-Clark
25. Sawyer, K.: Analyzing Collaborative Discourse. In: Sawyer, K. (ed.) The Cambridge Handbook of the Learning Sciences, pp. 187–204. Cambridge University Press, Cambridge (2006) 26. Kapur, M., Kinzer, C.K.: Productive Failure in CSCL Groups. International Journal of Computer-Supported Learning 4(1), 21–46 (2009) 27. Mazur, J.: Conversation Analysis for Educational Technologists: Theoretical and Methodological issues for Researching the Structures. In: Jonassen, D. (ed.) Processes and Meaning of On-line Talk, in Handbook of Research for Educational Communications and Technology. MacMillian, New York (2004) 28. Mazur, J., Lio, C.: Learner Articulation in an Immersive Visualization Environment. In: Conference on Human Factors in Computing Systems, Vienna, Austria. ACM, New York (2004)
Revisions of the Split-Attention Effect

Athanasios Mazarakis

Forschungszentrum Informatik, Haid-und-Neu-Str. 10-14, 76131 Karlsruhe, Germany
[email protected]
Abstract. For learning with multimedia content, the split-attention effect postulates that learning outcomes improve the closer text and picture elements are placed together. This article shows that an artificially generated relationship between texts and pictures that are far apart (based on Palmer's new principles of grouping [1]) can produce learning outcomes that are at least equal. The negative impact of the spatial distance between text and picture elements can therefore be avoided in a different way. To this end, an online survey was conducted and the data of 869 subjects were evaluated with regard to their retention and transfer performance.

Keywords: multimedia learning, cognitive load theory, cognitive theory of multimedia learning, split-attention effect.
1 Introduction

There are two commonly used and very similar cognitive theories of learning with multimedia content: the Cognitive Load Theory [2] and the Cognitive Theory of Multimedia Learning [3]. However, both approaches have theoretical weaknesses when they try to handle effects that arise directly from the theories themselves. In this context, the split-attention effect is discussed further.
2 Background of the Used Theories

According to the Cognitive Load Theory of Sweller [5] there are three different so-called "loads": the intrinsic cognitive load, the extraneous cognitive load and the germane cognitive load. Together they add up to the total cognitive load. The extraneous cognitive load originates from an unadjusted design of the instructions, such as additional multimedia elements that divert the attention of the learner. The germane cognitive load, in contrast, is responsible for the construction and automation of schemata, which Sweller [5] regards as the ideal outcome of learning with multimedia content. For the construction and automation of schemata it is important to respect the limited capacity of working memory according to Baddeley [6].

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 615–620, 2009. © Springer-Verlag Berlin Heidelberg 2009
Finally, the intrinsic cognitive load arises from the natural complexity of the information that has to be learned. On the one hand, there are elements that can be learned independently of others and therefore cause only a low cognitive load; Sweller [7] calls this low-element interactivity material. On the other hand, there are elements that interact strongly with each other, called high-element interactivity material. Here a high cognitive load arises because the information has to be learned simultaneously in order for the learner to achieve a high level of understanding. Besides the Cognitive Load Theory presented above, the Cognitive Theory of Multimedia Learning by Mayer and Moreno [8] is the second prominent theory of learning in the multimedia field. This theory is very similar to the Cognitive Load Theory and is mentioned here for completeness.

2.1 The Split-Attention Effect

Ayres and Sweller [9] define split-attention as present when the learner has to divide his attention between different sources and simultaneously has to combine the contents of these sources mentally, e.g. text and picture on a computer screen. The resulting split-attention effect increases the cognitive load, especially the extraneous cognitive load. The usual solution to the problem according to Ayres et al. [9] is shown on the left side of Illustration 1.
Illustration 1. Integrated version (left) and separated version (right) of the material in Experiment 1 of Moreno and Mayer [10]
Illustration 1 shows the material of an often-replicated experiment on the Cognitive Theory of Multimedia Learning by Moreno and Mayer [10]. The left side shows the integrated display format: the corresponding text is placed in proximity to the graphic illustration it describes, which should benefit learning success. The right side shows the separated version, in which the descriptive text is placed at a remote distance at the lower edge of the screen. According to Moreno et al. [10], this obstructs learning success.

2.2 New Principles of Grouping

In order to find alternatives to the previous approach of spatial proximity, extensions from cognitive psychology are considered. Palmer [1] has introduced, amongst others, the grouping factors of common region and element connectedness, which have been confirmed by
Beck and Palmer [11] empirically. Due to restricted space, only the grouping factor common region is described in this article. According to Palmer [12], the grouping factor common region implies that, all else being equal, elements are perceived as a group if they lie within a connected, similarly coloured or uniformly structured area with the same enclosing contour and colour. By "all else being equal" Palmer [12] means that all other features are held constant or eliminated, the so-called "ceteris paribus rule". If this is not the case, the outcome can no longer be estimated, because the interactions involved are neither measurable nor controllable.
Illustration 2. Example from Palmer [12] for the grouping factor common region
An example of the grouping factor common region is shown in Illustration 2. It demonstrates that the proximity of the points is no longer decisive for the perceived grouping: although the points within one ellipse are farther apart than the two neighbouring points of two adjacent ellipses, the points are grouped by ellipse.

2.3 Formulation of the Question and Hypotheses

From the work of Mayer [4] as well as of Moreno et al. [10], the following conclusion is usually drawn: text and a graphic illustration should be grouped as closely as possible on the computer screen, because otherwise significant losses in learning performance result. This article argues, in contrast, that not only the proximity between the elements "text" and "picture" matters, but that an artificially created relationship between these elements also leads to at least equal learning success for the subjects. The following hypotheses are examined:

H1. The linked display format using the new principles of grouping according to Palmer [1] does not lead to lower retention and transfer performance than the integrated display format.

H2. Persons with little prior meteorological knowledge benefit significantly more, according to Mayer [13], than persons with high prior meteorological knowledge, and therefore produce more, and more creative, solutions.

H3. The animation without a descriptive text, serving as a control condition, performs significantly worse than all other test conditions; the animation is therefore not self-descriptive.
3 The Study

This section introduces the field study that was conducted: an online survey on the Internet in which the subjects had to solve retention and transfer tasks on the meteorological phenomenon "the creation of lightning". The study was divided into three subsequent phases. In phase one, the subjects first had to judge
about their own meteorological knowledge. This procedure is analogous to that of the Moreno et al. [10] experiment. In phase two, the subjects were randomly assigned to one of six experimental conditions, in each of which a three-minute animation about the creation of lightning was displayed. The experimental conditions either connected the split-attention effect with the new principles of grouping, or examined the split-attention effect itself. They differed in two characteristic features: on the one hand, the spatial proximity of the text to the corresponding animation, and on the other hand, the grouping principle used. The combination of these factors yielded the following six conditions:
• The integrated text condition with the text placed in spatial proximity (IT)
• The integrated text condition with common region (ITCR)
• The control condition without a descriptive text (CG)
• The separated text condition with text placed in spatial distance (ST)
• The separated text condition with common region (STCR)
• The separated text condition with element connectedness (STEC)
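The random assignment to these six conditions can be sketched as follows. This is an illustrative reconstruction, not the survey software actually used; the condition codes come from the list above, and the seed is only for reproducibility of the sketch.

```python
import random

# The six experimental conditions (codes as defined in the list above).
CONDITIONS = ["IT", "ITCR", "CG", "ST", "STCR", "STEC"]

def assign_condition(rng):
    """Assign one subject to a condition, uniformly at random."""
    return rng.choice(CONDITIONS)

# Example: assign 869 subjects, the size of the reported sample.
rng = random.Random(42)  # fixed seed only so this sketch is reproducible
assignments = [assign_condition(rng) for _ in range(869)]
counts = {c: assignments.count(c) for c in CONDITIONS}
print(counts)
```

With uniform assignment, each condition receives roughly 145 of the 869 subjects.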
Due to the length restrictions of this article, only two of the six illustrations are shown: STCR and IT. The English translation of the German text is in all cases: "Warmed moist air rises rapidly."
Illustration 3. Pictures from the animation in the experiment about the creation of lightning. The left side shows the integrated condition (IT) and the right side the separated condition with common region (STCR), with text in German.
In phase three, the subjects answered five open, time-constrained questions about the animation they had seen. The questions in full were: 1.) Please explain how lightning works. 2.) What could you do to decrease the intensity of lightning? 3.) Suppose you see clouds in the sky, but no lightning. Why not? 4.) What does air temperature have to do with lightning? 5.) What causes lightning? The first question was the retention question; questions 2 to 5 were the transfer questions. A point was awarded for every correct answer; false answers were not counted.
4 Results for Retention and Transfer Performance

The sample comprised 869 subjects, 452 of them male. The subjects were on average 25 years old (sd = 7), and 63% were students.
Table 1. F-values for the pairwise comparisons of the test conditions for transfer performance. Values printed in italics cannot be interpreted unequivocally due to the "ceteris paribus rule".
        IT    ITCR   CG         ST      STCR     STEC
IT      --    .01    9.43**     .74     .32      .24
ITCR          --     10.80***   .91     .42      .33
CG                   --         5.17*   9.82**   6.80**
ST                              --      .26      .13
STCR                                    --       .01
STEC                                             --
* p < 0.05; ** p < 0.01; *** p < 0.001.

Table 1 shows that transfer performance in the linked text conditions (ITCR, STCR and STEC) is not significantly worse than in the integrated text condition (IT); the first hypothesis is therefore accepted. The results for retention performance are the same, although they are not displayed due to page restrictions. The control group without descriptive text performed significantly worse on both measures. The third hypothesis is therefore accepted: the animation is not self-descriptive. The analysis of variance for the second hypothesis did not yield significant results, with FR(1,867) = 1.47, p < 0.3 and FT(1,867) = 1.30, p < 0.3 respectively; the null hypothesis of no significant differences between the two groups was retained.
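Each pairwise comparison of this kind amounts to a one-way ANOVA F-test over two groups: the between-group mean square divided by the within-group mean square. A minimal, stdlib-only sketch of the computation follows; the scores are made up for illustration and are not the study's data.

```python
from statistics import mean

def f_oneway(*groups):
    """One-way ANOVA F-statistic: between-group over within-group mean square."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    k = len(groups)              # number of groups
    n = len(all_scores)          # total number of observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical transfer scores for two conditions (NOT the study's data).
it = [3, 4, 2, 4, 3, 5, 4, 3]
cg = [1, 2, 1, 3, 2, 2, 1, 2]
F = f_oneway(it, cg)
print(round(F, 2))  # → 18.05
```

The resulting F would then be compared against the F(df_between, df_within) distribution to obtain the p-values reported in the table.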
5 Discussion and Outlook

The aim of this article was to test additional possible solutions for the split-attention effect empirically. The previously used approach of spatial proximity for knowledge acquisition and knowledge transfer with multimedia content was extended by Palmer's [1] new principles of grouping from cognitive psychology, specifically common region and element connectedness. The first hypothesis, regarding the equal value of the linked text conditions and the integrated text condition, was supported: an artificial connection of the elements text and picture did not lead to significantly worse results than a display of these elements in spatial proximity. Furthermore, the animation without a descriptive text was not self-descriptive; the third hypothesis was confirmed. The results of Mayer [13], which show that novices in particular benefit from integrated formats, could not be replicated in this study; the second hypothesis was therefore rejected.

Measured by sample size, this study is probably the largest in the context of research on the split-attention effect. The numbers of subjects in the 37 studies of Ginns' [14] meta-analysis on this effect were mostly in the two-digit range, and occasionally in the very low three-digit range. The results of the present study, as well as those of the related studies by Michas and Berry [15] and by Bodemer et al. [16], generally cast doubt on the commonly cited universal validity of the split-attention effect. It has to be stated, however, that the two studies mentioned did not aim to question the effect; they can only be interpreted in that direction through their failure to find it.
In conclusion, the split-attention effect cannot be regarded as universally replicable, and future research has to carve out the conditions relevant for the occurrence of this effect. The new principles of grouping, however, have successfully passed their debut in research on the Cognitive Load Theory, given the acceptance of the first hypothesis, and should be investigated more extensively and used more often in this context.
References

1. Palmer, S.E.: Vision science: Photons to phenomenology. MIT Press, Cambridge (1999)
2. Sweller, J., van Merriënboer, J.J.G., Paas, F.G.W.C.: Cognitive architecture and instructional design. Educational Psychology Review 10, 251–296 (1998)
3. Mayer, R.E.: Cognitive theory of multimedia learning. In: Mayer, R.E. (ed.) The Cambridge Handbook of Multimedia Learning, pp. 31–48. Cambridge University Press, New York (2005)
4. Mayer, R.E.: Multimedia Learning. Cambridge University Press, New York (2001)
5. Sweller, J.: Implications of cognitive load theory for multimedia learning. In: Mayer, R.E. (ed.) The Cambridge Handbook of Multimedia Learning, pp. 19–30. Cambridge University Press, New York (2005)
6. Baddeley, A.D.: Human memory: Theory and practice (Rev. ed.). Psychology Press, Hove (1997)
7. Sweller, J.: Evolution of human cognitive architecture. The Psychology of Learning and Motivation 43, 215–266 (2003)
8. Mayer, R.E., Moreno, R.: Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist 38, 43–52 (2003)
9. Ayres, P., Sweller, J.: The split-attention principle in multimedia learning. In: Mayer, R.E. (ed.) The Cambridge Handbook of Multimedia Learning, pp. 135–146. Cambridge University Press, New York (2005)
10. Moreno, R., Mayer, R.E.: Cognitive principles of multimedia learning: The role of modality and contiguity. Journal of Educational Psychology 91, 358–368 (1999)
11. Beck, D.M., Palmer, S.E.: Top-down influences on perceptual grouping. Journal of Experimental Psychology: Human Perception and Performance 28, 1071–1084 (2002)
12. Palmer, S.E.: Common region: A new principle of perceptual grouping. Cognitive Psychology 24, 436–447 (1992)
13. Mayer, R.E.: Multimedia Learning: Are we asking the right questions? Educational Psychologist 32, 1–19 (1997)
14. Ginns, P.: A meta-analysis of the spatial contiguity and the temporal contiguity effects. Learning and Instruction 16, 511–525 (2006)
15. Michas, I.C., Berry, D.C.: Learning a procedural task: Effectiveness of multimedia presentations. Applied Cognitive Psychology 14, 555–575 (2000)
16. Bodemer, D., Plötzner, R., Feuerlein, I., Spada, H.: The active integration of information during learning with dynamic and interactive visualisations. Learning and Instruction 14, 325–341 (2004)
Grid Service-Based Benchmarking Tool for Computer Architecture Courses

Carlos Alario-Hoyos, Eduardo Gómez-Sánchez, Miguel L. Bote-Lorenzo, Guillermo Vega-Gorgojo, and Juan I. Asensio-Pérez

School of Telecommunication Engineering, University of Valladolid, Camino Viejo del Cementerio s/n, 47011 Valladolid, Spain
{calahoy@gsic,edugom@tel,migbot@tel,guiveg@tel,juaase@tel}.uva.es
Abstract. Benchmarking for educational purposes in the context of computer science can be hindered by the small number and the homogeneity of the machines to be assessed, and by the inaccuracy of the benchmarks in representing specific workloads. Thus, this paper proposes a benchmarking tool developed within a service-oriented grid in order to allow students to benchmark multiple workloads on machines that may belong to several educational institutions. This tool has been validated in a real educational scenario within a course on Computer Architecture.

Keywords: Benchmarking, education, architecture, service-oriented grid.
1 Introduction
As part of their learning process, computer science students should develop skills related to the design and evaluation of computer systems. To achieve these learning objectives, the Association for Computing Machinery and the IEEE Computer Society state in their guidelines for Computing Curricula [1] that educators should challenge students with realistic scenarios, so that they can reflect on measurement techniques, as well as on the impact of computer organization on the performance of computer systems for specific workloads. Thus, benchmarking (i.e. the execution of pieces of software to obtain performance measures for standardized workloads) plays a significant role as a quantitative measurement approach. Indeed, several Computer Architecture courses (e.g. [2] or [3]) make use of benchmarking to illustrate basic principles such as the dependence of performance on the workload, or design driven to improve the performance/cost ratio. Though it is easy to include a benchmarking activity that supports the evaluation of one machine (for example the lab main server) with a couple of workloads in a Computer Architecture course, it is far more challenging and complex for educators to propose a scenario in which students must advise a realistic customer on acquiring a computer system suitable for its workload, for several reasons.
This work has been partially funded by the Spanish Ministry of Science and Innovation (TIN2008-03023) and the Autonomous Government of Castilla y Leon, Spain (VA106A08).
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 621–626, 2009. c Springer-Verlag Berlin Heidelberg 2009
First of all, the number of different machines available for benchmarking is often small in most educational institutions, thus limiting the educational interest of the activity and biasing the conclusions of the students. This is mostly because some machines can be too expensive, but it also frequently happens that the available computer systems are very similar in features because they were acquired simultaneously to benefit from vendor discounts. Moreover, existing machines are often outdated and might not represent an up-to-date realistic case study. Besides, the benchmarks available for these machines may not represent the intended workload, making the conclusions reached by the students unreliable.

In addition, benchmarking increases security concerns, since benchmarks are normally run locally on the computer systems, and thus students and educators have to connect remotely to these machines. This entails additional problems: the administrators' burden is increased, since they must create accounts or otherwise allow these connections, besides configuring machines and installing benchmarks; and students and educators have to handle a larger number of logins, passwords and commands for the remote connection and the execution of benchmarks on different machines, increasing their cognitive load. It may very well happen that students concentrate too much on the procedure of benchmarking, instead of devoting their efforts to planning the experiments or interpreting the results.

Many of these limitations could be overcome if there were a way for several academic units (departments, schools or whole universities) to share their machines in a secure environment, so that they could be used for benchmarking in addition to other processes. The pool of machines to be benchmarked would thus be much larger and more diverse, enriching the learning activity. The administrative burden could be shared among the different units' administrators.
Moreover, the complexity of handling logins, passwords or commands could be hidden behind a visual front-end. In this context, this paper proposes a Grid Service-Based Benchmarking Tool that builds on the service-oriented grid [4], which allows institutions to federate and share computational resources in a secure and controlled way. The tool has been designed and developed with a front-end to the grid that helps educators define an environment for the assessment of computer systems, and lets students execute benchmarks without having to care about establishing remote connections.

The approach of sharing computational resources for learning has been followed before in the literature. For example, the authors of [5] proposed a web portal that allows students and educators to run simulation tools on distributed computing resources, including some tools that might be useful in Computer Architecture courses, such as the DLXView or CacheSim5 simulators. Besides, education can be considered a relevant field for applying grid computing [6], and applications such as the collaborative network simulation environment in [7] or the online grid service-based laboratory in [8] have been developed. However, none of them uses service-oriented grid technology to share machines as a pool of resources for benchmarking, as proposed in this paper.
2 Grid Service-Based Benchmarking Tool
This section first justifies the use of the service-oriented grid to overcome the limitations introduced above. Then an overview of the Grid Service-Based Benchmarking Tool is given, leading towards a prototype.

2.1 Service-Oriented Grid to Overcome the Limitations
A computational grid is a large-scale infrastructure composed of heterogeneous resources that are shared by multiple administrative organizations [9]. Access to these resources may be granted to authorized members by the grid middleware. A service-oriented grid exposes these virtualized resources as services according to the OGSA (Open Grid Services Architecture) [10] and WSRF (Web Services Resource Framework) [11] specifications, which promote transparent access to the resources through a well-defined service interface. Thus, within a service-oriented grid, different, geographically distributed educational institutions could announce benchmarks that can be run on their machines for the benefit of their Computer Architecture courses. In addition, the service-oriented grid can decrease the administration burden by splitting the administration tasks, since the distributed members of the grid usually have their own administrators. A service-oriented grid also provides the infrastructure in charge of controlling access to a secure environment, for example through credential management or delegation mechanisms. Finally, the service-oriented grid can abstract low-level details about the resources, as they are exposed through a well-defined service interface. Therefore, users do not need to know how to communicate with remote machines for benchmarking, as this is done internally between services and resources.

2.2 Grid Service-Based Benchmarking Tool Architecture
As in any other service-oriented application, the design of the Grid Service-Based Benchmarking Tool implies splitting the functionality into a set of different services to maximize reusability when building other applications. Each of the identified services can be offered by one or more institutions participating in the grid, running them on their own local resources. The general architecture of the Grid Service-Based Benchmarking Tool, together with the set of identified services, is shown in Figure 1 and described next.

The benchmark service is a front-end that allows any institution to offer a set of machine/benchmark pairs. The administrator of the local institution, through the administration client, only needs to provide access for any authorized user to the machines on which the benchmarks run. The integration service allows educators, through their educator client, to select for their students a subset of the machine/benchmark pairs offered by different institutions and gather them as a collection of benchmarks. The index service supports the registration and discovery of resources and services; here, it is used by educators to find machine/benchmark pairs, or by students, through the student client, to find collections of benchmarks and subsequently execute them transparently. In addition, the credential repository service enables secure access to the tool through credentials.

Fig. 1. Grid Service-Based Benchmarking Tool architecture. There must be one index service and at least one benchmark service in the institutions offering machines. There must also be at least one credential repository service and at least one integration service.

2.3 Grid Service-Based Benchmarking Tool Prototype
A prototype of the Grid Service-Based Benchmarking Tool has been developed according to the WSRF standards and supported by the Globus Toolkit 4.0 middleware [12]. This prototype implements the following services: benchmark service, integration service, and index service. The first two were developed from scratch, while the last belongs to the middleware. The internal communication between the benchmark service and the machines it abstracts is based on SSH (Secure Shell). This is an advantage because the machines do not need to be configured to run a service execution environment, so there is no need to install the middleware and deploy services on them. Instead, SSH access to each machine only needs to be granted through the front-end service. The three clients (administration client, educator client and student client) have been implemented and distributed with Java Web Start, and are exposed to users through a GUI. As an example, Figure 2 shows screenshots of the Administration and Student Clients.
Fig. 2. Screenshots from the Grid Service-Based Benchmarking Tool prototype. a) Administration Client adding a new machine/benchmark pair (verdejo.lab2.tel.uva.es/Dhrystone); b) Student Client executing the benchmark Dhrystone on verdejo.lab2.tel.uva.es, and obtaining the results.
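Behind the Student Client screenshot, the benchmark service receives the benchmark's raw stdout over SSH and must turn it into a structured result for display. A hypothetical parser for Dhrystone-style output is sketched below; the paper does not describe the prototype's actual parsing code, and the sample output text is only roughly in the style Dhrystone prints.

```python
import re

# Hypothetical raw output, roughly in the style of Dhrystone 2.
RAW = """\
Microseconds for one run through Dhrystone:    0.3
Dhrystones per Second:                      3225806.5
"""

def parse_dhrystone(raw):
    """Extract the headline figures from Dhrystone-style output."""
    result = {}
    m = re.search(r"Dhrystones per Second:\s*([\d.]+)", raw)
    if m:
        result["dhrystones_per_second"] = float(m.group(1))
    m = re.search(r"Microseconds for one run through Dhrystone:\s*([\d.]+)", raw)
    if m:
        result["us_per_run"] = float(m.group(1))
    return result

print(parse_dhrystone(RAW))
```

Keeping the parsing on the service side is what lets the clients stay free of machine-specific details, as described above.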
3 Validation
Computer Architecture is a fourth-year course in the Degree in Telecommunication Engineering (University of Valladolid, Spain). One of the tasks of this course consists of students assessing and comparing the performance of several real machines through benchmarks, to determine which is the most suitable for a given customer workload. In this educational scenario, some experiences with the Grid Service-Based Benchmarking Tool prototype were carried out, using 36 machine/benchmark pairs from two institutions: the Computer Architecture lab and the GSIC (Intelligent & Cooperative Systems Group) research group. A questionnaire was voluntarily answered by 47 students after the experience, to detect general tendencies regarding the validity of the tool and to collect suggestions for its improvement. As a sample result, 95.6% of the students agreed or completely agreed with the ease of use of the tool, supporting their opinions with the reduction of their cognitive load. In addition, more than 90% agreed on the usefulness of the tool in the context of this course. Students cannot, however, express any opinion about the underlying architecture and technology, because they interact with a front-end client that allows the execution of benchmarks on machines regardless of where they are located or how the execution is done. Additionally, administrators expressed positive opinions about configuring the tool. For example, one remarked that he needed to invest less time than in previous years throughout the benchmarking activity, because students did not connect explicitly to the machines and thus no one changed passwords or deleted shared files, saving him the time needed to restore the original configuration. Besides, educators did not find problems when using the tool and even pointed out that students needed less assistance than in previous experiences.
4 Discussion
The Grid Service-Based Benchmarking Tool has proved to overcome the limitations found in typical educational scenarios in terms of available machines and benchmarks, by sharing distributed and more varied resources between several institutions. Furthermore, the administration burden is also shared among the local institutions, and even simplified with respect to provisioning access for authorized users. Additionally, the tool hides low-level details from students and educators through an execution front-end, reducing their cognitive load and letting them focus on benchmarking plans and results instead of the procedure. Nevertheless, some improvements can be made to the design of this tool. For example, an analysis service could be added to facilitate the interpretation of the results, computing statistics for the same benchmark under different loads on the machine to be assessed. A visualization service could then graphically compare these statistics in terms of response time or throughput. Additionally, the architecture could be complemented with an execution service that administrators would use to install and deploy benchmarks from a benchmark source repository holding several benchmarks compiled for different architectures.
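The suggested analysis service could start out as simple aggregation of repeated runs into summary statistics such as mean response time and throughput. The sketch below is illustrative only: the run times and the Whetstone entry are invented (only the verdejo machine name appears in the paper), and the metric names are assumptions.

```python
from statistics import mean, stdev

# Hypothetical response times (seconds) from repeated runs of a benchmark
# on a machine, as the proposed analysis service might collect them.
runs = {
    "verdejo/Dhrystone": [12.1, 11.8, 12.4, 12.0, 11.9],
    "verdejo/Whetstone": [20.5, 21.1, 20.8, 20.9, 21.0],
}

def summarize(times):
    """Mean response time, its spread, and throughput (runs per minute)."""
    m = mean(times)
    return {
        "mean_s": round(m, 2),
        "stdev_s": round(stdev(times), 2),
        "throughput_per_min": round(60.0 / m, 2),
    }

summary = {name: summarize(t) for name, t in runs.items()}
print(summary)
```

A visualization service, as proposed above, would then only need to plot these per-pair summaries side by side.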
References

1. ACM/IEEE: Computing Curricula 2007: Guidelines for Associate-Degree Transfer Curriculum in Computer Engineering (2007), http://www.acmtyc.org/reports/TYC_CEreport2007Final.pdf (last visited May 2009)
2. Martínez-Monés, A., et al.: Multiple Case Studies to Enhance Project-Based Learning in a Computer Architecture Course. IEEE Transactions on Education 48(3), 482–489 (2005)
3. Figueiredo, R.J., et al.: Network-Computer for Computer Architecture Education: a Progress Report. In: 2001 ASEE, Albuquerque, New Mexico (June 2001)
4. Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann Publishers, San Francisco (1998)
5. Kapadia, N.H., et al.: PUNCH: Web Portal for Running Tools. IEEE Micro 20(3), 38–47 (2000)
6. Fox, G.: Education and the enterprise with the grid. In: Berman, F., Fox, G.C., Hey, A.J.G. (eds.) Grid Computing: Making the Global Infrastructure a Reality, pp. 963–976. John Wiley and Sons, Chichester (2003)
7. Bote-Lorenzo, M.L., et al.: A Grid Service-Based Collaborative Network Simulation Environment for Computer Networks Education. In: ICALT 2007, Niigata, Japan (July 2007)
8. Bagnasco, A., et al.: Computational grids and online laboratories. In: ELeGI Conference, Workshops in Computing, BCS, Naples, Italy (2005)
9. Bote-Lorenzo, M.L., et al.: Grid Characteristics and Uses: A Grid Definition. In: Fernández Rivera, F., Bubak, M., Gómez Tato, A., Doallo, R. (eds.) Across Grids 2003. LNCS, vol. 2970, pp. 291–298. Springer, Heidelberg (2004)
10. Foster, I., et al.: The Open Grid Services Architecture, Version 1.0. Technical report, Global Grid Forum (2005)
11. Committee Draft 02: Web Services Resource Framework (WSRF) - Primer v1.2. Technical report, OASIS (2006)
12. The Globus Alliance, http://www.globus.org (last visited May 2009)
Supporting Virtual Reality in an Adaptive Web-Based Learning Environment Olga De Troyer, Frederic Kleinermann, Bram Pellens, and Ahmed Ewais Vrije Universiteit Brussel, WISE Research Group, Pleinlaan 2, 1050 Brussel, Belgium {Olga.DeTroyer,Frederic.Kleinermann,Bram.Pellens, Ahmed.Ewais}@vub.ac.be
Abstract. Virtual Reality (VR) is gaining in popularity and its added value for learning is being recognized. However, its richness in representation and manipulation possibilities may also become one of its weaknesses, as some learners may be overwhelmed and easily get lost in a virtual world. Therefore, being able to dynamically adapt the virtual world to the personal preferences, knowledge, skills and competences, learning goals, and the personal or social context of the learner becomes important. In this paper, we describe how an adaptive Web-based learning environment can be extended from a technological point of view to support VR. Keywords: Virtual Reality, E-Learning, Adaptive Learning Environment.
1 Introduction Virtual Reality (VR) provides ways to use 3D visualizations with which the user can interact. For some learning situations and topics, VR may be of great value because the physical counterpart is not available, or is too dangerous or too expensive. The most famous example is the flight simulator, which safely teaches pilots how to fly. Most of the time, when VR is considered for learning, it is offered as a stand-alone application (e.g., [1], [2], [3]) and there is usually no way to adapt it to personal preferences, prior knowledge, skills and competences, learning goals, or the personal or social context of the learner. Augmenting a virtual world with adaptive capabilities could have many advantages [4]. It may be more effective to guide learners through the world according to their background and learning goals, or to show them only the objects and behaviors that are relevant to their current knowledge. In this paper, we explain how VR can be supported in an adaptive Web-based learning environment developed in the context of GRAPPLE, an EU FP7 project. GRAPPLE is mainly oriented towards classical learning resources, but the use of other types of learning materials (VR and simulations) is also investigated. Here, we concentrate on how the learning environment is extended to support VR.
2 The GRAPPLE Architecture GRAPPLE aims at providing a Web-based adaptive learning environment. The two main components are the Authoring Tool and the Adaptive Engine (see figure 1). The U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 627–632, 2009. © Springer-Verlag Berlin Heidelberg 2009
Authoring Tool (Web-based) allows a course author to define a course at a conceptual level. This is done by means of a (graphical) Domain Model (DM) and a Conceptual Adaptation Model (CAM) [5]. The DM describes the concepts that should be considered in the course. The CAM expresses at a high level, using pre-defined pedagogical relations (such as the prerequisite relation), how the content and structure need to be adapted at runtime. The authoring tool can also be used (by a more experienced person) to define new pedagogical relations, called CRTs. Defining a CRT also implies defining the adaptive behavior associated with the relation. Different adaptive behaviors may be possible for the same pedagogical relation; e.g., the prerequisite relation (“A is prerequisite for B”) can be associated with an adaptive behavior that hides the dependent concept B as long as A has not been studied, or with an adaptive behavior that forces the learning sequence A then B. Next, the graphical CAM is translated into a format (called GAL, the Generic Adaptation Language) that the adaptive engine can handle. During this translation, the adaptive behaviors associated with the definitions of the pedagogical relations are used to express the desired adaptive behavior for the course.
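As an illustration, the prerequisite relation and its two alternative adaptive behaviors described above could be encoded as follows. This is a minimal sketch with invented names and data structures; it is not GRAPPLE's actual GAL representation, which is richer.

```python
# Illustrative sketch of a prerequisite pedagogical relation (CRT) with two
# alternative adaptive behaviors. All names and structures are hypothetical,
# not GRAPPLE's actual GAL format.

def hide_until_studied(user_model, a, b):
    """Behavior 1: hide the dependent concept B as long as A is not studied."""
    return {"concept": b, "visible": user_model.get(a, False)}

def enforce_sequence(user_model, a, b):
    """Behavior 2: force the learning sequence A then B."""
    next_concept = a if not user_model.get(a, False) else b
    return {"concept": b, "next": next_concept}

user_model = {"A": False}                        # learner has not studied A yet
print(hide_until_studied(user_model, "A", "B"))  # B stays hidden
user_model["A"] = True                           # learner finishes A
print(hide_until_studied(user_model, "A", "B"))  # B becomes visible
```

Either function realizes the same pedagogical relation; which one is used depends on the adaptive behavior the CRT author attached to it.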
Fig. 1. GRAPPLE Architecture
The Adaptive Engine does the actual adaptive delivery of the content. Based on the state of the learner’s profile (captured in the User Model) and the GAL specifications, the Adaptive Engine will select the proper learning resources and deliver the required navigation structure and content to a Web browser. The Adaptive Engine also keeps track of the progress of the learner. Internal variables are maintained in order to be able to instruct the User Model service what to update in the learner’s User Model. This allows runtime adaptation. The adaptive engine of GRAPPLE is implemented as a client-server application. All its functionality is located at the server-side.
3 Adaptive VR in the Context of GRAPPLE To support VR, it was necessary to extend the architecture of GRAPPLE. The VR material considered may range from simple 3D objects to complete 3D virtual environments (VEs) in which the user can navigate and interact with the 3D (or 2D) objects in them. Because GRAPPLE is a Web-based learning environment, it uses XML formats for the learning resources. For displaying 3D objects inside a browser, X3D [6] can be used, which is XML-based. Therefore, individual 3D resources can be included or excluded in the same way as the regular XML resources. However, to
support adaptive VEs, extensions are necessary to the authoring tool and the adaptive engine. These extensions are needed because the regular GRAPPLE tools treat a resource as a black box. To adapt the VE itself, i.e., adapting the presentation of the objects in the world, enabling and disabling behaviors and interaction, including objects conditionally, and/or providing dedicated navigation possibilities in the virtual world, this approach is no longer suitable. Although some of the adaptations can be seen as extensions of adaptations for text resources, they are very specific to 3D material and the possibilities are much richer. For instance, to visually indicate that an object has not yet been studied, we may want to give it a different color or make it smaller; when a learner is studying a complex object (like a planet of the solar system), the visual appearance of the object could change according to the aspects being studied (size, temperature, geography, …) or become more detailed as more knowledge is acquired. It may also be necessary to disable or enable the behavior and interaction of an object according to the progress of the learner. To allow specifying this, a dedicated VR authoring tool is necessary. Furthermore, it is necessary to provide a preview of the VE to the author while specifying the adaptations; otherwise the author has to specify adaptations blindly, which may be very difficult. For instance, it is not possible to replace one 3D object in a VE by just any other 3D object, as the replacement may not fit into the VE. Behaviors, too, are usually strongly connected to the actual 3D object, and it is not always possible to replace a behavior by any other behavior. The VR authoring tool is also implemented as a Web application (see figure 2). It allows specifying VR-specific CRTs and has a component to define CAMs.
The VR-specific authoring tool does not need its own DM component; it uses the DM tool of the general authoring tool. Note the availability of a Previewer. The Loading component is responsible for retrieving the necessary information from the different repositories (using available Web services) and for retrieving the VR resources that need to be previewed. The Saving component is responsible for storing all the information defined by the author, i.e., newly defined CRTs and CAM specifications. The output format is GAL, but we also have an independent XML format to be able to connect to other systems. The extension of the adaptive engine towards VR is realized by means of a browser plug-in (see figure 3) responsible for (1) updating the VE when the adaptive engine instructs it to do so, (2) monitoring the learner's behavior, and (3) sending information about the learner's behavior to the Adaptive Engine. For the retrieval of VR content by the VR browser plug-in, a server-side plug-in is added, the VR-Manager. The VR browser plug-in has three components, namely the Monitor component, the Update component, and the VR player. An existing VR player is used to visualize the VR content. We require that the VR player supports the Document Object Model (DOM) [7], JavaScript [8], the Scene Authoring Interface (SAI) [9], and X3D [6]. The Scene Authoring Interface is used to communicate with the VR player. In this way, it is possible to update the scene graph at runtime without reloading it completely. The Update component is responsible for interpreting the adaptation requests received from the Adaptive Engine and for translating them into a form that can be understood by the VR player; it instructs the VR player to update the scene. The Monitor component is responsible for keeping track of what happens in the VE and for translating this into a form understandable by the Adaptive Engine, which in turn informs the User Model service about the progress made by the learner.
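The division of labor between the two plug-in components could be sketched as follows. The message formats and the player interface below are invented for illustration; the real components talk to the VR player through the SAI, not through this API.

```python
# Minimal sketch of the Update and Monitor components of the VR browser
# plug-in. Message formats and the player API are hypothetical; the real
# plug-in communicates with the VR player through the SAI.

class FakeVRPlayer:
    """Stands in for an X3D player; records scene-graph field updates."""
    def __init__(self):
        self.scene = {}

    def set_field(self, node, field, value):
        self.scene[(node, field)] = value

def update_component(player, request):
    """Translate an Adaptive Engine request into a scene-graph update,
    without reloading the whole world."""
    player.set_field(request["node"], request["field"], request["value"])

def monitor_component(event):
    """Translate a VE event into a user-model update for the Adaptive Engine."""
    if event["type"] == "object_clicked":
        return {"concept": event["node"], "attribute": "visited", "value": True}
    return None  # events of no pedagogical interest are dropped

player = FakeVRPlayer()
update_component(player, {"node": "Mars", "field": "transparency", "value": 0.0})
print(player.scene)
print(monitor_component({"type": "object_clicked", "node": "Mars"}))
```

The point of the sketch is the direction of the two data flows: adaptation requests flow inward to the player, learner events flow outward to the User Model service.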
Fig. 2. VR Authoring Tool - Components
Fig. 3. Adaptive VR Delivery
Fig. 4. Studying the Sun
To validate the adaptive delivery of the VR material, we have created a prototype and elaborated an example course with it. As the adaptive engine of GRAPPLE was not yet available at that time, the prototype is based on AHA! 3.0 [10], the adaptive learning engine on which the GRAPPLE adaptive engine will be based. As VR player we have used Ajax3D [11], which uses the Vivaty player [12]. It can use Ajax and can be embedded inside Firefox and Internet Explorer. Both the Monitor component and the Update component have been prototyped. To test the prototype, an example adaptive course has been developed. The adaptive course is about the solar system. To investigate the issues related to the combination of different types of content, this course contains plain text explaining the solar
system, as well as a VE of the solar system where the sun and the different planets are displayed in 3D (see figure 4). The text as well as the VE adapts according to the learner's knowledge and progress. For example, a planet appears when the learner starts to study it; planets that have been studied stop rotating; and when all planets have been studied the whole solar system is available in the VE.
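The adaptation rules of the example course can be summarized in a small sketch. The rule logic follows the prose above; the function and field names are ours, not those of the prototype.

```python
# Sketch of the solar-system adaptation rules of the example course.
# The logic mirrors the description in the text; all names are illustrative.

PLANETS = ["Mercury", "Venus", "Earth", "Mars", "Jupiter",
           "Saturn", "Uranus", "Neptune"]

def scene_state(started, studied):
    """Derive per-planet scene settings from the learner's progress.
    started / studied: sets of planet names."""
    state = {}
    for p in PLANETS:
        state[p] = {
            "visible": p in started,       # a planet appears when studied
            "rotating": p not in studied,  # studied planets stop rotating
        }
    # when all planets are studied, the whole solar system is available
    state["whole_system_visible"] = studied >= set(PLANETS)
    return state

s = scene_state(started={"Earth", "Mars"}, studied={"Earth"})
# Earth is visible and no longer rotating; Mars is visible and still rotating.
```

Each call produces the target scene state; in the prototype the corresponding changes would be pushed to the VR player incrementally rather than recomputed wholesale.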
4 Related Work Brusilovsky et al. [13] have integrated some adaptive hypermedia methods (mainly for navigation) into virtual environments. The approach of Santos and Osorio [14] is based on agents that help users by providing them with more information about interesting products and by guiding them to their preferred area. The work of Moreno-Ger et al. [15] on 2D games is not on VR, but it is interesting as it provides an authoring tool allowing authors to create adaptive courses. Celentano and Pittarello [16] monitor a user's behavior and compare it with previous patterns of interaction. Whenever the system detects that the user is entering a recurrent pattern, it may perform some activities on behalf of the user. Chittaro and Ranon have done considerable related work, first in the context of e-commerce and later also for e-learning. Some of their work can be found in [17], [18] and [19]. The work in [19] is close to ours, especially to the prototype that we have developed, which also uses AHA! (see section 3). However, they do not provide an authoring tool for specifying the adaptive story lines as we do. Furthermore, in their approach the main file containing the VE is reloaded at fixed time intervals to keep the adaptation in line with the student's user model. This is a serious drawback, especially for large VEs, because it makes the system very slow. In our approach, runtime adaptations do not require reloading the complete VE.
5 Conclusions and Future Work This paper describes an approach to support the adaptive delivery of Virtual Reality learning material inside GRAPPLE, a Web-based adaptive learning environment. The approach is innovative in several respects. Firstly, it contains a visual authoring tool for specifying the adaptive strategy for a VE. Secondly, the adaptation of the VE is done at runtime without the need to reload the VE each time, which provides the performance required for large VEs. In addition, VR material and classical material (textual and 2D multimedia) can be integrated in a single course, and the adaptation can be performed for any type of content. Finally, the activities performed by the learner in the VE can be monitored and their effect can be directly reflected in the VE. A first prototype has been developed for the adaptive delivery. Currently, the implementation of the VR authoring tool has been started, as well as the implementation of the final VR plug-in. Several experiments are planned to validate the approach as well as its usability and effectiveness. Acknowledgments. This work is realized in the context of the EU FP7 project GRAPPLE (215434). The design of the overall GRAPPLE architecture has been a collaborative effort of the different partners.
References
1. Alexiou, A., Bouras, C., Giannaka, E., Kapoulas, V., Nani, M., Tsiatsos, T.: Using VR technology to support e-learning: the 3D virtual radiopharmacy laboratory. In: Proceedings of the 24th International Conference on Distributed Computing Systems Workshops, pp. 268–273 (2004)
2. KM Quest, http://www.kmquest.net
3. De Byl, P.: Designing Games-Based Embedded Authentic Learning Experiences. In: Ferdig, R.E. (ed.) Handbook of Research on Effective Electronic Gaming in Education. Information Science Reference (2009)
4. Chittaro, L., Ranon, R.: Adaptive Hypermedia Techniques for 3D Educational Virtual Environments. IEEE Intelligent Systems 22(4), 31–37 (2007)
5. Hendrix, M., De Bra, P., Pechenizkiy, M., Smits, D., Cristea, A.: Defining adaptation in a generic multi layer model: CAM: The GRAPPLE Conceptual Adaptation Model. In: Dillenbourg, P., Specht, M. (eds.) EC-TEL 2008. LNCS, vol. 5192, pp. 132–143. Springer, Heidelberg (2008)
6. Brutzman, D., Daly, L.: X3D: Extensible 3D Graphics for Web Authors. The Morgan Kaufmann Series in Interactive 3D Technology (2008)
7. W3C Document Object Model, http://www.w3.org/DOM/
8. JavaScript, http://www.javascript.com
9. Scene Authoring Interface Tutorial, http://www.xj3d.org/tutorials/general_sai.html
10. AHA! 3.0, http://aha.win.tue.nl/
11. Ajax3D, http://www.ajax3d.org/
12. Vivaty, http://www.vivaty.com/
13. Brusilovsky, P., Hughes, S., Lewis, M.: Adaptive Navigation Support in 3-D E-Commerce Activities. In: Proceedings of the Workshop on Recommendation and Personalization in eCommerce at the 2nd International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems (AH 2002), Malaga, Spain, pp. 132–139 (2002)
14. dos Santos, C.T., Osorio, F.S.: AdapTIVE: An Intelligent Virtual Environment and Its Application in E-Commerce. In: Proceedings of the 28th Annual International Computer Software and Applications Conference (COMPSAC 2004), pp. 468–473 (2004)
15. Moreno-Ger, P., Sierra-Rodriguez, J.L., Fernandez-Manjon, B.: Games-based learning in e-learning environments. UPGRADE 12(3), 15–20 (2008)
16. Celentano, A., Pittarello, F.: Observing and Adapting User Behaviour in Navigational 3D Interfaces. In: Proceedings of the 7th International Conference on Advanced Visual Interfaces (AVI 2004), pp. 275–282. ACM Press, New York (2004)
17. Chittaro, L., Ranon, R.: Adaptive 3D Web Sites. In: Brusilovsky, P., Kobsa, A., Nejdl, W. (eds.) The Adaptive Web. LNCS, vol. 4321, pp. 433–462. Springer, Heidelberg (2007)
18. Chittaro, L., Ranon, R.: Adaptive Hypermedia Techniques for 3D Educational Virtual Environments. IEEE Intelligent Systems 22(4), 31–37 (2007)
19. Chittaro, L., Ranon, R.: An Adaptive 3D Virtual Environment for Learning the X3D Language. In: Proceedings of the 2008 International Conference on Intelligent User Interfaces (IUI 2008), pp. 419–420. ACM Press, New York (2008)
A Model to Manage Learner's Motivation: A Use-Case for an Academic Schooling Intelligent Assistant
Tri Duc Tran1,2, Christophe Marsala1, Bernadette Bouchon-Meunier1, and Georges-Marie Putois2
1 UPMC-CNRS-LIP6, 104 avenue du président Kennedy, 75016 Paris, France
{Tri-Duc.Tran,Christophe.Marsala,Bernadette.Bouchon-Meunier}@lip6.fr
2 ILOBJECTS, 104 avenue du président Kennedy, 75016 Paris, France
{tran,gmputois}@ilobjects.com
Abstract. The scope of our research is to build a non-pedagogical intelligent assistant, I-CAN (Intelligent Coach and Assistant to New way of learning), that supports students during their academic schooling and prevents them from dropping out. The management of the learner's motivation is one of the main factors for schooling success. A high level of motivation implies more engagement and positive emotion to overcome difficulties. The question addressed in this paper is: “How to enhance the student's motivation during his academic schooling?” We propose a motivation management framework for a personal intelligent assistant on an LMS (Learning Management System). This framework takes as input data from the learner's academic schooling (absenteeism, tardiness, marks, tasks, sanctions) to diagnose the learner's state and to enhance the motivation to learn through an embodied conversational agent. Keywords: motivation diagnosis, motivation enhancement, schooling support, personal intelligent assistant, dropout.
1 Introduction Motivation management is a problem widely studied in the ITS (Intelligent Tutoring System) field, but few works focus on motivation support across the academic schooling as a whole. The purpose of this paper is to analyze previous research and to propose a model of a motivation management system for a personal assistant that supports students during their schooling. Our assistant I-CAN is designed to be used in an LMS (Learning Management System) or ENT (Espace Numérique de Travail, a French digital work environment) to assist the student's academic schooling. The data to analyze are principally oriented to academic life: absenteeism, tardiness, results, tasks, punishments, etc. In a multi-agent view, our system will collaborate with an ITS by providing the analysis of the student's academic schooling. The goal of the motivational intelligent system is to maintain or increase the student's desire to learn and her/his willingness to expend effort in undertaking the activities that lead to learning [5]. U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 633–638, 2009. © Springer-Verlag Berlin Heidelberg 2009
The research questions raised in this paper, in the academic context, are:
• “What are the mechanisms to evaluate the student's motivation?”
• “What are the intervention strategies to enhance the student's motivation?”
• “How to implement these strategies for a personal schooling intelligent agent?”
To determine the mechanisms to evaluate the student's motivation, we analyze previous research on learner motivation in the first section and propose a three-layer model to assess the learner's motivational state from the academic schooling data. In the second section, we analyze the different strategies and interventions to enhance the learner's motivation and present our approach.
2 Academic Schooling Motivation Diagnosis Most motivation theorists are convinced that motivation is involved in the learning process. A simple definition of motivation can be [8]:
• an internal state or condition that activates behavior and gives it direction;
• a desire or will that energizes and directs goal-oriented behavior;
• the influence of needs and desires on the intensity and direction of behavior;
• the arousal, direction, and persistence of behavior.
2.1 Motivational States Computation In a motivational intelligent system, the first challenge is to determine the learner's motivational state. This can be done with self-report methods, with intelligent analysis of sensor data to determine the affective state, or by studying data from the interaction between learners and learning content. Table 1 is an extended review of measuring motivation through user-computer/ITS interaction data [7]. It summarizes the different research directions in learner motivation diagnosis. We find that: 1) motivation is not directly evaluated; the diagnosis uses intermediate indicators; 2) the motivation diagnosis uses, in most cases, at least the engagement and confidence indicators; 3) the computation parameters come from the interaction with learning content. In our case we do not have data from the interaction with learning content; our assistant I-CAN has only the data from the academic schooling. So the input data are:
– Academic data: assessment results, quarterly reports, tardiness, absenteeism, suspensions, disciplinary sanctions, troubled relationships with adults, …
– Interaction data: between the student and the intelligent assistant, and with the information system.
These data will not permit us to assess indicators such as confidence, confusion, attention, or independence. We will focus on the diagnosis of performance (regularity, progression, results), efficiency, engagement, and emotional/physical conditions.
Table 1. Previous research on motivation recognition (indicator(s): computation parameters [authors])
– Engagement: time and performance on multiple-choice question assessments [1]
– Engagement: number of pages read; average time spent reading; number of tests taken; average time spent on the quiz; total time for a learning sequence [13]
– Confidence, confusion, effort: time to perform the task; time to read the paragraph related to this task; decision time to perform a task; number of finished tasks; number of tasks performed; interaction with the system (mouse movement) [3]
– Control, challenge, independence, fantasy; confidence, sensory/cognitive interest, effort: performance on a test; speed of answering; giving up [10]
– Attention and confidence: number of compilations; number of compilations without errors; ratio of working time to the class average; number of hints requested while doing a task; number of executions; time from start until typing in the editor [4]
– Engagement, energizing, source of motivation (internal or external): time-on-task percentage; average session duration; average pace of activity within sessions; average time between sessions; percentage of exam activities; percentage of game activities [7]
– Effort, confidence, independence: challenge seeking (chosen challenge level); busyness (number of help requests, adding/deleting organisms); hopping (switching from one view to another) [11]
– Confidence, effort: degree of quiz use; in the learning content: time spent, help requests, number of activities; in the exercises: quality of problem solving, time spent, hint or solution requests, relevant requests; self-report; answers to system questions [9]
2.2 Our Model of Motivation Diagnosis Our model of motivation diagnosis consists of three steps, dividing the main question “How to diagnose the learner's motivation?” into sub-problems to solve. Motivation is an aggregation of performance and efficiency. The first computation step consists in pre-processing the input data and summarizing them into four types of indicators: 1) Performance: result, regularity and progression indicators are calculated from the academic results; they show the student's global performance. 2) Emotional state: motivation and emotion are closely linked; emotion can have an impact on motivation, and vice versa. We can determine it by a self-report method or by analyzing the student's interaction with the assistant and/or the LMS. 3) Physical state: self-report methods can be correlated with absences or tardiness. 4) Engagement/effort: tardiness, absenteeism, suspensions, disciplinary sanctions and troubled relationships help us analyze the student's engagement.
636
T.D. Tran et al.
Fig. 1. Motivation diagnosis model
Then the efficiency factor is the aggregation of the performance, physical state and engagement indicators. This model can be implemented with a fuzzy rule system and association rules to determine the indicators and their aggregation. The motivation indicator and its intermediate indicators will help us design strategies of dialogue acts, such as positive feedback, encouragement and praise, adapted to the student's schooling state.
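A minimal numeric sketch of this three-step aggregation is given below. The paper proposes fuzzy rules and association rules; the fixed averages and thresholds here are our own illustrative choices, and all field names are hypothetical.

```python
# Toy sketch of the three-step motivation diagnosis. The paper proposes a
# fuzzy rule system; the simple averages below are an illustrative stand-in.

def indicators(data):
    """Step 1: summarize schooling data into the four indicator types
    (all values normalized to [0, 1])."""
    performance = (data["result"] + data["regularity"] + data["progression"]) / 3
    # absences and sanctions lower engagement; scales are arbitrary here
    engagement = 1.0 - min(1.0, data["absences"] / 20 + data["sanctions"] / 5)
    return {
        "performance": performance,
        "emotional": data["emotional"],
        "physical": data["physical"],
        "engagement": engagement,
    }

def efficiency(ind):
    """Step 2: efficiency aggregates performance, physical state, engagement."""
    return (ind["performance"] + ind["physical"] + ind["engagement"]) / 3

def motivation(ind):
    """Step 3: motivation aggregates performance and efficiency."""
    return (ind["performance"] + efficiency(ind)) / 2

data = {"result": 0.6, "regularity": 0.8, "progression": 0.7,
        "absences": 2, "sanctions": 0, "emotional": 0.5, "physical": 0.9}
ind = indicators(data)
print(round(motivation(ind), 2))  # 0.77
```

A fuzzy rule implementation would replace the arithmetic means with membership functions and rules such as "IF performance is high AND engagement is low THEN motivation is medium", but the layered structure stays the same.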
3 Motivation Enhancement Once the learner's motivational state is computed, our intelligent assistant I-CAN uses this information to adapt to this state and to find adequate strategies to maintain or enhance the learner's motivation. There are three main motivation strategies:
– Motivational design: it increases the effort put into learning tasks.
– Learning design: it changes the learning content or selects/recommends appropriate content.
– Contingency design: it makes the learner confident that effort and performance are closely coupled with consequences.
As our system I-CAN does not manage the contents, the strategies of I-CAN are based on motivational design and contingency design. 3.1 Strategies to Manage Motivation The source of motivation can be categorized as either extrinsic or intrinsic (internal to the person) [12]. Intrinsic motivation will only occur if the learner is highly interested in the activity. If a student has an intrinsic motivation to learn, he will feel satisfaction, enjoyment, and interest [2]. Motivational and contingency design can be realized with affective dialogue that contains positive feedback and praise, and that uses words and phrases helping to attribute success to the learner's effort and ability [14]. The interaction between the learner and I-CAN is in natural language with an embodied conversational module, and it follows three principles:
A Model to Manage Learner’s Motivation
637
1) To design a motivational discourse model grounded in communication and educational theories. According to the Ginott model [6], the teacher should practice congruent communication when giving feedback to students. Congruent communication is a way of communicating that increases self-esteem and decreases conflict. The main rule is “Talk to the situation, not to the personality and character”. 2) The global strategy is to enhance the learner's intrinsic motivation. Students with high intrinsic motivation often outperform students with low intrinsic motivation. They engage more in learning activities and are more likely to complete courses [14]. 3) To develop a self-attribution explanation of success [8]: success should be attributed to effort and internally controlled ability. The motivation module is designed to be included in an embodied conversational agent; the aim of our model is to build a motivational discourse model. 3.2 Our Motivation Enhancement Design Figure 2 shows the design process used to build the knowledge database that manages the learner's motivation, and the global working of our motivation system. The construction of the dialogue model is based on two main processes: 1) The first database is constructed from interviews with teachers and the analysis of quarterly reports. We obtain a real corpus of motivational dialogue acts (positive feedback, encouragement, praise, reassurance). 2) The second database enhances the first one with communication and educational theories such as the Ginott model [6]. The motivation management module takes as input the data from the diagnosis module and matches them with the knowledge database. The motivation management module can be designed with association rules or with a decision tree.
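For illustration, matching the diagnosis output to a congruent, situation-oriented dialogue act could look like the rules below. The rule set and the phrasings are our own examples, not the I-CAN knowledge base, which is built from teacher interviews and quarterly reports.

```python
# Illustrative decision rules matching diagnosed states to motivational
# dialogue acts. Rules and wording are invented examples, not I-CAN's
# actual knowledge base.

def select_dialogue_act(performance, engagement, progression):
    """Pick a dialogue act, phrased about the situation rather than the
    person, following the congruent-communication principle."""
    if progression > 0 and performance < 0.5:
        return "encouragement: the last results are improving, the effort is paying off"
    if performance >= 0.5 and engagement >= 0.5:
        return "praise: this term's work shows steady, serious effort"
    if engagement < 0.5:
        return "reassurance: missed classes make the work harder; catching up is doable"
    return "positive feedback: the results are holding steady"

print(select_dialogue_act(performance=0.4, engagement=0.8, progression=0.1))
```

Note that every phrasing describes the schoolwork situation (“the last results”, “this term's work”), never the student's character, in line with Ginott's rule.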
Fig. 2. Motivation enhancement model
The motivational intelligent system should improve our assistant I-CAN in several ways: the diagnosis can provide more information about the students' profiles and give more details about their difficulties. The motivation enhancement module should strengthen the relationship with the student; the dialogue will be more “human” and more personalized. Motivation management can play an important role in reducing academic dropout.
4 Conclusion In this paper we have proposed an approach to monitor and adapt to the learner's motivational state during his academic learning. Our assistant I-CAN can collaborate with an ITS; the information exchanged will improve the performance of both systems. One of the most important limitations of our system resides in the other parameters of the learner's motivation:
– the classroom learning and school environment: teacher, peers, learning process/activity;
– the social environment: peers, friends, parents, family.
Our future work consists in exploring more models from motivation research and in developing this motivational module. We also need to analyze the impact of motivation in a discourse model. We will add the motivation management to our assistant I-CAN and test it in a real-world application with students and teachers.
References
1. Beck, J.E.: Using response times to model student disengagement. In: ITS 2004 Workshop on Social and Emotional Intelligence in Learning Environments, Maceio, Brazil (2004)
2. Blanchard, E., Frasson, C.: An Autonomy-Oriented System Design for Enhancement of Learner's Motivation in E-learning (2004)
3. Cocea, M., Weibelzahl, S.: Cross-system validation of engagement prediction from log files. In: Duval, E., Klamma, R., Wolpers, M. (eds.) EC-TEL 2007. LNCS, vol. 4753, pp. 14–25. Springer, Heidelberg (2007)
4. De Vicente, A., Pain, H.: Motivation diagnosis in intelligent tutoring systems. In: Goettl, B.P., Halff, H.M., Redfield, C.L., Shute, V.J. (eds.) ITS 1998. LNCS, vol. 1452, pp. 86–95. Springer, Heidelberg (1998)
5. Du Boulay, B., Rebolledo-Mendez, G., Luckin, R., Martinez-Miron, E.A., Harris, A.: Motivationally intelligent systems: Three questions. In: Second International Conference on Innovations in Learning for the Future (2008)
6. Ginott, H.: Teacher and Child. Congruent Communications, Inc., New York (1972)
7. Hershkovitz, A., Nachmias, R.: Developing a Log-based Motivation Measuring Tool. In: 1st International Conference on Educational Data Mining, Montreal, Canada (2008)
8. Huitt, W.: Motivation to learn: An overview. Educational Psychology Interactive (2001)
9. Kim, Y.-S., Cha, H.-J., Cho, Y.-R., Yoon, T.-B., Lee, J.-H.: An Intelligent Tutoring System with Motivation Diagnosis and Planning. In: 15th International Conference on Computers in Education (2007)
10. Qu, L., Johnson, W.L.: Detecting the learner's motivational states in an interactive learning environment. In: Artificial Intelligence in Education, Amsterdam, The Netherlands (2005)
11. Rebolledo-Mendez, G., Du Boulay, B., Luckin, R.: Motivating the Learner: An Empirical Evaluation. In: International Conference on Intelligent Tutoring Systems, Jhongli, Taiwan, pp. 545–554 (2006)
12. Ryan, R.M., Deci, E.L.: Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist (2000)
13. Zhang, G., Cheng, Z., He, A., Huang, T.: A WWW-based learner's learning motivation detecting system. In: International Workshop on Research Directions and Challenge Problems in Advanced Information Systems Engineering, Honjo City, Japan (2003)
14. Weibelzahl, S., Kelly, D.: Adaptation to Motivational States in Educational Systems. In: Proceedings of the Workshop Week Lernen-Wissensentdeckung-Adaptivität (2005)
Supporting the Learning Dimension of Knowledge Work
Stefanie N. Lindstaedt1, Mario Aehnelt2, and Robert de Hoog3
1 Know-Center and Knowledge Management Institute, Graz University of Technology, Austria
[email protected]
2 Fraunhofer IGD Rostock, Germany
[email protected]
3 Faculty of Behavioral Sciences, University of Twente, Enschede, The Netherlands
[email protected]
Abstract. We argue that in order to increase knowledge work productivity we have to put more emphasis on supporting the learning dimension of knowledge work. The key distinctions compared to other TEL approaches are (1) taking the tight integration of working and learning seriously, (2) enabling seamless transitions on the continuum of learning practices, and (3) tapping into the resources (material as well as human) of the organization. Within this contribution we develop the concept of work-integrated learning (WIL) and show how it can be implemented. The APOSDLE environment serves as a reference architecture which shows how a variety of tightly integrated support services implement the three key distinctions discussed above. Keywords: workplace learning, professional learning, self-directed learning, collaboration scripts, user profiles, recommendation systems.
1 The Learning Dimension of Knowledge Work

We conceptualize learning as a dimension of knowledge work which varies in focus (from a focus on work performance to a focus on learning performance) and in the time available for learning. This learning dimension of knowledge work describes a continuum of learning practices. It starts at one end with brief questions and task-related informal learning (work processes with learning as a by-product), and extends at the other end to more formalized learning processes (learning processes at or near the workplace). This continuum covers the whole learning practices typology of Eraut and Hirsh [9] and emphasizes that support for learning must enable a knowledge worker to seamlessly switch from one learning practice to another as time and other context factors permit or demand. Research on supporting workplace learning and lifelong learning has so far focused predominantly on the formal side of this spectrum, specifically on course design applicable to the workplace and blended learning.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 639–644, 2009. © Springer-Verlag Berlin Heidelberg 2009
In contrast, the focus of our work is on the informal side of the spectrum, specifically covering work processes with learning as a by-product and learning activities located within work processes. In order to refer to this type of learning practice we coined the term work-integrated learning (WIL) [11]. By using this term we emphasize that learning at the workplace needs to be truly integrated into current work processes and practices and to make use of existing resources within an organization: knowledge artifacts (e.g. reports, project results) as well as humans (e.g. peers, communities). WIL is relatively brief and unstructured (in terms of learning objectives, learning time, or learning support). The main aim of WIL activities is to enhance task performance. From the learner's perspective, WIL is spontaneous and/or unintentional. Learning in this case is a by-product of the activities at the workplace. This conceptualization enables a shift from the training perspective of the organization to the learning perspective of the individual. We have shown in [2] that the learning continuum exists for all commonly agreed upon knowledge work types (create, transfer, apply, package). For example, on the one hand knowledge can be informally created within work practices when people learn from each other based on observations. On the other hand, more formalized settings at the workplace, such as dedicated brainstorming sessions, can be employed to create knowledge. That is, in order to support knowledge work we have to provide learning support for all four knowledge work types on a continuum of formality. Therefore we have chosen to present our proposed WIL support functionalities structured along the four knowledge work types.
2 Supporting WIL with APOSDLE

This section provides a brief overview of how the APOSDLE¹ environment supports the learning dimension of knowledge work. We have already evaluated much of the presented WIL support in previous prototypes, within workplace situations of our application partners as well as within controlled lab studies, for example [12]. Future work in the APOSDLE project will mainly focus on a summative evaluation of the entire APOSDLE environment. This summative evaluation will be carried out at the sites of four organizations participating in the project and will span a period of three months.

2.1 Supporting Creation and Transfer of Knowledge

Sharing Knowledge Artifacts
In APOSDLE knowledge resides in knowledge artifacts: documents, parts of documents (referred to as snippets), notes, collaboration reflections, collections, etc. Such artifacts are containers for more or less structured information which relate to individual or collaborative tasks and activities. Knowledge artifacts are created from resources within the organizational memory by (automatically) attaching metadata which define the relationships and semantic meaning of artifacts with respect to the work domain. They are shared throughout the organization.
¹ Developed in the EU-funded Integrated Project APOSDLE (www.aposdle.org).
A variety of different knowledge artifact types can be created and edited by knowledge workers. For example, knowledge workers can create notes in relation to other knowledge artifacts. They can create collections containing other knowledge artifacts which relate to each other. As an important outcome of collaborative learning or work, reflections contain not only transcripts of collaborative activities but also individual reflections on the knowledge applied in a certain learning context or situation. Knowledge workers are made aware of knowledge artifacts through automatic suggestions (see below).

Scripted, Contextualized Collaboration
Collaboration is a social interaction in which knowledge workers transfer and construct knowledge while working or learning together. In APOSDLE, the collaboration process is structured into a pre-collaboration, collaboration and post-collaboration phase to allow a clear allocation of preparatory, executive and closing work or learning functions [13]. This structure is made explicit with the Collaboration Wizard, a visual component which guides all collaborating knowledge workers through the collaboration process. It provides collaboration scripts at the macro and micro levels [7] for each process phase in order to support collaboration as a structured process. These scripts help knowledge workers to use each process phase as efficiently as possible. In the pre-collaboration phase, a combination of problem-formulating, social and fading scripts is used to collect all required information for coupling knowledge workers in collaborative interactions. In addition, the Collaboration Wizard contextualizes the work environment of collaborating knowledge workers with information required for a common anticipation of collaborative activities in which knowledge needs to be transferred.
This contextual information is taken from previous and current activities of knowledge workers: information they searched for, knowledge artifacts which relate to their activities and tasks, snapshots of individual work environments, etc.

2.2 Supporting Application of Knowledge

Providing an Overview of Past Experiences
Meta-cognitive skills have been identified as an important ingredient of self-directed learning [3, 13]. In particular, studies suggest that mirroring the learner's actions and their results has positive effects on learning. The goal of these interventions is to provide the learner with a (more objective) external perspective on her actions. APOSDLE provides the user with an overview of tasks performed and topics engaged with in the past. The activities are grouped into three categories (seeking knowledge, applying knowledge, and providing knowledge) and are displayed within a tree map:

• Seeking knowledge: The user seeks information or help about the topic (for example by accessing hints and contacting colleagues about the topic).
• Applying knowledge: The user applies knowledge about a certain topic (for example, performing a task which requires knowledge about that topic).
• Providing knowledge: The user provides knowledge about a topic to other people or to the APOSDLE system (for example, sharing a resource, storing a note, creating an annotation).
Within the tree map the size of a square is related to the frequency with which the user has engaged with the topic. The larger the square, the more frequent the engagement has been. This overview of activities allows the user to reflect on her past actions, to immediately assess her activity patterns, and to become aware of topics which she might want to advance further in.

Suggesting Knowledge Artifacts
In order to apply knowledge to a specific work situation, a knowledge worker has to assess the situation and transform the knowledge to fit it. Reducing the effort for this learning transfer is believed to improve the likelihood that learned knowledge will be applied. APOSDLE takes the following approach: an intelligent recommendation algorithm suggests knowledge artifacts to the learner which are similar to the task or topic at hand and which have been retrieved from the organizational memory, thus improving the likelihood of offering highly relevant information which can be directly applied to the work situation with little or no learning transfer required. In doing so, APOSDLE also utilizes the automatically maintained user profile of the learner in order to compute a learning gap. The learning gap expresses the difference between the knowledge about topics needed when executing a work task and the knowledge the user possesses about these topics. Based on this learning gap APOSDLE suggests relevant learning goals which the learner could pursue within her current work situation.

Suggesting Knowledgeable People
Besides suggesting knowledge artifacts to the user, APOSDLE also suggests people in the organization who are knowledgeable in the specific task or topic. The goal is not to always suggest the most knowledgeable person (e.g. the official topic matter expert). Instead, our algorithm seeks to identify peers who have (recently) executed the task before and who are believed to possess knowledge equal to or greater than that of the user in question.
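The two suggestion mechanisms just described can be sketched as follows. This is only an illustrative reading of the text, not APOSDLE's actual implementation: the function names, the profile layout (topics mapped to knowledge levels in [0, 1]), and the set of recent executors are all our assumptions.

```python
# Illustrative sketch (hypothetical names) of learning-gap computation and
# peer identification. Profiles map topics to knowledge levels in [0, 1].

def learning_gap(required, profile):
    """Topics whose required knowledge level exceeds the user's current level."""
    return {topic: need - profile.get(topic, 0.0)
            for topic, need in required.items()
            if need > profile.get(topic, 0.0)}

def suggest_peers(topic, user, profiles, recent_executors):
    """Peers who recently executed the task and know the topic at least as
    well as the user (not necessarily the official expert)."""
    own_level = profiles[user].get(topic, 0.0)
    return [p for p in recent_executors
            if p != user and profiles[p].get(topic, 0.0) >= own_level]
```

The gap then drives which learning goals are suggested, while the peer list drives whom to recommend as a contact.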
The identification of knowledgeable persons is based on the user profile. 2.3 Supporting Acquisition of Knowledge Learning Paths Sometimes, learners wish to learn a substantive part of a relatively unfamiliar learning domain, but this will frequently take more than several hours to complete. In order to successfully realize such learning, learners should carefully plan and manage the entire learning process. For self-directed learners, planning a learning path is often difficult, as most learners can not rely on instructional knowledge and have limited prior knowledge about the learning domain. In APOSDLE, planning is supported with learning paths. A learning path describes how a learning domain can be traversed in an ordered way when learning about the domain. There are many possible paths through a learning domain. Learning paths can be created by the system or by learners. The learning path wizard helps learners to construct and optimize learning paths. The wizard takes existing knowledge of the learner in the user profile into account and checks whether learners lack the prerequisite knowledge for their learning goals. The wizard suggests prerequisite topics and topics that might be relevant for a learning trajectory.
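The wizard's prerequisite-first ordering of topics amounts to a topological sort over the precedence relation in the domain. A minimal sketch, with hypothetical names and assuming the precedence relation is given as a map from each topic to its prerequisite topics:

```python
# Minimal sketch: order learning-path topics so that prerequisite topics
# come first, via a topological sort over the precedence relation.
from graphlib import TopologicalSorter  # Python 3.9+

def order_learning_path(topics, prerequisites):
    """prerequisites: dict mapping a topic to the topics that precede it."""
    # Restrict the prerequisite sets to the topics actually in this path.
    graph = {t: set(prerequisites.get(t, ())) & set(topics) for t in topics}
    return list(TopologicalSorter(graph).static_order())
```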
Topics in a learning path are automatically ordered in such a way that the learning path can be traversed easily. Basic knowledge is addressed first, and more advanced knowledge that builds on the basic knowledge is addressed afterwards.

Hints
Though in general it is expected that workers are motivated to acquire new knowledge in the context of their work, the knowledge acquisition process can be enhanced by providing hints on how one could process the information retrieved. In APOSDLE hints are based on two features of learning: learning goals and the possible instructional meaning of retrieved information. According to Gery [10] and Choo [5], people often have specific questions or requests that come to mind when faced with performing new or complex tasks. For instance, questions like: "What must I do? How do I do it? Am I doing it right?", or requests like: "Show me…". The information type associated with such a question or request can reasonably be defined. One way of supporting learners is to identify a set of relevant questions and requests and a set of related information types. This is similar to the approach followed by Anderson and Krathwohl [1], who developed a taxonomy of learning goals which are subsequently used for assessment purposes. In APOSDLE, we opt for a generic categorisation of information types/materials that could be used to specify and limit the type of content that should be presented to learners (with a specific question). There are some categorisations available (based on projects like LOM, Ariadne, and SCORM), but these are rather low in meaning from a learning perspective. Instead, a classification schema is used based on the one developed in the IMAT project [8], which classifies fragments extracted from maintenance manuals into categories like: definition, overview, example, assignment, guideline, how-to, summary, etc.
For instance, a learner with the question "How do I do it?" will be referred to fragments from retrieved documents that are labeled with categories like "guideline" or "how-to". Learning hints can be contemplative in nature (without observable output), but can also aim at explicit outcomes that can be observed and assessed by others. In the latter case, the hints will contain an activity giving the user the opportunity to enter information in specific input fields. Every hint consists of two elements:
• An activity that states what a learner could do to process the information.
• A rationale that states why it is considered useful to engage in this activity.
The content of the hints is adjusted to the specific material resource type associated with a learning goal (need). This means that hints that accompany an example are (slightly) different from hints related to a guideline or a constraint.
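This question-to-information-type mapping and the two-element hint structure could look like the following sketch. The concrete mapping entries are our illustrative assumption; only the category names follow the IMAT-style schema mentioned above, and all function names are hypothetical.

```python
# Illustrative sketch of hints: a mapping from learner questions to the
# information types that should answer them, and hints built from an
# activity plus a rationale. The mapping entries are assumptions.

QUESTION_TO_TYPES = {
    "What must I do?": ["overview", "assignment"],
    "How do I do it?": ["guideline", "how-to"],
    "Am I doing it right?": ["example", "summary"],
}

def select_fragments(question, fragments):
    """fragments: list of (text, category) pairs from retrieved documents."""
    wanted = set(QUESTION_TO_TYPES.get(question, ()))
    return [text for text, category in fragments if category in wanted]

def make_hint(activity, rationale):
    """Every hint pairs what to do (activity) with why it helps (rationale)."""
    return {"activity": activity, "rationale": rationale}
```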
Acknowledgements

APOSDLE (www.aposdle.org) is partially funded under grant 027023 in the IST work programme of the European Community. The Know-Center is funded within the Austrian COMET Program (Competence Centers for Excellent Technologies) under the auspices of the Austrian Federal Ministry of Transport, Innovation and Technology, the Austrian Federal Ministry of Economy, Family and Youth, and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.
References

1. Anderson, L.W., Krathwohl, D.R.: A Taxonomy for Learning, Teaching and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. Pearson Education, London (2001)
2. APOSDLE Consortium: Integrated APOSDLE Deliverables 2.8 and 3.5: APOSDLE Approach to Self-Directed Work-Integrated Learning (2009)
3. Bannert, M.: Metakognition beim Lernen mit Hypermedien: Erfassung, Beschreibung und Vermittlung wirksamer metakognitiver Strategien und Regulationsaktivitäten. Waxmann Verlag (2007) ISBN 3830918720
4. Bonestroo, W., Kump, B., Ley, T., Lindstaedt, S.: Learn@Work: Competency Advancement with Learning Templates. In: Memmel, M., Ras, E., Wolpers, M., Van Assche, F. (eds.) Proceedings of the 3rd Workshop on Learner-Oriented Knowledge Management, pp. 9–16 (2007)
5. Choo, C.W.: The Knowing Organization: How Organizations Use Information to Construct Meaning, Create Knowledge, and Make Decisions. Oxford University Press, New York
6. Davenport, T.O.: Human Capital: What It Is and Why People Invest It. Jossey-Bass, San Francisco (1999)
7. Dillenbourg, P., Hong, F.: The mechanics of CSCL macro scripts. International Journal of Computer-Supported Collaborative Learning 3, 5–23 (2008)
8. de Hoog, R., Kabel, S., Barnard, Y., Boy, G., DeLuca, P., Desmoulins, C., Riemersma, J., Verstegen, D.: Re-using technical manuals for instruction: creating instructional material with the tools of the IMAT project. In: Workshop Proceedings on Integrating Technical and Training Documentation, 6th International Intelligent Tutoring Systems Conference (ITS 2002), San Sebastián, Spain, pp. 28–39 (2002)
9. Eraut, M., Hirsh, W.: The Significance of Workplace Learning for Individuals, Groups and Organisations. SKOPE Monograph, vol. 9. University of Oxford, Department of Economics (2007)
10. Gery, G.: Electronic Performance Support Systems: How and Why to Remake the Workplace through the Strategic Application of Technology. Ziff Institute, Cambridge (1991) ISBN 0961796812
11.
Lindstaedt, S., Ley, T., Scheir, P., Ulbrich, A.: Applying Scruffy Methods to Enable Work-integrated Learning. The European Journal of the Informatics Professional 9(3), 44–50 (2008)
12. Scheir, P., Ghidini, C., Lindstaedt, S.N.: A Network Model Approach to Retrieval in the Semantic Web. In: Sheth, A. (ed.) International Journal on Semantic Web and Information Systems, vol. 4, pp. 56–84. IGI Global Publishers, Hershey (2008)
13. Simons, P.R.J.: Towards a constructivistic theory of self-directed learning. In: Straka, G.A. (ed.) Conceptions of Self-Directed Learning: Theoretical and Conceptional Considerations, pp. 155–169. Waxmann, Münster (2000)
User-Adaptive Recommendation Techniques in Repositories of Learning Objects: Combining Long-Term and Short-Term Learning Goals

Almudena Ruiz-Iniesta, Guillermo Jiménez-Díaz, and Mercedes Gómez-Albarrán

Facultad de Informática, Universidad Complutense de Madrid
c/ Prof. José García Santesmases s/n, 28040 Madrid, Spain
[email protected],
[email protected],
[email protected]
Abstract. In this paper we describe a novel approach that fosters a strongly personalized content-based recommendation of LOs. It gives priority to those LOs that are most similar to the student's short-term learning goals (the concepts that the student wants to learn in the session) and, at the same time, have a high pedagogical utility in the light of the student's cognitive state (long-term learning goals). The paper includes the definition of a flexible metric that combines the similarity with the query and the pedagogical utility of the LO.

Keywords: User-adaptive Learning, Personalization, Content-based Recommendation Techniques, Learning Objects.
1 Introduction

Although recommender systems have traditionally been applied in e-commerce, their use has recently been transferred to the academic field [1, 2]. In particular, the use of recommendation technologies has a clear application in e-learning: providing support for personalized access to the Learning Objects (LOs) that exist in repositories. Usually, the high number of LOs makes it difficult to access those adapted to the individual knowledge, goals and/or preferences of the students. In this paper, we present an approach that extends and improves our previous work on LO recommendation in web-based repositories. In [3] we described a novel approach for recommending LOs where a reactive single-shot content-based recommender acted as the primary recommender and its decisions were refined by a collaborative one. The recommendation strategy locates a set of relevant LOs after the student has posed a query. Priority is given to those LOs that are most similar to the student's query and were assigned high ratings by other students. This previous recommendation strategy has shown one shortcoming: it provides a weak personalization that only takes into account the student's short-term goals disclosed in the session in the form of a query. This way, two students that pose the same query within a session will receive the same recommendations, even if their long-term learning goals and domain mastery differ to a great extent.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 645–650, 2009. © Springer-Verlag Berlin Heidelberg 2009
In order to overcome this shortcoming we have explored a model of strong personalization. As we will show, the new content-based recommendation strategy can be tailored to the student's long-term learning goals without significantly compromising the in-session interest that the recommended LOs can have for the target student. The content-based recommender described in [3] is now enhanced by giving priority to those LOs that are most similar to the student's query (short-term goals) and, at the same time, have a high pedagogical utility in the light of her profile (long-term goals). The paper is organized as follows. Section 2 sketches the different knowledge sources required, independently of the educational domain. Section 3 details the two phases of the recommender, retrieval and ranking. The last section concludes the paper and outlines future work.
2 Describing the Required Knowledge

We agree with Drachsler et al. [4] that the learning field imposes specific requirements on the recommendation process. For instance, recommenders would benefit from taking into account the cognitive state of the learner, which changes over time. Successful learning paths and strategies could also provide guiding principles for recommendation. For instance, recommendation could benefit from simple pedagogical rules like 'go from simple to complex tasks' or 'gradually decrease the amount of guidance'. Learning paths could represent routes and sequences designed by the instructors and successfully tried in the classroom, or they could correspond with the successful study behaviour of advanced learners. Our recommendation approach follows a two-step process, retrieval and ranking. The retrieval stage looks for LOs that satisfy, in an approximate way, the student's short-term learning goals represented in a query (in-session learning goals). These LOs should be "ready to be explored" by the student according to her current knowledge and the defined learning paths. Once LOs are retrieved, the ranking stage sorts them according to the quality assigned to each LO. The quality is computed so that priority is given to those LOs that are most similar to the student's query and, at the same time, have a high pedagogical utility in the light of the student's cognitive state (long-term learning goals). Our previous weak personalization required domain knowledge in order to compute the similarity between the query and the domain concepts covered by the retrieved LOs. The adopted strong personalization imposes some additional requirements from the knowledge representation point of view. The retrieval stage requires the existence of suitable learning paths over the different domain concepts as well as information about the student's cognitive state in the form of persistent profiles. The ranking stage also profits from the student profiles. It also follows remedial instructional strategies as a guideline for addressing the student's long-term learning goals and filling the student's knowledge gaps, which results in an improvement of her mastery and skills. Next we detail the knowledge sources in our recommendation approach.
2.1 The Domain Ontology

We suggest the use of an ontology to index LOs within the repository. Ontologies provide a general indexing scheme that allows the inclusion of similarity knowledge between the concepts representing the domain topics, which is crucial in the similarity-based retrieval and ranking contexts employed by the recommender. The ontology is populated with concepts in the field of study. Concepts are organized in a taxonomy using the typical is_a relation. The ontology should also establish a precedence property among the concepts. We use this precedence to reflect a traditional sequence or order of concepts used when teaching in the corresponding field. The precedence allows establishing the learning paths that will be used in the retrieval stage to filter out LOs that exemplify non-reachable concepts given a concrete cognitive state of the student.

2.2 The Learning Objects

In our context the recommended items are LOs of educational repositories. The LOs can be developed according to Learning Object Metadata (LOM). We propose to use the following upper-level LOM categories: General, Life cycle, Technical, Educational, Relation and Classification. The General category plays an important role in the retrieval stage. This category contains keywords that describe what domain ontology concept(s) the concrete LO covers. The other categories represent descriptive information that is not used in the recommendation phases.

2.3 The Student Profile

As we noted above, the strong personalization requires persistent profiles of the students. A profile stores information about the student's history of navigation (the LOs that she has already explored) and the goals achieved in the learning process. Concepts already explored by the student are assigned the competence level attained in each one. This level is considered as a degree of satisfaction, a metric that allows the recommender to know about the student's knowledge level on a particular concept.
The competence level will be an important element in the retrieval stage.
3 Describing the Recommendation Phases

The content-based recommendation strategy presented here follows a reactive approach: the student provides an explicit query and the recommender system reacts with a recommendation response. The student poses a query using the concepts existing in the domain ontology. This query represents her in-session learning goals: the concepts she wants to learn in the session. The recommendation response is obtained in a two-step process, retrieval and ranking, which are described next.

3.1 The Retrieval Stage

The retrieval stage looks for an initial set of LOs that satisfy, in an approximate way, the student's query. The retrieval process first tries to find the LOs indexed by the
query concepts. If there are no LOs that satisfy this condition, or if we are interested in a more flexible location, LOs indexed by a subset of the (same or similar) concepts specified by the student are retrieved. This initial set of LOs is filtered. The goal of the filtering process is to discard LOs indexed by concepts non-reachable by the target student. We say that an ontology concept is "reachable" by a given student if, according to her current profile and the learning paths defined in the ontology, it fulfils any of the following conditions:

• It is a concept already explored by the student, so that it appears in her profile with its corresponding competence level.
• It is a concept that the student has not explored yet but she can discover it: if a concept c1 precedes a concept c2 in the ontology, a student can discover c2 if the student's competence level for c1 exceeds a given "progress threshold". If several concepts c1, c2, ..., ck directly precede a concept cx, cx can be discovered if the student's competence level in all the directly preceding concepts exceeds the given "progress threshold".

3.2 The Ranking Stage

The ranking phase sorts the retrieved LOs according to the quality assigned to each LO. Priority is given to those LOs that are most similar to the student's query and, at the same time, have a high pedagogical utility in the light of the student's profile. In order to compute the quality of a given LO L for a student S that has posed a query Q we have chosen a quality metric defined as the weighted sum of two relevance measures: the similarity (Sim) between Q and the concepts that L covers, and the pedagogical utility (PU) of L with respect to the student S:

quality(L, S, Q) = α · Sim(L, Q) + (1 − α) · PU(L, S)    (1)

The similarity Sim(L, Q) between the concepts gathered in the query Q and the concepts that L covers requires computing the similarity between two sets of concepts.
A simplification consists of comparing the concept that results as the conjunction of the query concepts (Q_conj_concept) and the concept that results as the conjunction of the concepts covered by L (L_conj_concept). Assuming this simplification, we can use any accepted metric for comparing two hierarchical values, for instance, one that we defined and successfully used in the past [5]:

Sim(L, Q) = |super(Q_conj_concept) ∩ super(L_conj_concept)| / |super(Q_conj_concept) ∪ super(L_conj_concept)|    (2)

where super(Q_conj_concept) represents the set of all the concepts contained in the ontology that are superconcepts of Q_conj_concept and super(L_conj_concept) contains all the concepts within the ontology that are superconcepts of L_conj_concept. Sim(L, Q) values lie in [0, 1]. In short, this similarity metric computes the relevance due to the in-session goals that L satisfies. The higher the number of query concepts that L lets the student learn, the higher the similarity value. The more similar the L concepts and the query concepts are, the higher the similarity value.
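A superconcept-based similarity of this kind can be sketched as follows. Treating it as intersection-over-union of ancestor sets is our assumption here (one common choice for hierarchy-based similarity), not necessarily the exact metric of [5], and the representation of the ontology as a map from each concept to its direct is_a parents is likewise hypothetical.

```python
# Sketch: similarity between two ontology concepts via their superconcept
# sets. `parents` maps each concept to its direct is_a parents; the
# intersection-over-union combination is an assumption.

def superconcepts(concept, parents):
    """All ancestors of `concept` under is_a, including the concept itself."""
    seen, stack = set(), [concept]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(parents.get(c, ()))
    return seen

def sim(q_concept, l_concept, parents):
    """Similarity in [0, 1]: 1.0 for identical concepts, lower the fewer
    ancestors the two concepts share."""
    sq = superconcepts(q_concept, parents)
    sl = superconcepts(l_concept, parents)
    return len(sq & sl) / len(sq | sl)
```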
In order to measure the pedagogical utility PU(L, S) that the LO L shows for a given student S, we have adopted an instructional strategy that promotes filling the student's knowledge gaps by including remedial knowledge [6]. The goal is to assign a high pedagogical utility to L if it covers concepts in which the student has shown a low competence level. This remedial knowledge could give priority to LOs that the student has not explored yet, or it could treat explored and unexplored LOs equally. We have decided to compute the pedagogical utility as follows:

PU(L, S) = 1 − AM(L, S)    (3)

where AM(L, S) is the arithmetic mean of the competence levels that the student S shows in the concepts that L covers, normalized so that AM(L, S) lies in [0, 1]. This way, PU(L, S) also takes values between 0 and 1, both included. In short, (3) computes a low value of PU(L, S) if the student knows well the concepts that L covers. High values of PU(L, S), on the contrary, are obtained if the student has poor knowledge of a high number of the concepts covered by L. In addition, (3) treats explored and unexplored LOs equally. The resulting quality metric defined in (1), together with the pedagogical utility defined in (3), allows the introduction of a considerable degree of personalization in the ranking stage. The final influence of the pedagogical utility and, as a consequence, the level of long-term personalization achieved in the definitive list of recommended LOs can be controlled by means of the value assigned to the weight α used in (1). Once the value of α used in (1) is fixed, the resulting recommender system exhibits a concrete behaviour with respect to the type of personalization it provides. We can obtain a more flexible behaviour if, in a given recommender, α can take different values. For instance, the value of α could depend on the kind of student that uses the recommender. High values of α could be appropriate for students whose profiles exhibit good performance.
These students seldom need knowledge reinforcement training and the recommender could focus on their in-session learning goals giving priority to those LOs that are highly correlated with the query. Low values of α, on the contrary, could be appropriate for students with lower performances, such that the recommender fosters filling knowledge gaps without significantly compromising the in-session interest that the recommended LOs can have for these students.
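The ranking-stage combination described above can be sketched as follows. The α-weighted sum and the reading of PU as one minus the mean competence over the LO's concepts follow the description of (1) and (3); the function names and the profile layout are our assumptions.

```python
# Sketch of the ranking-stage quality score: an alpha-weighted combination
# of query similarity and pedagogical utility, with PU taken as 1 minus the
# mean competence over the concepts the LO covers. Names are ours.

def pedagogical_utility(lo_concepts, profile):
    """High when the student has low competence in the LO's concepts."""
    levels = [profile.get(c, 0.0) for c in lo_concepts]
    return 1.0 - sum(levels) / len(levels)

def quality(similarity, lo_concepts, profile, alpha=0.5):
    """High alpha favors in-session goals; low alpha favors gap filling."""
    return alpha * similarity + (1.0 - alpha) * pedagogical_utility(lo_concepts, profile)
```

Retrieved LOs would then simply be sorted by this score in descending order, with alpha chosen per student as discussed above.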
4 Conclusions and Future Work

Our approach fosters high levels of personalization in content-based recommendation of LOs. The filtering step in the retrieval stage gives way to a light long-term personalization: when two students pose the same query within a session but their subject masteries differ, the sets of retrieved LOs could be different. Introducing the metric (1) in the ranking stage lets the recommender foster a strongly personalized recommendation. In order to compute the two partial relevance measures, Sim and PU, different approaches and metrics can be tried. This way, the ranking model presented here offers a generic framework for personalized recommendation of LOs that can be instantiated, resulting in diverse recommendation approaches.
We have carried out experiments in the Computer Programming domain but the approach could be easily transferred to other learning domains. In order to alleviate the steep learning curve related to posing a query, we plan to complement the reactive approach with a proactive strategy that proposes to the student LOs that could be of interest in a learning session, without the need for an explicit query. Preliminary work on the proactive strategy appears in [7]. Currently, we use the information about the navigation history recorded in the student profile in order to visually mark the recommended LOs that the student has already explored. A refinement of the quality metric could also take this fact into account in order to penalize these LOs.

Acknowledgments. This work has been supported by the Spanish Committee of Education and Science project TIN2006-15202-C03-03 and the UCM project PIMCD2008-136.
References
1. Farzan, R., Brusilovsky, P.: Social navigation support in a course recommender system. In: Wade, V.P., Ashman, H., Smyth, B. (eds.) AH 2006. LNCS, vol. 4018, pp. 91–100. Springer, Heidelberg (2006)
2. O'Mahony, M., Smyth, B.: A Recommender System for On-line Course Enrolment: An Initial Study. In: ACM Conference on Recommender Systems, pp. 133–136. ACM, Minneapolis (2007)
3. Gómez-Albarrán, M., Jiménez-Díaz, G.: Recommendation and Students' Authoring in Repositories of Learning Objects: A Case-Based Reasoning Approach. International Journal on Emerging Technologies in Learning 4(Special issue), 35–40 (2009)
4. Drachsler, H., Hummel, H., Koper, R.: Recommendations for learners are different: Applying memory-based recommender systems techniques to lifelong learning. In: Workshop on Social Information Retrieval for Technology-Enhanced Learning (2007)
5. González-Calero, P., Díaz-Agudo, B., Gómez-Albarrán, M.: Applying DLs for Retrieval in Case-Based Reasoning. In: International Workshop on Description Logics, pp. 51–55 (1999)
6. Siemer, J., Angelides, M.C.: Towards an Intelligent Tutoring System Architecture that Supports Remedial Tutoring. Artificial Intelligence Review 12(6), 469–511 (1998)
7. Ruiz-Iniesta, A., Jiménez-Díaz, G., Gómez-Albarrán, M.: Recommendation in Repositories of Learning Objects: A Proactive Approach that Exploits Diversity and Navigation-by-Proposing. In: IEEE International Conference on Advanced Learning Technologies. IEEE Computer Society Publications, California (2009)
Great Is the Enemy of Good: Is Perfecting Specific Courses Harmful to Global Curricula Performances? Maura Cerioli and Marina Ribaudo Computer Science Department, University of Genova {cerioli,ribaudo}@disi.unige.it
Abstract. We describe the lessons learned in a hands-on project on instructional design techniques and e-learning technologies. Our experience showed that, though each course designed within this experiment improved its results, the global results of the students were not completely satisfactory. Indeed, the restructured courses absorbed the attention of the students to the detriment of traditional programs. We argue that this side-effect is due to peculiarities of the Italian university system.
1 Introduction
Content Sharing, Flickr, YouTube, Social Networking, Wikipedia, . . . These are only some of today's buzzwords, the words of the Web 2.0 era, a term coined to denote a set of principles and practices characterizing the "new" web, intended as a social and participatory platform in which the content is produced by a multitude of users, e.g., the wisdom of crowds [1].

E-learning also takes advantage of the potential of Web 2.0. Nowadays the emphasis is placed on Collaborative Learning, a socially oriented e-learning strategy in which collaboration plays the major role. The network is no longer a mere tool for content distribution but rather a facilitator of interaction among the participants involved in the educational process [2], and the literature clearly states that collaborative learning is superior to individual learning.

This paper describes our experience at the University of Genova, a traditional university where a Moodle-based platform is used to complement the daily face-to-face educational process. Although the number of users enrolled in the platform shows a steep growth curve, the "quality" of the average use of the available software tools is poor, certainly far from the Web 2.0 philosophy. To fill this gap, a course on instructional design [3,4] has been offered to faculty members, and this paper reports on that experience. Our role in the project has been that of students, and we think we were privileged participants: we are computer scientists, so it was easy for us to gain confidence in the use of the involved technologies, and the participation of teachers from our curriculum has been quite high, so we can compare the results and see global trends.

In Section 2 we describe the context of our university and the project-based formative model on instructional design methodologies in which we have been involved. U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 651–656, 2009.
© Springer-Verlag Berlin Heidelberg 2009
Section 3 describes the results for our specific subjects in the Computer Science curriculum. In Section 4 we discuss the impact of the individual course changes on the overall curriculum year and draw a few conclusions.
2 The Context of Our Experience
The University of Genova is a medium-size traditional university offering face-to-face courses. Since the beginning of 2005, efforts have been made to improve the quality of teaching through the introduction of ICT support in the educational process and, accordingly, of a Moodle-based Learning Management System (called AulaWeb 1) offered as a central service at the university level [5].

Though the numbers and the fast growth of AulaWeb use are greatly encouraging, so far its use has been mostly unsophisticated, with downloading of material (slides, papers, lecture notes) as the prevalent activity. But it is now well accepted that the best learning occurs when students are actively involved in the learning process, possibly co-constructing pieces of knowledge, and software tools such as wikis, blogs and fora can be used to introduce some form of collaboration within the class. Initially, resource sharing had the lion's share of the online activities. This phenomenon is well known in the literature for first attempts to introduce web-based technologies into a traditional educational process, so far mostly based on lecturing and information giving.

To make the technological leap and actually take advantage of the full power of Web 2.0 support, instructional strategies and teaching styles have to change, and to speed up this process professors have to be trained with the help of professionals, so that they can readily overcome their fear of technology and change their way of teaching to accommodate the novelty. In the last year our university has been involved in such a project. The action Web Enhanced Learning (WEL) was launched in Apr. 2008, with the specific objectives of devising and experimenting with a model for the transfer of instructional design knowledge and skills to subject-oriented university teachers. It is worth noting that WEL was not centered on the mere transfer of technological knowledge.
Indeed, a short analysis showed that the main cause behind the limited usage of tools supporting interactive ways of learning was the lack of room for such activities in the classical in-presence teaching/learning process preferred by almost all of our faculty. The WEL course consisted of a few initial plenary lectures followed by face-to-face individual meetings with instructional design experts. A plenary meeting was organized in Feb. 2009 to share the experiences of those colleagues who had been online in the first semester. Thanks to the introduction of a team of instructional specialists supporting faculty members, "many professors indicated their instructional strategies and teaching styles had changed." Comparing the usage data of the intervals April 07-08 and April 08-09, we can observe that the usage of instruments like wikis and glossaries increased this year by almost 75% and 140% respectively, a percentage of growth much larger than that of the courses (24%) and the resources (41%). Moreover,
1 http://www.aulaweb.unige.it
the numbers suggest that the approach to the use of the tools is also improving. For instance, not only has the number of wikis increased, but the numbers of pages and versions have grown dramatically, at about 4 times the growth rate of the wikis themselves! Thus, we can infer not only that we have a larger number of wikis, but also that they are used differently, with more online activities going on.
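The growth figures above are plain relative increases. As a check of the arithmetic (with illustrative counts, since the absolute usage numbers are not given in the text):

```python
def pct_growth(old, new):
    """Relative growth between two usage counts, in percent."""
    return 100.0 * (new - old) / old

# Example with made-up counts: 100 wikis growing to 175 is a 75% increase,
# matching the order of growth reported for wikis in the text.
```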
3 Our Experience: Benefits for the Involved Courses
We now describe our personal experiences with three subjects delivered in Sept.–Dec. 2008 with the benefit of the WEL course. The first two courses are both mandatory and expected to have the same audience, that is, the students of the third and last year of the first cycle. The third course is an advanced course for the second cycle, and it is interesting to compare the students' different approaches to this and the previous courses.

Programming Advanced Techniques (TAP2) has, since its introduction in 2003/04, been organized around a project requiring the students to individually realize a component. The course has a traditional organization, with lessons and small laboratory activities finalized to the project (which used to be mostly deserted), and was loosely supported by AulaWeb, used to distribute material and technically support the students through a forum. In the past, more than half of the roughly 90 enrolled students never showed up for a lesson, nor completed any activity, including the project. The students justified their lack of participation by the perceived missing connection of the laboratory activities to the project, and by their unappealing individual organization. But, by ignoring the laboratory activities, they did not acquire the knowledge needed to work on the project, so they failed it (and the overall exam). Thus, during the WEL project, the TAP course was redesigned around a group activity simulating in the small the work for the project. As this activity was clearly connected to the project and not individual, 92 students out of 108 participated in it, and all the 89 students active in the first semester successfully completed it. The groups for the activity were formed by teachers and strictly structured, with a specific role for each participant (for a discussion of group formation criteria and team working, see e.g., [6]).
All the interactions were required to take place on AulaWeb, and indeed two thirds of the final score was awarded on the basis of the posts on the dedicated fora. Each group had a private forum and wiki as workspace, and the students in the role of managers could interact among themselves and with the teacher in a devoted forum. The students were enthusiastic about the activity and spent much more time on it than expected, so that the size (but not the technical difficulty) of the final project had to be reduced to meet the expected average effort for the course. On the other hand, the results of this year were impressive (see Table 1). The number of active students skyrocketed, and the percentage of students who passed the project within the semester more than tripled w.r.t. the past, exceeding even the percentage who used to pass the project in a whole year.
2 In the following we will use the acronym of the Italian name of the course.
Table 1. Results of IS and TAP restricted to the first semester

                          TAP                        IS
                   06/07   07/08   08/09     06/07   07/08   08/09
Enrolled Students     86      85     108        86     114      97
Active Students   44.19%  35.29%  82.41%    75.58%  75.44%  87.63%
Passed Exams       6.98%   5.88%  18.52%    18.60%  15.79%  20.62%
Passed Projects    6.98%   8.24%  34.26%    32.56%  31.58%  72.16%
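The headline claim that TAP pass rates "more than tripled" can be checked directly against Table 1; a small sketch with the passed-project percentages transcribed from the table:

```python
# Passed-project percentages for TAP, transcribed from Table 1.
tap_passed_projects = {"06/07": 6.98, "07/08": 8.24, "08/09": 34.26}

# In 08/09 the pass rate more than tripled w.r.t. either previous year.
assert tap_passed_projects["08/09"] > 3 * tap_passed_projects["07/08"]
assert tap_passed_projects["08/09"] > 3 * tap_passed_projects["06/07"]
```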
Software Engineering with Project (IS) introduces the main concepts of software engineering, relying on a group project on realistic software development, taking up about half the credits of the course, to let the students experience such concepts. Until five years ago, the project was traditionally managed: the students worked on an assignment at their own pace, with the final deadline as the only constraint. But, probably due to the inherent complexity of this kind of project, most groups were unable to complete it. Thus, in the academic year 2004/05 the project was totally restructured, with the introduction of phases, intermediate milestones and more interaction between students and teachers (see e.g., [7] for a detailed description). With the current organization, the results were much improved (see Table 1 for the data of the last two years).

Since IS already had a lot of interaction and community construction, the changes introduced according to the WEL project were mostly technical. The structure of the course on AulaWeb was reorganized to make it easier to find information; some activities, such as the common development of exercise solutions, which were formerly loosely supported by an all-purpose forum, got their own devoted wiki, and so on. The results were quite disappointing, being comparable with those of the previous years, as shown in Table 1. The active students are about 12% more and the completed projects have doubled, but the percentage of passed written examinations is only slightly better than in past years. Apparently the students focused on the project to the detriment of their preparation for the written examination; in particular, all optional activities, mostly aimed at such preparation, were dropped altogether. We think that IS already took its quality leap when the project organization changed, and now technical adjustments give only small improvements.
Network’s Applications 2 (AR2) is an optional course, for the students of the second cycle of study, and this edition involved just 13 participants. Lectures varied from Web 2.0 technologies to the theory of Complex Networks, and the choice was that of mixing in-presence lessons with a small online activity for the first part of the course, that related to Web 2.0 technologies. In the first two weeks of Oct. 2008, students, split into 4 groups formed by teachers, have been asked to collaboratively write on a wiki a CookBook of software examples using technologies such as Web Services, REST, Ajax. For each example (individually chosen by each group) the students had to describe the product, the software language and the software libraries selected to develop it, and the overall architecture. All the decisions have been taken by sending posts to a technical forum
associated with the wiki. 2 of the 13 initial students dropped out when they realized they could not meet the online activity deadlines. Indeed, these 2 students were in their first cycle of study and had decided to anticipate the subject, but experienced interference with other courses. The other students realized imaginative projects, which were presented to the class in a demo session two weeks after the end of the online phase. Communication has been intensive: 71 posts were sent to the technical forum (a notable volume, recalling that this is a niche course with only 4 groups). The construction of a community and the healthy communication habits acquired also positively influenced the traditional part of the course. Indeed, 95 messages were sent to another technical forum suggesting scientific papers, URLs, software libraries, . . . , and a further 75 messages were posted to an all-purpose forum. The oral examination consists of a discussion of all the products plus some questions on the theory introduced during the course. 9 (out of 11) students took and passed the exam within the semester, with an average mark of 28 (out of 30).
4 Analysis of Global Data: Lessons Learned
The data shown in Section 3 prove that the WEL project was successful: the involved courses experienced an improvement in their results (more or less, depending mostly on how much room for improvement there was), students participated in the proposed activities, and so forth. However, this is only one side of the coin. Indeed, during the semester we got negative feedback from other courses having the same expected audience as IS and TAP. Student participation was decreasing, with peaks of absence coinciding with the deadlines of activities for IS and TAP. For instance, another mandatory course in the same semester in 2007/08 had 80 enrolled students, 27 of whom took the exam within the semester with 22 positive results; the same course in 2008/09 had 70 enrolled students, 15 of whom took the exam within the semester with 9 positive results.

It is important to note that for both IS and TAP we take care not to exceed the expected effort for average students, monitoring their efforts during the semester and adjusting the assignments accordingly. Moreover, we globally plan the deadlines of the activities of the different courses to avoid conflicts. Hence the interferences should be non-existent or very limited, while instead they are devastating. We think that the main reason behind these failures is a peculiarity of the Italian university system, where the concept of passing from one year to the next does not exist: students have to eventually collect positive marks in all the courses in their curriculum, but they can do so over an arbitrary number of academic years, disregarding the learning agreement signed at the beginning of each year. This scenario does not push the students to balance their efforts among the courses by requiring them to pass all of them, and so they tend to distribute their time on the basis of their fancy for some topics, their enthusiasm for a proposed activity, the teacher's charisma. . .
But in this way students quite often invest too much effort in an activity or in a specific
course to the disadvantage of others, so that improving some courses damages the others, even if each course is correctly planned and the internal deadlines are globally orchestrated. Actually, we think that the current trend toward a more participative and technologically supported learning model will amplify the problem, introducing a divide between the modern and the traditional courses, until the latter become completely extinct. The delicate point will be to find a balance among the different competing courses so that students can participate in all of them. To this end, we argue that it is necessary to change the university organization so that students can be required to take groups of exams in the same year. Moreover, the instructional design approach should be moved up one level and applied to the design of the overall curriculum, so that the different courses and their activities can be harmonized.

A totally different story is that of the AR2 course. Indeed, in that case we did not experience any kind of interference with the other courses. Both the students and the teachers claim that the proposed activities did not create problems. We think that the reason behind the difference is the maturity of the students, who have learned their limits and hence are able to sensibly plan their curriculum and distribute their effort.
References
1. Surowiecki, J.: The Wisdom of Crowds. Why the Many Are Smarter Than the Few. Abacus (2005)
2. Trentin, G.: La sostenibilità didattico-formativa dell'e-learning. Social networking e apprendimento attivo. Franco Angeli (2008) (in Italian)
3. Dick, W., Carey, L., Carey, J.O.: The Systematic Design of Instruction, 6th edn. Merrill (2004)
4. Leshin, C.B., Pollock, J., Reigeluth, C.M.: Instructional Design Strategies and Tactics. Education Technology Publications, Englewood Cliffs (1992)
5. Ribaudo, M., Rui, M.: AulaWeb, web-based learning as a commodity. The experience of the University of Genova. In: 1st Int. Conf. on Computer Supported Education, Lisbon, Portugal (2009)
6. Oakley, B., Felder, R.M., Brent, R., Elhajj, I.: Turning Student Groups into Effective Teams. Journal of Student Centered Learning 2 (2004)
7. Astesiano, E., Cerioli, M., Reggio, G., Ricca, F.: A phased highly-interactive approach to teaching UML-based software development. In: Staron, M. (ed.) Proc. of Educators' Symposium at MoDELS 2007. Research Reports in Software Engineering and Management, IT University of Göteborg, pp. 9–18 (2007)
Evolution of Professional Ethics Courses from Web Supported Learning towards E-Learning 2.0 Katerina Zdravkova1, Mirjana Ivanović2, and Zoran Putnik3 1
University Ss Cyril and Methodius, Faculty of Natural Sciences and Mathematics, Institute of Informatics, Skopje 2,3 University of Novi Sad, Faculty of Natural Sciences and Mathematics, Department of Mathematics and Computer Science, Novi Sad
[email protected], {mira,putnik}@dmi.uns.ac.rs
Abstract. Skopje and Novi Sad share several joint courses on Professional Ethics at undergraduate and postgraduate level. These courses have been delivered to almost 1000 students from 14 different target groups. Over seven years, teaching, learning and assessment have steadily evolved from traditional Web supported learning, through blended learning, towards Web 2.0. This paper presents all the stages of the courses' evolution using several learning management systems, and the effort to enhance teaching, learning and the active contribution of all the actors in the educational process. Particular attention is paid to our latest experience using Moodle and its social networking aspects in education. This survey covers all the activities during course delivery, including student workload, the grading system, and the teachers' effort to maintain the courses. The students' encouraging impressions regarding content delivery and assessment, their personal opinions about the impact of e-learning 2.0 on the quality and quantity of newly acquired knowledge, and their sincere suggestions to persist in the same direction are the greatest assurance that social networks are currently the best way to deliver computer ethics courses. This approach seems to be the most exhausting and the most challenging for the teachers, but at the same time it offers the best balance between the effort undertaken and the results obtained. Keywords: Web supported learning, Blended learning, Social networking.
1 Introduction
Almost 15 years ago, Markus [1] concluded that, despite the integration of multimedia and the first e-mail etiquette, the negative social impact of new technology "may not prove easy to eradicate". In their survey from 2003 [2], Morahan-Martin and Schumacher claimed that Internet use caused loneliness. And they were not alone: the negative social impact of information technology was usually considered predominant. Starting from 1997, when Jay Cross [3] coined the term e-learning, i.e., Internet-enabled learning, the Internet became one of the basic media for delivering teaching and learning content. Traditional face-to-face education lost its role, many students decided to attend on-line classes, and some students became even more alienated than before. U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 657–663, 2009. © Springer-Verlag Berlin Heidelberg 2009
The first attempt to "socialize" Internet users was the site Classmates.com [4]. Launched too early, it could not attract much attention, but soon afterwards social networking sites caused a revolution among Internet users. It was high time to switch from Web 1.0 to Web 2.0. Recent research by Nielsen Online [5] reports that the top 10 social networking sites had almost 76 million unique visitors in September 2008; compared with 33 million visitors in September 2007, the average growth is 167%. In Web 1.0 a few content authors provided content for a wide audience of relatively passive readers. Web 2.0 is already transforming our social lives and is quickly becoming a competitive tool for education [6].

Very important conclusions connected to the usage of social networks in education are given by De Weaver [7]. A survey conducted on a large group of students and instructors revealed, though, that newer forms of activities, like collaborating and sharing information with a community, are still less popular. Social software has the potential not only to enhance particular aspects of teaching and learning, but also to significantly contribute to the creation of new forms of these activities. Bryant [8] summarises potential developments in this area as: 'The adoption of social software tools, techniques and ideas will be the most important and visible example of the use of emerging technology in education over the next few years'. Another example of related work is reported by Franceschi [9], where a suggestion for the improvement of social networks within e-learning systems is given. The last, but not least, research is done within the Comtella project [10, 11, 12]. It is an impressive example of the implementation of Web 2.0 in blended classes, based on a self-developed peer-to-peer file and bookmark sharing system aimed at sharing papers in several courses, including a professional ethics course.
2 Related Work
The emergence of Web 2.0 technologies promotes the growth of service-based applications and greater user control over content and connection [13]. Recent developments in web-based services and the enhancement of collaborative tools have fuelled the demand for similarly specified educational software and services. Many universities across the world now deploy blogs, ePortfolios and educational social software for use by the academic community. In spite of the widespread support of these learning tools, there are still not enough reports and analyses to appropriately validate the level of their utilization by tutors and students. But there are some publications bringing more or less optimistic results. The main analysis in [14] was based on observing student access to and use of educational tools, as well as on the anonymous recording of student experiences of using other social software in a non-educational context. A more complex view of educational activities is given in [15], which concluded that the usage of social tools allows students to share capabilities and knowledge, bringing a synergetic effect to learning and life as well. A recent paper by Bernsteiner [16] presents the results of an empirical survey highlighting the benefits of Web-based social software tools from the student's point of view. As motivation varies between students, lecturers have to foster it during lessons. Fortunately there are students who are highly motivated and who create content and add it to the wikis [17].
3 Setting Up the Scene
In the last decade, many LMSs have been developed to support new education trends. Probably the most popular, particularly among educators, is Moodle, with more than 28 million users and supporting the delivery of more than 2.5 million courses [18]. Starting from the academic year 2005/06, both institutions presented in this paper switched from a static LMS to Moodle (Skopje), or from static usage of Moodle to social networks (Novi Sad), showing that the static LMSs at both institutions had become obsolete. Encouraged by the appreciation of more than 3500 active participants at both institutions, we can claim that the success of the social network strategy in e-education adopted at our institutions is evident. One additional note should be made here: while most research papers and experience reports present positive attitudes and opinions about social networks in general [19] and their usage within e-learning [20], there are some negative positions too [21].
4 Evolution of the Courses: From Static Form to the Social Network
The development of educational technologies in the last decade leads us to think of learning as both a personal and a collective experience. Cooperative and collaborative learning promote the use of social tools in order to involve all e-learners in building common knowledge.

4.1 Stage One: Web Supported Learning
The first delivery of the undergraduate course on ethics started in October 2002 in Skopje. All the contents were prepared by the teacher and by students. Presentations were oral, and they were followed by small discussions during the lecture. The contents were periodically uploaded to a static course site. Mutual communication between students and the teacher was either face-to-face or by e-mail.

A similar course in Novi Sad was first realised in October 2005. While presented to the students through oral lectures and PowerPoint presentations, the whole teaching material was published on a web site using Moodle. A growing interest in the topics presented within the course and in the methods of course delivery was confirmed by a higher number of students in each new school year. Moodle was used for the static presentation of teaching material, but since the Bologna principles required class attendance, it served more as a material repository than for any deeper purpose. Although discussions, a valuable social element of learning, were announced at least a week in advance, the feedback of the students, regardless of generation, was rather poor. Discussions were directed by the teacher, usually involving very few participants. Forums were the only elements of social networks used, for the publication of announcements of important events and for correspondence of students with lecturers. They were selected as the primary way of communication as they are mostly topic (not people) focused.
4.2 Stage Two: Blended Learning
The course for the first generation of postgraduate students in Novi Sad and in Skopje started in 2006, when Moodle was implemented, initially aiming to augment face-to-face lectures. Initially, Moodle was mostly used as a repository of teaching materials, either as a fixed collection of files or as an active set of animated e-lessons. Still, the repository was in essence static. Communication was aimed towards teachers and teachers only. While lecture attendance was part of the obligatory requirements for the new generations of students, e-communication was still an idea worth introducing, as an attempt to perfect the course.

In order to avoid exhausting oral examinations, an e-test with 250 questions was designed in Skopje. Initially weak results soon became impeccable. After a small investigation, it emerged that students who had already finished the e-test had copied the questions and their correct answers and distributed them to those taking the exam later. This student fraud showed that it was high time to change the delivery of the course, to enhance active student participation, and to change the grading scheme. In Novi Sad, a similar repository of around 200 questions exists, but it is used within "regular" classroom tests rather than for e-testing. Since this assumes the presence of the assistant, elements of cheating, while existing, were not as flagrant.

4.3 Stage Three: Active Contribution of All the Participants
The experience gathered during the usage of Moodle in some other joint courses [22] showed that the inclusion of other elements available within the LMS, like forums, chats, or e-mail, could create a more dynamic system, known in contemporary research as a social network. This academic year, almost all the students participated in the social network rather freely.
Even those who are recognized as shy and silent persons during lectures found themselves very involved in discussions, arguments, and even quarrels with other colleagues when it came to questions important to them. Yet this does not come as a surprise, given the tendency of introverted students to reveal their opinions in electronic communication, when not literally facing the rest of their colleagues. Another point worth mentioning is that the created social networks influenced the widening of the topics in question. Even though the points to be discussed were strictly defined at the beginning, discussions very often diverged in various directions, touching every matter connected to the original one that was interesting to students.

As a natural improvement, forums were used to apply the well-known technique of role-playing games. Students were given certain roles and were invited to participate in a scenario connected with some ethical and moral issues, discussing and defending the opinions represented by their roles. During a fortnight, student teams actively defended their roles, with an average of 9.73 posts per student. Teachers were also involved in the discussions, to direct them. At the end, using a supporting forum, students prepared team reports for their groups. This forum had 10.07 posts per student, showing the usefulness of on-line discussions. It took some time for students to start communicating and sharing opinions, but each year it eventually came to this point. Probably because of the shared experience of previous generations, the time between those phases has been shortening.
Evolution of Professional Ethics Courses
661
5 Conclusion and Intended Evolution of the Courses

An obvious benefit of the steady evolution of our joint courses towards social software was the active involvement of all the students, including those who are usually idle. With the new "socialised" approach, students were motivated, stimulated and sometimes provoked to reveal their own ideas. To support their assertions, they dug into different sources to discover material in favour of their opinions. Such research stimulated their intellectual capacities and prepared them for future research. On many occasions, the research was not directed at computer ethics only, but also at related areas. A further great benefit of Web 2.0 in our course was the possibility of relaxed and, at the same time, efficient group collaboration. Using forums, students virtually met their colleagues, followed the development of the group project, and presented their findings. The progress of the research and of the group essay preparation was clearly evident at every moment, and individual contributions were obvious, so nobody could object that his/her contribution was neglected or the freedom of speech withdrawn. The grading facilities of Moodle were another asset. At every moment, students knew their current results exactly, including whether their contribution had been successful or not. The usual student complaints connected with grading, such as underestimation of their own workload or overestimation of the others', were impossible, since the entire communication and effort were completely open to all the students and to the teachers. Moreover, completely transparent communication was an ideal way to judge personal achievements in relation to the achievements of the others. Frequent discussions on different topics involving all the students and the teachers were the best way to stay in line with the newest events related to the course, including breaking news. As a result, the awareness of students and teachers of the course increased.
At the same time, the repository of teaching materials grew. The last, but certainly not the least, advantage of Moodle was the impossibility of cheating and faking personal outcomes. Namely, students could not finish assignments behind schedule and claim that the deadline was imprecise, or that they had delivered the assignment on time, because all the closing dates were visible and Moodle kept records of all their activities. Furthermore, even if somebody decided to do the assignments instead of another colleague (which is still common in the region), he/she could not replace the actual student in the forums. Apart from these advantages, e-learning 2.0 brings some problems. First of all, the technical prerequisites must be faultless: constant availability of the server, an impeccable Internet connection, and permanently high scalability. At the beginning of the academic year, occasional slow responses occurred, due to many users competing for the same resource close to a final deadline. Fortunately, this problem gradually settled as students became more professional. Social networking was exhausting both for the students and for the teachers. Whenever students were not online, they could not actively participate. However, unlike the students from the survey mentioned earlier [21], our students were much more enthusiastic about e-learning 2.0. They never complained that the fully transparent approach was a problem for them. Nevertheless, social software in education is a threat to student privacy. We are aware that this is one of its weakest aspects, and it cannot easily be resolved.
662
K. Zdravkova, M. Ivanović, and Z. Putnik
References

1. Markus, M.L.: Finding a happy medium: explaining the negative effects of electronic communication on social life at work. ACM Transactions on Information Systems (TOIS) 12(2), 119–149 (1994)
2. Morahan-Martin, J., Schumacher, P.: Loneliness and social uses of the Internet. Computers in Human Behaviour 19(6), 659–671 (2003)
3. Cross, J.: An informal history of eLearning. On the Horizon 12(3), 103–110 (2004)
4. Boyd, D.M., Ellison, N.B.: Social Network Sites: Definition, History, and Scholarship, http://jcmc.indiana.edu/vol13/issue1/boyd.ellison.html
5. Nielsen online blog "Connecting the dots", http://blog.nielsen.com/nielsenwire/wp-content/uploads/2008/10/press_release24.pdf
6. Franklin, T., Van Harmelen, M.: Web 2.0 for Content for Learning and Teaching in Higher Education, http://www.jisc.ac.uk/publications/publications/web2andpolicyreport.aspx
7. De Wever, B., Mechant, P., Veevaete, P., Hauttekeete, L.: E-Learning 2.0: social software for educational use. In: Proc. of 9th IEEE International Symp. on Multimedia, pp. 511–516 (2007)
8. Bryant, L.: Emerging trends in social software for education. British Educational Communications and Technology Agency, Emerging Technologies for Learning (2007)
9. Franceschi, K., Lee, R., Hinds, D.: Engaging E-Learning in Virtual Worlds: Supporting Group Collaboration. In: Proc. of 41st Hawaii International Conf. on System Sciences (2008)
10. Vassileva, J.: Harnessing P2P Power in the Classroom. In: Lester, J.C., Vicari, R.M., Paraguaçu, F. (eds.) ITS 2004. LNCS, vol. 3220, pp. 305–314. Springer, Heidelberg (2004)
11. Webster, A.S., Vassileva, J.: Visualizing Personal Relations in Online Communities. In: Wade, V.P., Ashman, H., Smyth, B. (eds.) AH 2006. LNCS, vol. 4018, pp. 223–233. Springer, Heidelberg (2006)
12. Vassileva, J., Sun, L.: Using Community Visualization to Stimulate Participation in Online Communities. e-Service Journal 6(1), 3–40 (2007)
13. O'Reilly, T.: What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software (2005)
14. Stepanyan, K., Mather, R., Payne, J.: Awareness of the capabilities and use of social software attributes within and outside the educational context: moving towards collaborative learning with Web 2.0. In: Proceedings of Conference ICL 2007, pp. 1–9 (2007)
15. Itamar, S., Bregman, D., Israel, D., Korman, A.: Do eLearning Technologies Improve the Higher Education Teaching and Learning Experience? In: Fifth International Conference on eLearning for Knowledge-Based Society, pp. 24.1–24.7 (2008)
16. Bernsteiner, R., Ostermann, H., Staudinger, R.: Facilitating E-Learning with Social Software: Attitudes and Usage from the Student's Point of View. Int. J. of Web-Based Learning and Teaching Technologies 3(3), 16–33 (2008)
17. Drazdilova, P., Martinovic, J., Slaninova, K., Snasel, V.: Analysis of Relations in eLearning. In: Proc. of IEEE/WIC/ACM Int. Conference on Web Intelligence and Intelligent Agent Technology, pp. 373–376 (2008)
18. Moodle statistics, http://www.moodle.org/stats
19. Alexander, B.: Web 2.0: A New Wave of Innovation for Teaching and Learning? EDUCAUSE Review 41(2), 32–44 (2006)
20. Miler, A.: Moodle from a Students Perspective, http://dontbeafraid.edublogs.org/2008/11/05/modle-from-a-students-perspective
21. Iadecola, G., Piave, N.A.: Social Software in Educational Contexts: Benefits and Limits. In: Fourth International Scientific Conference eLearning and Software for Education
22. Budimac, Z., Putnik, Z., Ivanović, M., Bothe, K., Schuetzler, K.: On the assessment and self-assessment in a students teamwork based course on software engineering. Computer Applications in Engineering Education, early view (2009), doi:10.1002/cae.20249
23. Costa, C., Beham, G., Reinhardt, G., Sillaots, M.: Microblogging in Technology Enhanced Learning: A Use-Case Inspection of PPE Summer School (2008)
Towards an Ontology for Supporting Communities of Practice of E-Learning "CoPEs": A Conceptual Model

Lamia Berkani1 and Azeddine Chikh2

1 National Institute of Computer Science, INI, Algiers, Algeria
[email protected]
2 Information Systems Dept., King Saud University, Riyadh, Saudi Arabia
[email protected]
Abstract. The Community of Practice of E-learning (CoPE) represents a virtual space for exchanging, sharing, and resolving the problems faced by actors in e-learning. One of the major concerns of CoPEs is to favor practices of reuse and exchange through the capitalization of techno-pedagogical knowledge and know-how. In this paper, we present a conceptual model of CoPEs. This model constitutes the theoretical platform upon which an ontology dedicated to CoPEs will be built. This ontology aims to annotate the CoPE's knowledge resources and services, so as to enhance individual and organizational learning within CoPEs.

Keywords: E-learning, CoP of e-learning, O'CoPE, ontology concepts.
1 Introduction

Today, we are witnessing a fast and significant expansion of the e-learning domain. Companies, schools, universities, and organizations of all sizes are currently using e-learning as a tool for training, learning and professional development. The increasing interest in e-learning can be seen in the development of large projects launched everywhere in the world, and in the proliferation of specifications and standards for e-learning systems. However, despite the large quantity of knowledge accumulated in this field, know-how and feedback from acquired experience are not always capitalized and exchanged in a systematic way between its actors. Furthermore, the field faces a number of challenges related to: (1) differences in the interpretation of its concepts; for example, software tools (e.g. simulation or translation tools) used in an online course are sometimes considered as resources and sometimes as services; (2) the complexity resulting from the multiplicity of its standards (LOM, SCORM, IMS-LD, IMS-LIP, ...) and the heterogeneity of its tools, such as authoring tools and LMSs (Learning Management Systems) like Moodle1, Acolad2 or Blackboard3; (3) the diversity of its teaching domains, from arts, literature, fundamental and
1 http://moodle.org/
2 http://acolad.u-strasbg.fr/
3 http://www.blackboard.com/
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 664–669, 2009. © Springer-Verlag Berlin Heidelberg 2009
Towards an Ontology for Supporting CoPEs
665
applied sciences to engineering, requiring different educational approaches and techniques. Accordingly, actors involved in e-learning must efficiently exchange both their problems and their experiences. Based on the work done on Communities of Practice (CoPs) and their success in collaborative learning [1], especially in the domain of teaching [2; 3; 4; 5], we propose to extend this approach to e-learning as a sub-domain of teaching. We thus consider CoPEs (Communities of Practice of E-learning) as a virtual framework for exchanging and sharing techno-pedagogic knowledge and know-how between actors of e-learning. In [6; 7] we defined a CoPE and its underlying concepts. In the present paper, we refine and enrich the previous definitions through a conceptual model. This model constitutes the theoretical grounding upon which an ontology dedicated to CoPEs will be built. The latter will offer a uniform vocabulary to explicitly specify all the CoPE's concepts, with which the CoPE's resources and services can be annotated, so as to support the learning processes in the CoPE. In the following, Section 2 introduces the background of our research and some related work. Section 3 presents our main contribution, the conceptual model for CoPEs. The conclusion in Section 4 highlights the main results and opens some future perspectives.
2 Background and Related Work

A CoPE is a group of professionals in e-learning who gather, collaborate, and organize themselves in order to: (i) share information and experiences related to e-learning development and use; (ii) collaborate to solve e-learning problems together (e.g. interoperability, adaptivity) and to build techno-pedagogic knowledge and best practices; (iii) learn from each other and develop their skills in instructional engineering; (iv) promote the use of e-learning standards (IMS-LD, SCORM, LOM, ...). We address in this paper the need to model the CoPE's concepts, in order to enable the formalization of core aspects of CoPEs and to support the exchanges and the communication within the CoPE automatically. These models will constitute the basis from which an ontology dedicated to CoPEs can be built. The latter aims at providing a uniform vocabulary to explicitly specify all the CoPE's concepts and relations, and will help to annotate semantically both the knowledge resources and the services used in a CoPE. Such semantic annotations (for example, on the profile, role and competencies of a CoPE member, on the effective uses of a tool by the CoPE, on the collaboration or cooperation mode preferred by the CoPE for a given common activity, on the arguments leading to a decision or the solution of a problem, ...) can then be used by services such as knowledge search services, and can thus support the learning processes in the CoPE. Most of the work done in the teaching domain through online CoPs did not consider a formal model of the concepts. For example, Teacher Bridge [8] proposed a set of online tools to help create a community of teachers using dynamic web page creation. This work lacks semantic annotation of knowledge, especially tacit knowledge, so there is no means to retrieve it. However, some existing works attempt to formalize the CoP's concepts: Dubé et al.
identified in [9] twenty-one characteristics for distinguishing and comparing Virtual CoPs (VCoPs); however, they did not try to formalize VCoPs based on common conceptual models.
666
L. Berkani and A. Chikh
On the other hand, the Palette project [10] proposed several models useful for describing a CoP [11]: community, actor, learner profile, competency, collaboration, process/activity, and lessons learnt. These models are dedicated to CoPs in general and were built based on an analysis of information sources gathered from twelve existing CoPs.
3 Contribution: A Conceptual Model for CoPEs Fig. 1 depicts the most important concepts of the CoPE: “Community”; “Actor” with “Role” and “Profile”; “Activity”; “Competency”; “Knowledge”; “Environment”.
Fig. 1. Main CoPE’s concepts
Community
CoPEs can be characterized by three fundamental features [12]: (i) a mutual engagement, which indicates how the CoPE functions and binds members together into a social entity; (ii) a joint enterprise, which indicates what the CoPE is about, as understood and continuously negotiated by its members; and (iii) a shared repository, which represents the CoPE memory, including a set of resources (knowledge, learner profiles, outcomes, ...). Community and practice are further characteristics of a CoPE: community builds the relationships that enable collective learning, while practice anchors the learning in what people do.

Actor and Role
Actors of CoPEs mainly work in the e-learning domain, with different levels of skills and knowledge depending on their training and experience. They might be involved as: (1) members; (2) contributors (individuals participating in particular activities or during some specific periods of the CoPE's life cycle); or (3) partners (entities supporting the CoPE). For a better management of their work, the actors can organize themselves in groups on the basis of their objectives and concerns. A group may include actors with different roles. We distinguish two main roles: support member and learner member. The former contributes to the continuous and effective functioning of the CoPE (e.g. coordinator, animator, reporter, manager and administrator), while the latter contributes to the realization of the current activities of the CoPE. Each role is described with data that is either already defined by the IMS Learning Design specification (IMS-LD) or specific to CoPEs (i.e. enriched with CoPE concepts in order to increase its expressive power in modeling learning situations in CoPEs). Fig. 2 shows the elements that have been added: "Category", "Rights", "Profile", and "Participation".
Fig. 2. Conceptual model of Role
Activity We propose to classify the activities carried out within CoPEs into four categories: Analysis activities; Design activities; Implementation activities; and Utilization activities, corresponding to the steps of an e-learning development life cycle.
Fig. 3. Conceptual model of Activity
Each activity is described with data that is either already defined by IMS-LD or specific to CoPEs (i.e. enriched with CoPE concepts in order to capture the richness of the interactions inherent to collaborative activities, and more particularly those within CoPEs). Fig. 3 describes the elements that have been added: "Approach", "Metadata", "Classification", "Execution", "Result".

Environment
The environment is composed of resources and services. Resources are classified by activity type into: Analysis resources, Design resources, Implementation resources, and Utilization resources. We classify the CoPE's services, as in the Palette project [10],
into three categories: (1) Knowledge Management (KM) Services; (2) Mediation Services; and (3) Information Services. To describe the service sub-concept, we have adopted the Group-service structure proposed by Hernández-Leo et al. in [13], endowed with the necessary information. Moreover, we have enriched this structure and proposed some new elements, among them:
"Service category": indicates whether the service is a KM, Mediation or Information service.
"Service mission": specifies the nature of the required service (e.g. edition, communication, argumentation, help and research aspects); its category is determined by "Service category".
"Service profile": indicates the technical characteristics and techniques, as well as information about connection and access.
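As a minimal illustration (our own sketch, not the authors' implementation; all class and field names are invented), the enriched service structure, including the "Composed by" relation described below, could be rendered as a simple data structure:

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """Hypothetical sketch of the enriched Group-service structure."""
    name: str
    category: str    # "Service category": KM | Mediation | Information
    mission: str     # "Service mission": e.g. edition, communication, argumentation
    profile: dict = field(default_factory=dict)      # "Service profile": technical details, access
    composed_by: list = field(default_factory=list)  # "Composed by": sub-services of a composite

# A composite service defined by composing two existing ones
chat = Service("chat", "Mediation", "communication")
wiki = Service("wiki", "KM", "edition")
collab = Service("collab-space", "Mediation", "collaboration", composed_by=[chat, wiki])
```

The "Composed by" relation is represented here simply as a list of member services, so new services can be assembled from existing ones without changing their definitions.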
In addition, we have proposed the relation "Composed by", which makes it possible to define new services by composing existing ones.

Knowledge
One of the major concerns of CoPEs is to capitalize techno-pedagogical knowledge, which can be classified into Tacit Knowledge (TK) and Explicit Knowledge (EK), as defined by Nonaka in [14]. To take advantage of the assets in the CoPE, knowledge is categorized according to the four modes of the SECI framework as defined by Nonaka et al. [15]: (1) Experiential knowledge assets can be interpreted as hands-on experiences and skills acquired through discussion and shared practice; (2) Conceptual knowledge assets represent the EK articulated through symbols and language; (3) Systemic knowledge assets consist of systematized and packaged EK; and (4) Routine knowledge assets consist of the EK that is customized and embedded in actions and practices.
Fig. 4. Conceptual Knowledge Model
As shown in Fig. 4, we have adopted this classification and adapted it to the CoPE context, as subclasses of the superclass Techno-pedagogic knowledge. The subclasses can be illustrated respectively by the following examples: (1) the use of development and/or utilization skills of an e-learning system acquired during the analysis stage; (2) the use of knowledge acquired from e-learning standards and pedagogical ontologies to design an e-learning system; (3) the use of systemic knowledge (e.g. pedagogical resources) to develop an e-learning system; and (4) finally, with some feedback, the utilization stage can lead to the definition of best practices considered as lessons learnt.
The Knowledge concept is composed of other sub-concepts: “Description”, “Context”, “Content”, and “Metadata”.
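To make the classification concrete, here is a hedged sketch (our own illustration under assumed names, not the paper's ontology code) of a techno-pedagogic knowledge record with the four SECI-based asset types and the four sub-concepts just listed:

```python
from dataclasses import dataclass, field
from enum import Enum

class AssetType(Enum):
    """The four SECI-based knowledge asset categories adopted by the CoPE model."""
    EXPERIENTIAL = "experiential"  # tacit: skills acquired through shared practice
    CONCEPTUAL = "conceptual"      # explicit: articulated through symbols and language
    SYSTEMIC = "systemic"          # explicit: systematized and packaged
    ROUTINE = "routine"            # explicit: embedded in actions and practices

@dataclass
class TechnoPedagogicKnowledge:
    # Sub-concepts of the Knowledge concept: Description, Context, Content, Metadata
    description: str
    context: str
    content: str
    asset_type: AssetType
    metadata: dict = field(default_factory=dict)

# Example: best practices emerging from the utilization stage (routine assets)
k = TechnoPedagogicKnowledge(
    description="Best practices from the utilization stage",
    context="e-learning system deployment",
    content="lessons-learnt report",
    asset_type=AssetType.ROUTINE,
)
```

A full ontology would express these as OWL classes and properties; the dataclass form is only meant to show how the classification and sub-concepts fit together.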
4 Conclusion

In this paper, we proposed a conceptual model of CoPEs. A set of models useful for describing such communities has been presented: "Community", "Actor", "Role", "Activity", "Environment" and "Knowledge". Based on these models, an ontology dedicated to CoPEs will be built. The latter aims at providing a uniform vocabulary to explicitly specify all the CoPE's concepts, with which the CoPE's resources and services can be annotated. In our future research, we will refine and extend this work. Different aspects may be included, among them: "learner profile", "process" and "communication".
References

1. Langelier, L., Wenger, E.: Work, Learning and Networked. CEFRIO, Québec (2005)
2. Center for Teaching Excellence (CTE), http://www.sc.edu/cte/cop/
3. Learning Network for Teachers (Learn-Nett), http://ute2.umh.ac.be/learn-nett/
4. ePrep, http://www.eprep.org/
5. Did@cTIC, http://www.unifr.ch/didactic/
6. Chikh, A., Berkani, L., Sarirete, A.: Modeling the Communities of Practice of E-learning – CoPEs. In: 4th Annual Conference of Learning International Networks Consortium, LINC 2007 (2007)
7. Chikh, A., Berkani, L., Sarirete, A.: Communities of Practice of E-learning "CoPE" – Definition and Concepts. In: IEEE International Workshop on Advanced Information Systems for Enterprises, IWAISE 2008, pp. 31–37 (2008)
8. Rosson, M.B., Dunlap, D.R., Isenhour, P.L., Carroll, J.M.: Teacher Bridge: Creating a Community of Teacher Developers. In: 40th Annual Hawaii International Conference on System Sciences, HICSS 2007 (2007)
9. Dubé, L., Bourhis, A., Jacob, R.: Towards a typology of virtual communities of practice, Cahiers du GReSI 03-13 (2003)
10. PALETTE: Pedagogically sustained Adaptive Learning through the Exploitation of Tacit and Explicit Knowledge, http://palette.ercim.org/
11. Vidou, G., Dieng-Kuntz, R., Ghali, A.E., Evangelou, C.E., Giboin, A., Tifous, A., Jacquemart, S.: Towards an Ontology for Knowledge Management in Communities of Practice. In: Reimer, U., Karagiannis, D. (eds.) PAKM 2006. LNCS (LNAI), vol. 4333, pp. 303–314. Springer, Heidelberg (2006)
12. Wenger, E.: Communities of Practice: Learning as a Social System. Systems Thinker (1998)
13. Hernández-Leo, D., Asensio-Pérez, J.I., Dimitriadis, Y.A.: IMS Learning Design Support for the Formalization of Collaborative Learning Patterns. In: 4th International Conference on Advanced Learning Technologies (2004)
14. Nonaka, I.: The knowledge creating company. Harvard Business Review 69, 96–104 (1991)
15. Nonaka, I., Toyama, R., Konno, N.: SECI, Ba and Leadership: a Unified Model of Dynamic Knowledge Creation. Long Range Planning, vol. 33. Elsevier Science Ltd., Amsterdam (2000)
Using Collaborative Techniques in Virtual Learning Communities Francesca Pozzi Institute for Educational Technology – National Council of Research Via De Marini, 6 16149 Genoa, Italy
[email protected]
Abstract. The present paper illustrates the experience gained within two "twin" online courses, where three collaborative techniques, namely the Role Play, the Jigsaw and the Discussion, were used to trigger collaboration and interactions among students. The use of the techniques in the two courses is analyzed by looking at the participative, social, cognitive and teaching dimensions and at the way these components vary across techniques and across the two courses. Although the results are certainly affected by factors that could not be controlled in a real context (the individual differences of students, the topics and sequence of activities, etc.), it is still possible to draw some final considerations concerning the strengths and weaknesses of the three techniques in online learning contexts.

Keywords: CSCL, collaborative technique, role play, jigsaw, discussion, evaluation.
1 Introduction Computer Supported Collaborative Learning (CSCL) is the research area that focuses on debate-based learning and peer negotiation in online learning environments (The Cognition and Technology Group at Vanderbilt, 1991; Scardamalia & Bereiter, 1994; Dillenbourg, 1999; Kanuka & Anderson, 1999). In these contexts it is quite common to adopt “techniques” or “scripts” with the aim of providing a structure to activities, so as to foster collaboration and exchange (Kanuka & Anderson, 1999; Dillenbourg 2002; Hernández-Leo et al., 2005; Persico & Sarti, 2005; Jaques & Salmon, 2007; Fischer et al., 2009). Techniques and scripts are usually content-independent and serve as scaffolds to activities (which on the other hand are content-dependent). Examples are: Discussion, Peer Review, Role Play, Jigsaw, Case Study, etc. In this paper a study is described, which illustrates the application of a Jigsaw1, a Role 1
During the Jigsaw (Aronson et al., 1978), the content to be addressed is segmented into sub-items and each learner is assigned the task of studying his/her sub-item in detail. To do so, all the students who should become "experts" on a specific sub-item join together in the so-called "expert group", with the aim of discussing the main points of their segment and rehearsing a presentation. At the end of this phase, the expert groups are disbanded and new groups, called "jigsaw groups", are formed. Within his/her new jigsaw group, each learner is asked to report his/her segment to the others, so that in the end all the groups gain a complete overview of the content.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 670–675, 2009. © Springer-Verlag Berlin Heidelberg 2009
Using Collaborative Techniques in Virtual Learning Communities
671
Play2 and a Discussion3 within two online courses. Aim of this study is to investigate the above mentioned techniques in real contexts, and appreciate the differences (if any) in the learning processes they are able to trigger in online learning situations.
2 Research Context and Methods

The present study is rooted in the context of two twin courses run in 2007, in Liguria and Veneto respectively, on the issue of "Educational Technology" (hereafter called "TD-SSIS Liguria" and "TD-SSIS Veneto"). The courses were devoted to student teachers, and the main aim was to familiarize students with the most important issues related to the introduction of ICT in schools. The communities of both courses consisted of post-graduate adults with diverse backgrounds, interests and expectations from the course; most of them were experiencing online collaborative learning for the first time. In the present study we concentrate on one class of TD-SSIS Liguria, composed of 21 students, and one class of TD-SSIS Veneto, consisting of 24 students; the two classes were tutored by the same tutor. The courses shared the same contents and structure, and thus both envisaged three subsequent online collaborative activities (lasting 3 weeks each). The first activity was based on a Jigsaw; during the second activity students were proposed a Role Play; the last activity was based on a Discussion. The CMC system used for carrying out the online activities was in both cases Moodle4 (Persico et al., 2009). In order to investigate the nature of the interactions that occurred while performing the proposed online activities, an evaluation framework was used, which had been previously developed and extensively used to assess similar online experiences (Pozzi et al., 2007; Persico et al., 2009). The model considers four dimensions as characterizing a learning process in CSCL contexts, namely the participative, cognitive, social and teaching dimensions. In the model, each dimension is defined by a set of relevant indicators that can be used to evaluate it; in particular:
- the participative dimension is defined by indicators of Active Participation (P1), Reactive Participation (P2) and Continuity (P3);
- the social dimension is defined by indicators of Affection (S1) and Cohesion (S2);
- the cognitive dimension is defined by indicators of Individual Knowledge Building (C1), Group Knowledge Building (C2) and Meta-Reflection (C3);
- the teaching dimension is defined by indicators of Organizational Matters (T1), Facilitating Discourse (T2) and Direct Instruction (T3) (Persico et al., 2009).
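The dimensions and their indicator codes can be summarized in a small lookup structure; the following is our own sketch (the tallying helper is hypothetical, not part of the framework) showing how coded message units could be aggregated per dimension:

```python
# Dimensions and indicators of the evaluation framework (Persico et al., 2009)
FRAMEWORK = {
    "participative": ["P1", "P2", "P3"],  # Active, Reactive, Continuity
    "social": ["S1", "S2"],               # Affection, Cohesion
    "cognitive": ["C1", "C2", "C3"],      # Individual KB, Group KB, Meta-Reflection
    "teaching": ["T1", "T2", "T3"],       # Organization, Facilitating, Direct Instruction
}

def tally_by_dimension(coded_units):
    """Count coded units of meaning per dimension, given their indicator labels."""
    indicator_to_dim = {ind: dim for dim, inds in FRAMEWORK.items() for ind in inds}
    counts = {dim: 0 for dim in FRAMEWORK}
    for label in coded_units:
        counts[indicator_to_dim[label]] += 1
    return counts
```

For instance, four units coded as S2, C2, C2 and T1 would tally as one social, two cognitive and one teaching unit.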
As for the methods and means used to gauge these indicators, an analysis of all the messages exchanged by the students during the activities (1164
2 The Role Play is a technique where students are assigned roles, so that during the discussion they cannot express their personal ideas but have to argue positions according to the assigned roles (Renner, 1997; Kanuka & Anderson, 1999).
3 The Discussion is a simple technique where students are asked to discuss a topic, with the aim of collaboratively carrying out a task (usually writing a document, solving a problem, etc.).
4 http://www.moodle.org
672
F. Pozzi
messages) was carried out. In particular, the indicators concerning the participative dimension have been gathered directly from the data tracked by Moodle, whereas the analysis of the cognitive, the social and the teaching dimensions is based on a “manual” content analysis5.
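The inter-rater reliability figures reported for the content analysis (footnote 5: Holsti coefficient = 0.81, percentage agreement = 0.83) can be computed as in the following sketch. The label lists are invented for illustration; the formulas are the standard ones, with Holsti's coefficient CR = 2M / (N1 + N2), where M is the number of coding decisions the two coders agree on and N1, N2 are the numbers of decisions made by each coder:

```python
def holsti(m, n1, n2):
    """Holsti's coefficient of reliability: CR = 2*M / (N1 + N2)."""
    return 2 * m / (n1 + n2)

def percentage_agreement(codes_a, codes_b):
    """Share of units to which two coders assigned the same indicator."""
    if len(codes_a) != len(codes_b):
        raise ValueError("both coders must rate the same units")
    agree = sum(a == b for a, b in zip(codes_a, codes_b))
    return agree / len(codes_a)

# Toy example: two coders labelling five units of meaning (labels invented)
a = ["C1", "C2", "S2", "T1", "C2"]
b = ["C1", "C2", "S1", "T1", "C2"]
pa = percentage_agreement(a, b)   # 4 agreements out of 5 units
cr = holsti(4, len(a), len(b))    # coincides with pa when N1 == N2
```

Note that when both coders segment the messages into the same number of units (N1 = N2), the two measures coincide; slightly different values, as in the study, can arise when the coders' unit counts differ.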
3 Results

In the following, the data concerning the participative, social, cognitive and teaching dimensions are synthesized, as they developed during the execution of the Jigsaw, the Role Play and the Discussion in TD-SSIS Liguria and in TD-SSIS Veneto respectively. As far as the participative dimension is concerned, Table 1 reports the data on active participation in the two courses. As one may note, in TD-SSIS Liguria the students sent the highest number of messages during the Discussion, while in the Jigsaw and the Role Play the number of sent messages is nearly the same. Besides, the mean number of messages per student is quite high in all three activities. In TD-SSIS Veneto the number of messages is overall lower than in Liguria but, despite this, here again the Discussion turned out to be the most participated technique, followed by the Jigsaw and then the Role Play.

Table 1. Active participation in TD-SSIS Liguria and TD-SSIS Veneto6

             TD-SSIS Liguria                           TD-SSIS Veneto
             Tot. sent  Mean msgs    SD     Range      Tot. sent  Mean msgs    SD     Range
             msgs       per student                    msgs       per student
Jigsaw          203        7.9       3.87   1-19          168        5.6       3.0    2-12
Role Play       209        8.68      5.16   3-24          137        5.1       2.6    2-9
Discussion      265       11.04      5.98   3-26          182        6.2       2.9    1-14
Going further, we looked at more qualitative data: Figures 1 and 2 contain the data concerning the social, cognitive and teaching dimensions obtained by the three techniques in the two courses. In particular, looking at Figure 1 (TD-SSIS Liguria), it is interesting to note that the indicators seem to follow the same path independently of the technique used, namely: S1 of the social dimension (affection) is always quite low, and the values of S1 in the Jigsaw and in the Role Play are very close; S2 (cohesion), in contrast, reached the highest values in all three activities, and again the values for the Jigsaw and the Role Play are very similar. As far as
5 The content analysis was carried out by two coders. Each message was split into units of analysis ("units of meaning" – see Henri, 1992), so that each unit could be classified as belonging to a certain indicator. The inter-rater reliability between the coders was calculated on a sample of 110 messages (Holsti coefficient = 0.81; percentage agreement = 0.83) (Persico et al., 2009).
6 Unfortunately, due to administrative matters, it was not possible to obtain data concerning Reactive Participation (P2) and Continuity (P3) in TD-SSIS Veneto; for this reason these data are omitted for TD-SSIS Liguria as well.
the cognitive dimension is concerned, C1 (individual knowledge building) is always lower than group knowledge building (C2), whereas C3 (meta-reflection) is almost absent in all three techniques. The values of C1 in the three techniques are again quite close, and the same applies to the values of C2 and C3. Finally, the three indicators of the teaching dimension (T1, T2 and T3) are more or less at the same level, the only exceptions being T1 in the Discussion and T2 in the Role Play, which both reached higher levels.

[Figure: bar chart of indicator values (S1, S2, C1, C2, C3, T1, T2, T3) for the Role Play, Jigsaw and Discussion techniques, TD-SSIS Liguria 2007]

Fig. 1. Social, cognitive and teaching dimensions in TD-SSIS Liguria
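As an aside, the inter-rater reliability statistic reported in footnote 5 (Holsti's coefficient) is straightforward to compute: CR = 2M / (N1 + N2), where M is the number of coding agreements and N1, N2 are the numbers of units coded by each coder. A minimal sketch in Python, using hypothetical toy data rather than the study's actual codings:

```python
def holsti(coder1, coder2):
    """Holsti's coefficient of reliability: CR = 2M / (N1 + N2).

    coder1, coder2: lists of indicator labels the two coders assigned
    to the same sequence of message units.
    """
    agreements = sum(1 for a, b in zip(coder1, coder2) if a == b)
    return 2 * agreements / (len(coder1) + len(coder2))

# Hypothetical codings of six message units against the indicators
c1 = ["S1", "S2", "C2", "C2", "T1", "C3"]
c2 = ["S1", "S2", "C2", "C1", "T1", "C3"]
print(round(holsti(c1, c2), 2))  # 5 agreements over 6 units -> 0.83
```

When both coders segment the messages into the same units, as here, the coefficient coincides with simple percentage agreement.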
[Figure: bar chart of indicator values (S1, S2, C1, C2, C3, T1, T2, T3) for the Role Play, Jigsaw and Discussion techniques, TD-SSIS Veneto 2007]

Fig. 2. Social, cognitive and teaching dimensions in TD-SSIS Veneto
As already mentioned, TD-SSIS Veneto (Figure 2) registered an overall lower number of messages. Still, here too there is a common trend across all three activities. In particular, a gap emerges in the social dimension between affection (S1), which is quite low, and cohesion (S2), which is very high (with the sole exception of the Role Play, whose S2, though higher than its S1, is noticeably lower than the S2 of the other two techniques). In all three techniques C1 is lower than C2, with the Jigsaw
674
F. Pozzi
developing the highest cognitive dimension, followed by the Discussion and then by the Role Play. Here too, meta-reflection (C3) was not particularly developed by any of the proposed techniques. As far as the teaching dimension is concerned, differences among T1, T2 and T3 are again not very marked across the three techniques, with the exception of T2 during the Discussion and T3 in the Role Play, which were noticeably lower than in the other two techniques.
4 Discussion and Conclusions

First of all, it should be noted that in both courses, despite some differences in the values assumed by the indicators, these seem to follow the same trend independently of the technique used. In particular, group cohesion always shows high values, while affection tends to be much lower; at the same time, individual knowledge building appears on average quite low during this kind of activity, while group knowledge construction is usually high (which is reasonable, given the collaborative learning context), whereas meta-reflection indicators are quite scarce in all three proposed activities. It should also be noted that individual knowledge building and meta-reflection are latent variables; therefore, as De Wever et al. (2006) pointed out, their low levels do not necessarily mean that they did not take place, but simply that they were not made explicit in the student messages. Besides, all the activities seem to have supported adequate levels of the teaching dimension, with no particular predisposition toward one or another of its aspects. Alongside this general common trend, one should also consider that each activity in our study revealed a specific ability to support one or another dimension, namely: the Discussion attracted the highest participation in both groups and was the activity that most fostered the social dimension; the Role Play always obtained the lowest levels for C1, as well as for C2 and C3, while it seems quite effective as regards the teaching dimension (especially for the aspects of discourse facilitation that concern taking responsibility for the group learning process); the Jigsaw obtained, in both courses, the highest level of group knowledge building. This leads us to think that, while no activity is in principle better than the others, the technique or script used may have a different impact on the different dimensions, i.e.
a low degree of structure seems to foster the social dimension more (as people feel freer to express their own impressions and feelings), whereas a higher degree of structuredness seems to have more positive effects on the cognitive dimension. In light of these final remarks, it is also worth noting that some of the data in our study may have been affected by factors that were impossible to control for in a real context: the order in which the activities were proposed, the topics of the activities themselves, and even the individual differences among students (evident, for example, in the different levels of the participative dimension in the two courses) may have, at least partially, affected the results. It would therefore be interesting to carry out further investigations to ascertain whether there are significant changes in the distribution of the indicators when these variables can be controlled.
References

1. Aronson, E., Blaney, N., Stephan, C., Sikes, J., Snapp, M.: The Jigsaw Classroom. Sage, Beverly Hills (1978)
2. De Wever, B., Schellens, T., Valcke, M., Van Keer, H.: Content analysis schemes to analyze transcripts of online asynchronous discussion groups: A review. Computers and Education 46, 6–28 (2006)
3. Dillenbourg, P. (ed.): Collaborative Learning: Cognitive and Computational Approaches. Pergamon Press (1999)
4. Dillenbourg, P.: Over-scripting CSCL: The risks of blending collaborative learning with instructional design. In: Kirschner, P.A. (ed.) Three Worlds of CSCL: Can We Support CSCL?, pp. 61–91. Open Universiteit Nederland, Heerlen (2002)
5. Fischer, F., Kollar, I., Mandl, H., Haake, J.M.: Scripting Computer-Supported Collaborative Learning. Springer, New York (2009)
6. Henri, F.: Computer conferencing and content analysis. In: Kaye, A.R. (ed.) Collaborative Learning Through Computer Conferencing: The Najaden Papers, pp. 115–136. Springer, Heidelberg (1992)
7. Hernández-Leo, D., Asensio-Pérez, J.I., Dimitriadis, Y., Bote-Lorenzo, M.L., Jorrín-Abellán, I.M., Villasclaras-Fernández, E.D.: Reusing IMS-LD formalized best practices in collaborative learning structuring. Advanced Technology for Learning 2(3), 223–232 (2005)
8. Jaques, D., Salmon, G.: Learning in Groups: A Handbook for Face-to-Face and Online Environments. Routledge, London (2007)
9. Kanuka, H., Anderson, T.: Using constructivism in technology-mediated learning: Constructing order out of the chaos in the literature. Radical Pedagogy 1(2) (1999)
10. Persico, D., Pozzi, F.: Evaluation in CSCL: Tracking and analyzing the learning community. In: Szücs, A., Bø, I. (eds.) E-Competences for Life, Employment and Innovation: Proceedings of the EDEN 2006 Annual Conference, Vienna, June 14–17, pp. 502–507 (2006)
11. Persico, D., Pozzi, F., Sarti, L.: A model for monitoring and evaluating CSCL. In: Juan, A.A., Daradoumis, T., Xhafa, F., Caballe, S., Faulin, J. (eds.) Monitoring and Assessment in Online Collaborative Environments: Emergent Computational Technologies for E-learning Support. IGI Global (2009)
12. Persico, D., Sarti, L.: Social structures for online learning: A design perspective. In: Chiazzese, G., Allegra, M., Chifari, A., Ottaviano, S. (eds.) Methods and Technologies for Learning: Proceedings of the International Conference on Methods and Technologies for Learning. WIT Press, Southampton (2005)
13. Pozzi, F., Manca, S., Persico, D., Sarti, L.: A general framework for tracking and analyzing learning processes in CSCL environments. Innovations in Education and Teaching International 44(2), 169–180 (2007)
14. Renner, P.: The Art of Teaching Adults: How to Become an Exceptional Instructor and Facilitator. The Training Associates, Vancouver (1997)
15. Scardamalia, M., Bereiter, C.: Computer support for knowledge-building communities. The Journal of the Learning Sciences 3(3), 265–283 (1994)
16. The Cognition and Technology Group at Vanderbilt: Some thoughts about constructivism and instructional design. Educational Technology 31(10), 16–18 (1991)
Capturing Individual and Institutional Change: Exploring Horizontal versus Vertical Transitions in Technology-Rich Environments

Andreas Gegenfurtner1, Markus Nivala1, Roger Säljö1,2, and Erno Lehtinen1

1 University of Turku, Centre for Learning Research, Assistentinkatu 7, 20014 Turku, Finland
2 University of Gothenburg, Department of Education, Läroverksgatan 15, 40530 Göteborg, Sweden

[email protected], [email protected], [email protected], [email protected]
Abstract. Popular approaches in the learning sciences understand the concept of learning as permanent or semi-permanent changes in how individuals think and act. These changes can be traced very differently depending on whether the context is stable or dynamic. The purpose of this poster is to introduce a distinction between horizontal and vertical transitions that can be used to describe individual and institutional change in technology-rich environments. We argue that these two types of transitions trace different phenomena: vertical transitions occur when individuals, technologies, or domains develop under stable, fixed conditions within set boundaries. In contrast, horizontal transitions occur when individuals, technologies, or domains mature in synergy with other fields. We develop our argument by working through relevant studies in medicine, and close by outlining implications for future research on professional technology enhanced learning.

Keywords: technology, change, professional learning, expertise, human-machine systems.
1 Introduction

Popular approaches in the learning sciences understand the concept of learning as permanent or semi-permanent changes in how individuals think and act. The analysis of these changes is challenging, however, as the face of learning is currently undergoing substantial transformations: these relate to the technologies for learning and the technologies at work, along with the respective learning contexts and pedagogical models. An important aim of technology enhanced learning (TEL) is to understand the mechanisms and functions of the individual, social, and contextual development associated with technological tools. These developments are not always linear bottom-to-top movements; they also involve side steps. Changes in the individual and changes in the context are multi-directional, although this has rarely been addressed in past research. Several authors state that there is a need to learn more about the dialectics between vertical and horizontal transitions in the development of expertise [1,2], and how the institutional context shapes learning with technology [3].

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 676–681, 2009. © Springer-Verlag Berlin Heidelberg 2009
The purpose of this poster is to introduce a distinction between what we term horizontal and vertical transitions, which can be used to capture individual and institutional change in technology-rich environments. This distinction is intended as a methodological tool in that it directs our attention to analytical practice. We argue that research on both vertical and horizontal transitions has merits and makes valuable contributions to advancing our understanding of how to analyze change in technology-rich environments; both have their own explanatory power. Nonetheless, research investigating these transitions differs completely in its focus: while studies on vertical transitions focus specifically on individuals within a single domain or on a single tool, studies on horizontal transitions take a broader perspective, extending their focus beyond a single domain or technology. A major problem, however, is that in past research the two transitions are easily mixed up. We argue that although vertical and horizontal transitions in technology-rich environments go hand in hand, and thus can be reconciled to some extent, they should not be blindly intermingled: from an analytical stance, studies investigating vertical or horizontal transitions follow very different strategies and aims. To structure our argument, the discussion is organized in two sections. First, we discuss the concept of horizontal and vertical transitions in more detail. How do the individual and the institutional context change in technology-rich environments? How can this change be captured and analyzed? To illustrate an answer to these questions, we have chosen some significant studies in the domain of medical image diagnosis. Medicine is, among others, one example of a dynamic domain owing to its constant technological progress, and is thus useful for showing how learning and development occur under conditions of change.
Second, we discuss the implications of the proposed analytical distinction for future work in professional TEL, and how the analysis of horizontal and vertical transitions can add value to research on what it means to learn with and from technology.
2 Horizontal versus Vertical Transitions in Technology-Rich Environments

In order to provide a detailed account of what we term horizontal and vertical transitions, we will focus on two levels: the individual and the institutional context in which the individual is embedded. From an analytical perspective, the individual and the context are separated here for the sake of discussion. Our argument is put forward in the next two subsections by discussing individual and institutional change.

2.1 Individual Change

Individual change in technology-rich environments is highly associated with technology. The interaction with technological tools and artifacts in different work activities can trigger individual trajectories and stimulate the development of expertise. We focus here on individual trajectories, although we acknowledge that these can also relate to a collective or organizational level. Here, individual development along a continuum of expertise can be traced as a vertical or a horizontal transition. Each is described in turn.
Traditionally, learning and the development of expertise in high-tech domains have been studied vertically, by focusing on the development from novice to expert. The focus is on individual skill acquisition along a continuum of competence development. A typical assumption of studies investigating vertical transitions of an individual is that the institutional context in which the individual is working or learning is stable. We argue that such a perspective is useful for analyzing individual differences in routine tasks or in situations where the rules are set. Examples of studies investigating individual vertical transitions can be found in classical expertise research, in domains that have reached a sufficient state of maturity. For example, in medicine, the reading of X-ray pictures, with its roots in the 19th century, has been one of the most extensively studied tasks [4,5]. X-ray images have remained rather constant over decades, and even today they afford the analysis of anatomical features based on grey-scale pictorial representations. Studies have mainly focused on individual differences in decision making, perceptual processes, and the representation of knowledge by comparing novices, intermediates, and experts. These comparisons are typically made in relation to a previously established "best practice", thus treating the context as something relatively stable. Questions usually addressed in studies tracing vertical transitions include: What are the characteristics of expertise at different skill levels? How can the development from novice to expert in a routine task be explained? Studies tracing horizontal transitions of the individual pose different questions, based on a different underlying assumption. Unlike studies in stable environments, the interest here is in understanding how individuals adapt to non-routine tasks that emerge through contextual changes. To what extent are skills acquired in routine tasks transferable to non-routine tasks?
The focus is on the transfer and generalizability of skills. A typical assumption of studies investigating horizontal transitions of an individual is that the institutional context is dynamic. We argue that such a perspective is useful in technology-rich environments for analyzing how professionals react to and cope with contextual changes. Examples of studies investigating individual horizontal transitions are surprisingly rare, although cases can easily be found in dynamic technology-rich domains. For example, in nuclear medicine, the technological standard used to be positron emission tomography (PET). Recently, however, PET has been combined with computer tomography (CT), a technology used in radiology. Physicians in nuclear medicine who were able to analyze PET images can now extend their skills horizontally to analyze PET/CT images by crossing the boundaries to radiology. This boundary-crossing helps them adapt to changes in the technical domain standard. To summarize, individual change in technology-rich work environments is associated with technology. Depending on the persistence or change of the technology or domain, individuals can learn through vertical and horizontal transitions. We argue that stable conditions afford vertical learning through the mastery of routine tasks, whereas dynamic conditions afford horizontal transitions through adaptation to non-routine tasks. While we argue that vertical and horizontal transitions account for different socio-cognitive processes, they of course complement each other. Specifically, radiologists who became experts in diagnosing X-ray images can also become experts in diagnosing PET/CT images; these two vertical movements are connected through a horizontal shift from one technology to another. We should also
note that this shift requires a certain amount of willingness and motivation. How do employees in technology-rich environments regulate their motivation? And which goals and motivational profiles support or impede transitory steps? Future research can address these questions, along with the mutual complementarities of vertical and horizontal transitions that constitute individual change.

2.2 Institutional Change

Institutional change in technology-rich environments is highly associated with technology. Although the institutional context can be traced on many more levels than just that of technology, we argue that changes in work practices, policies, communities, division of labor, or the domain as a whole mainly follow from technological changes. We illustrate this argument with two examples from medicine as a technology-rich environment: (1) the case of MRI as a vertical transition and (2) changing technologies in nuclear medicine and radiology as horizontal transitions. First, vertical transitions can be captured by focusing on one specific tool used in a particular domain. Questions usually addressed in studies tracing vertical transitions relate to how a technique has developed since its introduction, and what kind of institutional routines have emerged in response to the development of the technical tool. In medicine, [6] analyzes the vertical transitions magnetic resonance imaging (MRI) has gone through since its development in the 1970s. First, its name changed from zeugmatography and nuclear magnetic resonance (NMR) imaging to the now established name of MRI. Second, MRI data representation changed from numerical to pictorial output. In the radiology departments where MRI apparatuses were installed, this caused changes in work practices and also challenged the professional identity of radiologists.
The implementation of MRI forced radiologists to adapt their work practices and to reconstitute their professional identities: new interpretation skills were required to make meaning of the new representations and to handle the scanners appropriately. Since MRI makes no use of radiation, it was unclear whether these apparatuses should be installed in radiology departments. Other departments laid claim to the new techniques, and with it to the visual authority to analyze these digital pictures [7]. This example in radiology shows how the transformation of imaging tools implies changes in current work practices, which in turn requires professionals to renegotiate and re-organize their expertise, both in terms of individual knowledge and of their identity as a well-established discipline. In sum, the analysis of the vertical transition of one technology has the potential to also uncover the trajectories of institutional routines and how they develop over time. The second avenue to capture institutional change is to analyze horizontal transitions. This can be done by focusing on how a technological tool develops through connections to neighboring domains, or on how a domain matures as a scientific discipline over time. Questions that can be addressed in studies tracing horizontal transitions relate to how a domain becomes more interdisciplinary through the introduction of a technology. How do technologies afford synergies and boundary-crossings to other domains? In medicine, horizontal transitions occur frequently through the evolution of imaging technologies. For example, as described above, nuclear medicine has faced the evolution of its technical domain standard from positron
emission tomography (PET) to a joint PET/CT image. This is seen as a horizontal transition since it involves a side step to a neighboring domain: PET/CT merges radiologic and nuclear medicine routines to produce and interpret medical images; it cuts across any neat boundaries between these two medical sub-specialties; and it creates a new path from novice to expert in handling a new technical tool, with its emerging work practices and policies. Besides PET, another example is the shift from the traditional X-ray technique to tomosynthesis, a new technology in which the images represent the anatomy of the lungs and are projected three-dimensionally. Since tomosynthesis represents an improvement over ordinary X-ray for diagnosing cancer, and since its costs and radiation dose are lower than those of computer tomography (CT), the benefits for healthcare and patients promise to be considerable. To make full use of this technological advancement, however, it is important to further our understanding of how professionals develop expertise in using it, i.e. the very process in which they reason and make critical distinctions about how significant signs can be identified and how they should be classified. It is also interesting to trace how the introduction of a digital imaging technique in radiology departments ruptures the work practices associated with an analog imaging technique. To summarize, institutional change in technology-rich environments is highly associated with technology. We argue that horizontal and vertical transitions of the institutional context refer to different phenomena, and each occurs under different conditions. While vertical transitions describe how a certain technology develops within a stable environment with fixed rules and clear boundaries, horizontal transitions describe how technologies and domains develop by crossing these boundaries to other technologies or domains.
3 Closing Remarks

The purpose of this poster has been to introduce a distinction between horizontal and vertical transitions that can be used to capture individual and institutional change in technology-rich environments. We have argued that vertical and horizontal transitions account for different phenomena and occur in different settings, depending on whether the context is stable or dynamic: vertical transitions occur when individuals, technologies, or domains develop under stable, fixed conditions within set boundaries; horizontal transitions occur when individuals, technologies, or domains mature by extending to other fields. Both transitions can and should be analyzed separately to capture micro- and macro-processes of development and change. It will be a goal for future research to identify when and how vertical and horizontal transitions intersect in the generation of learning. In closing, we discuss two implications of the proposed distinction for future work in professional TEL. First, the vertical/horizontal distinction highlights the multidirectional nature of individual and institutional development. It would be quite erroneous to assume that these developments are one-way streets. With the current speed of change in technology-rich environments, it is likely that almost every professional faces the challenge of adapting to completely new tools during their career. Changes can occur even in domains that seemed extremely stable for decades. The added
value the vertical-horizontal distinction brings is hence associated with advancing our understanding of the multidirectional dialectics between the individual, the technology, and the broader institutional context in which both are enacted [1,2,7]. The second implication for future research relates to the 'where', i.e. the learning spaces in which vertical and horizontal transitions can be found. Multidirectional processes of learning frequently occur outside school settings. [8] highlighted that the TEL community has perhaps devoted too much attention to technology-enhanced education and learning in formal institutions, and that we need far more knowledge about learning in informal settings. Hence, the workplace as a learning space becomes a central environment in which to analyze the multi-directionality of individual and institutional development associated with technology. This is not to disregard the relevance of formal contexts; however, learning pathways over time and space, i.e. vertical and horizontal transitions, can also be addressed in real-world situations such as those arising in corporate technology-rich work settings. To conclude, both implications point to the challenge of capturing individual and institutional change, which stems from the multi-directionality of both the mechanisms and the functions of individual, social, and contextual development associated with technological tools in professional work contexts.
References

1. Arnseth, H.C., Ludvigsen, S.: Approaching Institutional Contexts: Systemic versus Dialogic Research in CSCL. Int. J. CSCL 1, 167–185 (2006)
2. Sutherland, R., Lindström, B., Lahn, L.C.: Socio-Cultural Perspectives on Technology-Enhanced Learning and Knowing. In: Balacheff, N., Ludvigsen, S., de Jong, T., Lazonder, A., Barnes, S. (eds.) Technology-Enhanced Learning: Principles and Products, pp. 39–54. Springer, Berlin (2009)
3. Ludvigsen, S.R., Havnes, A., Lahn, L.C.: Workplace Learning across Activity Systems: A Case Study of Sales Engineers. In: Tuomi-Gröhn, T., Engeström, Y. (eds.) Between School and Work: New Perspectives on Transfer and Boundary-Crossing, pp. 291–310. Pergamon, Amsterdam (2003)
4. Lesgold, A., Glaser, R., Rubinson, H., Klopfer, D., Feltovich, P., Wang, Y.: Expertise in a Complex Skill: Diagnosing X-ray Pictures. In: Chi, M.T.H., Glaser, R., Farr, M.J. (eds.) The Nature of Expertise, pp. 311–342. Erlbaum, Hillsdale (1988)
5. Morita, J., Miwa, K., Kitasaka, T., Mori, K., Suenaga, Y., Iwano, S., et al.: Interactions of Perceptual and Conceptual Processing: Expertise in Medical Image Diagnosing. Int. J. Hum.-Comput. St. 66, 370–390 (2008)
6. Joyce, K.A.: From Numbers to Pictures: The Development of Magnetic Resonance Imaging and the Visual Turn in Medicine. Science as Culture 15, 1–22 (2006)
7. Burri, R.V., Dumit, J.: Social Studies of Scientific Imaging and Visualizations. In: Hackett, E.J., Amsterdamska, O., Lynch, M., Wajcman, J. (eds.) The Handbook of Science and Technology Studies, pp. 297–317. MIT Press, Cambridge (2008)
8. Pea, R.: Fostering Learning in the Networked World. Keynote presentation at the Third European Conference on Technology Enhanced Learning, Maastricht (2008)
A Platform Based on Semantic Web and Web2.0 as Organizational Learning Support

Adeline Leblanc and Marie-Hélène Abel

HEUDIASYC CNRS UMR 6599, Université de Technologie de Compiègne, BP 20529, 60205 Compiègne CEDEX, France
{adeline.leblanc,marie-helene.abel}@utc.fr
Abstract. The organization's knowledge and competences capital is increasingly crucial. Thus, organizations today are aware of the necessity to become learning organizations and to maximize organizational learning. Such learning can be supported by information and communication technologies, and more particularly by Web2.0 technologies. Within the MEMORAe approach we are interested in these new forms of learning. We consider that they are connected to knowledge management practices, and we have developed a learning environment based on the concept of a learning organizational memory. This environment is a web platform using semantic annotations and Web2.0 technologies.

Keywords: Knowledge management, Competences management, Organizational Learning, Learning Organizational Memory, Semantic Indexing.
1 Introduction

Globalization and information and communication technologies (ICT) are the new criteria of the economic environment. They have transformed our ways of learning and working. The organization's knowledge and competences capital is increasingly crucial. The organization's survival depends mainly on its capacity:
– to access new knowledge;
– to diffuse its competences quickly;
– to exploit its fields of expertise efficiently and preserve them durably.

However, a great number of lessons and experience feedbacks are often acquired and then lost. Thus today, more than ever, organizations are aware of the necessity to become learning organizations, i.e. organizations in which work is embedded in an organizational culture that allows and encourages training at various levels (individual, group and organization) and the transfer of knowledge and competences between these levels. In short, they have to maximize organizational learning. Such learning can be supported by Web2.0 technologies. Indeed, after the arrival of ICT, Web2.0 technologies offer new forms of sharing, exchange and learning.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 682–687, 2009. © Springer-Verlag Berlin Heidelberg 2009
Within the MEMORAe approach we are interested in these new forms of learning. We consider that they are connected to knowledge management practices, and we have developed a learning environment based on the concept of a learning organizational memory. This environment is a web platform using semantic annotations and Web2.0 technologies. In this article, we focus on the modeling and integration of competences in the MEMORAe2.0 project. We first present the link between organizational learning and competences. Then we present the MEMORAe approach and our organizational learning memory. Finally, we present the E-MEMORAe2.0 web platform that we have developed.
2 Organizational Learning and Competences

In the current economic environment, learning has become the best means for a company to be competitive, by preserving the knowledge and experience of each collaborator and each team. To become a learning organization, companies must, on the one hand, be able to capitalize and transfer the individual and collective experience and competences created in their core; on the other hand, they must enable their members to develop their individual competences. According to the Commission of the European Communities1, a competence is a combination of knowledge (explicit and implicit), abilities and skills influenced by needs, motives, personal goals, values, standards and attitudes. It is marked by effective use of resources, repeated application and accomplishment of tasks within defined conditions. Schmiedinger [1] extends the competence definition to organizations, including existing tools and materials, in a new definition called 'organizational competence': organizational competence is the combination of human competence and physical resources, i.e. actions successfully carried out by individuals using operating resources, work equipment or materials, to contribute to organizational performance. These definitions show the necessity to define competences and to manage the resources linked to competences in order to facilitate organizational learning.
3 The MEMORAe Approach
Organizational learning represents an organization's capacity to increase the efficiency of its collective action. To foster this capacity, organizations need to: – manage their knowledge, competences and resources (facilitate their creation, sharing and capitalization),
1 Recommendation of the European Parliament and of the Council on Key Competences for Lifelong Learning (online) (2005), http://www.ec.europa.eu/education/policies/2010/doc/keyrec_en.pdf
A. Leblanc and M. Abel
– foster group work: define groups (their members and the members' functions) and group aims (project, problem, idea, ...), as well as collaboration (group repository), communication (forum, chat, ...) and coordination (shared agenda) among group members.
In the framework of the MEMORAe approach, we propose to answer these needs by associating:
– knowledge engineering and educational engineering,
– Semantic Web and Web 2.0 technologies,
to model and build a collaborative learning web platform as a support for organizational learning [2]. We chose to adapt the concept of organizational memory. Dieng et al. define such a memory as an 'explicit, disembodied, persistent representation of knowledge and information in an organization, in order to facilitate its access and reuse by members of the organization, for their tasks' [3]. Extending this definition, we propose the concept of a learning organizational memory, for which the users' task is learning.
4 Learning Organizational Memory Modeling
An organizational memory is composed of knowledge, competences, and the resources linked to this knowledge and these competences. Our learning organizational memory is structured by means of ontologies that define the knowledge and competences of the organization. We use these ontologies to semantically index the capitalized resources. We distinguish two types of ontology: the domain ontology and the application ontology. Each is composed of two sub-ontologies, representing competences and knowledge respectively. The knowledge ontologies are described in [4]; in this paper we present the competence ontologies.
4.1 Competence Domain Ontology
The domain ontology represents a conceptualization specific to a domain; in the framework of our projects, the domain is the learning organization. The competence domain sub-ontology models organizational learning competences. Stader and Macintosh proposed an ontology of organizational competences [5], which we adapted to our context. Figure 1 shows a part of our domain sub-ontology centered on organizational learning competences.
4.2 Competence Application Ontology
The application ontology represents the knowledge [6] and competences specific to a given application. In the framework of our project we built the ontology for the B31.1 course, a course of applied mathematics at the University of Picardy (UPJV) in France. Figure 2 illustrates a part of the B31.1 competence ontology. The two sub-ontologies are linked by the relation 'Put into practice': a competence such as 'Summarize a random variable' puts into practice knowledge such as 'random variable', 'real random variable', etc.
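The 'Put into practice' relation between the two sub-ontologies can be pictured as RDF-style triples. The sketch below is a minimal stand-in in plain Python; the identifiers are simplified versions of the concept names quoted above, not the actual ontology URIs:

```python
# Illustrative RDF-style triples for the 'Put into practice' relation.
# Identifiers are simplified stand-ins, not the actual ontology URIs.
TRIPLES = [
    ("Summarize_a_random_variable", "rdf:type", "Competence"),
    ("random_variable", "rdf:type", "Knowledge"),
    ("real_random_variable", "rdf:type", "Knowledge"),
    ("Summarize_a_random_variable", "put_into_practice", "random_variable"),
    ("Summarize_a_random_variable", "put_into_practice", "real_random_variable"),
]

def knowledge_put_into_practice(competence):
    """Knowledge concepts that a given competence puts into practice."""
    return [o for s, p, o in TRIPLES
            if s == competence and p == "put_into_practice"]

print(knowledge_put_into_practice("Summarize_a_random_variable"))
# → ['random_variable', 'real_random_variable']
```

Because the link is an ordinary triple, the same query mechanism that indexes resources by concept can traverse it in either direction (which competences exercise a given piece of knowledge, or vice versa).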
A Platform Based on Semantic Web and Web2.0
5 E-MEMORAe2.0 Web Platform
In order to put our modeling into practice, we developed the environment E-MEMORAe2.0 (see Figure 3). The user interface proposes:
– access to the different repositories (individual, group and organization), indicating which repository is being visualized and giving access to the authorized repositories;
– entry points enabling the navigation to start from a given concept;
– a short definition of the current notion;
– the part of the ontology centered on the current notion;
– a list of resources whose contents are related to the current concept;
– a history of the navigation.
By means of this interface, users navigate through the ontologies and can explore the memory content. Vertical navigation (see Figure 3) allows subsumption relations to be explored and related concepts to be reached; horizontal navigation allows proximity relations (other than subsumption) to be explored [2]. E-MEMORAe2.0 gives learners the possibility of having a private space and of participating in shared spaces according to their rights. All these spaces (repositories) share the same ontologies but store different resources and different entry points, and they can be visualized at the same time. Figure 3 illustrates the visualization of three spaces: one dedicated to organization members, one dedicated to gp3 members and one to the connected individual. Note that, by default, a user visualizes two repositories: one concerning his private memory and one concerning his organization's memory. However, he can choose the spaces he wants to visualize by selecting them in the memories choice window (at the top left). These choices are registered and will be taken into account in the next session. Each
Fig. 1. Part of the domain ontology
Fig. 2. Part of the B31.1 competences ontology
Fig. 3. E-MEMORAe2.0 navigation interface (in French)
space gives access to entry points (when the entry-point vertical tab is selected) or to the resources indexed by the concept selected in the ontology map (when the resources vertical tab is selected). A group can work on a problem: a user can use an entry point to reach the problem concept (see Figure 3). From this concept the user can use vertical navigation to see the problem's type, and then horizontal navigation to reach the competences required to solve it. In the same way, users can reach the knowledge put into practice by these competences. In such a platform, resource transfers can be done in two main ways:
– Users can visualize the contents of different spaces/memories at the same time. They can thus drag and drop a resource or an entry point from one repository to another.
– We developed a semantic forum. All forum contributions are distributed in the resource space among the other resources (see Figure 3). Users do not access the forum itself but the repository's resource space, where they select resources of Forum type to participate in the forum about the selected concept (knowledge or competence), which thus represents the topic [2]. Consequently, users can exchange ideas about specific topics. We plan to develop semantic chats and semantic agendas in the same way.
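The two navigation modes described above can be sketched over a toy concept graph. The concept names and relations below are invented for illustration (they are not the actual MEMORAe ontology): vertical navigation follows subsumption ('is-a') links, while horizontal navigation follows every other proximity relation:

```python
# Toy concept graph (invented names): vertical navigation follows
# subsumption ('is-a'); horizontal navigation follows any other
# proximity relation, e.g. the competences a problem requires.
GRAPH = {
    "problem":         {"is-a": ["exercise"], "requires": ["competence_X"]},
    "competence_X":    {"put-into-practice": ["random_variable"]},
    "random_variable": {"is-a": ["concept"]},
}

def vertical(concept):
    """Neighbours reached through subsumption relations."""
    return GRAPH.get(concept, {}).get("is-a", [])

def horizontal(concept):
    """Neighbours reached through proximity relations other than subsumption."""
    return [t for rel, targets in GRAPH.get(concept, {}).items()
            if rel != "is-a" for t in targets]

print(vertical("problem"))    # → ['exercise']
print(horizontal("problem"))  # → ['competence_X']
```

Chaining the two modes reproduces the scenario above: vertical navigation from the problem to its type, then horizontal navigation to the required competences and on to the knowledge they put into practice.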
6 Conclusion
In this paper we presented the links we made between knowledge management, e-learning, Semantic Web, and Web 2.0 technologies to build a collaborative environment in the framework of the MEMORAe approach. We focused on the
modeling and integration of competences in the MEMORAe2.0 project, and we presented the web platform we developed, E-MEMORAe2.0. It is a memory in which any resources or micro-resources (produced in the forum framework) can be organized in different work spaces (individual, group, organization) around shared ontologies describing knowledge and competences. Users can thus easily transfer resources from one space to another. All micro-resources are capitalized and accessed like any other resource in the memory (course, web site, exercise, etc.). With our approach we take into account both formal training (access, capitalization and sharing of explicit knowledge) and informal training (externalisation and capitalization of tacit knowledge). Our learning organizational memory structures the knowledge and competences of a learning organization. It facilitates exchanges and interactions between learners, and all these interactions are automatically capitalized and semantically indexed. Evaluations of E-MEMORAe2.0 gave good results [6]: learners used their different memories and the forums. Currently our environment is used by academics. We are in contact with industrial partners in order to evaluate such an environment for fostering learning and innovation in their organizations.
References
1. Schmiedinger, B., Valentin, K., Stephan, E.: Competence based business development – organizational competencies as basis for successful companies. Journal of Universal Knowledge Management (1) (2005)
2. Leblanc, A., Abel, M.-H.: E-MEMORAe2.0: an e-learning environment as learners communities support. International Journal of Computer Science and Applications, Special Issue on New Trends on AI Techniques for Educational Technologies 5(1), 108–123 (2008)
3. Dieng, R., Corby, O., Giboin, A., Ribière, M.: Methods and tools for corporate knowledge management. In: Proceedings of the 11th Workshop on Knowledge Acquisition, Modeling and Management (KAW 1998), Banff, Canada, pp. 17–23 (1998)
4. Abel, M.-H., Lenne, D., Leblanc, A.: Organizational Learning at University. In: Duval, E., Klamma, R., Wolpers, M. (eds.) EC-TEL 2007. LNCS, vol. 4753, pp. 408–413. Springer, Heidelberg (2007)
5. Stader, J., Macintosh, A.: Capability Modelling and Knowledge Management. In: Applications and Innovations: 19th International Conference of the BCS Specialist Group on KBS and Applied AI, Cambridge, pp. 33–50 (1999)
6. Leblanc, A., Abel, M.-H.: Using Organizational Memory and Forum in an Organizational Learning Context. In: Proceedings of the Second International Conference on Digital Information Management (ICDIM 2007), pp. 266–271 (2007)
Erroneous Examples: A Preliminary Investigation into Learning Benefits
Dimitra Tsovaltzi1, Erica Melis1, Bruce M. McLaren1, Michael Dietrich2, Georgi Goguadze2, and Ann-Kristin Meyer2
1
German Research Center for Artificial Intelligence, Stuhlsatzenhausweg 3, D-66123 Saarbrücken, Germany
[email protected] www.activemath.org
2 Universität des Saarlandes, Fachbereich Informatik, D-66123 Saarbrücken, Germany
Abstract. In this work, we investigate the effect of presenting students with common errors of other students and explore whether such erroneous examples can help students learn without the embarrassment and demotivation of working with one’s own errors. The erroneous examples are presented to students by a technology enhanced learning (TEL) system. We discuss the theoretical background of learning with erroneous examples, describe our TEL setting, and discuss initial, small-scale studies we conducted to explore learning with erroneous examples.
1 Theoretical and Empirical Background
Correctly worked examples have traditionally been used to help students learn mathematics and science problem solving and have proven to be quite effective (1; 2). However, erroneous examples – that is, worked solutions including one or more errors that the student is asked to detect, explain, and/or correct – have rarely been investigated or used as a teaching strategy, particularly not in technology-enhanced learning systems. The question of whether – and how – erroneous examples are beneficial to learning is still very much open. Some theoretical and empirical research has explored the effects of erroneous examples in mathematics learning and provides evidence that studying errors can support learning by providing new problem solving opportunities and motivating reflection and inquiry, e.g. (3; 4; 5). Moreover, the highly-publicised TIMSS studies (6) showed that math students in Asian countries – where curricula often include the careful analysis and discussion of incorrect solutions – outperform their counterparts in most of the western world. One study explored self-explaining correct and incorrect examples (7; 8): Siegler et al. found that when students self-explained both correct and incorrect examples they learned more than when self-explaining correct examples only. Grosse and Renkl
This research was supported by the German DFG project ALoE (ME1136/7). The authors are solely responsible for its content.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 688–693, 2009. c Springer-Verlag Berlin Heidelberg 2009
also showed some learning benefit of erroneous examples but only for learners with strong prior knowledge and for far transfer learning (9). We plan to take the earlier studies further by investigating erroneous examples used in the context of TEL. In contrast to other studies, we are interested in the correlations between students’ benefit from erroneous examples and the situational and learner characteristics, with an eye toward eventually adapting erroneous examples instruction. To this end, we use the adaptive learning platform ActiveMath (10), a web-based learning environment for mathematics. In contrast to the Grosse and Renkl work, we are investigating erroneous examples with help. Our primary rationale for including help in the empirical studies is that students are not accustomed to working with and learning from erroneous examples and, hence, they need assistance and support in doing so. We hypothesise that learning from the ’errors of others’ can help students enhance their cognitive competencies as well as their meta-cognition and learning orientation. We propose two primary reasons for this. First, a student can best learn error detection and correction by reviewing and studying errors, something that is impossible to do with correct examples – and difficult to do with unsupported problem solving. Second, reviewing erroneous examples appears to be more supportive of a learning orientation rather than a performance orientation. Furthermore, we hypothesise that students will benefit from erroneous examples when encountered at the right time and in the right way. Rewarding a student for error detection may lead to marking of errors in memory such that they will be avoided in subsequent retrieval. 
Moreover, a student is less likely to exhibit the feared ’conditioned response’ of behaviourism (i.e., internalising the error and repeating it) when studying the errors of other students, since the student has not made the error him/herself and thus has not necessarily internalised it. A student is also unlikely to be demotivated by studying someone else’s error(s), as may be the case when emphasising errors the student has made him or herself. On the contrary, in an earlier observational study, we noticed positive motivational effects of erroneous examples (11). Another issue that we plan to investigate in our research is what system affordances are prerequisite to integrating the benefits of erroneous examples in a learning system and, more specifically, what extensions are necessary to the existing ActiveMath system to implement such affordances.
2 Erroneous Examples in ActiveMath
Observational Study. To begin investigating our research questions on erroneous examples, we designed and conducted an observational study with 25 German 6th-graders. The study included two phases, error detection and error correction. Figure 1 displays both phases of an erroneous example presented to a student. The translation (of the first phase) is: Susanne mixes 3 l of milk and 4/6 l of syrup. Susanne calculates how much milk shake is made by adding 3 and 4/6. Her result is a 2 l milk shake. Find the error in Susanne's calculation. Click on the first erroneous step. The student is asked to spot the erroneous step (Schritt 1 in
Fig. 1. An erroneous example
Figure 1) and then to correct it (Schritt 5 in Figure 1); feedback varies between conditions. For instance, in Figure 1 the student selects a correct step as incorrect (i.e., Step 1) and is flagged. The feedback (translated) is "Not really. Susanne's 3rd step is wrong". After displaying the help message, the system asks the student to explain the error (Figure 2, "Why is the 3rd step wrong?") with the choices:
• because Susanne must translate the integer 3 into a fraction
• because 3 has to be added to both the numerator and denominator of 2/3
• because the 3 has to be cancelled: 3 + 2/3
• I don't know.
The first selection is the correct choice. After completing this phase, the student is prompted to correct the error, as shown at the bottom of Figure 1 (Schritt 5, "Now, correct Susanne's first wrong step").
Observations. A key observation was that the 6th-grade students frequently did not know how to correct the erroneous step, even when they were able to choose the correct explanation for the error. This may mean that although students know the correct rules for performing operations on fractions and can recognise explanations that refer to these rules, they still have knowledge gaps that surface when asked to correct the error. Ohlsson (12) has described this phenomenon as a dissociation between declarative and practical knowledge.
Fig. 2. Choices for Explaining the Error
The same phenomenon occurred even with students who could solve exercises but could not correct an erroneous example of the same type, e.g., addition of fractions with unlike denominators. Our interpretation in this case is that students tend to solve problems by following well-practiced solution steps, so their knowledge gaps are not always revealed when solving exercises. We believe these gaps may be detected through the use of erroneous examples.
Feedback Design. Based on this observation, we designed feedback for helping students correct the error. Three types of unsolicited feedback are provided: minimal feedback, error-awareness and detection (EAD) feedback, and help. Minimal feedback consists of flag feedback (green colouring for correct and red for wrong answers) along with a correct/incorrect indication. EAD feedback is intended to support the meta-cognitive skills of error detection and awareness. For example, for the task in Figure 1, the English EAD feedback would be "Susanne's result cannot be correct because 5/3 l is even less than the 3 l of milk". In the first phase of the erroneous examples (finding the error), students get EAD feedback and then multiple-choice questions (MCQs) which scaffold them towards correcting the error. The MCQs are explanations of the error like the ones in Figure 2 and are nested (3 to 4 layers).
Finally, they get minimal feedback and help messages on their choices, and eventually the correct answer. In the correction phase, error-correction feedback is provided, e.g., "You forgot to expand the numerators."
Technical Experiment Support. To facilitate TEL studies with erroneous examples, we implemented an automated presentation of the study materials for use in a classroom setting. All materials are selected through a specific strategy of ActiveMath's exercise sequencer, which defines the order in which students from a condition/group receive their material. On top of this, a selection routine was implemented that randomly chooses the order in which the sequences of the intervention appear each time a new user logs onto the system, and that starts off where it stopped after a break (necessary for longer TEL experiments). Moreover,
all materials are online, including pre- and post-questionnaires. These features are important for running controlled studies in classrooms in general. Additionally, the erroneous examples and feedback described above, as well as the GUI that presents the worked examples, exercises, and erroneous examples, are implemented as a tutorial strategy in ActiveMath.
Pilot Study. Later, we ran a study informed by the initial observational study, in order to get preliminary indications of learning effects and to test the erroneous example design and the online presentation of examples by ActiveMath. Ten 8th-graders were randomly assigned to one of two conditions (five per condition) and completed the pilot study in two sessions. The condition No-Erroneous-Examples (NOEE) included worked examples and fraction exercises, but no erroneous examples. The condition Erroneous-Examples-With-Help (EEWH) included worked examples, exercises, and erroneous examples with provision of help. The design followed a pretest-familiarisation-intervention-posttest schema, with questionnaires also provided. Each group solved five sequences of three items. The posttest consisted of five exercises and two erroneous examples, including conceptual questions on error detection. Although our sample size was too small for inferential statistics, the descriptive statistics showed that the performance of the NOEE group decreased in the post-test (pre-/post-test difference mean = -13.7, stdv = 13.6), whereas the EEWH group's performance increased (pre-/post-test difference mean = 13.1, stdv = 7.7). The EEWH condition reported in a group interview that they were satisfied with the help provided by the system and found it easy to understand. No difference in performance was observed in how the students from the different conditions answered the conceptual questions and solved the erroneous examples. However, with scores of 60% vs. 55%, there was certainly room for improvement in conceptual understanding.
A positive outcome of the study was that all students reported that it was enjoyable to work with the system (e.g. ”It was fun until the end!”) despite complaints that the system was not fast enough (due to server problems).
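The reported figures are summary statistics over paired pre-/post-test difference scores. A minimal sketch of the computation, with invented scores, since the paper reports only the group means and standard deviations:

```python
from statistics import mean, stdev

def difference_scores(pre, post):
    """Paired (post - pre) difference score for each student."""
    return [b - a for a, b in zip(pre, post)]

# Invented scores for five students -- the paper reports only the
# summary statistics (NOEE: mean -13.7, stdv 13.6; EEWH: mean 13.1, stdv 7.7).
pre = [40, 55, 60, 35, 50]
post = [52, 70, 68, 50, 65]
d = difference_scores(pre, post)
print(round(mean(d), 1), round(stdev(d), 1))  # → 13.0 3.1
```

Note that `stdev` computes the sample standard deviation, the usual choice for such small groups.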
3 Outlook
In upcoming studies we plan to investigate the interplay between the two competencies: finding and explaining an error vs. correcting it. In particular, we would like to test whether we can eliminate the observed discrepancy that less-advanced (6th-grade) students could find and explain errors, yet could not correct them. Ohlsson (12) argues that when the competency for finding errors is active, it functions as a self-correction mechanism that, given enough learning opportunities, can lead to a reduction of performance errors. Although reducing one's own errors is arguably different from correcting the errors of others, our erroneous examples, with additional feedback that specifically targets the correction of performance errors, seem to be a good candidate for creating the required learning opportunities.
References
[1] McLaren, B.M., Lim, S.J., Koedinger, K.R.: When and how often should worked examples be given to students? New results and a summary of the current state of research. In: Love, B.C., McRae, K., Sloutsky, V.M. (eds.) Proceedings of the 30th Annual Conference of the Cognitive Science Society, Austin, TX, pp. 2176–2181. Cognitive Science Society (2008)
[2] Trafton, J., Reiser, B.: The contribution of studying examples and solving problems. In: Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society (1993), http://www.citeseer.nj.nec.com/
[3] Borasi, R.: Capitalizing on errors as "springboards for inquiry": A teaching experiment. Journal for Research in Mathematics Education 25(2), 166–208 (1994)
[4] Müller, A.: Aus eigenen und fremden Fehlern lernen. Praxis der Naturwissenschaften 52(1), 18–21 (2003)
[5] Oser, F., Hascher, T.: Lernen aus Fehlern – Zur Psychologie des negativen Wissens. Schriftenreihe zum Projekt: Lernen Menschen aus Fehlern? Zur Entwicklung einer Fehlerkultur in der Schule. Pädagogisches Institut der Universität Freiburg, Schweiz (1997)
[6] OECD: International report PISA plus (2001)
[7] Siegler, R.: Microgenetic studies of self-explanation. In: Granott, N., Parziale, J. (eds.) Microdevelopment: Transition Processes in Development and Learning, pp. 31–58. Cambridge University Press, Cambridge (2002)
[8] Siegler, R., Chen, Z.: Differentiation and integration: Guiding principles for analyzing cognitive change. Developmental Science 11, 433–448 (2008)
[9] Grosse, C., Renkl, A.: Finding and fixing errors in worked examples: Can this foster learning outcomes? Learning and Instruction 17, 612–634 (2007)
[10] Melis, E., Goguadze, G., Homik, M., Libbrecht, P., Ullrich, C., Winterstein, S.: Semantic-aware components and services in ActiveMath. British Journal of Educational Technology, Special Issue: Semantic Web for E-learning 37(3), 405–423 (2006)
[11] Melis, E.: Design of erroneous examples for ActiveMath. In: Looi, C.-K., McCalla, G., Bredeweg, B., Breuker, J. (eds.) 12th International Conference on Artificial Intelligence in Education: Supporting Learning Through Intelligent and Socially Informed Technology (AIED 2005), vol. 125, pp. 451–458. IOS Press, Amsterdam (2005)
[12] Ohlsson, S.: Learning from performance errors. Psychological Review 103(2), 241–262 (1996)
Towards a Theory of Socio-technical Interactions
Ravi K. Vatrapu
Center for Applied ICT (CAICT), Copenhagen Business School, Howitzvej 60, 2. floor, Frederiksberg, 2000, Denmark
[email protected]
Abstract. Technology enhanced learning environments are characterized by socio-technical interactions. Socio-technical interactions involve individuals interacting with (a) technologies, and (b) other individuals. These two critical aspects of socio-technical interactions in technology enhanced learning environments are theoretically conceived as (a) appropriation of socio-technical affordances and (b) structures and functions of technological intersubjectivity. Briefly, socio-technical affordances are action-taking possibilities and meaning-making opportunities in an actor-environment system with reference to actor competencies and technical capabilities of the socio-technical system. Drawing from ecological psychology, formal definitions of socio-technical affordances and the appropriation of affordances are offered. Technological intersubjectivity (TI) refers to a technology supported interactional social relationship between two or more actors. Drawing from social philosophy, a definition of TI is offered. Implications for technology enhanced learning environments are discussed.
Keywords: apperception, perception, and appropriation of affordances, technological intersubjectivity, socio-technical systems, technology enhanced learning, computer supported collaborative learning, comparative informatics.
1 Introduction
There are two interrelated aspects of interactions in designing, developing, using, and evaluating technology enhanced learning (TEL) systems: (i) interacting with technologies and (ii) interacting with others such as peers and teachers. These two interactional aspects are mutually interdependent and are termed socio-technical interactions. Despite their critical centrality, socio-technical interactions in technology enhanced learning have not received necessary and sufficient theoretical consideration. This paper attempts to address this theoretical lacuna and hopes to jumpstart an empirically informed theoretical discussion of socio-technical interactions. As such, this theoretical project is not merely about Human Computer Interaction (HCI) – i.e., interacting with technology – it is also about technological intersubjectivity (TI) – i.e., interacting with people via technology.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 694–699, 2009. © Springer-Verlag Berlin Heidelberg 2009
2 Theoretical Framework
2.1 Affordances
The notion of affordance was introduced by J. J. Gibson [1]. Gibson was primarily concerned with providing an ecologically grounded explanation of visual perception. The ontological foundations of the notion of affordances are materialist and dynamicist [2]. Turvey [2, p. 180], citing Lombardo [3], identifies "the principle of reciprocity – distinguishable yet mutually supportive realities" as the central insight of Gibson's ecological psychology of visual perception. This principle of reciprocity is highly relevant to technology supported collaboration, as multiple individuals, each with a specific subjectivity and identity, shape mutually supportive interactional realities. The ecological approach is dynamicist but not dialectical and processual, holding that "everything changes in some respects, but not in all respects" [2, p. 175]. Drawing upon foundational work in ecological psychology on the formal definition of affordances [2, 4], the following definition of socio-technical affordance is provided. Narrative expositions follow the definition.
2.1.1 Definition of Socio-technical Affordance
Let Wpqr (e.g., a person-sending-email-to-another-person system) = (Tp, Sq, Or) be composed of different things T (e.g., a concept-mapping technology), S (e.g., a concept-map node creator) and O (e.g., a concept-map node receiving partner). Let p be a property of the technology T, q be a property of the subject S, and r be a property of the other O. The relation between p, q and r, p/q/r, defines a higher-order property (i.e., a property of the socio-technical system), a. Then a is said to be a socio-technical affordance of Wpqr if and only if
(i) Wpqr = (Tp, Sq, Or) possesses a
(ii) neither T, S, O, (T, S), (T, O) nor (S, O) possesses a
The formal definition of socio-technical affordance presented above is for the minimal situation of dyadic interaction in technology supported interactional environments.
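Condition (ii) – that no proper sub-combination of the system possesses the property – can be made concrete with a small sketch. The following Python fragment is purely illustrative; the `emailing` property and the component names are invented for the example, not taken from the paper:

```python
# Illustrative check of the affordance definition: a candidate property
# `a` is a socio-technical affordance of W = (T, S, O) iff the whole
# triple possesses it (condition i) while no proper sub-combination
# does (condition ii).
from itertools import combinations

def is_socio_technical_affordance(a, T, S, O):
    parts = [T, S, O]
    if not a(parts):                      # (i): W possesses a
        return False
    for k in (1, 2):                      # (ii): no singleton or pair possesses a
        for sub in combinations(parts, k):
            if a(list(sub)):
                return False
    return True

# "Email can be exchanged" needs all three components at once
# (component names are invented for this sketch).
emailing = lambda parts: {"mail-server", "sender", "receiver"} <= set(parts)
print(is_socio_technical_affordance(emailing, "mail-server", "sender", "receiver"))  # → True
```

By contrast, a property possessed by the subject alone (say, the ability to type) fails condition (ii) and is not an affordance of the system in this sense.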
For a social situation involving n distinct social actors, an n-tuple would characterize the system. This formalism can be read as an activity system of subject, object and tools [5]. Relating the definition to Latour’s actor-network theory [6], both actors and “actants” are implicated in the notion of socio-technical affordances. The formal definition of socio-technical affordance captures the two facets of interaction in socio-technical systems: (1) interacting with technology and (2) interacting with other persons (technological intersubjectivity to be discussed later). It is important to realize that affordances are action-taking possibilities and meaning-making opportunities in actual situations in an actor-environment system relative to actor competencies and technology capabilities. Norman’s [7] gulf of execution and gulf of evaluation can be read as gulfs in the perception of action-taking possibilities and meaning-making opportunities respectively. Socio-technical affordances are not things or widgets or features or functionalities. This category conflation has been the source of much confusion in the HCI design community [8]. Socio-technical affordances are the relational properties in particular situations of a specific user-technology system. By virtue of being relational properties with reference to an actor, socio-technical affordances can be termed relative to the actor and/or
the technology, but relativity is not subjectivity. In that sense, affordances are not subjective properties. Affordances are neither arbitrary properties nor are they socially constructed [9]. Affordances are relational through and through, as they are the informational structure to be perceived in the ambient arrays of the actor-environment system. The next section presents a brief discussion of the notion of "appropriation of affordances".
2.2 Appropriation of Affordances
Cognition in the ecological psychology sense has been articulated as the "cooperative appropriation of affordances" [10, p. 135]. After Rogoff and Lave [11], "cognition is something one uses, not something one has". In my reading of Gibson [1], the notion of affordance simultaneously specifies two concurrent levels, meaning and action: an affordance is a meaning-making opportunity and simultaneously an action-taking possibility in an actor-environment system in a particular situation. Although the perception of affordances can be accounted for on ecological grounds, the perception of events cannot be accounted for on strictly ecological ontological grounds [12]. The perception of events has interactional consequences in technology supported collaboration. It is here that Gibson's rejection of a role for higher-order cognitive processes is problematic. The social interactional consequences of an individual's perception of affordances are influenced by a prospective projection into the future as well as a socio-psychological imagination of the other. Adapting Stoffregen's discussion of behavior [4, p. 125], appropriation is "what happens at the conjunction of complementary affordances and intentions or goals". Based on Stoffregen's definition of behavior [4], the following definition is offered for the appropriation of affordances.
2.2.1 Definition of Appropriation of Affordances

Let Wpqr (e.g., a person-sending-email-to-another-person system) = c(a, i) be composed of different affordances, a (e.g., e, the opportunity to compose email; f, the opportunity to forward email; g, the opportunity to solve a science problem), and complementary intentions, i (e.g., h, the intention to send email; j, the intention to forward email; k, the intention to solve a science problem), where both affordances and intentions are properties of the socio-technical system. A given appropriation b (e.g., sending email) will occur if and only if (and when) an affordance (e.g., e) and its complementary intention (e.g., h) co-occur at the same point in the space–time continuum, where c is a cultural-cognitive choice function. Unlike orthodox cognitivist views of the representational nature of human cognition, which posit “copying” the external world, the cultural-cognitive conception of socio-technical affordances and their appropriation views interaction as “coping” with the contingencies of the external world [13]. Interactions in socio-technical environments are a dynamic interplay between ecological information as embodied in artifacts and individual interpretation grounded in cognitive schemas. The essential mediation of all interaction is the central insight of socio-cultural theories of the mind [14]. The conception of interaction as mutually “accountable” and systematic is the critical insight of ethnomethodology [15] and conversation analysis [16]. Accordingly, the cultural-cognitive choice function c represents the cultural-cognitive mediation of interaction. Interactions in socio-technical systems are conceived as the appropriation of socio-technical affordances. Even if socio-technical affordances are
Towards a Theory of Socio-technical Interactions
697
to be directly perceived, their appropriation is still influenced by the cultural cognition of social actors. This renders the concept of affordance ecologically cognitive. The notion of technological intersubjectivity (TI) is discussed next. TI addresses the second aspect of socio-technical interactions in technology enhanced learning environments: how participants relate to and form impressions and opinions of each other during and after technology supported interactions.

2.3 Technological Intersubjectivity

Intersubjectivity is the key presupposition underlying human social interaction [17]. Human beings are not only functional communicators but also hermeneutic actors. Technological intersubjectivity is an emergent property resulting from a technology supported self–other social relationship. In technological intersubjectivity, technology mediation can sometimes (but not necessarily always) disappear, as in Clarke’s [18] third law of technology.

2.3.1 Definition of Technological Intersubjectivity

Technological intersubjectivity (TI) refers to a technology supported interactional relationship between two or more participants. TI emerges from a dynamic interplay between the technological relationship of participants with artifacts and their social relationship with others. Information and communication technologies (ICT) and the Internet have changed our social relations with others and with objects in fundamental ways that transcend technology mediation. Our psychological perception of and phenomenological relation with others is changed fundamentally by advances in information and communication technologies and social software. Our interactions with others and objects are increasingly informed by the logic of technology; hence, technological intersubjectivity. (Note that natural language is the bedrock of TI.)
Technological intersubjectivity deals with the ICT-enabled capability to place-shift (i.e., to be physically embodied in one physical space but virtually embodied in a different place) and the ability to time-shift (i.e., to be able to refer back to earlier interactions or to defer interactions forward).
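The definition of appropriation in Sect. 2.2.1 can be illustrated with a minimal computational sketch. All identifiers below are hypothetical illustrations, not the author’s notation; the cultural-cognitive choice function c is reduced here to a simple lookup of complementary affordance-intention pairs.

```python
# Toy illustration of the definition of "appropriation of affordances":
# an appropriation occurs iff an affordance and its complementary
# intention co-occur in the same situation. All names are hypothetical;
# the cultural-cognitive choice function c is reduced to a lookup table.

def appropriation(affordances, intentions, complements):
    """Return (affordance, intention) pairs that co-occur, i.e. the
    appropriations that take place in this situation."""
    return [(a, complements[a]) for a in sorted(affordances)
            if complements.get(a) in intentions]

# A person-sending-email-to-another-person system (cf. W_pqr):
affordances = {"compose_email", "forward_email", "solve_science_problem"}
intentions = {"send_email", "forward_email_intent"}  # no problem-solving intention present
complements = {"compose_email": "send_email",
               "forward_email": "forward_email_intent",
               "solve_science_problem": "solve_problem_intent"}

# The science-problem affordance has no co-occurring complementary
# intention in this situation, so no appropriation involving it occurs.
print(appropriation(affordances, intentions, complements))
```

The point of the sketch is only the co-occurrence condition: an affordance without a complementary intention present yields no appropriation, however available it is.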
3 Discussion

Rethinking the productive notion of affordances can help inform the design of TEL systems. The concept of affordance has been much used, misused, and abused in the field of human-computer interaction [8] as well as in the learning sciences. In my opinion, most current usages of the term affordance are far removed from its ecological origins and subsequent developments in ecological psychology. In many ways, the concept of affordance has been subjected to “conceptual stretching” by uncritical conflation with “technology features”. By returning the concept of affordance to its ecological roots and following its intellectual trajectory since Gibson’s seminal contribution, this theoretical framework rethinks affordances as socio-technical action-taking possibilities and meaning-making opportunities in an actor-environment socio-technical system relative to actor competencies and technology capabilities. This
allows TEL researchers and practitioners to critically engage with the design and evaluation of learning technologies by concentrating on all four aspects: (a) the action-taking possibilities and (b) the meaning-making opportunities provided by intended design or creative appropriation; (c) how these are relative to learner competencies in terms of digital literacy, domain-specific knowledge, motivation, and critical thinking; and (d) the pedagogically innovative technological capabilities built into the TEL system. The definition and discussion of the concept of appropriation of affordances indicates that learners situated in TEL environments might choose to appropriate culturally relevant (or appropriate) affordances. That is, the context-sensitive and situation-bounded embodied actions of individual learners engaged in TEL environments will be influenced not only by the micro-genetic unfolding interactional contingencies but also by macro-structural cultural concerns and metacognitive functions [19-21]. This allows for a richer conception, instrumentation, and analysis of interactional data from TEL environments [see 22 for a description of a design framework of usability, sociability, and learnability]. The concept of technological intersubjectivity (TI) goes beyond traditional HCI notions (such as presence and connected presence) and the humanities’ notions (such as networked individualism and the information subject) by bringing together both psychological and phenomenological aspects of technology supported social interactions [23]. This provides for a broader and deeper understanding of the new generation of learners growing up with pervasive and ubiquitous information and communication technologies and other computational devices and gadgets (the so-called millennials and digital natives).
One of the prime arguments for TEL has been that, in a world of constant connectivity and near-ubiquity of ICTs, technologies must be leveraged pedagogically. However, as pointed out earlier, there has not been theoretical work that sought to bring these macro-sociological, technological, and pedagogical trends and aspirations together into a theoretically coherent framework that can be empirically evaluated. Hopefully, these efforts will jumpstart an empirically informed theoretical discussion on socio-technical interactions in TEL.
Acknowledgments Special thanks to Dan Suthers, Scott Robertson, Marie Iding, Marc Le Pape, Pat Gilbert, Nathan Dwyer, Richard Medina and anonymous reviewers for constructive feedback on an earlier version of these ideas.
References

1. Gibson, J.J.: The ecological approach to visual perception. Houghton Mifflin, Boston (1979)
2. Turvey, M.T.: Affordances and Prospective Control: An Outline of the Ontology. Ecological Psychology 4, 173–187 (1992)
3. Lombardo, T.J.: The reciprocity of perceiver and environment: The evolution of James J. Gibson’s ecological psychology. L. Erlbaum Associates, Hillsdale (1987)
4. Stoffregen, T.A.: Affordances as Properties of the Animal-Environment System. Ecological Psychology 15, 115–134 (2003)
5. Kaptelinin, V., Nardi, B.A.: Acting with Technology: Activity Theory and Interaction Design. MIT Press, Cambridge (2006)
6. Latour, B.: Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press, Oxford (2005)
7. Norman, D.: The design of everyday things. Doubleday, New York (1990)
8. Torenvliet, G.: We can’t afford it!: the devaluation of a usability term. Interactions 10, 12–17 (2003)
9. Hacking, I.: The Social Construction of What? Harvard University Press, Cambridge (1999)
10. Reed, E.S.: Cognition as the Cooperative Appropriation of Affordances. Ecological Psychology 3, 135–158 (1991)
11. Rogoff, B., Lave, J.: Everyday Cognition: Its Development in Social Context. Harvard University Press, Cambridge (1984)
12. Stoffregen, T.A.: Affordances and Events. Ecological Psychology 12, 1–27 (2000)
13. Blumer, H.: Symbolic Interactionism: Perspective and Method. Prentice-Hall, Englewood Cliffs (1969)
14. Wertsch, J.: Vygotsky and the social formation of mind. Harvard University Press, Cambridge (1985)
15. Garfinkel, H.: Studies in Ethnomethodology. Prentice-Hall, Englewood Cliffs (1967)
16. Sacks, H., Schegloff, E.A., Jefferson, G.: A Simplest Systematics for the Organization of Turn-Taking for Conversation. Language 50, 696–735 (1974)
17. Crossley, N.: Intersubjectivity: The Fabric of Social Becoming. Sage, London (1996)
18. Clarke, A.C.: Profiles of the future: an inquiry into the limits of the possible. Harper & Row (1962)
19. Vatrapu, R.: Cultural Considerations in Computer Supported Collaborative Learning. Research and Practice in Technology Enhanced Learning 3, 159–201 (2008)
20. Vatrapu, R.: Technological Intersubjectivity and Appropriation of Affordances in Computer Supported Collaboration. Communication and Information Sciences, PhD. University of Hawaii at Manoa, Honolulu, 538 (2007), http://lilt.ics.hawaii.edu/~vatrapu/docs/Vatrapu-Dissertation.pdf
21.
Vatrapu, R., Suthers, D.: Culture and Computers: A Review of the Concept of Culture and Implications for Intercultural Collaborative Online Learning. In: Ishida, T., Fussell, S.R., Vossen, P.T.J.M. (eds.) IWIC 2007. LNCS, vol. 4568, pp. 260–275. Springer, Heidelberg (2007)
22. Vatrapu, R., Suthers, D., Medina, R.: Usability, Sociability, and Learnability: A CSCL Design Evaluation Framework. In: Proceedings of the 16th International Conference on Computers in Education, ICCE 2008 (2008) (CD-ROM)
23. Vatrapu, R., Suthers, D.: Technological Intersubjectivity in Computer Supported Intercultural Collaboration. In: Proceedings of the 2009 International Workshop on Intercultural Collaboration, IWIC 2009, Palo Alto, California, USA, February 20-21, pp. 155–164. ACM, New York (2009)
Knowledge Maturing in the Semantic MediaWiki: A Design Study in Career Guidance

Nicolas Weber1,2, Karin Schoefegger1, Jenny Bimrose3, Tobias Ley2, Stefanie Lindstaedt1,2, Alan Brown1, and Sally-Anne Barnes3

1
Knowledge Management Institute, Graz University of Technology 2 Know-Center 3 Institute for Employment Research, University of Warwick
[email protected],
[email protected],
[email protected],
[email protected],
[email protected],
[email protected],
[email protected]
Abstract. The evolutionary process in which knowledge objects are transformed from informal and highly contextualized artefacts into explicitly linked and formalized learning objects, together with the corresponding organisational learning processes, has been termed Knowledge Maturing. Whereas wikis and other tools for the collaborative building of knowledge have been suggested as useful in this context, they lack several features for supporting the knowledge maturing process in organisational settings. To overcome this, we have developed a prototype based on Semantic MediaWiki which enhances the wiki with various maturing functionalities such as maturing indicators and mark-up support.

Keywords: Knowledge Maturing, Semantic MediaWiki.
1 Introduction
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 700–705, 2009. © Springer-Verlag Berlin Heidelberg 2009

Resources in an organizational environment change over time. Since enterprises need to become increasingly agile in order to compete successfully, adapting resources to users' needs and to constantly changing requirements is a crucial factor. Resources such as e-mail, web content, and documents facilitate the fulfillment of our daily tasks by providing the basis for knowledge-intensive work. The improvement and gradual standardization of knowledge artifacts over time and the accompanying organisational learning processes have been characterized as Knowledge Maturing, see [7]. This paper describes a design study conducted in an ongoing EU-funded project called MATURE (http://mature-ip.eu/en/) whose objective is to understand the maturing process and provide maturing support for knowledge workers in a collaborative
environment. The design study, as part of the requirements elicitation and analysis, aims at identifying requirements for a future system supporting the maturing process of knowledge objects. A further purpose of the design study is to ground the theoretical ideas of knowledge maturing in organisational practice. There are several approaches analyzing the theory of Knowledge Maturing. [6] describes conceptual foundations for systems which support knowledge maturing, taking into account the three dimensions of content, semantics, and processes. Indicators for content maturing were examined in [2] by analyzing articles within the online encyclopedia Wikipedia. [1] deals with ontology maturing in folksonomies and [4] covers ontology evolution in Web 2.0 environments. [5] describes the lifecycle of task patterns as part of process management.
2 Maturing Services for the Semantic MediaWiki
Wikis are prime examples of tools that allow for the collective construction of knowledge in a community setting. There are certainly good examples of wikis being used as tools for creating a collective knowledge repository, for teaching and learning purposes, and for organizational knowledge management, see [3]. From the perspective of a knowledge worker, wikis may be very well suited to enabling the maturing of artefacts, especially because of the ease of editing content and the policy that everyone can edit anything. Additionally, they make the collective construction process traceable (utilizing the wiki’s history functionality) and allow for discussion processes around artefacts. The career guidance sector is heavily content dependent (Labour Market Information, statistics, etc.), thus a Semantic MediaWiki was chosen as the basis for a prototype supporting knowledge maturing in this sector. Several functionalities were developed to enrich the Semantic MediaWiki in terms of searching, collaborating, adding semantic mark-up, and visualisation. Each of these services supporting knowledge workers is described in detail in the following.

Search Support Service. This service provides a search interface which helps the user to aggregate information related to a certain topic without the need to use multiple search engines. Using different search facilities of various web resources (Yahoo search, YouTube, wiki articles, Xing) and Yahoo Omnifind to include local information sources, the Search Support Service provides a combined interface that is embedded in the edit mode of a wiki article. The tags suggested by the system on the basis of the existing text in the article are used as the default search keywords. The wide range of information sources, varying between textual content, pictures, persons, etc., stimulates the user’s inspiration and so promotes the evolutionary growth of the artefact.

Collaboration Initiation Service.
This service offers the facility to initiate collaboration with authors of articles or other interested persons via Skype (see fig. 2, marker 4) without having to switch to another tool, since it is embedded into the wiki. The user can send messages or web links
Fig. 1. Search Service - Interface
to wiki articles in order to support the negotiation and consolidation of artefacts. Additionally, within the visualisation of the wiki network, every author related to an article in the wiki can be contacted by clicking on the author’s node.

Maturing Indicator Services. The objective of analyzing content is to facilitate the assessment of the maturity of a document. This maturity level makes it possible to decide whether the maturity of a certain document should be improved by supporting the user in creating or editing a knowledge artefact. The bottleneck in assessing the maturity of text is the selection of qualified attributes reflecting the maturity of the content. Assuming that readability and maturity are strongly correlated, see [2], within the design study we tested two metrics for readability scores, both of which analyse English text samples.

Mark-up Recommendation Service. Creating semantic mark-up contributes to the enrichment of wiki content. Additional annotation of articles enables the user to browse through the wiki and facilitates the retrieval of knowledge based on semantic mark-up. In addition, mark-up is used as a basis for the recommendation of useful resources and the visualisation of emergent content structures. The mark-up recommendation service strives for two goals: first, lowering the barrier to creating mark-up, which replaces the complex Semantic MediaWiki syntax, and second, improving the quality of structure by recommending meaningful, pre-consolidated mark-up. Depending on the content of an article, the system analyses the words used and their frequencies to recommend the most-used keywords as tags for the article, see fig. 2 (marker 3). In order to categorize articles, the system suggests already existing categories which correspond best to the newly created content, see fig. 2 (marker 2).
Additionally, the user can add a certain category which seems appropriate and can train the service with this category so that the system can suggest it in the future for appropriate and related articles.

Visual Semantic Browsing Service. This service provides a visualization of the content of the Semantic MediaWiki. Each node in the graph represents either an article in the wiki or a registered user. Directed edges represent the relations; for instance, an article might have an assigned category, author, tag
Fig. 2. A Semantic MediaWiki edit page with additional feature bar
or linked article. A user might have written one or more articles, or a category might contain one or more sections, articles, tags, etc. Depending on the choice of the maximum shown path length, the user can define how many levels (and nodes) of the network are shown in the visualisation, as well as the type of the representing graph (e.g., hierarchical, cyclic). By clicking on a node in the graph, the visualization is updated and its connected nodes are shown, which enables the user to browse easily through the content of the wiki within the graph. Additionally, new nodes (users or articles) can be created; articles corresponding to a certain node in the graph can be opened and edited in a new browser window; and users corresponding to nodes can be contacted by using the Collaboration Initiation Service. This service supports the daily work of users by enabling visual browsing through wiki content from an article to related articles or users. Thus, it assists by providing an overview of related topics and experts and supports easy negotiation by embedding a collaboration service.
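The maturing-indicator and mark-up recommendation ideas above can be sketched in a few lines of code. The paper does not name the two readability metrics it tested, so the Flesch Reading Ease formula is used here purely as an illustrative stand-in; the tag recommender is a plain term-frequency ranking, and the example article text is made up.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on"}

def flesch_reading_ease(text):
    """Illustrative readability score: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words). One of many possible maturity indicators;
    syllables are crudely approximated by vowel groups."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * n / sentences - 84.6 * syllables / n

def recommend_tags(text, k=3):
    """Recommend the k most frequent non-stopword terms as tags."""
    words = [w.lower() for w in re.findall(r"[A-Za-z]+", text)]
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(k)]

article = ("Career guidance relies on labour market information. "
           "Guidance advisers use labour market statistics daily.")
print(round(flesch_reading_ease(article), 1))
print(recommend_tags(article))
```

A real system would combine such a readability indicator with other signals (edit frequency, number of authors, amount of mark-up) before flagging an article as mature or immature.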
3 Evaluation in a Real World Context
In order to gain insight into, and obtain new ideas about, how a system could support the knowledge maturing process, a prototype was developed and evaluated in the real-world context of career guidance organizations, whose service is delivered by specially trained Personal Advisers (P.A.s). The prototype for this design study was implemented using rapid prototyping, which involves iterative design phases using mock-ups and development phases combined
Fig. 3. Visualisation Service
with regular input to generate feedback on the viability of our approach to supporting knowledge maturing in the context of career guidance. Questionnaires, interviews, and workshops with potential users from career guidance organizations were used to gain this regular feedback and input for further development in the evaluation process.

Visual Appearance. Visual adaptation of the system would be necessary depending on individual preferences and learning styles. The easier a user can adapt the system to his or her needs, the more likely it is that the motivation to use the system for everyday work grows.

Easy Access to Relevant Information. Users might lack time to research information and therefore need easy access to the articles that are relevant for them. To support this, each article could have a summary which is shown when articles are listed as a search result or at the top of a page. Additionally, this summary could be shown within the visualisation of the wiki content when a user moves the mouse over the node representing the article.

Accuracy Control Concerning Time and Content. This is necessary to make sure the data is accurate, up to date, and relevant. Long articles are unlikely to be read, and it is too time-consuming to search through them for the information a user is looking for. Instead of a moderator, automatic date flags could be used to remind authors and editors to update a certain knowledge artifact.

Awareness for Collaboration. Collaboration in organisations supports employees in discussing new ideas and provides help when questions arise or problems are encountered. The user should be able to see immediately who is online and who is not, in order to know whom to ask for help or discussion.
4 Conclusion
The main purpose of this work was to gain insight into the knowledge maturing process in the real world context of career guidance organizations by developing a tool that supports this process. The potential of the system in this context
was to be explored, and it was to be researched how the utility of this system could be further enhanced. To support newly appointed personal advisers of career guidance organisations in a typical working process and the corresponding stages of knowledge maturing, a Semantic MediaWiki was employed which was enriched by several user interfaces that extend the usability of the wiki in terms of collaboration, content visualisation, and ease of use. Several maturing indicators and services have been designed that try to bridge the gaps in the maturing process. Furthermore, an evaluation of the prototype in a real-world context helped to gain deeper insight into the features that are relevant for supporting knowledge maturing in career guidance organisations, and the main aspects of their requirements can easily be adopted for a system supporting the knowledge maturing of knowledge workers in other contexts.

Acknowledgement. This work has been partially funded by the European Commission as part of the MATURE IP (grant no. 216346) within the 7th Framework Programme of IST. The Know-Center is funded within the Austrian COMET Program - Competence Centers for Excellent Technologies - under the auspices of the Austrian Ministry of Transport, Innovation and Technology, the Austrian Ministry of Economics and Labor and by the State of Styria.
References

1. Braun, S., Schmidt, A.: People Tagging & Ontology Maturing: Towards Collaborative Competence Management. In: 8th International Conference on the Design of Cooperative Systems (COOP 2008), Carry-le-Rouet, France (2008)
2. Braun, S., Schmidt, A.: Wikis as a Technology Fostering Knowledge Maturing: What We Can Learn from Wikipedia. In: 7th International Conference on Knowledge Management (I-KNOW 2007), Special Track on Integrating Working and Learning in Business (IWL), Austria (2007)
3. Jaksch, B., Kepp, S.J., Womser-Hacker, C.: Integration of a wiki for collaborative knowledge development in an e-learning context for university teaching. In: Holzinger, A. (ed.) USAB 2008. LNCS, vol. 5298, pp. 77–96. Springer, Heidelberg (2008)
4. Juffinger, A., Neidhart, T., Granitzer, M., Kern, R., Scharl, A.: Distributed Web2.0 Crawling for Ontology Evolution. International Journal of Internet Technology and Secured Transactions
5. Ong, E., Grebner, O., Riss, U.: Pattern-Based Task Management: Pattern Lifecycle and Knowledge Management. In: WM 2007, Proceedings of the 4th Conference on Professional Knowledge Management, IKMS 2007 Workshop, Potsdam, Germany, pp. 357–364 (2007)
6. Schmidt, A., Hinkelmann, K., Ley, T., Lindstaedt, S., Maier, R., Riss, U.: Conceptual Foundations for a Service-oriented Knowledge and Learning Architecture: Supporting Content, Process and Ontology Maturing. Springer, Heidelberg (2009)
7. Schmidt, A.: Knowledge Maturing and the Continuity of Context as a Unifying Concept for Knowledge Management and E-Learning. In: Proceedings of I-KNOW 2005, Graz, Austria (2005)
Internet Self-efficacy and Behavior in Integrating the Internet into Instruction: A Study of Vocational High School Teachers in Taiwan Hsiu-Ling Chen Graduate School of Technological and Vocational Education, National Taiwan University of Science and Technology, Taipei 106, Taiwan
[email protected]
Abstract. The purpose of this study was to explore the relationship between Internet self-efficacy and behavior in integrating the Internet into instruction. Participants in the study were 449 vocational high school teachers in Taiwan. A validation study was conducted with the Internet Self-Efficacy Scale (ISES) and the Integrating the Internet into Instruction Behavior Scale (IIIBS). The findings revealed that general and communicative Internet self-efficacy might foster behavior in integrating the Internet into instruction. The teachers’ behavior was classified into five aspects: course preparation, teaching activities, learning guidance, assessment, and product sharing. Furthermore, this study employed structural equation modeling (SEM) to investigate the causal relations among the variables considered. The SEM analysis revealed that teachers with higher Internet self-efficacy showed more Internet integration in their course preparation. In addition, course preparation is a mediating factor between Internet self-efficacy and the other four aspects of behavior in integrating the Internet into instruction.

Keywords: Internet self-efficacy, integrating the Internet into instruction, vocational high school teachers.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 706–711, 2009. © Springer-Verlag Berlin Heidelberg 2009

1 Introduction

The Internet is widely used in vocational high schools, and the use of Internet technology for educational purposes has been growing rapidly. Through the Internet, students can access useful tools and resources to enhance their learning. Recognizing the power of Internet-based technologies, the Ministry of Education in Taiwan has encouraged the use of information technology in schools. In addition, teachers have been encouraged to attend a series of workshops to promote their information literacy and further foster their ability to integrate information technology into their teaching. However, implementing technology integration is burdensome, and not every teacher is willing to adopt a new approach to teaching. Thus, studying the factors related to teachers' behavior in integrating the Internet into instruction is crucial for educational policy and intervention. Self-efficacy can be described as a person's beliefs, expectations, and perceived confidence in his or her ability to successfully perform a task [1][2][3]. Research has shown
that self-efficacy affects the effort people devote while performing a task and their persistence in dealing with difficult situations [4][5]. Studies have further revealed a link between teachers' self-efficacy and students' achievement [6][7]. Thus, teachers' self-efficacy is a valuable issue for educators. Harrison, Rainer, Hochwarter, and Thompson [8] suggested that employees with a high level of computer self-efficacy performed significantly better on computer-related tasks. The explosive growth of computers and the Internet in the classroom over the last decades has given most teachers easy access to the Internet. Therefore, teachers' Internet self-efficacy, which may affect their behavior in integrating the Internet into instruction, is an important topic to study.
2 Methodology

2.1 Participants

This investigation used purposive sampling and focused on vocational high school teachers in Taiwan. A total of 449 paper-and-pencil survey questionnaires were gathered. The participants were 449 teachers from a selection of schools in Taiwan, distributed across 36 vocational high schools and including 316 female and 133 male teachers.

2.2 Instruments

The Internet Self-Efficacy Scale (ISES) and the Integrating the Internet into Instruction Behavior Scale (IIIBS) were utilized to meet the purpose of this study. The ISES is designed to assess teachers' self-perceived confidence and expectations in using the Internet, covering general Internet self-efficacy and communicative Internet self-efficacy. The ISES used a six-point Likert scale with 10 items, which were adapted from the items developed by Tsai and Tsai [9] and Peng, Tsai and Wu [10]. The ten items were divided into two factors: the first assessed teachers' Internet self-efficacy in general (5 items) and the second assessed teachers' efficacy in communication and interaction on the Internet (5 items). The Cronbach alpha reliability coefficients for these two factors were 0.91 and 0.96, and the overall alpha was 0.93, indicating that the internal reliability is adequate [11]. The IIIBS is designed to capture teachers' behavior in integrating the Internet into instruction. It has 18 items on a six-point Likert scale across five subscales: Course Preparation (4 items), Teaching Activities (3 items), Learning Guidance (4 items), Assessment (4 items), and Product Sharing (3 items). The Cronbach alpha reliability coefficients for these five subscales were 0.95, 0.86, 0.89, 0.90, and 0.83, and that for the whole scale was 0.94, indicating that the internal reliability is adequate [11].
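Cronbach's alpha, used above to report internal reliability, can be computed directly from item-level scores. The sketch below uses made-up responses, not the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/variance of totals).
    items: one inner list per item, aligned across respondents."""
    k = len(items)           # number of items
    n = len(items[0])        # number of respondents

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical responses of 4 teachers to a 3-item six-point Likert subscale
items = [[5, 4, 6, 5],
         [4, 4, 5, 5],
         [5, 3, 6, 4]]
print(round(cronbach_alpha(items), 3))  # 0.833 for this toy data
```

Values above roughly 0.7 are conventionally read as adequate internal consistency, which is the criterion the reported coefficients (0.83 to 0.96) comfortably meet.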
3 Results

3.1 Descriptive Data

Table 1 shows descriptive data for teachers' responses on Internet self-efficacy and integrating the Internet into instruction behavior. These teachers displayed better
general Internet self-efficacy than communicative Internet self-efficacy (mean score 5.40 versus 4.32). Moreover, they expressed varying tendencies across the five aspects of integrating the Internet into instruction behavior (means between 2.55 and 4.71 on the six-point Likert scale). They preferred to integrate the Internet into their course preparation more than into teaching activities, product sharing, learning guidance, and assessment.

Table 1. Descriptive data for teachers' scores on the ISES and IIIBS
Scale                                  Mean    S.D.
General Internet Self-efficacy         5.397   0.659
Communicative Internet Self-efficacy   4.318   1.403
Assessment                             2.553   1.166
Course Preparation                     4.710   0.891
Learning Guidance                      3.410   1.123
Teaching Activities                    3.578   1.132
Product Sharing                        3.531   1.191
3.2 The Correlation between Internet Self-efficacy and Integrating the Internet into Instruction

Table 2 displays the Pearson correlation analysis between teachers' scores on the ISES and IIIBS. Teachers' communicative Internet self-efficacy and their scores on each scale of the IIIBS were all significantly positively correlated. That is, teachers with higher communicative self-efficacy integrated more Internet resources into all aspects of their instruction. Teachers' general Internet self-efficacy was also significantly related to their behavior in integrating the Internet into instruction, except for "assessment." Both general and communicative Internet self-efficacy thus played an important role in teachers' integration of the Internet into instruction; high Internet self-efficacy may encourage teachers to integrate the Internet into their teaching.

Table 2. The correlation between teachers' responses on the ISES and IIIBS
                      General Internet   Communicative Internet
                      Self-efficacy      Self-efficacy
Course Preparation    .461***            .413***
Teaching Activities   .272***            .401***
Learning Guidance     .184***            .312***
Assessment            .055               .275***
Product Sharing       .245***            .380***
***p<0.001, **p<0.01, *p<0.05.
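The coefficients in Table 2 are ordinary Pearson product-moment correlations. A minimal sketch (the scores below are made up for illustration, not taken from the study) is:

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson product-moment correlation coefficient of two samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical 6-point scale scores for six teachers
comm_se  = [5.0, 3.5, 4.0, 5.5, 2.5, 4.5]  # communicative Internet self-efficacy
course_p = [5.5, 3.0, 4.5, 5.0, 2.0, 4.0]  # course-preparation subscale
print(round(pearson_r(comm_se, course_p), 3))
```

The significance stars in Table 2 would additionally require a t-test of r against zero, which is omitted here.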
3.3 Structural Model: The Causal Relation between Internet Self-efficacy and Integrating the Internet into Instruction Behavior

This study applied structural equation modeling (SEM) to explore the causal relationships among the variables. The predictor (exogenous) variable was Internet
Internet Self-efficacy and Behavior in Integrating the Internet into Instruction
709
self-efficacy and the outcome (endogenous) variable was integrating the Internet into instruction behavior. According to previous literature [12][13][14][15][16][17], the fit measures of the structural model in the present study indicated an adequate fit (Chi-square/df = 3.5, recommended value ≤ 5; Root Mean Square Error of Approximation (RMSEA) = 0.079, recommended value ≤ 0.08; Goodness-of-Fit Index (GFI) = 0.86, recommended value ≥ 0.90; Normed Fit Index (NFI) = 0.96, recommended value ≥ 0.90; Comparative Fit Index (CFI) = 0.97, recommended value ≥ 0.90). The structural model is presented in Figure 1.
[Figure 1: path diagram linking General and Communicative Internet Self-efficacy to Course Preparation, and Course Preparation to Assessment, Learning Guidance, Teaching Activities, and Product Sharing; the standardized path coefficients shown are 0.09*, 0.39*, 0.78*, 0.82*, 0.83*, and 0.89*.]
Fig. 1. The structural causal relationships between Internet self-efficacy and integrating the Internet into instruction behavior
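The fit screening reported above amounts to comparing each index against its conventional cutoff (the values are those given in the text; the cutoffs follow [12]-[17]). A small sketch of that check:

```python
# Each entry: reported value, recommended cutoff, and the direction of the test.
fit_indices = {
    "Chi-square/df": (3.5,   5.0,  "<="),
    "RMSEA":         (0.079, 0.08, "<="),
    "GFI":           (0.86,  0.90, ">="),
    "NFI":           (0.96,  0.90, ">="),
    "CFI":           (0.97,  0.90, ">="),
}

def meets(value: float, cutoff: float, direction: str) -> bool:
    """True when the index satisfies its recommended cutoff."""
    return value <= cutoff if direction == "<=" else value >= cutoff

for name, (value, cutoff, direction) in fit_indices.items():
    verdict = "meets" if meets(value, cutoff, direction) else "misses"
    print(f"{name}: {value} (recommended {direction} {cutoff}) {verdict} the criterion")
```

Run on the reported values, four of the five indices meet their cutoffs; GFI (0.86 against a recommended 0.90) is the one exception, consistent with the text's judgment of an adequate rather than perfect fit.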
From Figure 1, the structural model implies that both general and communicative Internet self-efficacy had positive effects on course preparation, with communicative Internet self-efficacy having the stronger effect. Furthermore, course preparation had strongly positive effects on the other four aspects of integrating the Internet into instruction behavior: teaching activities, learning guidance, assessment, and product sharing. That is, course preparation is a mediating factor between Internet self-efficacy and the other four aspects of behavior in integrating the Internet into instruction. In other words, teachers with higher communicative Internet self-efficacy tended to integrate Internet resources into their course preparation, and the more they integrated the Internet into their course preparation, the more they did so in their teaching activities, learning guidance, assessment, and product sharing.
4 Conclusions
This research has explored teachers' self-efficacy and behavior in integrating the Internet into instruction. According to the results, despite the explosive growth of the Internet in classrooms and pressure from the Ministry of Education, teachers are not well prepared to integrate the Internet into their instruction. Their adoption of the Internet was limited in scope and substance: they preferred to integrate the Internet into their course preparation more than into teaching activities, product sharing, learning guidance and assessment. However, as the use of computers in schools grows, teachers need to
develop ways to integrate the new technologies into a suitable approach that helps students grow and learn more. The SEM (structural equation modeling) analysis in this study revealed that teachers' Internet self-efficacy had a positive effect on their behavior in integrating the Internet into instruction. Communicative Internet self-efficacy directly affected how teachers integrated Internet resources into their course preparation, and through the mediator of course preparation it further affected the other four aspects of behavior in integrating the Internet into instruction. Thus, to effectively promote teachers' Internet-integrated instruction, it is essential to enhance teachers' Internet self-efficacy, especially communicative Internet self-efficacy.
Acknowledgement

This research work was funded by the National Science Council, Taiwan, under grant number NSC 96-2511-S-011-002-MY3.
References
1. Bandura, A.: Self-Efficacy Mechanism in Human Agency. American Psychologist 37, 122–147 (1982)
2. Bandura, A.: Perceived Self-efficacy in Cognitive-development and Functioning. Educational Psychologist 28, 117–148 (1993)
3. Bandura, A.: Multifaceted Impact of Self-efficacy Beliefs on Academic Functioning. Child Development 67, 1206–1222 (1996)
4. Bong, M., Clark, R.E.: Comparison between Self-concept and Self-efficacy in Academic Motivation Research. Educational Psychologist 34(3), 139–153 (1999)
5. Klassen, R.: Writing in Early Adolescence: A Review of the Role of Self-efficacy Beliefs. Educational Psychology Review 14, 173–203 (2002)
6. Ross, J., Hogaboam-Gray, A., Hannay, L.: Effects of Teacher Efficacy on Computer Skills and Computer Cognitions of Canadian Students in Grades K-3. The Elementary School Journal 102(2), 141–156 (2001)
7. Cannon, J., Scharmann, L.C.: Influence of a Cooperative Early Field Experience on Preservice Elementary Teachers' Science Self-efficacy. Science Education 80, 419–436 (1996)
8. Harrison, A., Rainer, R., Hochwarter, W., Thompson, K.: Testing the Self-efficacy-performance Linkage of Social-cognitive Theory. The Journal of Social Psychology 137(1), 79–87 (1997)
9. Tsai, M.-J., Tsai, C.-C.: Information Searching Strategies in Web-based Science Learning: The Role of Internet Self-efficacy. Innovations in Education and Teaching International 40, 43–50 (2003)
10. Peng, H., Tsai, C.-C., Wu, Y.-T.: University Students' Self-efficacy and their Attitudes Toward the Internet: The Role of Students' Perceptions of the Internet. Educational Studies 32, 73–86 (2006)
11. Nunnally, J.C., Bernstein, I.H.: Psychometric Theory, 3rd edn. McGraw-Hill, New York (1994)
12. Bagozzi, R.P., Yi, Y.: On the Evaluation of Structural Equation Models. Academy of Marketing Science Journal 16, 74–94 (1988)
13. Bentler, P.M.: EQS: Structural Equations Program Manual. Multivariate Software, Encino (1995)
14. Hu, L., Bentler, P.M.: Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria Versus New Alternatives. Structural Equation Modeling 6(1), 1–55 (1999)
15. Jöreskog, K.G., Sörbom, D.: LISREL V: Analysis of Linear Structural Relationships by the Method of Maximum Likelihood. National Educational Resources, Chicago (1981)
16. MacCallum, R.C., Browne, M.W., Sugawara, H.M.: Power Analysis and Determination of Sample Size for Covariance Structure Modeling. Psychological Methods 1, 61–77 (1996)
17. Mulaik, S.A., James, L.R., Van Alstine, J., Bennett, N., Lind, S., Stilwell, C.D.: Evaluation of Goodness-of-fit Indices for Structural Equation Models. Psychological Bulletin 105, 430–445 (1989)
Computer-Supported WebQuests

Furio Belgiorno, Delfina Malandrino, Ilaria Manno, Giuseppina Palmieri, and Vittorio Scarano
ISISLab, Dipartimento di Informatica ed Applicazioni "R.M. Capocelli", Università di Salerno, Fisciano (SA), Italy
{furbel,delmal,manno,palmieri,vitsca}@dia.unisa.it
http://www.isislab.it
Abstract. WebQuests are among the most popular techniques to enhance collaboration in learning; they are an inquiry-based activity, grounded in constructivist learning theory, where the information that learners interact with is mostly found on the Internet. We present here a system that offers computer support during a WebQuest by providing a structured discussion and debate space, in addition to navigation and resource sharing. We integrate the WebQuest design process with an operational design phase and describe how our system can completely support the design of a computer-supported WebQuest.
1 Introduction

Cooperative learning addresses situations where learners work together in a group on a set of collective tasks. Traditionally, it has been studied in classroom settings where students meet face-to-face [1,2]. With the widespread use of computers in classrooms, the richness and variety of information available to learners over the Internet is unprecedented in history. As an archetypal way of using the Internet and the resources on the World Wide Web as a learning tool, among the most popular ones we find the WebQuest, "an inquiry-oriented activity in which some or all of the information that learners interact with comes from resources on the Internet" [3,4,5]. Several studies in the literature [6,7,8] highlight the need to place more emphasis on cooperation around a WebQuest, possibly with the support of instructional scaffolding and concept mapping templates, which can be useful tools in resource-based learning scenarios where students often suffer from cognitive overload. Therefore, we want to foster the use of computer-mediated communication during the WebQuest, to enrich the WebQuest process with a structure that orchestrates and guides the interactions of students and provides scaffolding and guidance during students' activities. Furthermore, in our opinion, computer-supported WebQuests could ease the implementation of WebQuests: a computer-supported design and execution of a WebQuest can help inexperienced teachers in realizing and using WebQuests effectively from the very beginning.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 712–718, 2009.
© Springer-Verlag Berlin Heidelberg 2009
2 WebQuests and Their Design

On the Web there are some attempts to provide a design process for a WebQuest, but they are mostly focused on the pedagogical background, preparation and motivation, for which, of course, there is very little support a computer can provide. In this section, we propose an operational design process of a WebQuest that is meant to identify the phases where computer support can help teachers in building new WebQuests effectively and efficiently. As pedagogical design process, we refer to the one provided on the official WebQuest site [9]. The phases are as follows: (1) Select a topic appropriate for WebQuests; (2) Select a design; (3) Describe how learners will be evaluated; (4) Design the process; (5) Polish and prettify. The output of the process is the (specific) WebQuest containing at least the following building blocks: (a) an introduction, to provide background information; (b) a task involving the learners to analyze and transform the gathered information; (c) a process, that delineates the steps to follow in accomplishing the task; (d) a set of evaluation criteria for the work accomplished; (e) a conclusion, to inform the learners about their results; and finally a teacher page, that could be useful for other teachers implementing WebQuests. Our proposal is to augment the WebQuest scenario with computer support to orchestrate the whole set of activities, from the design and authoring to the execution and documenting of the WebQuest. In fact, our operational design process of a WebQuest is composed of the following fundamental phases:
– Design, to help the teacher in choosing a pedagogically motivated template; the template contains not only the introduction, task and evaluation criteria, but also a step-by-step executable sequence of cooperative actions to be performed by the students in a synchronous work session, each one using his/her own PC.
– Authoring, when the teacher fills in the details of the topic and the base material (links, etc.) for the students. In this phase, adjustments of the process provided by the template are possible, and merging/splitting/creating new steps in the process (as well as changing the nature of the steps) are to be supported.
– Execution. This is the new phase that allows each student to communicate and discuss both orally and computer-mediated and, at the same time, use the computer as information-seeker and note-taker.
– Documenting is the final phase, which consists of publishing the results on the Web (in HTML) in such a way that the results of the WebQuest are publicly available.
While a lot of useful resources are available on the Web [5] to create high-quality WebQuests, and different tools are available to support online authoring and hosting services [10,11,12,13,14], no system is actually able to fully support the WebQuest process, since all lack computer support for the execution of the WebQuest in the classroom. In general, all the state-of-the-art tools are fill-in-the-blank tools that guide authors through the phases of picking a topic, searching for resources on the Internet and documenting them in online learning activities. These activities encompass three (i.e., designing, authoring and documenting) of the four steps of the operational design of WebQuests. Conversely, the execution phase has not been envisioned and implemented by any system that uses WebQuests for face-to-face educational settings (see Table 1).
Table 1. Summary of tools comparison: a "-" symbol in the table means that the specific phase is not supported

Tool                 Design  Authoring  Executing  Documenting
CoFFEE [15]          √       √          √          √
QuestGarden [10]     √       √          -          √
Filamentality [11]   -       √          -          √
zWebquest [14]       -       √          -          √
PHPWebQuest [12]     -       √          -          √
TeacherWeb [13]      -       √          -          √
3 Computer-Supported WebQuests with CoFFEE

CoFFEE is a suite of applications developed to support face-to-face collaborative problem solving and learning [15,16,17,18]. The CoFFEE Controller and the CoFFEE Discusser are the applications used by the teacher and the learners in the classroom during the collaborative process. The teacher drives the collaborative process using a script named session, which structures the learning activities as a sequence of steps. In each step it is possible to split the students into groups, providing each group with a set of collaborative tools among those offered by CoFFEE (see footnote 1). We have used in particular the Threaded Discussion and the Graphical tools. The Threaded Discussion tool is a synchronous discussion system where the contributions are structured as threads. The thread structure allows learners to create separate discussion branches, helping to keep focus on several arguments. The Graphical tool offers a synchronous shared graphical space where the learners can create contributions as textual boxes as well as links between contributions. This tool is designed to support brainstorming processes and conceptual map creation, but it is generic enough to satisfy other usage scenarios. The flexible scripting mechanism provided by CoFFEE [19] accommodates the design and authoring phases of the operational design of a WebQuest. The applications in CoFFEE meant to facilitate the creation of sessions are the Lesson Planner and the Session Editor. The first allows the teacher to select a predefined CoFFEE template, i.e. a session where the steps, the tools and their configuration are already set, and the tasks and other simple details can be adapted to create a session. If the teacher wants to modify details of the session (or create a session from scratch), the Session Editor can be used, which provides full control over every single configuration detail.
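The paper does not show CoFFEE's actual session file format; as a purely hypothetical illustration of the scripting idea (a session as an ordered list of steps, each assigning tools and a group structure), one could model it as:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    task: str              # instructions shown to the learners
    tools: List[str]       # e.g. "ThreadedDiscussion", "Graphical"
    groups: int = 1        # number of parallel student groups

@dataclass
class Session:
    title: str
    steps: List[Step] = field(default_factory=list)

# A minimal WebQuest-like session: scaffolded browsing, then structured debate.
session = Session(
    title="Comparative judgment WebQuest",
    steps=[
        Step("Explore the starting links", ["InternetExplorer"], groups=4),
        Step("Debate and map the findings", ["ThreadedDiscussion", "Graphical"], groups=4),
    ],
)
print(session.title, len(session.steps))
```

In this reading, the Lesson Planner would edit only the task strings and the number of groups of a predefined Session, while the Session Editor would expose every field.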
The pedagogical design phase is currently helped by the "instructionally solid" Design Patterns [20]. Grouped in categories following high-level activities in Bloom's taxonomy, the Design Patterns represent the pedagogical starting point to create a WebQuest. Here the operational design comes on stage, since the provided WebQuest templates only give a coarse-grained definition of the process and do not encourage (or guide the teacher in providing) sufficient scaffolding. Through the Lesson Planner
(Footnote 1) The set is freely extendible, since designing and fully integrating new tools is facilitated by the plugin-based, open-source architecture, which is fully documented, with supporting authoring tools (i.e., Eclipse-based wizards) available on SourceForge [18].
Fig. 1. An example of one of the WebQuest Design Patterns brought into the Lesson Planner
and the Session Editor we address the issues raised in [7] and [21] about structure and implementation by inexperienced teachers. We leverage the Lesson Planner's ability to provide CoFFEE-based templates, and we created implementations of several known and pedagogically well-motivated design patterns taken from [20]. As a proof of concept, we created templates for the following patterns: "Commemorative" (in the "Design tasks" category), "Comparative judgment" (from "Decision tasks") and "Behind the book" (from the category "Creative tasks"). They can be downloaded (with additional material) from http://coffee-soft.wiki.sourceforge.net/CoFFEE+within+WebQuest and imported into the Lesson Planner (which comes with CoFFEE). As shown in Fig. 1, the teacher is supported by a narrative, wizard-like structure. First, the teacher chooses a CoFFEE template from the set shown (grouped in categories; see footnote 2). For each template, additional information (metadata and description) is shown. Then, the teacher can advance to the "Edit" tab, where it is possible to instantiate the tasks and instructions for each step (plus changing the number of groups). If the teacher is satisfied, he/she can move to the "Summary and Save" tab; otherwise he/she can choose the "Advanced Editor" tab, where the Session Editor offers complete control over the session, giving the opportunity to change any detail. Of course, the Session Editor can also be used to author a session from scratch. In the "Save/Publish" tab, it is also possible to export the template as a zip file (to facilitate exchange of templates).

3.1 Executing a WebQuest

The most important technical issue we had to face when trying to support WebQuests was to introduce tools related not only to discussion and debate but also to Web browsing and resource sharing. Here we describe the new tools that we
(Footnote 2) The Lesson Planner also comes with a set of scenarios from the Lead project, which, of course, do not refer to WebQuests and have been deleted from the set of templates in the picture.
have developed to provide full support for the execution of a WebQuest in the classroom: the InternetExplorer and the DocumentBrowser tools. The InternetExplorer tool allows each learner to navigate the World Wide Web through a standard browser. It offers the standard functionalities of a (private) browser, but also the opportunity for the teacher to provide a "follow-me" mechanism, so that the starting point of a navigation can be illustrated to the learners, while also preventing "free" surfing of the Web if scaffolded navigation is needed. Moreover, the navigation is stored in the log file of the session (on the Controller) so that it can be analyzed later on. The tool also offers, on the teacher's side, a repository of documents (an HTTP server) where the students can freely browse or, if "follow-me" is activated, everybody's browser loads the same document. It can also be used for showing PowerPoint presentations (saved in HTML format) on all the students' machines. The architecture of the tool embeds the OLE (Object Linking and Embedding) technology in the graphical widget, providing a complete Internet Explorer Control behavior. The teacher side of the tool allows the teacher to set/reset the navigation mode at run-time, therefore changing instantaneously from synchronous ("follow-me") to asynchronous (free navigation). Particularly interesting for WebQuests is the simultaneous use of the Threaded Discussion and the InternetExplorer tools: the browser can be used by each learner to discover and dig up information from the Web, then report the URLs as contributions in the Threaded Discussion tool and stimulate discussion about them within the team. The DocumentBrowser tool is an HTTP server that allows the teacher to share documents with the classroom. The interface of the tool consists of a Web browser automatically connected to the root directory of the HTTP server, where the teacher has placed the documents.
Both the teacher and the learners can open the documents and navigate the links within the browser, but no "free browsing" is allowed: no search function, no address input field, just the basic "back", "forward" and "home" navigation functions. This fits the requirement for a scaffolded process: the teacher directs the search by providing the starting points for exploration, i.e. the initial documents and links. If free browsing is needed, the InternetExplorer tool should be used instead. The DocumentBrowser cannot be considered a "collaborative" tool (as most CoFFEE tools are), because each learner performs his/her own navigation independently from the others: this characteristic is essential for the personal tasks assigned to each learner. The collaborative part of the work, such as exchanging links and collecting ideas, can be done with the standard CoFFEE tools.

3.2 Documenting a WebQuest

The documenting phase of the operational design of a WebQuest enables the inquiry activities to be published on the Web in such a way that the results of the WebQuest are publicly available. To this aim, CoFFEE allows structured discussions (i.e., CoFFEE sessions) to be exported in PDF, RTF and HTML formats through the Controller. The most important observation is that, since a discussion can be saved in HTML format, the whole student learning activity can subsequently be accessed and edited by any external system, editor and so on, allowing the modification of any detail of the produced discussion and not only the mere output of the WebQuest.
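Conceptually, the DocumentBrowser described above is a plain HTTP file server rooted at the teacher's document directory. The following is a minimal sketch of that idea using only the Python standard library; it is an illustration of the concept, not the CoFFEE implementation (which additionally restricts the client-side browser chrome):

```python
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_document_server(root: str, port: int = 8000) -> HTTPServer:
    """Serve the teacher's document directory read-only over HTTP."""
    handler = partial(SimpleHTTPRequestHandler, directory=root)
    return HTTPServer(("0.0.0.0", port), handler)

# server = make_document_server("/path/to/teacher/documents")
# server.serve_forever()  # learners point their restricted browser at this port
```

The "no free browsing" restriction lives entirely on the client side in this sketch; the server simply exposes the initial documents and links chosen by the teacher.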
4 Conclusions and Future Work

In this paper we presented a system to computer-support WebQuests from the creation to the publication phase. The system is based on CoFFEE [15], but required two new tools to be introduced and integrated into the system. We also showed an operational design process that is completely supported by the system. As future work, we are currently working on the implementation of group synchronous navigation for the InternetExplorer tool, since at present the "follow-me" mode is only teacher-to-class and does not allow turn-taking "follow-me" within the class/group.
References
1. Webb, N.M., Palincsar, A.S.: Group Processes in the Classroom. Macmillan, New York (1996)
2. Slavin, R., Hurley, E., Chamberlain, A.: Cooperative Learning and Achievement: Theory and Research. Handbook of Psychology, 177–198 (2003)
3. Dodge, B.: WebQuests: A Technique for Internet-Based Learning. Distance Educator 1(2), 10–13 (1995)
4. Dodge, B.: FOCUS: Five Rules for Writing a Great WebQuest. Learning & Leading with Technology 28(8), 6–9 (2001)
5. Dodge, B.: WebQuest Portal (2009), http://www.webquest.org/ (accessed April 18, 2009)
6. Soloway, E., Norris, C., Blumenfeld, P., Fishman, B., Krajcik, J., Marx, R.: Log on Education: K-12 and the Internet. Commun. ACM 43(1), 19 (2000)
7. MacGregor, S.K., Lou, Y.: Web-Based Learning: How Task Scaffolding and Web Site Design Support Knowledge Acquisition. Journal of Research on Technology in Education 37(2), 161–175 (2004)
8. Hill, J.R., Hannafin, M.J.: Teaching and Learning in Digital Environments: The Resurgence of Resource-based Learning. Educational Technology, Research and Development 49(3), 37–52 (2001)
9. Dodge, B.: The WebQuest Design Process (2009), http://webquest.sdsu.edu/designsteps/index.html (accessed April 18, 2009)
10. Dodge, B.: QuestGarden (2009), http://questgarden.com/ (accessed April 18, 2009)
11. Filamentality, http://www.kn.pacbell.com/wired/fil/
12. Temprano, A.: PHPWebQuest, http://eduforge.org/projects/phpwebquest/
13. TeacherWeb, http://teacherweb.com/tweb/WQHome.aspx
14. Unal, Z.: zWebQuest, http://www.zunal.com/
15. De Chiara, R., Di Matteo, A., Manno, I., Scarano, V.: CoFFEE: Cooperative Face2Face Educational Environment. In: Proceedings of the 3rd International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom 2007), New York, USA, November 12-15 (2007)
16. Belgiorno, F., De Chiara, R., Manno, I., Overdijk, M., Scarano, V., van Diggelen, W.: Face to Face Cooperation with CoFFEE. In: Dillenbourg, P., Specht, M. (eds.) EC-TEL 2008.
LNCS, vol. 5192, pp. 49–57. Springer, Heidelberg (2008)
17. CoFFEE-soft, http://www.coffee-soft.org/
18. CoFFEE page on SourceForge, http://sourceforge.net/projects/coffee-soft/
19. Belgiorno, F., De Chiara, R., Manno, I., Scarano, V.: A Flexible and Tailorable Architecture for Scripts in F2F Collaboration. In: Dillenbourg, P., Specht, M. (eds.) EC-TEL 2008. LNCS, vol. 5192, pp. 401–412. Springer, Heidelberg (2008)
20. Dodge, B.: WebQuest Design Patterns (2009), http://webquest.sdsu.edu/designpatterns/all.htm (accessed April 18, 2009)
21. Zheng, R., Perez, J., Williamson, J., Flygare, J.: WebQuests as Perceived by Teachers: Implications for Online Teaching and Learning. Journal of Computer Assisted Learning 24(4), 292–304 (2008)
A 3D History Class: A New Perspective for the Use of Computer Based Technology in History Classes

Claudio Tosatto¹ and Marco Gribaudo²
¹ Discipline Storiche, Dottorato in Storia e Informatica, Università di Bologna
[email protected], [email protected]
² Dip. di Informatica, Università di Torino
[email protected]
Abstract. The job of the historian is to understand the past as the people who lived it understood it, but also to communicate it with instruments and techniques that belong to an age and that influence the mentality of those who live in that age. Technologies, especially in the area of multimedia and virtual reality, allow historians to communicate the experience of the past in a wide variety of senses. In this work we present a case study that tries to answer the demand to "make history" by directly involving the students, who are engaged in the 3D reconstruction of a building from the past, carefully checking all the problems of authenticity and using methods of historical research. In this way, students are able to learn history by re-creating the past with information technology instruments.
1 Introduction
There is a wide range of multimedia products related to historic subjects, especially for what concerns the delivery of results. The need of historians to use appropriate media to communicate the results of their research comes from the fact that students, and all the other possible audiences, are starting to show less and less interest in traditional text-based and paper-printed media. It is thus important, given the fast evolution of computer tools, to get used to the idea that a piece of research does not finish with the publication of a paper, but that it should be accompanied by an on-line version, capable of exploiting a wide variety of multimedia and interactive features. The representation of the past in fictions broadcast on television or on generic non-peer-reviewed web sites (like Wikipedia) poses several questions about the validity of those presentations and about the fidelity of the sources they use [11]. Visual culture is nowadays a code and can change our way of looking at the past: while a book tends to split its contents over several units that deal with separate analyses, the flow of images in a movie or of events in a video game mixes all the different subjects into a single entity. The question is then whether the emotions carried by such media can be actively used to convey knowledge about the past [5]. Many independent projects have tried to give an answer to this question, and several successful experiences have been made in this direction. The Nu.M.E. (Nuovo Museo Elettronico)
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 719–724, 2009.
© Springer-Verlag Berlin Heidelberg 2009
project from the University of Bologna (Italy) tries to reconstruct the city with the introduction of a temporal variable that allows the user to see the 3D reconstruction of a place in a given temporal epoch. It also proposes an effective methodology for an extensive use of computer-based techniques to present the results of the research made by historians [1]. John Bonnet's experience and his "3D Virtual Buildings" are the outcome of a cooperation among the National Research Council of Canada, Industry Canada and the University of Ottawa. Its focus is to help students understand an important concept: the historical model should remain separate from the objects it wants to represent. History does not exist until it is reconstructed, and this reconstruction can also be made using 3D modeling software [9]. None of them, however, has considered in depth the problem of the correct transmission of historical knowledge.
2 Our Proposal
In this paper we propose a methodology that extends the one proposed in Bonnet's works. We share with those works the idea that 3D reconstructions can be important tools from an academic point of view. In particular, we focus on the communication of the results of historic research, and we show with an experience (called Conceria Fiorio in Real Time) how we can improve students' learning process (in history) by making them use 3D computer tools in first person. Our idea is that presenting research results with various media - in this case with the construction of 3D models - can improve students' learning process and increase their critical abilities, giving them the opportunity to face a large number of problems that historians have to solve while doing their job of trying to reconstruct the past. This allows the students to directly face the problems that characterize the gathering of the sources and of all the data necessary to perform the reconstruction. In this sense, we intend "multimedia" as a collection of superposed narrations (coming from different sources), and we exploit the interactivity features of computer-based presentations not only to present the facts, but also to allow the final user to move among different narrations [10]. We use the PC as a tool with which students can reconstruct the past. This allows them to become authors of a historical narration themselves and, more importantly, to engage with historical research. We restrict our attention to the reconstruction of a building because we believe it is the easiest reconstruction (from a technical point of view) that can make students correctly focus on the methodologies of historical research. For this purpose, it is required to research various material using different sources. Students have to look for those data by themselves and discover how documents can give us much more information if correctly compared with one another.
For the 3D reconstruction, we wanted to use a framework that would be well known to most of the students. For this reason, we have chosen to use the 3D engine of a famous video game, Unreal Tournament [2], depicted in Figure 1. Our key idea is that the user interface of a game engine like Unreal has been created to allow any player (regardless of their computer background) to create their own world. It has thus been created
"simple enough" that students are able to manage it quite quickly. This allows the course to focus on historical methodology, requiring only a very short introduction to the use of the authoring application. When the sources that narrate the historical facts and the media used to reconstruct them are of a different nature (in this case we are dealing with texts, pictures, architectural blueprints and virtual reality), the results of the learning process and the critical capacity of the students increase, thanks to the work done on the sources. If we consider that 3D graphics is the characteristic expression of our times, we can understand how important it is to give the students all the knowledge necessary to master this field. However, courses that focus only on the technical side of 3D production usually forget about the importance and the characteristics of the contents of the objects they are teaching to reconstruct. By inserting the technical descriptions alongside the methodological, content-oriented issues, the nature of the object being reconstructed is always kept in focus. This allows the students to obtain better results not only in history research, but also in 3D modeling. For students of non-technical subjects, the course can provide additional skills, emphasizing how the computer can be used to create effective didactical material. In our approach, we took care to use only free or open-source software. In particular UnrealEd, the main tool used to construct the 3D model of the building, is free for non-commercial and academic purposes. We used GIMP [3] to create the textures, and Blender [4] for the modeling of 3D objects. This reduces the costs of setting up the course we propose: the requirements reduce to those of a standard class in history - that is, access to libraries, archives, interviews or reports, images and pictures - plus the use of a computer-equipped lab.
The main difference from traditional courses is the way in which conventional historical tools are integrated into the 3D reconstruction of the designated subject. We believe that the proposed methodology could be applied in all related contexts without any additional investment, since the additional requirement of a computer lab is already met by most institutions. Of course, courses must be restructured to include the use of 3D technologies.
3
Implementation
In this paper we present the tools that were used to create the 3D reconstruction. In particular, the building is recreated in the Unreal Engine, a game engine developed by Epic Games. The engine was initially shipped with the games Unreal Tournament 2003 and Unreal Tournament 2004. Later, a reduced version of the engine was made freely available online for non-commercial purposes. The choice of Unreal as the platform for the reconstruction of the “Conceria Fiorio” was thus motivated by the fact that it can realize a faithful reproduction without the complexity that CAD software would have required. In this way, students were able to carry out their task without an in-depth knowledge of the software. We applied the proposed methodology to a Master's-level course, followed by a small number of students. The course focused on the study of the historical places involved in the rebel
C. Tosatto and M. Gribaudo
Fig. 1. The UnrealEd user interface and the plan of the “Conceria Fiorio”
actions during the Nazi occupation in the Second World War. We focused on the city as a political institution, so the main idea that the course wanted to convey was that cities contain evidence of the different moments of history they have witnessed. Our activity concentrated on a production factory, the “Conceria Fiorio”: in December 1943, the building under consideration became one of the centers of the clandestine activity of the CLN (National Liberation Committee), thanks to the activity of its owner, Sandro Fiorio. The relation that exists between buildings and people in Turin is significant in its manifestation between the factory and the workers' organization in the period 1943–45, which was an important political and social subject during the war [8]. The factory was part of the “Class and Cross” mission, in contact with the OSS (the U.S. intelligence service). Most of the financial support from the Allies passed through this factory before reaching the CLN. The whole factory participated in the insurrectional activities with the conscious involvement of the workers: offices, plants, trucks, the entire building was used by the Resistance for its operations inside the city [6]. On September 8, 1943, a dramatic period began for the city that lasted 18 months. Factories are the conceptual node through which we can read the existence of the city: factories are the places where people spend most of
Fig. 2. The technical workflow to perform the reconstruction
their day; whatever happens in the factory influences the city, so factories become the places where politics is done. These are the motivations for which we decided to focus our experiment on the reconstruction of this particular factory, the “Conceria Fiorio” [7]. A visual representation of the technical workflow that was followed to perform our reconstruction is presented in Figure 2. The historical research is mainly based on sources. These sources can come from archives, on-site inspections, and descriptions provided by witnesses. The maps found in the archives can provide essential hints for the reconstruction of the environment. Pictures taken during on-site inspections (as well as other available photographic material) can be used to produce the textures required to define the surfaces. From the witnesses' descriptions, objects can be chosen to populate the virtual environment. All these elements are then combined inside the Unreal Engine to construct the final project.
4
Conclusions
The aim of this project is to test the usability of 3D virtual reconstruction as an instrument to support historical instruction. The main motivation comes from the fact that it permits reality to be perceived in a spatial way and helps a broad audience to comprehend historical structures and phenomena. But in “Conceria in Real Time” the concept of communicating research results has been turned upside down: the project participants become actors of the communication, directly making history through their work. Historical information does not only move from professional historians to non-experts: the people who have reconstructed the past have also created historical information. The present case study tries to answer the demand to “make history” by directly involving the students, engaged in the 3D reconstruction of a building from the past using the methods and techniques of historical research and representation. This offers the students the opportunity to experience directly a number of the problems that historians encounter while attempting to reconstruct the past. This way of communicating the past allows the students/developers to deepen their historical background, both when they gather information about the past and when they reconstruct 3D artifacts. The challenge of rebuilding the past using information technologies suggests that there are rich opportunities for historians in using 3D objects and environments.
References

1. http://www.storiaeinformatica.it/
2. http://www.unrealtechnology.com/
3. http://www.gimp.org/
4. http://www.blender.org/
5. Rosenstone, R.A.: Visions of the Past: The Challenge of Film to Our Idea of History. Longman History (1995)
6. Pavone, C.: Una Guerra civile. Saggio storico sulla moralità nella Resistenza. Bollati Boringhieri (2006)
7. De Luna, G.: La passione e la ragione. La Nuova Italia (2004)
8. De Rege, G.: Un'Azienda torinese nella Resistenza: la Conceria Fiorio. L'Arciere (1985)
9. Bonnett, J.: Pouring new wine into an old discipline: using 3D to represent the past. National Research Council of Canada, NRC 47170 (2003)
10. Taylor, T.: Simulations and the future of the historical narrative. Journal of the Association for History and Computing 6(2) (2003)
11. Thomas, W.: Blazing trails toward digital history scholarship. Social History 34(68), 415–426 (2001)
Language-Driven, Technology-Enhanced Instructional Systems Design Iván Martínez-Ortiz, José-Luis Sierra, and Baltasar Fernández-Manjón Fac. Informática. Universidad Complutense de Madrid C/ Prof. José García Santesmases s/n 28040 Madrid, Spain +34913947606 {imartinez,jlsierra,balta}@fdi.ucm.es
Abstract. In this paper we propose to extend the ADDIE (Analysis – Design – Development – Implementation – Evaluation) process for Instructional Systems Design (ISD) with a new linguistic layer. This layer allows developers to provide instructors with domain-specific languages to support and guide them through ISD. Instructors use the toolsets associated with these languages to produce technology-enhanced learning systems more effectively. We also describe how to put these ideas into practice by adopting modern model-driven software development processes together with the language engineering principles. This language engineering approach has been applied to <e-LD>, a highly flexible and extensible authoring tool for IMS Learning Design Units of Learning. Keywords: Technology-Enhanced Instructional Systems Design, ADDIE, Software Language Engineering, IMS Learning Design, <e-LD>.
1 Introduction

Instructional Systems Design (ISD) and the generic Analysis – Design – Development – Implementation – Evaluation (ADDIE) process were conceived as means of designing and developing learning systems, independently of whether these systems are technology-enhanced or not [2]. However, the introduction of a technological factor in the development process also introduces new issues that must be carefully addressed. One of the most important problems is the need to manage the active collaboration of instructors and developers. A way of addressing this collaboration is to use suitable domain-specific languages (DSLs) [10]. The application of DSLs results in a more rational distribution of roles: instructors use the languages to configure the technology-enhanced components, while developers provide the instructors with all the required machinery to make such a configuration possible. In this paper we propose an extension of the generic ADDIE process model with a linguistic layer and illustrate this new process model using <e-LD> [6,7], an authoring tool for the production and reengineering of IMS Learning Design (IMS LD) Units of Learning (UoL) developed at Complutense University.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 725–731, 2009. © Springer-Verlag Berlin Heidelberg 2009
2 The Language-Driven ADDIE Model

The Language-Driven ADDIE (LD-ADDIE) model is sketched in Fig. 1. This model is based on the revised ADDIE model proposed by the US Department of the Air Force (see [2]). It organizes the concepts and phases of the revised ADDIE model into five different layers. More precisely:

Fig. 1. The LD-ADDIE Model. (The figure shows five layers: a quality improvement layer; a system layer with the management, administration, support and delivery functions; a linguistic layer and a production layer, each with analysis, design, development and implementation phases; and an evaluation layer.)
− The evaluation layer includes activities centered on the continuous evaluation of the different aspects of the instructional system. It corresponds to the evaluation phase in the original ADDIE model. − The production layer encompasses the systematic sequence of phases oriented to the production of the instructional system. It corresponds to the other four ADDIE phases (i.e., analysis, design, development and implementation). − The linguistic layer contains phases for the systematic production of the domain-specific languages and the associated toolsets. Although these phases mirror the phases in the production layer, their purpose is very different: to develop the languages and tools used by instructors for the development of learning systems. − The system layer contains the main functions of the learning system: management, administration, support and delivery.
− Finally, the quality improvement layer represents the mechanisms needed to carry out continuous quality improvement. LD-ADDIE adds a new layer, the linguistic layer, to explicitly address the technological factor of technology-enhanced instructional systems. The aim of the phases within this layer is to develop languages and tools. Also, they are mainly carried out by developers: − During linguistic analysis, developers analyze the instructional domain addressed by the learning system and the vocabulary and terminology used by instructors. The goal is to determine the main terms and concepts in this domain, as well as the relationships between these concepts. This analysis can be carried out using standard domain analysis techniques, as understood in software and domain engineering [3]. − During linguistic design, developers specify the syntax and constraints of the domain-specific language, as well as its operational semantics. In modern software language engineering practice, the language usually will be equipped with several syntaxes [4]: an abstract syntax, in terms of which the operational semantics is defined, and one or several concrete syntaxes, oriented to facilitate the use of the language by instructors. All these syntaxes will be linked by suitable transformations. Operational semantics, in their turn, will specify how technology-enhanced components can actually be produced from utterances in the language. During this phase developers also conceive the tools associated with the language. Typical tools will be authoring tools based on suitable concrete syntaxes, as well as generators of the technology-enhanced instructional components. − During linguistic development, developers build the toolset supporting the DSL. For this purpose, they can use well-established traditional techniques in the construction of language processors [1]. 
They can also adopt one of the emerging tendencies in software language engineering, based on model-driven software development concepts and the use of language workbenches [4]. − Finally, during linguistic implementation, the DSL and the associated toolsets are made available to instructors. These tools will be integrated into the final learning system as part of the support function.
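To make the output of the linguistic phases more concrete, the following sketch (in Python, with a deliberately simplified abstract syntax; the names and structure are our illustrative assumptions, not the actual <e-LD> metamodel, which is EMF-based and far richer) shows an abstract syntax for a toy unit-of-learning language, together with a trivial generator playing the role of its operational semantics:

```python
from dataclasses import dataclass, field
from typing import List

# Toy abstract syntax, loosely following the IMS LD
# method -> play -> act structure mentioned in the text.

@dataclass
class Act:
    title: str

@dataclass
class Play:
    title: str
    acts: List[Act] = field(default_factory=list)

@dataclass
class Method:
    plays: List[Play] = field(default_factory=list)

def generate(method: Method) -> str:
    # A trivial generator: the operational semantics realized as a
    # model-to-text transformation producing IMS LD-like markup.
    lines = ["<method>"]
    for play in method.plays:
        lines.append('  <play title="%s">' % play.title)
        for act in play.acts:
            lines.append('    <act title="%s"/>' % act.title)
        lines.append("  </play>")
    lines.append("</method>")
    return "\n".join(lines)

uol = Method(plays=[Play("Warm-up", acts=[Act("Read sources")])])
print(generate(uol))
```

A concrete-syntax editor and an abstract-to-concrete mapping would sit on top of the same classes; changing them tailors the language to a particular community of instructors without touching the abstract syntax or the generator.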
3 The Language-Driven ADDIE Model in Practice with <e-LD>

To illustrate the LD-ADDIE model, we use <e-LD>, an experimental, highly adaptable and extensible authoring tool for IMS LD UoL developed at Complutense University [6,7]. The tool supports three main functions: − Importation. Using this function, instructors can load pre-existing IMS LD UoL. The function also produces useful information to understand the structure and behavior of each imported UoL: a hypertextual view (Fig. 2a), and a dependency graph with the representation of the dependencies among the design elements related to learning activity sequencing [8] (Fig. 2b). − Authoring. This function lets instructors edit the description of a UoL. For this purpose, they use the visual notation detailed in [7] (Fig. 2c).
Fig. 2. (a) Hypertextual view of a UoL's method; (b) a dependency graph; (c) editing a method in <e-LD>
− Exportation. This function makes it possible to generate an IMS LD UoL automatically from an <e-LD> description. The core of the function is an automatic translation of flowcharts into rule-based systems [9]. Since <e-LD> considers IMS LD UoL as essential parts of a learning system, it is possible to systematize the design of evaluation instruments in terms of the structure imposed by <e-LD> on such UoL (for example, a satisfaction survey on a UoL can mirror the static structure of the UoL, say a method decomposed into several plays, each one integrating several acts, each one integrating several role-parts, etc.). Also, <e-LD> plays a prominent role in the different production phases: − During system analysis, instructors can find it useful to examine pre-existing UoL used in previous levels of instruction to determine the students' expected knowledge and capabilities, as well as to better determine the nature of the learning process and the most suitable performance requirements. − During system design, instructors can reuse pre-existing UoL in the instructional domain, importing them into the tool and modifying them in accordance with the target learning task. Also, instructors can use <e-LD> to author formalized plans of instruction for technology-enhanced components that effectively determine the instructional methods and strategies.
− During system development, <e-LD> provides a catalog to determine the different instructional resources and materials to be developed. − Finally, during system implementation, instructors use <e-LD> to automatically generate standardized versions of the authored UoL encoded in IMS LD. Regarding the linguistic layer, the development of <e-LD> follows the principles of modern software language engineering [4]. Indeed, the root of <e-LD> is a DSL developed using the language workbench provided by the Eclipse Modeling Project. Thus, <e-LD> can be meaningfully conceived as the main product of an incarnation of the LD-ADDIE linguistic layer: − As regards linguistic analysis, <e-LD> represents a cost-effective solution to the otherwise costly domain analysis processes. Indeed, <e-LD> reuses many of the conceptual structures of a pedagogically neutral language (IMS LD) with the hope of increasing the applicability of the solution while still maintaining a reasonable domain-specific nature. − During linguistic design, the abstract syntax of the <e-LD> modeling language is characterized as a metamodel [4] that captures the main terms and concepts required to describe UoL in <e-LD>, as well as the relationships between these concepts, and the additional constraints affecting these elements. On the other hand, the concrete syntax corresponds to the aforementioned visual notation. These two syntaxes are related by an abstract-to-concrete-syntax mapping. Thus, by changing the concrete syntax model and this mapping, it is possible to tailor <e-LD> to the particular idiosyncratic requirements of each particular community of instructors. Finally, the operational semantics in <e-LD> are actually defined by the translation of flowchart-oriented specifications to rule-based ones used in the exportation function and described in [9]. − Linguistic development takes full advantage of the Eclipse Modeling Project. 
Indeed, the metamodels of <e-LD>'s abstract and concrete syntaxes are supported by EMF (the Eclipse Modeling Framework). Translation to IMS LD (carried out during exportation) is currently done as an ad-hoc model-to-model transformation; however, we are starting to refactor this process using the model-to-model transformation languages provided by the Eclipse Model to Model project. <e-LD> also takes full advantage of GMF (the Graphical Modeling Framework of Eclipse) to facilitate the development of the <e-LD> authoring function. Finally, the <e-LD> importation function is implemented as an XML processing component. We are currently refactoring it using XLOP (XML Language Oriented Processing) [12], an environment for the processing of XML documents with attribute grammars [11], also developed at Complutense University. − Finally, during linguistic implementation, <e-LD> is deployed to instructors as an Eclipse-based standalone authoring tool. Currently we are also working on integrating it with other IMS LD compliant platforms and tools, particularly IMS LD players. Finally, following the guidelines encouraged by LD-ADDIE, <e-LD> is an integral part of the learning systems' support function. In addition, it is also subject to continuous improvement. The adoption of principles strongly rooted in software language engineering in its design and development facilitates this continuous improvement.
4 Conclusions and Future Work

In this paper we have described an extension of the ADDIE model for instructional systems design that highlights the collaboration between instructors and developers during the development of learning systems with significant technology-enhanced components. For this purpose, the extension promotes the production of domain-specific languages and associated toolsets as support for instructors. The resulting model (LD-ADDIE) makes explicit a linguistic layer oriented to the systematic production of language-oriented assets. We have illustrated the model with <e-LD>, an authoring tool for IMS LD UoL. From a linguistic point of view, the development of <e-LD> takes advantage of the language workbenches provided by the Eclipse Modeling Framework. We are currently applying the same principles to other language-driven e-Learning systems: <e-QTI>, a toolset for the authoring and deployment of QTI assessments [5], and <e-Tutor>, a system for the description of Socratic tutorials [13]. Finally, we plan to further experiment with the adaptation of (the concrete syntax of) <e-LD> to different communities of instructors in several instructional domains.
Acknowledgements

We wish to thank the projects TIN2005-08788-C04-01, TIN2007-68125-C02-01, Flexo-TSI-020301-2008-19, Santander/UCM PR34/07 – 15865 and CID-II-0511-A, as well as the UCM Research Group 921340.
References

1. Aho, A.V., Lam, M.S., Sethi, R., Ullman, J.D.: Compilers: Principles, Techniques and Tools, 2nd edn. Addison-Wesley, Reading (2006)
2. Allen, C.W.: Overview and Evolution of the ADDIE Training System. Adv. in Dev. Human Res. 8(4), 430–441 (2006)
3. Czarnecki, K.: Generative Programming: Methods, Tools and Applications. Addison-Wesley, Reading (2000)
4. Kleppe, A.: Software Language Engineering: Creating Domain-Specific Languages Using Metamodels. Addison-Wesley, Reading (2008)
5. Martínez-Ortiz, I., Moreno-Ger, P., Sierra, J.L., Fernández-Manjón, B.: <e-QTI>: a Reusable Assessment Engine. In: Liu, W., Li, Q., Lau, R. (eds.) ICWL 2006. LNCS, vol. 4181, pp. 134–145. Springer, Heidelberg (2006)
6. Martínez-Ortiz, I., Sierra, J.L., Fernández-Valmayor, A., Fernández-Manjón, B.: Language Engineering Techniques for the Development of E-Learning Applications. J. Network Comp. Appl. 32(5), 1092–1105 (2009)
7. Martínez-Ortiz, I., Sierra, J.L., Fernández-Manjón, B.: Authoring and Reengineering of IMS Learning Design Units of Learning. IEEE Trans. on Learning Tech. (March 27, 2009), http://doi.ieeecomputersociety.org/10.1109/TLT.2009.14
8. Martínez-Ortiz, I., Sierra, J.L., Fernández-Manjón, B.: Enhancing IMS LD Units of Learning Comprehension. In: ICIW 2009 (2009)
9. Martínez-Ortiz, I., Sierra, J.L., Fernández-Manjón, B.: Translating e-Learning Flow-Oriented Activity Sequencing Descriptions into Rule-based Designs. In: ITNG 2009 (2009)
10. Mernik, M., Heering, J., Sloane, A.M.: When and How to Develop Domain-Specific Languages. ACM Comp. Surv. 37(4), 316–344 (2005)
11. Paakki, J.: Attribute Grammar Paradigms – A High-Level Methodology in Language Implementation. ACM Comp. Surv. 27(2), 196–255 (1995)
12. Sarasa, A., Sierra, J.L., Fernández-Valmayor, A.: XML Language-Oriented Processing with XLOP. In: WAMIS 2009 (2009)
13. Sierra, J.L., Fernández-Valmayor, A., Fernández-Manjón, B.: From Documents to Applications Using Markup Languages. IEEE Software 25(2), 68–76 (2008)
The Influence of Coalition Formation on Idea Selection in Dispersed Teams: A Game Theoretic Approach Rory L.L. Sie, Marlies Bitter-Rijpkema, and Peter B. Sloep Open University of The Netherlands, Centre for Learning Sciences and Technologies, Valkenburgerweg 177, 6419 AT Heerlen, The Netherlands {Rory.Sie,Marlies.Bitter,Peter.Sloep}@ou.nl
Abstract. In an open innovation environment, organizational learning takes place by means of dispersed teams which expand their knowledge through collaborative idea generation. Research often focuses on finding ways to extend the set of ideas, while in our opinion the main problem is not the number of ideas generated, but a non-optimal set of ideas being accepted during idea selection. When ideas are selected, coalitions form, and their composition may influence the resulting set of accepted ideas. We expect that computing coalitional strength during idea selection will help in forming the right teams to obtain a grand coalition, achieving a better allocation of accepted ideas, or neutralising factors that adversely influence the decision-making process. Based on a literature survey, this paper proposes the application of the Shapley value and the nucleolus to compute coalitional strength in order to enhance the group decision-making process during collaborative idea selection. Keywords: idea selection, game theory, coalition formation, dispersed team, open innovation.
1
Introduction
With the increased use of Internet technology, companies are increasingly trying to reduce transactional costs. R&D costs may similarly be reduced by the adoption of Internet technology, as it fosters communication in dispersed working teams and across collaborating companies. Indeed, with the adoption of these collaboration tools, we are well on the road to open innovation. The expertise relevant for the design of a new product is not always available within the boundaries of one team or firm. Hence the idea of open innovation suggests creating online distributed teams in which people from different companies and disciplines co-operate on the design of a new product. However, utilising a team's full innovation potential poses some serious problems. Most research thus far has focused on the extension of the set of ideas, and researchers have tried to neutralise potential pitfalls. There are however indicators that dispersed teams do come up with enough ideas, but just do not select the right ideas. Hence, we

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 732–737, 2009. © Springer-Verlag Berlin Heidelberg 2009
should take a closer look at enhancing idea selection, rather than looking at ways to extend the set of ideas during idea generation [1]. Focusing on the idea selection stage of the creative process and the corresponding coalition formation may help find ways to optimise the selection process. Besides, people are often more risk-averse when they have an idea accepted and stand a chance of having more ideas accepted, and more risk-seeking when trying to prevent losing an idea that has already been accepted. This in turn may lead to the escalation of commitment by participants. Eventually ideas have to be accepted and, as a result of the escalation of commitment, we search for an allocation of accepted ideas that satisfies each participant, also known as satisficing: this may lead to the adoption of minimally acceptable solutions. These non-optimal solutions may also be caused by coalition formation during both idea generation and idea selection, as we will explain further in this paper. This paper presents a literature review of the problems dispersed teams currently face during the idea generation and selection process. Furthermore, it stresses the use of coalitional strength during the process of idea selection by presenting a game theoretic approach. In Section 2, we describe economic and psychological factors that influence collaboration, followed by a game theoretic approach meant to overcome problems in idea selection. We draw our conclusions in Section 3, based on the literature review in Section 2. The theoretical framework sketched in this paper will be part of a PhD study conducted within the EU FP7 funded idSpace project. Future research to be conducted in this context is described in Section 4.
2
Theoretical Background
When looking at the incentives for collaboration, we see that collaboration is a way for people to learn from each other, or to create new things with their combined knowledge. In corporate environments, teams are created to generate innovative solutions or new products. While historically research and development mainly took place inside the firm, we now see a tendency towards an increase of inter-firm alliances to support so-called open innovation [2]. The reasons for alliances between companies involve sharing risks, obtaining access to new markets and technologies [3], reducing product-to-market times, and pooling complementary skills [4,5]. Research and development departments of these companies tend to use open innovation to introduce new products faster and at a lower cost. This, however, requires collaboration and the corresponding notions of trust, reciprocity and negotiation, as co-operation is likely to have competitive aspects as well [6]. When firms collaborate through open innovation, we see that they are hindered by a variety of problems. They may experience individual problems regarding decision making, such as emotional involvement, exogenous factors [7], bounded rationality [8] and escalation of commitment [9]. Besides, the collaboration may be subject to group deficiencies, such as social loafing, groupthink and group polarisation. The latter two influence the formation of coalitions in open
innovation teams. Especially in idea generation and selection, we see that people need additional support for their ideas to have them accepted. Hence, they form coalitions to stand stronger against other people's coalitions and ideas. They are self-interested, however, as one may support other people's ideas in return for their support, a practice known as reciprocity. As a consequence, coalition formation in idea generation leads to a non-optimal set of accepted ideas. For instance, in a collaborative idea generation session, when person A is above person B in the organisation's hierarchy, person A may be better informed about the company's strategy and mission statement. Therefore person B, who actually has a good idea, will be likely to accept person A's ideas, knowing that person A is better informed. However, person A may in fact be self-interested, and name one of his own moderate ideas that is not so close to the organisation's strategy. Thus, person A names an idea with a lower utility, but person B is willing to form a coalition under the presumably rational assumption that person A is better informed and acts accordingly. This example shows that good ideas are often generated in collaborative idea generation, but that due to individual and group deficiencies the selection of ideas is disturbed. In order to overcome the problem of a non-optimal set of accepted ideas, we need to study the influence of coalition formation on the allocation of accepted ideas. To compute this, we need to know what the share of each participant in the coalition is. After doing so, we may propose a division of the coalition's payoff. A considerable amount of research has been conducted on the division of a coalition's payoff. In formal game theory, there are mainly two approaches to compute the share of each participant in the coalition, and thus the division of the coalition's payoff: the Shapley value [10] and the nucleolus [11].
Both these concepts are central to games in coalitional form, also known as many-person co-operative games. In such games, players may gain profit from their actions, and this profit may be transferred to others as a result of forming coalitions. This transferable utility is expressed in the form of side payments among players. Side payments are a form of sharing the profit from mutually beneficial strategies. For instance, consider three companies that decide to co-operate and share their R&D departments. They find out that it is wiser to shut down one R&D department to reduce costs. The revenue will then be accountable to the two other R&D departments, whereas the third company made the decision to shut down its R&D department, a decision from which all three companies benefit. Therefore, the company that shut down its R&D department will receive a share of the profit made by the other two companies' R&D departments, the so-called side payment. To compute the side payment, we need to compute the value of a coalition with respect to not forming a coalition. The characteristic function v of the game defines the values of the set of coalitions that may be formed by the players. To compute the values of the set of coalitions, we first need to define what the eligible coalitions are. For instance, if we have three players, eight different coalitions may be formed. First, we have the empty coalition, denoted by ∅, an empty set with no players. Second, we have the one-person coalitions {1}, {2} and {3}. The
two-person coalitions are {1,2}, {1,3} and {2,3}. The grand coalition, in which every player participates, is called N. The grand coalition is considered to be the coalition that has the highest payoff, thereby satisfying the common statement that the whole is greater than the sum of its parts. The Shapley value focuses on the way participants of an n-person co-operative game view the value of forming a coalition. The so-called players of the game weigh the value of co-operation against the value of not co-operating. The contribution of a coalition is computed by taking the value of the coalition and subtracting the values of its sub-coalitions. For instance, if the coalition {1,2} has value 4, coalition {1} has value 2 and coalition {2} has value 1, then the contribution of coalition {1,2} is 4 − 2 − 1 = 1. We denote this as the constant c{1,2} = v{1,2} − v{1} − v{2}. Let's assume that the following values are given: c{1} = 2, c{2} = 1, c{1,2} = 1, c{1,3} = 3, cN = −2. We can now compute the Shapley value for player 1, which is the sum of the constants of each coalition player 1 participates in, each divided by the number of participants in that coalition. With the values given above, we compute player 1's Shapley value φ1 = c{1} + c{1,2}/2 + c{1,3}/2 + cN/3 = 2 + 1/2 + 3/2 − 2/3 = 3 + 1/3. If we do this for all three players, we have the Shapley values for the coalition N. The Shapley value may then be used to divide the coalition's payoff. In our example, player 1 receives a 3 + 1/3 share of the coalition's payoff of, for instance, 12.
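This computation is easy to mechanize. The short sketch below (Python is assumed; `dividends` holds the constants of the running example, and c{3} and c{2,3} are omitted because they do not enter player 1's value, which sums only over coalitions containing player 1) splits each coalition's constant equally among its members:

```python
from fractions import Fraction

# Constants c_S from the running example; the key {1, 2, 3} is the
# grand coalition N. Values c{3} and c{2,3} are not given in the text
# and are not needed to compute player 1's Shapley value.
dividends = {
    frozenset({1}): Fraction(2),
    frozenset({2}): Fraction(1),
    frozenset({1, 2}): Fraction(1),
    frozenset({1, 3}): Fraction(3),
    frozenset({1, 2, 3}): Fraction(-2),
}

def shapley(player, dividends):
    # Each coalition's constant is divided equally among its members;
    # a player's Shapley value is the sum of these shares over all
    # coalitions the player participates in.
    return sum(c / len(S) for S, c in dividends.items() if player in S)

print(shapley(1, dividends))  # 10/3, i.e. 3 + 1/3
```

With all constants given, the same function yields the full vector (φ1, φ2, φ3), whose components sum to the value of the grand coalition.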
We then ask the participants how dissatisfied they are with a proposed imputation (that is, with the division of payoffs it implies) and try to minimise the maximum dissatisfaction. The payoff computed with the nucleolus may differ from the Shapley value, as we take into account what the players expect to receive. For instance, if a bank goes bankrupt, people would like to claim their savings. Player A has 2000 euros in his savings account, player B has 4000 euros and player C has 6000 euros. However, the bank has only 7200 euros to divide among the players. Player C is sure of receiving at least 1200 euros, as players A and B can claim a total of at most 6000 euros. Thus, in thousands of euros, v(C) = 1.2. If we do the same for A and B, we find v(A) = 0 and v(B) = 0. Similarly, v(AB) = 1.2, v(AC) = 3.2, v(BC) = 5.2 and v(ABC) = 7.2. After a series of calculations, the nucleolus is found to be (1, 2.1, 4.1), while the Shapley value is (1.2, 2.2, 3.8). The divisions of payoff would then be (1000, 2100, 4100) and (1200, 2200, 3800), respectively. The Shapley value and the nucleolus thus lead to different payoff distributions: compared with the Shapley value, the nucleolus gives player B 100 euros less and player C 300 euros more. If we compare this to the pro rata distribution of (1200, 2400, 3600), we see that player C will actually receive 500 euros extra when the nucleolus is used for payoff distribution.
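The Shapley value of this bankruptcy game can be checked with the standard marginal-contribution formula. The sketch below (function names are ours) builds the characteristic function from the claims and averages each player's marginal contribution over all orders in which the players could join the coalition:

```python
from itertools import permutations

# Bankruptcy example: each coalition is guaranteed whatever is left of the
# bank's 7200 euros after the claims of everyone outside it are paid in full.
claims = {"A": 2000, "B": 4000, "C": 6000}
estate = 7200

def v(coalition):
    outside_claims = sum(c for p, c in claims.items() if p not in coalition)
    return max(0, estate - outside_claims)

def shapley_values():
    """Average each player's marginal contribution over all join orders."""
    players = sorted(claims)
    totals = dict.fromkeys(players, 0)
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition.add(p)
    return {p: totals[p] / len(orders) for p in players}

print(shapley_values())  # {'A': 1200.0, 'B': 2200.0, 'C': 3800.0}
```

This reproduces the Shapley distribution (1200, 2200, 3800) given in the text; the nucleolus requires a separate (linear-programming or Talmud-rule) computation and is not shown here.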
736
R.L.L. Sie, M. Bitter-Rijpkema, and P.B. Sloep
If we translate the example given above to idea selection, we cannot assume an equal distribution of the set of ideas among participants, as it depends on their individual skills in idea generation. If we compare the outcomes for the coalitions with the individual payoffs obtained when not co-operating, we may see different distributions of the payoff. For instance, if we base our imputation on the number of ideas generated during individual idea generation, it may be that forming a coalition pays off. We expect that this is the reason why people choose to form coalitions during idea selection.
3 Conclusions
We think that studying coalition formation in open innovation is a sensible approach, which has regrettably been ignored thus far. We need to pay attention to the way coalitions are formed during collaborative idea selection and to what extent this influences the allocation of accepted ideas among the participants. Based on the literature, we see that people often run into a number of problems while co-operating, such as escalation of commitment, bounded rationality, groupthink and group polarisation, which may lead to the formation of coalitions in such a way that a non-optimal set of ideas is accepted during idea selection. We have shown that the nucleolus and the Shapley value may lead to different distributions than the pro rata distribution of ideas. We expect that if we present the participants with the computations of the nucleolus and the Shapley value, they may become better aware of the group's potential, thus forming coalitions that are better suited to optimise the set of accepted ideas. If such coalitions are not formed, a moderator may try to put different people together during idea selection so that the right coalitions form. However, forming coalitions may not always be beneficial for all participants, due to the problems we have sketched in this paper. We may thus choose to neutralise the factors that benefit some but are detrimental to others. For instance, if a group is polarised, we may add people who bridge the gap between the groups that represent the poles, to prevent a sub-optimal idea from being accepted. If we do so, we may deviate from the original game-theoretical notions of the Shapley value and the nucleolus, as we include external (social) factors.
4 Future Research
The above overview suggests many avenues for further research on coalition formation in open innovation. These avenues will be investigated in the context of the EU-funded idSpace project, which focuses on tools for distributed, collaborative product innovation. The following steps are envisaged. Based on the literature, we will first define a model that describes the formation of coalitions in idea selection. This will be followed by a social simulation that will help us analyse the resulting set of accepted ideas. After that, we will try to adapt the model in such a way that we will be able to predict the formation of coalitions. The desired result of our final model will be either the
formation of 'optimal' coalitions, the neutralisation of the factors that lead to 'sub-optimal' coalitions, or the selection of the right people in advance of idea generation so as to eventually obtain a grand coalition during idea selection. These findings will be empirically tested and underpinned in suitable contexts in which open innovation takes place. We will also look into the possibility of extending our results to contexts in which collaboration takes place that is not necessarily focused on (open) innovation. A case in point would be so-called Learning Networks [12], which are online social networks designed to foster non-formal learning and knowledge exchange.
Acknowledgments. This paper provides a theoretical framework that will be part of a PhD study conducted within the idSpace project. The idSpace project is partially supported/co-funded by the European Union under the Information and Communication Technologies (ICT) theme of the 7th Framework Programme for R&D. This document does not represent the opinion of the European Union, and the European Union is not responsible for any use that might be made of its content.
References
1. Barki, H., Pinsonneault, A.: Small group brainstorming and idea quality: Is electronic brainstorming the most effective approach? Small Group Research 32(2), 158 (2001)
2. Chesbrough, H.W.: The era of open innovation. MIT Sloan Management Review 44(3), 35–41 (2003)
3. Hagedoorn, J.: Inter-firm R&D partnerships: an overview of major trends and patterns since 1960. Research Policy 31(4), 477–492 (2002)
4. Kogut, B.: The stability of joint ventures: Reciprocity and competitive rivalry. The Journal of Industrial Economics, 183–198 (1989)
5. Powell, W.W., Koput, K.W., Smith-Doerr, L.: Interorganizational collaboration and the locus of innovation. Administrative Science Quarterly 41, 1 (1996)
6. Nash, J.F.: Two-person cooperative games. Econometrica 21(1), 128–140 (1953)
7. Tetlock, P.E.: The impact of accountability on judgment and choice: Toward a social contingency model. Advances in Experimental Social Psychology 25 (1992)
8. Simon, H.A.: Models of bounded rationality. MIT Press, Cambridge (1982)
9. Shubik, M.: The dollar auction game: a paradox in noncooperative behavior and escalation. Journal of Conflict Resolution 15(1), 109–111 (1971)
10. Shapley, L.S.: A value for n-person games. In: Contributions to the Theory of Games II. Annals of Mathematics Studies 28 (1953)
11. Schmeidler, D.: The nucleolus of a characteristic function game. SIAM Journal on Applied Mathematics, 1163–1170 (1969)
12. Sloep, P.: Fostering sociability in learning networks through Ad-Hoc transient communities. In: Purvis, M., Savarimuthu, B.T.R. (eds.) ICCMSN 2008. LNCS (LNAI), vol. 5322, pp. 62–75. Springer, Heidelberg (2009)
How to Support the Specification of Observation Needs by Instructional Designers: A Learning-Scenario-Centered Approach
Boubekeur Zendagui
Computer Science Laboratory of the Université du Maine, IUT of Laval / Dept. of Computer Science, 52 rue des docteurs Calmette et Guérin, 53020 Laval, France
[email protected]
Abstract. In this paper, we present the conceptual model we propose to specify observation needs. Because our work takes place in a learning-scenario reengineering context, the observation process is prepared while instructional designers define their learning scenarios. Our work aims at helping these designers specify the information they want to obtain by observing the progress of the learning situation, in order to improve the underlying learning scenario for future uses. In this paper we show how the specification of observation needs can be guided by information specified in learning scenarios. We show how we use the Engeström triangle to model the observation context and how, from this context, some observables are proposed and used by the observation techniques we propose, in order to define the information to be obtained by the observation process. Keywords: Instructional design, observation, observation needs, learning scenario, observation context.
1 Introduction The preparation of a distance learning situation is generally done by designing a learning scenario that contains information about the learning activities, usually by using an educational modeling language (EML) [1]. This scenario is qualified as a predictive model of the learning situation. Our goal is to help instructional designers specify, for a given predictive model, what is important to observe when the actors involved in the learning situation perform their activities. The results of the observation process will be used to improve the predictive scenario for future uses: reengineering of the learning scenario. Within the REDiM project [2], we noted that it is difficult for instructional designers to specify their observation needs: they have to guess how the learning environment will be used and have to make some assumptions about the progress of the learning situation; it is necessary to clarify and formalize the description of observation needs in order to guide the development of tools for collecting and analyzing tracks. The lack of expressiveness of both the learning scenario and the EML may add further difficulties to specifying observation needs.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 738–743, 2009. © Springer-Verlag Berlin Heidelberg 2009
How to Support the Specification of Observation Needs by Instructional Designers
739
We want to help instructional designers specify the information they want to obtain from the observation process. This information is specified in a model we call the observation needs. These observation needs are defined during the design of a learning scenario and are used to guide the observation process while the learning situation unfolds. They help to define and develop the observation means, i.e. tools to collect and analyze data about the effective progress of the learning situation. We think that, thanks to these observation needs, the observation process will produce information that is more helpful for designers to improve their learning scenarios. To assist instructional designers in specifying their observation needs, we have studied the observation activity and its preparation within both classic face-to-face and distance learning situations. We also worked with a teacher who uses the UMTice1 learning environment to give some courses in addition to the ones given in the classroom. From this theoretical study and the work with the teacher, we propose the conceptual model of observation needs presented in the next section.
2 Conceptual Model of Observation Needs We define an observation need as composed of four parts (see Fig.1).
Fig. 1. The observation needs conceptual model
The observation objectives are useful to define the "why" of observation needs. This information allows designers to make explicit what they intend to do once they know the results of their observation needs, obtained from the concrete observation of the learning situation at runtime. This information can also facilitate the reuse of observation needs for other situations sharing the same objectives. 2.1 The Context Layer The observation context makes it possible to define the conditions under which the activity to observe will take place. It consists of selecting one or more pedagogical scenario elements concerned by the observation need and defined to be used by the actors of the learning situation. Contexts are important and must be well defined, since they make it possible to identify the potential observables. 1
UMTice is the learning platform of the Université du Maine; it is based on the Moodle learning platform (www.moodle.org).
740
B. Zendagui
Several works deal with the modeling of context [4], but there is no single definition of this concept; it depends on the context of its use. We use the definition from [5], which bases its definition of context on the concept of entity. The context is a set of inter-related entities playing roles. An entity can be an object, a person, a tool, or anything that may influence the activity under study. To guide the definition of the context entities, we use the Engeström triangle [6] from work on activity theory, because the study of the use of media while an actor performs a particular activity is one of the basic principles of this theory [7]. In our research context, these media play an important role in the progress of a learning situation.
Fig. 2. The Engeström triangle (at left), the representation of the context of an activity (at right)
We consider that the context of any activity can be represented in a single form by using the Engeström triangle [6] (Fig. 2, right side), in which each activity is done by a subject and guided by an object. The result is a production. To perform an activity, the subject uses some tools and can interact within a community by respecting some rules. The community members have to share tasks to achieve the activity objectives [6]. The entities composing the context of an observation need are the concepts of the Engeström triangle (Fig. 2, left side). Each activity is performed by one or more actors and is guided by a particular object. We represent this object by the production, which can be of two kinds: a tangible production (e.g. the production of a report) or an intangible production (e.g. the acquisition of knowledge). Learning activities are carried out using tools, which can in turn be of two kinds: services/materials that ensure the smooth progress of the learning activities, and pedagogical tools/materials needed to guide and structure the learning activities. In the context of an activity, actors play roles, have tasks to perform and have to respect some behavior and functioning rules. This simple representation makes it possible, on the one hand, to facilitate the construction of the context by using a limited list of entity types that have a significant impact on learning activities [8] and, on the other hand, to help instructional designers ask the right questions about the progress of the learning activities and choose which information to consider when observing a given activity [9]. Our work is not based on a particular EML. Each EML proposes a specific vocabulary and a dedicated semantics for specifying learning situations. To guide the definition of the context, the elements of the EML have to be annotated according to the concepts of the Engeström triangle. This allows us to give a simple and common
semantics for the elements of any educational modeling language and to unify the modeling of an observation need context, whatever the EML used. The definition of the observation need context is guided by the learning scenario and the annotated EML: each element of the observation need context is an element of the learning scenario. 2.2 The Observables Layer The data collected when learners and tutors use the learning environment are specified thanks to the definition of observables. An observable is defined in [2] as a variable that gets a value through the observation of the progress of the learning situation. Because our work takes place within the design phase of learning scenarios, we define an observable as any learning scenario element for which designers want to obtain information after the end of a learning session. Concretely, these observables are defined at the scenario level but conform to those defined at the EML level. Their specification is done by selecting observables among those that can be automatically proposed according to the context delimitation and the observables identified in the annotated EML. An observable can be any element of the pedagogical scenario. In the process for the specification of observation needs we propose in [10], there is a step in which an expert analyzes the EML in order to identify the potential observables. This identification is made by adding annotations to the elements of the EML considered, by this expert, as relevant to observe. The result of this step is the same EML enriched with information about observables. One element of the EML can be used to define various elements of the learning scenario. Therefore, all observables defined on one EML element can be used for all elements of the learning scenario that conform to this EML element.
The originality of this approach is that an EML is analyzed once in order to define the observables, and then used in the specification of observation needs for all learning scenarios defined with this EML. In an observation need, we define two kinds of observables: the declared observables and the selected observables. The set of declared observables is automatically built by using the annotations added to the EML in the observable identification step of our process [10]. These annotations are used to identify the observables of each learning scenario element, which will thereafter be proposed to instructional designers. The declared observables are attached to each learning scenario element added to the context. In our view, this context/observables representation allows instructional designers to form a vision of the learning situation they want to observe and provides them with all the variables whose values can attest to the effective progress of the activities they defined in the learning scenario. According to their observation needs, instructional designers can choose whether or not to use a given declared observable. The set of selected observables is then a subset of the declared observables: it contains only the declared observables chosen by instructional designers according to their observation needs.
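The declared/selected distinction can be pictured with a minimal sketch. All element types, observable names and identifiers below are invented for illustration; only the mechanism (EML-level annotations proposed to every conforming scenario element, then filtered by the designer) follows the text.

```python
# EML element type -> observables declared once by the expert on the EML.
eml_annotations = {
    "forum": ["messages_posted", "threads_opened"],
    "assignment": ["submission_time", "grade"],
}

# Learning scenario elements, each conforming to one EML element type.
scenario_elements = [
    {"id": "discuss-week-1", "eml_type": "forum"},
    {"id": "report-week-2", "eml_type": "assignment"},
]

def declared_observables(elements, annotations):
    """Attach the EML-level observables to each conforming scenario element."""
    return {e["id"]: list(annotations.get(e["eml_type"], [])) for e in elements}

declared = declared_observables(scenario_elements, eml_annotations)

# The designer keeps only the observables matching the observation need:
# the selected set is always a subset of the declared set.
selected = {"discuss-week-1": ["messages_posted"]}
assert set(selected["discuss-week-1"]) <= set(declared["discuss-week-1"])
print(declared["report-week-2"])  # ['submission_time', 'grade']
```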
2.3 The Information Layer From our previous work with instructional designers using the UMTice learning platform, we note that a list of pairs (observable, observed), where "observed" is the value of the observable obtained by the observation activity, is sometimes not sufficient to give the information instructional designers are waiting for to understand the effective progress of the learning activities. The available data contain a lot of information; instructional designers can use all of it or make an effort to select only what is relevant to them. To address this problem, we propose to provide a set of observation techniques instructional designers can use to specify the information they want to obtain from the observation process. Each observation technique is a kind of function whose result is a simple datum, like a number or a percentage, or a set of data, like all the messages posted by learners in a forum. By using an observation technique, one knows the nature of its result. Choosing one observation technique among several depends on the nature of its result and on the observation needs of the instructional designers. In this layer, we aim to provide instructional designers with a set of observation techniques that allow them to define, using the selected observables, all the information they want to obtain. To this end, we propose to use the sign and category-of-behavior techniques. These two observation techniques are used in the observation of classical face-to-face learning situations [3]. A sign is a particular behavior to observe; for example, a learner begins an activity later than planned in the learning scenario. Sign-based observation can be used to focus the observation on some specific behaviors. The use of behavior categories to observe the progress of the learning situation groups several behaviors into homogeneous sets and analyzes them as a whole, to better understand the behavior of the actors of the learning situation.
For example, defining a category of behavior containing the messages exchanged between students within a forum can enable the detection of active and passive learners, or of learners in difficulty. The analysis of each message alone can be relevant, but the analysis of all messages in chronological order can provide more information to instructional designers. In our view, the sign and category-of-behavior techniques are two examples of the use of observation techniques; other observation techniques can be used in this layer.
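The two techniques can be pictured as functions over collected data. In this illustrative sketch the threshold, field names and function names are our own assumptions; only the idea (a sign flags one specific behavior, a category groups related events and analyzes them as a whole) follows the text.

```python
from datetime import datetime

def late_start_sign(planned_start, actual_start):
    """Sign: did the learner begin the activity later than planned?"""
    return actual_start > planned_start

def forum_participation_category(messages):
    """Category: classify learners by how many forum messages they posted."""
    counts = {}
    for msg in messages:
        counts[msg["author"]] = counts.get(msg["author"], 0) + 1
    # threshold of 3 messages is an arbitrary illustrative choice
    return {author: ("active" if n >= 3 else "passive")
            for author, n in counts.items()}

messages = [{"author": "ana", "text": "..."}] * 4 + [{"author": "bob", "text": "..."}]
print(forum_participation_category(messages))  # {'ana': 'active', 'bob': 'passive'}
print(late_start_sign(datetime(2009, 3, 2, 9, 0), datetime(2009, 3, 2, 10, 30)))  # True
```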
3 Conclusion In this paper we presented the conceptual model of observation needs we propose. In our research context, observation needs are defined by instructional designers during the elaboration phase of learning scenarios. Our goal is to guide the specification of observation needs by using the information defined in the learning scenarios. To this end, we use the Engeström triangle to structure and guide the definition of the context of observation needs. The definition of the context provides a vision of the learning situation instructional designers want to observe, and it makes it possible to propose observables that can attest to the effective progress of the learning situation. Instructional designers can select some observables to define the information they want to obtain from the observation process. The originality of our approach relies on the definition of observation needs in relation to information about learning situations defined in learning scenarios and
annotations made on the EML used. Because EMLs can be used to define many learning scenarios, annotations must be defined in a generic way so that they can be used for all learning scenarios. We are currently working on defining techniques to contextualize annotations for each learning scenario used to define observation needs.
References
1. Koper, R., Tattersall, C.: Learning Design: A Handbook on Modelling and Delivering Networked Education and Training. Springer, Heidelberg (2005)
2. Choquet, C.: Engineering and re-engineering of TEL systems, the REDiM approach. Professor's degree thesis. Le Maine University, France (2007) (in French)
3. Wragg, E.C.: Introduction to Classroom Observation, 2nd edn. Routledge (1999)
4. Strang, T., Linnhoff-Popien, C.: A Context Modeling Survey. In: Workshop on Advanced Context Modelling, Reasoning and Management as part of UbiComp, Nottingham, England (2004)
5. Rey, G.: Méthode pour la modélisation du contexte d'interaction. RSTI – ISI – 11/2006. Adaptation en contexte, 141–166 (2006)
6. Engeström, Y.: Learning by Expanding: An Activity-Theoretical Approach to Developmental Research. Orienta-Konsultit Oy, Helsinki (1987)
7. Kaptelinin, V., Nardi, B.A.: Activity Theory: Basic Concepts and Application. In: CHI 1997, Los Angeles (1997)
8. Kurti, A., Spikol, D., Milrad, M., Svensson, M., Pettersson, O.: Exploring How Pervasive Computing Can Support Situated Learning. In: Proceedings of the Workshop of the Pervasive Learning 2007: Design Challenges and Requirements, Toronto, Ontario, Canada (2007)
9. Kaenampornpan, M., O'Neill, E.: Modeling context: an Activity Theory approach. In: 2nd European Symposium on Ambient Intelligence, EUSAI, Eindhoven, The Netherlands (2004)
10. Laforcade, P., Zendagui, B., Barré, V.: Specification of observation needs in an instructional design context: A Model-Driven Engineering approach. In: CSEDU 2009, Lisbon, Portugal, March 23-26 (2009)
Using Third Party Services to Adapt Learning Material: A Case Study with Google Forms
Luis de la Fuente Valentín, Abelardo Pardo, and Carlos Delgado Kloos
Telematics Engineering Department, University Carlos III of Madrid, Av. Universidad 30, Leganés, Spain
{lfuente,abel,cdk}@it.uc3m.es
http://gradient.it.uc3m.es
Abstract. Current Learning Management Systems were typically conceived to offer a self-contained "one size fits all" learning environment. Adaptive educational systems have been exhaustively studied and proposed to satisfy the different needs of students, but they have a poor presence in the LMS market due to integration issues. The emerging trend on the web is toward combining very specialised services into highly personalised environments, and LMS are no exception. This paper presents the Generic Service Integration architecture, conceived to embed the use of any third party service as a regular resource in a learning experience. A course author includes a description of the required functionality, and the appropriate service is searched for and instantiated at enactment time. A case study is presented where Google Forms is used to implement assessment in an IMS Learning Design based course and adapt its content based on the obtained results. Keywords: IMS Learning Design, adaptive educational systems, service integration.
1 Introduction
Learning Management Systems (LMS) in educational institutions have reached a stage of widespread adoption. The variety of commercial and open-source products makes up a wide spectrum of possibilities to manage learning experiences. Most current LMS can be described as a "one size fits all" service, where as much functionality as possible is provided. However, the trend emerging on the web points to open LMS that allow integration with third party services. One factor that pushes this integration trend is the innumerable amount of services making up what is called the Web 2.0. Most participants of a learning experience are likely to use Web 2.0 services, but they are still forced to use the counterparts offered by the LMS: some LMS offer email, bookmark collections, picture albums, personal web pages, etc. This tendency suggests that educational systems should act
Corresponding author.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 744–750, 2009. © Springer-Verlag Berlin Heidelberg 2009
Using Third Party Services to Adapt Learning Material
745
as service orchestrators, where the functionality is given by third party providers and the pedagogical structure of the course is maintained in the LMS. As analysed in [1] and [2], tools for adaptive learning support, whose objective is to individually satisfy the different needs of multiple users by taking different student profiles into account, have a poor presence in the LMS market, mainly due to their lack of integration capabilities. As a consequence, the integration of generic services in an educational platform would promote the inclusion of adaptive schemes in learning experiences. Specifications such as IMS Learning Design [3] (henceforth simply IMS LD), conceived to capture the structure and interaction in a learning experience, include a limited but specific formalism to define the interaction with services. However, as pointed out by Martel et al., the activities mediated by services cannot be observed [4]. Thus, the tight and effective integration of a generic service with a simple formalism has proved extremely complex. The objective of this paper is to answer the following questions: Is it possible to adapt a learning experience based on information originated in a third party service? Can this service be instantiated differently depending on the conditions of the learning scenario? To answer these questions, this paper presents a case study in which a Unit of Learning described in IMS LD adapts its activities during the enactment phase through a set of values obtained from the interaction with Google Forms. The integration between IMS LD and this third party service is supported by the Generic Service Integration (GSI) architecture [5]. This paradigm proposes a set of minimum requirements for a service to be integrated in a learning experience. The results of a first pilot experience are also included to show how this new functionality affects a Unit of Learning over its entire life cycle. The rest of this document is organised as follows.
Section 2 summarizes related work and proposes the GSI architecture. Section 3 details the use case in the context of the different stages of a learning experience. The paper concludes presenting the results obtained in a pilot experience and a brief discussion of future avenues to explore.
2 Adaptation Using Third Party Services
The term "adaptation" refers to purposely changing one or several aspects of a learning environment to cater to the needs of a student. The effectiveness of adaptation has been a matter of great discussion (see [6] for a thorough examination, or [7] for a discussion in the context of engineering education). As pointed out in [1], tools that facilitate adaptive learning tend to be extremely specialised in one aspect of the learning process at the expense of integration into a learning management system. In recent years, there has been an effort to provide adaptive tools that are integrated in conventional LMS. The second area relevant to the ideas presented in this paper is the use of third party services. A learning experience may require (or take advantage of) the orchestration of a set of external services. An example in this direction is the
746
L. de la Fuente Valentín, A. Pardo, and C. Delgado Kloos
concept of a Personal Learning Environment (PLE) [8]. In [5], a new paradigm is outlined to allow IMS Learning Design engines and services to exchange information and react to each other's events. In order to offer versatile service integration while maintaining the usability of a platform, a pragmatic approach is taken. The proposed solution is based on a set of minimum requirements to be fulfilled by both the LMS (the IMS LD engine) and the service. Learning Design [3] is a specification that supports the formal description of activity-centered learning. IMS LD allows multiple pedagogical approaches to be modelled as a Unit of Learning (UoL) [9]. A UoL contains the description of all the activities, instructions on how the participants should interact, and a set of properties and conditions to be deployed in a virtual learning context. Learning Design offers an adequate formalism to achieve adaptation of educational experiences. The case study presented in this paper uses the Generic Service Integration architecture implemented in GRAIL [10], a Learning Design engine integrated in the open source Learning Management System .LRN.
2.1 Generic Service Integration Architecture
The proposed Generic Service Integration architecture provides the semantics to include third party services in IMS LD courses, but can easily be extended to work with any course authoring/delivery framework that supports basic group management, saves the state of the course and reacts depending on this state. In GSI, the integration of a third party service in a learning course has two requirements: the definition of the service usage in the context of the course, and a runtime environment capable of enacting the service functionality. Thus, the proposal consists of two areas related to the course life cycle: a semantic description to capture service behavior, and an execution model to use the service. The proposed vocabulary has been defined generically. It is assumed that each supported service needs to provide specific meaning for the verbs used. With this approach, the definition of the expected behavior can be written specifically for a concrete service (for example, a blog in Wordpress) while leaving room during enactment to use an alternative that complies with the given requirements.
– Groups element: During the authoring phase the course participants are not known, but a description of the grouping policy can be given. Groups are directly mapped onto IMS LD roles.
– Tool element: Describes the functionality required from the service, expressed in an abstract notation (set-values, open, close, modify-permissions, etc.). Permissions are also defined at a generic level. The tool information also includes metadata to delimit the type of services suitable to be used and to facilitate the search procedure at run time.
– Constraints element: Defines the requirements on the service behavior. While the Tool element defines the required operations, this element contains the detailed description of how and when these actions must be triggered.
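The three elements above might be pictured as follows. This is a hypothetical sketch in Python notation: the verbs follow the paper, but the structure and field names are our assumptions, not the actual GSI schema.

```python
# Hypothetical GSI-style service description, as it might be packaged in a UoL.
service_description = {
    "groups": {
        # grouping policy, mapped onto IMS LD roles at deployment
        "policy": "one-group-per-role",
        "roles": ["learner", "staff"],
    },
    "tool": {
        "type": "questionnaire",  # metadata used when searching for a service
        "operations": ["set-values", "open", "close", "modify-permissions"],
        "permissions": {"learner": "submit", "staff": "read-results"},
    },
    "constraints": [
        # when and how the declared operations must be triggered
        {"when": "act-1-started", "do": "open"},
        {"when": "act-1-completed", "do": "close"},
    ],
}

def matches(description, candidate_service):
    """Deployment-time check: keep services offering every required operation."""
    required = set(description["tool"]["operations"])
    return (description["tool"]["type"] == candidate_service["type"]
            and required <= set(candidate_service["operations"]))

google_forms = {"type": "questionnaire",
                "operations": {"set-values", "open", "close",
                               "modify-permissions", "export"}}
print(matches(service_description, google_forms))  # True
```

Keeping the description abstract like this is what allows an alternative service with the same verbs to be substituted at enactment time.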
This description of the service functionality and usage is packaged within the UoL, and will be interpreted when the course is deployed in a compliant
Using Third Party Services to Adapt Learning Material
learning management system. Service configuration is then performed once the course participants have been assigned to groups, and before the course is fully available to them. A description of the actions that must take place during course (and therefore service) deployment follows.
– Service search: Based on the service description included in the UoL, the runtime engine must select the service that best matches the given requirements. This selection can be fully automatic or may require manual intervention.
– Service configuration: Once the service is selected, a binding stage is required to connect the community of users in the LMS with the corresponding community of users in the service. This stage depends on the access requirements imposed by the service. Technologies such as OAuth [11] are conceived precisely to simplify the information exchange in this type of scenario.
– Enactment: Actions during this stage include facilitating service access for the course participants, managing the data exchange between the service and the LMS, and invoking the proper operations in the service.
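The first two deployment stages can be sketched in a few lines. The plugin registry, the keyword-overlap scoring rule and the binding format below are invented for illustration; the paper prescribes keyword-based search but no particular selection algorithm.

```python
# Illustrative sketch of service search and configuration. Plugin names,
# metadata and the scoring rule are assumptions, not part of GSI itself.

AVAILABLE_PLUGINS = {
    "google-forms": {"type": "web-form", "keywords": {"form", "survey", "spreadsheet"}},
    "wordpress-blog": {"type": "blog", "keywords": {"blog", "post"}},
}

def search(required_type, required_keywords):
    """Service search: pick the registered plugin of the right type with
    the largest keyword overlap; None triggers manual intervention."""
    candidates = [(len(required_keywords & meta["keywords"]), name)
                  for name, meta in AVAILABLE_PLUGINS.items()
                  if meta["type"] == required_type]
    if not candidates:
        return None
    return max(candidates)[1]

def deploy(required_type, required_keywords, lms_users):
    plugin = search(required_type, required_keywords)            # 1. search
    # 2. configuration: bind each LMS user to an account handle in the
    #    selected service (real systems would use OAuth-style delegation).
    bindings = {user: f"{plugin}:{user}" for user in lms_users}
    return plugin, bindings

plugin, bindings = deploy("web-form", {"form", "survey"}, ["alice", "bob"])
print(plugin)      # the form-capable plugin wins the match
print(bindings)
```

From here on, enactment routes every service request through the selected plugin.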
3 Embedding Third Party Assessment in a Course
Assessment is widely acknowledged as a weakness in the current IMS LD specification, and the problem has been addressed in different ways [12, 13]. The approach taken in this case study is to use the third party service Google Forms to provide assessment facilities. Google Forms allows web forms to be created easily, with submissions stored in a Spreadsheet. Data can be accessed through a public Application Programming Interface, and authentication is mediated by the AuthSub protocol, which follows the same principles as OAuth [11]. The Google Forms service has been included via GSI. The inclusion of a service has implications for the whole course life cycle, so a simple course has been chosen to illustrate the process in all its stages: authoring, deployment and enactment. All interactions with the third party service are depicted in Fig. 1. The course used for the study, described in IMS LD terminology, consists of two acts: the first is devoted to a profiling questionnaire, while the second is composed of a set of suggested readings based on the results obtained in the previous test. During these activities, the members of the teaching staff are in charge of tracking the activity and monitoring student results. The profiling questionnaire is integrated in the authoring phase by including the groups, tool and constraints elements described in Section 2. If a GSI service is found during course instantiation (as in this case), the third party service must be allocated and configured. The GSI architecture serves as the launcher of software units, called plugins. The engine selects the service that best matches the requirements and configures the proper plugin. From this moment onward, any service request will have the selected plugin as mediator. Consequently, output from the service can be parsed and formatted to match property data types.
GSI service configuration requires a pre-enactment phase in which course participants enter the course without all deployment steps being finished. At
L. de la Fuente Valentín, A. Pardo, and C. Delgado Kloos
Fig. 1. Interaction among actors during the deployment and enactment phases
this point, participants must grant permissions to access their personal data in the third party service1. In the presented example, only teachers are requested to do so. Some extra adjustments may take place in this phase: the form's target, where data is submitted, can only be obtained after service deployment, and current limitations of the API impose manual intervention to fill in the proper value. When the enactment phase starts, interactions with the third party service take place. First, students insert their responses into the Spreadsheet by using a form. As the plugin acts as a mediator, an identification token is attached to each response. Second, teachers can access the actual Spreadsheet at any time. Last, the course engine retrieves all the gathered data (obtained as an ATOM feed) and makes the next activity behave according to the student responses.
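The enactment-phase mediation just described can be sketched as follows: the plugin tags each form submission with an identification token, and the course engine later uses the gathered responses to adapt the readings in the second act. The token format, the in-memory "spreadsheet", and the score-threshold adaptation rule are all illustrative assumptions, not the actual GRAIL implementation.

```python
import uuid

class FormPlugin:
    """Mediates between course participants and the third-party form."""
    def __init__(self):
        self.spreadsheet = []   # stands in for the remote Spreadsheet

    def submit(self, student, answers):
        # Attach an identification token to each response (invented format).
        token = f"{student}:{uuid.uuid4().hex[:8]}"
        self.spreadsheet.append({"token": token, "answers": answers})
        return token

    def retrieve_all(self):
        # In the real system this arrives as an ATOM feed via the API;
        # here we simply return the stored rows.
        return list(self.spreadsheet)

def suggested_readings(responses):
    """Adaptation: choose a reading level per student from the profiling
    results (the threshold is an arbitrary example)."""
    plan = {}
    for row in responses:
        student = row["token"].split(":")[0]
        plan[student] = "introductory" if row["answers"]["score"] < 5 else "advanced"
    return plan

plugin = FormPlugin()
plugin.submit("alice", {"score": 3})
plugin.submit("bob", {"score": 8})
print(suggested_readings(plugin.retrieve_all()))
# {'alice': 'introductory', 'bob': 'advanced'}
```

The token is what lets the engine reconnect anonymous-looking spreadsheet rows to LMS identities when it adapts the second act.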
4 Results and Conclusions
The case study presented in this work was included as part of a regular postgraduate programme at a higher education institution. A total of 19 students took part in the experience. The deployment of the experience was successful, with the robust definition of the service life cycle phases as the key factor. Further, IMS LD provided a framework in which adaptation was possible.
1 AuthSub tokens can be revoked at any moment by the account owner.
The usage of third party services is potentially much more versatile than the use presented in this article. It would be possible, for example, to use the spreadsheet to calculate a formula that takes student results as parameters; this calculated value could then be used in the adaptation strategy. The difficulty of IMS LD in the field of data manipulation can thus be avoided by using a more specialised tool. The results of the case study show the potential of the deployment infrastructure: services from different vendors can be coordinated by IMS LD courses to provide adaptation that is not restricted to facilities built within the LMS. The experience under study suggests further developments to improve the GSI model. First, a larger set of plugins needs to be developed: the use of the model is restricted to supported services, so a wider set of choices is desirable. An increase in the number of available plugins will in turn raise the matter of service search facilities. Keywords, on which the system is currently based, may not suffice for a larger set of available plugins; further research on better search techniques, such as semantic-based search, is needed to improve the usability of the architecture.
Acknowledgements. This work has been partially funded by the project Learn3 (TIN2008-05163/TSI) of the Plan Nacional I+D+I and the Spanish national project FLEXO (TSI020301-2008-19, www.ines.org.es/flexo).
References
1. Brusilovsky, P.: KnowledgeTree: a distributed architecture for adaptive e-learning. In: Proceedings of the 13th International World Wide Web Conference, Alternate Track Papers & Posters, pp. 104–113. ACM, New York (2004)
2. Meccawy, M., Blanchfield, P., Ashman, H., Brailsford, T., Moore, A.: WHURLE 2.0: Adaptive Learning Meets Web 2.0. In: Dillenbourg, P., Specht, M. (eds.) EC-TEL 2008. LNCS, vol. 5192, pp. 274–279. Springer, Heidelberg (2008)
3. IMS Learning Design specification (February 2003), http://www.imsglobal.org/learningdesign/ (last visited April 2009)
4. Martel, C., Vignollet, L.: Using the learning design language to model activities supported by services. International Journal of Learning Technology 3(4), 368–387 (2008)
5. de la Fuente Valentín, L., Miao, Y., Pardo, A., Delgado Kloos, C.: A supporting architecture for generic service integration in IMS Learning Design. In: Dillenbourg, P., Specht, M. (eds.) EC-TEL 2008. LNCS, vol. 5192, pp. 467–473. Springer, Heidelberg (2008)
6. Coffield, F., Moseley, D., Hall, E., Ecclestone, K.: Should We Be Using Learning Styles? What Research Has to Say to Practice. Learning and Skills Development Agency (2004)
7. Felder, R.M., Brent, R.: Understanding student differences. Journal of Engineering Education 94(1), 57–72 (2005)
8. Wilson, S., Liber, O., Johnson, M., Beauvoir, P., Sharples, P., Milligan, C.: Personal learning environments: challenging the dominant design of educational systems. In: Memmel, M., Burgos, D. (eds.) Proceedings of LOKMOL 2006 in conjunction with EC-TEL 2006, Crete, Greece, October 2006, pp. 67–76 (2006)
9. Koper, R., Tattersall, C. (eds.): Learning Design: A Handbook on Modelling and Delivering Networked Education and Training. Springer, Heidelberg (2005)
10. de la Fuente Valentín, L., Pardo, A., Delgado Kloos, C.: Experiences with GRAIL: Learning Design support in .LRN. In: TENCompetence Workshop on Current Research in IMS Learning Design and Lifelong Competence Development Infrastructures (2007)
11. OAuth core 1.0, http://oauth.net/core/1.0/ (last visited April 2009)
12. Miao, Y., Sloep, P., Koper, R.: Modeling units of assessment for sharing assessment process information: towards an assessment process specification. In: Li, F., Zhao, J., Shih, T.K., Lau, R., Li, Q., McLeod, D. (eds.) ICWL 2008. LNCS, vol. 5145, pp. 132–144. Springer, Heidelberg (2008)
13. Dalziel, J.: Implementing learning design: the Learning Activity Management System (LAMS). In: Proceedings of the 20th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education, pp. 593–596 (2003)
Virtual Worlds for Organization Learning and Communities of Practice

C. Candace Chou
School of Education, University of St. Thomas, 1000 LaSalle Ave., MOH 217, Minneapolis, MN 55403, USA
[email protected]

Abstract. An increasing number of organizations have established presences in Second Life or other virtual worlds for organizational learning. The types of activities range from staff training and annual meetings to leadership development and commercial transactions. This paper reviews relevant literature on how virtual worlds, especially Second Life, are utilized for organizational learning. Specific emphasis is placed on the translation of applicable learning theories into the pedagogical design of virtual worlds. Furthermore, the paper explores how organizations establish virtual communities of practice. Finally, examples of virtual worlds established for organization learning are examined.

Keywords: virtual worlds, virtual communities of practice, organization learning.
1 Introduction

Virtual worlds, which refer to 3D virtual learning environments that support multiple learners, have been employed by an increasing number of corporations, universities, and education agencies for learning and training [1]. Virtual worlds have a low barrier to entry for content creation, are programmable, and provide abundant reusable instructional content [2]. In the last few years, a rapidly growing number of business and higher education institutions have established presences in Second Life and other similar virtual worlds. People enter the virtual worlds of Second Life via an avatar that represents them. The avatars can walk, talk, and move around the same way that they would move in the real world. Most of the current discussion has focused on the pedagogical applications of virtual worlds for learners in higher education. Although some of the theoretical principles can be applied to learners in both education and business, domain-specific examples based on the shared theoretical principles can provide practitioners in organizations with a better framework for adopting virtual worlds for training and development. This paper will focus on theoretical frameworks for organization learning in virtual worlds and examples of workplace learning in virtual worlds, especially in Second Life.
2 Literature Review

This section starts with a general discussion of the affordances of virtual worlds and their capabilities to support learning. Next, the discussion examines the theoretical principles that provide guidelines for learning in virtual worlds.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 751–756, 2009. © Springer-Verlag Berlin Heidelberg 2009
C.C. Chou
2.1 Affordances of Virtual Worlds

As technology evolves, new technological capabilities can infuse innovative approaches into teaching and learning activities in education and the workplace. In Second Life's virtual world, learners can utilize many of its features to form learning networks, create new identities, and construct new worlds with flexible building tools. In Table 1, Jarmon [3] summarized how these new technological features allow users to transform their experiences in the 3D virtual world.

Table 1. Affordances / Extended Capabilities in the 3-D Virtual World of Second Life [3]

Affordance – Extended Capability
Communication/community – Voice, chat, SL groups, search
Embodied social presence – 3-D perspective on avatars (oneself & others)
Building/engineering/design/sculpting – Highly flexible robust tools & training
Animation and scripting – Motion, behaviors, sensors, lighting, sound
Data visualizations & simulations – Modeling, infinite scale, micro/macro, role-play, spreadsheet conversion, historical, art
Sound & spatial relationships – Example: reflexive architecture, avatar orchestra
Language immersion – Example: 27 language-specific islands
Learning communities created by & for users – Example: Educators Coop Residential Island
International – SL collapses geography
Low capital expense operations costs – Overhead, travel, equipment, training, energy
Fundraising – Am. Cancer Soc., Katrina Relief, kiva.org
Recruitment/administration/management – Universities, IBM > 1500 employees in SL
Bringing distance & online learning together in the 3-D virtual world – Online-course class photo only in SL

The integrated functions in virtual worlds have presented new opportunities for learning. We are seeing a convergence of social networking, 3D, multimedia, voice, chat, video, and search in Second Life. The extended capabilities are especially appealing to learners who hope to have more control of their online presence in a highly engaged and connected environment.

2.2 Adult Learning Theory

Designing learning opportunities for organizations in virtual worlds requires a good understanding of how adults learn. Malcolm Knowles was one of the first
educators to establish principles for adult learning. Knowles [4] identified five characteristics of adult learners. Zielke, Roome, and Krueger [5] matched these characteristics with the features of virtual worlds, as summarized below:
• Adults are autonomous and self-directed. Virtual worlds enable independent learning.
• Adults have accumulated a foundation of life experiences and knowledge. Virtual worlds encourage sharing of life experience with others.
• Adults are goal-oriented. Virtual worlds allow goal-setting and increasing skill levels for use in work or hobby.
• Adults are relevancy-oriented. Virtual worlds provide the opportunity to check one's own progress and to re-learn if needed.
• Adults are practical and value knowledge that is useful to their work. Virtual worlds offer opportunities for problem-solving and immediate application of the materials to be learned.
How can virtual worlds maximize learning within the framework of adult learning? Zielke, Roome, and Krueger [5] presented a case study on how virtual worlds can assist people with disabilities to experience physical activities through their avatars. Activities such as dancing, walking, and running, not possible for them in real life, are possible in virtual worlds. These new-found capabilities can strongly motivate learners to engage in learning.

2.3 Communities of Practice

The concept of communities of practice (CoP) has been identified by many as a means to effective knowledge management in organization learning [6]. The concept has existed in various parts of the world for centuries. However, it did not become an established theory in organization learning until Lave and Wenger [7] theorized it in their seminal work Situated Learning: Legitimate Peripheral Participation. Wenger [8] defined a community of practice as “groups of people who share a concern or a passion for something that they do and learn how to do it better as they interact regularly” along three dimensions:
• What it is about – its joint enterprise as understood and continually renegotiated by its members
• How it functions – the mutual engagement that binds members together into a social entity
• What capability it has produced – the shared repertoire of communal resources (routines, sensibilities, artifacts, vocabulary, styles, etc.) that members have developed over time.
Can communities of practice be established online and/or in virtual worlds? Research has shown that virtual communities of practice are emerging [9, 10]. Virtual CoP has become a standard term to describe a network of individuals “who share a domain of interest about which they communicate online” [11]. There is a difference between a virtual learning community and a virtual CoP: the former aims at enhancing the knowledge of the participating members via formal education or professional development, while the latter enhances the knowledge of community members via informal
learning. Novice members tend to move from the periphery to the center through observation of experts and apprenticeship with experienced members. This literature review has summed up the most recent developments in theoretical frameworks relevant to virtual worlds. The adult learning principles provide pedagogical guidance for designing learning opportunities in virtual worlds. The essence of CoP lends itself well to organizations using virtual worlds for both formal and informal learning.
3 Case Examples

University campuses and businesses have established locations in virtual worlds. Cross and O'Driscoll [12] observed that corporations are using virtual worlds for the following purposes:
• A new level of always-on, real-time connectivity for collaboration
• Empowering both customer and employee groups
• Making informal viral learning a core mechanism of transformation
Werner [13] suggested that virtual worlds have become an appealing venue for training and development for the following reasons: (1) engagement, (2) low cost relative to real life, and (3) quick and easy change. Virtual worlds are engaging because learners can immerse themselves in a 3D environment that has high fidelity to the real environment and move freely in-world with an identity of their choice. Virtual worlds have commonly been used for the following types of workplace learning: (1) 3D demonstration, (2) simulation, and (3) virtual meetings. The following sections introduce workplace-related examples.

3.1 Workplace-Related Examples

3.1.1 3D Demonstration
Palomar West Hospital in Second Life is a prototype of the hospital that is under construction and due to open in 2011. It was designed to provide a preview of the new facility to hospital staff, future patients, the media, and the larger medical community [14]. The site can be accessed through the SLURL: http://slurl.com/secondlife/PalomarWest%20Hospital/33/127/34/

3.1.2 Simulation
Role play through simulation is a common form of organization learning in virtual worlds. General Electric (GE) has utilized Second Life to provide performance-based training. A role-playing strategy game was employed to elicit time-critical strategic behavior in response to a forced outage situation [15].

3.1.3 Virtual Meetings
IBM was one of the pioneers in utilizing virtual worlds for organization learning and training. IBM has 50 islands and more than 20,000 employees in virtual worlds. In 2008, IBM held an annual meeting for the 200+ members of the Academy of Technology. The conference venue consisted of breakout rooms, a simulated Green Data
Center, a library, and areas for community gathering. IBM estimated that the return on investment (ROI) for the virtual world conference was roughly $320,000 [16].

3.2 Communities of Practice

The above-mentioned examples presented concrete and observable cases of organization learning. However, communities of practice for the purposes of organization learning in virtual worlds are limited. Research on business-related virtual CoPs in virtual worlds is still a relatively new area. As more organizations establish presences in virtual worlds, more research data will provide a better understanding of the processes and outcomes. Here is a small sample of professional organizations that serve as venues for virtual CoPs:
• American Society for Training and Development (ASTD): http://slurl.com/secondlife/ASTD%20Island/113/84/23
• International Society for Technology in Education (ISTE) Islands: http://slurl.com/secondlife/ISTE%20Island/93/83/30
• New Media Consortium (NMC) Campus: a large consortium of universities, organizations, and museums that supports events, classes, demonstrations, and art exhibits. http://slurl.com/secondlife/NMC%20Campus/136/91/23
• Gronstedt Group: weekly “Train for Success” sessions bring training and communication professionals together globally to explore new developments in leading corporations. http://slurl.com/secondlife/Wolpertinger/161/82/51
4 Conclusion and Future Trends

In this paper, the applications of adult learning theory and communities of practice to organization learning in virtual worlds were reviewed, and examples of workplace-related learning were introduced. Although virtual worlds have been in existence for decades [17], it was not until the introduction of Second Life to the public in 2003 that establishing communities in virtual worlds became a norm in the academic and business worlds. It is not yet clear how organizations can most effectively explore the opportunities offered by virtual worlds for organization learning. More studies are needed on how communities of practice in virtual worlds can contribute to knowledge construction, collaboration, and motivation. What will the future hold for organization learning through virtual CoPs in virtual worlds? In addition to the affordances of technology and the usability of virtual worlds, it is also important to cultivate a sense of belonging to encourage information sharing, collaboration, and interaction. The development of virtual worlds may prove as exciting as that of the World Wide Web in the 1990s.
References
1. The New Media Consortium: The Horizon Report, 2007 Edition (2007), http://www.nmc.org/horizon/2007/report (retrieved April 18, 2009)
2. Mason, H.: Experiential Education in Second Life. In: Livingstone, D., Kemp, J. (eds.) Proceedings of the Second Life Education Workshop, pp. 14–18 (2007), http://www.simteach.com/slccedu07proceedings.pdf (retrieved April 15, 2009)
3. Jarmon, L.: Learning in Virtual World Environments: Social Presence, Engagement, & Pedagogy. In: Encyclopedia of Distance and Online Learning. IGI Global (2008)
4. Knowles, M.S.: Andragogy in Action: Applying Modern Principles of Adult Education. Jossey-Bass, San Francisco (1984)
5. Zielke, M.A., Roome, T.C., Krueger, A.B.: A Composite Adult Learning Model for Virtual World Residents with Disabilities: A Case Study of the Virtual Ability Second Life® Island [Electronic Version]. Journal of Virtual Worlds Research 2 (2009), http://jvwresearch.org/ (retrieved April 17, 2009)
6. Kimble, C., Hildreth, P.: Communities of Practice: Going One Step Too Far? [Electronic Version] (2005), http://ideas.repec.org/p/wpa/wuwpio/0504008.html (retrieved April 15, 2009)
7. Lave, J., Wenger, E.: Situated Learning: Legitimate Peripheral Participation. Cambridge University Press, Cambridge (1991)
8. Wenger, E.: Communities of Practice: Learning as a Social System [Electronic Version] (1998), http://www.co-i-l.com/coil/knowledge-garden/cop/lss.shtml (retrieved April 16, 2009)
9. Dubé, L., Bourhis, A., Jacob, R.: Towards a Typology of Virtual Communities of Practice [Electronic Version]. Interdisciplinary Journal of Information, Knowledge, and Management 1, 69–93 (2006), http://www.ijikm.org/Volume1/IJIKMv1p069-093Dube.pdf (retrieved April 19, 2009)
10. Kondratova, I.L., Goldfarb, I.: Virtual communities: design for collaboration and knowledge creation. In: Proceedings of the European Conference on Products and Processes Modeling, ECPPM 2004 (2004), http://iit-iti.nrc-cnrc.gc.ca/iit-publications-iti/docs/NRC-47157.pdf (retrieved April 15, 2009)
11. Gannon-Leary, P., Fontainha, E.: Communities of Practice and virtual learning communities: benefits, barriers and success factors (2007), http://www.elearningeuropa.info/files/media/media13563.pdf (retrieved April 16, 2009)
12. Cross, J., O'Driscoll, T., Trondsen, E.: Another Life: Virtual Worlds as Tools for Learning [Electronic Version]. eLearn Magazine (2008), http://www.elearnmag.org/subpage.cfm?article=44-1&section=articles
13. Werner, T.: Using Second Life for workplace learning (March 25, 2009), http://www.slideshare.net/twerner/using-second-life-for-workplace-learning032509?type=powerpoint (retrieved April 10, 2009)
14. Hanna, A.: Palomar Medical Center West (2008), http://www.collaborationproject.org/display/case/Palomar+Medical+Center+West (retrieved April 10, 2009)
15. Werner, T.: Best use of virtual worlds for learning (January 30, 2009), http://www.brandon-hall.com/awards/awards/?p=380 (retrieved April 20, 2009)
16. Linden Lab: How Meeting in Second Life Transformed IBM's Technology Elite into Virtual World Believers (2009), http://secondlifegrid.net.s3.amazonaws.com/docs/Second_Life_Case_IBM.pdf (retrieved April 20, 2009)
17. Damer, B.: Meeting in the Ether: A Brief History of Virtual Worlds as a Medium for User-Created Events [Electronic Version]. Journal of Virtual Worlds Research 1 (2008), http://www.jvwresearch.org/v1n1.html (retrieved April 1, 2009)
A Methodology and Framework for the Semi-automatic Assembly of Learning Objects

Katrien Verbert1, David Wiley2, and Erik Duval1

1 Dept. Computerwetenschappen, Katholieke Universiteit Leuven, Celestijnenlaan 200A, B-3001 Leuven, Belgium
{Katrien.Verbert,Erik.Duval}@cs.kuleuven.be
2 Instructional Design and Technology Department, Brigham Young University, Provo, UT, USA
[email protected]
Abstract. One of the major obstacles in developing high quality content for learning is the substantial development cost and effort. In addition, the return on investment is often low, as developed learning materials are difficult to reuse and adapt to new and different educational contexts. In this paper, we present a semi-automatic content assembly methodology to automate, at least partially, the reuse of existing learning content in high quality and effective learning sequences. In addition, we present a case study that integrates the approach into the LAMS learning design environment. Keywords: learning object reuse, learning object metadata, learning design.
1 Introduction

Many existing course documents merge the representation of content with the instructional approach [1]. Such hardwired pedagogy restricts the options for teaching and learning, both in terms of reusability and in terms of adapting learning sequences. Typically, teachers create their teaching strategies and content from scratch, or reuse parts of existing course documents through ad-hoc and time-consuming copy-and-paste actions. In addition, adaptation to individual learning or teaching styles, background, experiences, interests or preferences is generally not possible, unless learning content is specifically designed for personalization purposes [2]. In this paper, we present a semi-automatic content assembly methodology for the generation of learning sequences tailored to different pedagogical approaches, based on the explicit design of these sequences by a teacher. The assembly framework employs a teacher model, an instructional model and a domain model to enable the focused retrieval and aggregation of learning resources into learning sequences. Learning resources are retrieved through the GLOBE network of educational repositories [http://globe-info.org/] and from various community-driven websites, such as WikiAnswers.com, ProProfs.com and Wikipedia. The assembly framework is described in the next section. We present a case study that integrates the approach into the LAMS [3] learning design environment in Section 3. Related work is discussed in Section 4, followed by conclusions and remarks on future work.

U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 757–762, 2009. © Springer-Verlag Berlin Heidelberg 2009
K. Verbert, D. Wiley, and E. Duval
2 Content Assembly Framework

The content assembly framework supports the selection and assembly of existing learning resources. The framework employs the following models to enable the focused retrieval and aggregation of resources:
- The instructional model captures the semantics of the pedagogical strategy employed by a learning sequence and is based on [4]. Narrative structures within this model outline the flow of concepts of a particular learning design strategy and are used as templates when assembling learning sequences. An example is an inquiry based learning strategy that sequences activities like "answer questions", "vote on a list", "discuss responses", "read expert view", "discuss expert view" and "personal reflection".
- The domain model represents the knowledge domain of a course. It includes concepts outlined in the objectives of a course and their interrelationships.
- The teacher model defines teacher attributes to enable the personalized aggregation of learning resources [5]. The model includes attributes for representing the level of expertise of the teacher, interests and activities, teaching strategy preferences, background, and presentation styles (Fig. 1).
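The three models above can be sketched as plain data structures that together parameterize a retrieval request. The field names and example values below are purely illustrative; the paper does not fix a concrete schema for any of the models.

```python
# Illustrative data-structure sketch of the instructional, domain and
# teacher models. All field names are assumptions for this example.

instructional_model = {
    "strategy": "inquiry-based",
    "narrative": ["answer questions", "vote on a list", "discuss responses",
                  "read expert view", "discuss expert view", "personal reflection"],
}

domain_model = {
    "concepts": ["velocity", "acceleration"],
    "relations": [("acceleration", "rate-of-change-of", "velocity")],
}

teacher_model = {
    "expertise": "intermediate",
    "interests": ["physics", "inquiry learning"],
    "language": "en",
    "student_age": (15, 17),
}

# A retrieval request pairs one narrative activity with the domain
# concepts and a teacher attribute:
request = (instructional_model["narrative"][0],
           domain_model["concepts"],
           teacher_model["language"])
print(request)   # ('answer questions', ['velocity', 'acceleration'], 'en')
```

Each narrative activity thus becomes one focused query over the domain concepts, filtered by teacher attributes.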
Fig. 1. Semi-automatic content assembly framework
The assembly engine maps instructional, domain and teacher concepts to PLQL queries and federates the queries to SQI-enabled repositories. The approach is exemplified in [6]. PLQL [7] is primarily a query interchange format for repositories. SQI [8] is a query transport standard that is widely used within the technology enhanced
learning community. The GLOBE alliance [http://globe-info.org/] builds on the SQI standard to enable worldwide access to learning repositories. Moreover, to enable retrieval of relevant content resources on the Web, several SQI wrappers were built on top of community driven websites that host large amounts of content, such as WikiAnswers.com, ProProfs.com and Wikipedia. The wrappers retrieve both relevant pages and relevant fragments within the pages. The engine typically exploits the structure of pages to identify content fragments that are reusable, such as individual questions and answers of multiple choice questions or animations within HTML pages. Simple screen scraping approaches are employed to retrieve relevant parts of domain specific websites. Depending on the granularity of the narrative concept, single assets or larger compositions are retrieved, such as single questions versus entire surveys.
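The retrieval pipeline just described — mapping concepts to a query, fanning it out to several targets, and exploiting page structure to extract reusable fragments — can be illustrated with a minimal self-contained sketch. The keyword-style query stands in for PLQL (real PLQL is a richer structured format), the `<li class="question">` markup is an assumed example rather than any actual site's structure, and both wrapper functions are invented stand-ins for SQI bindings.

```python
from html.parser import HTMLParser

def build_query(domain_concepts, instructional_concept):
    # Stand-in for a PLQL query; real PLQL is not a keyword string.
    return " AND ".join(list(domain_concepts) + [instructional_concept])

class QuestionScraper(HTMLParser):
    """Wrapper-style extraction: pull individual questions out of a page
    whose structure is known (here, <li class="question"> items)."""
    def __init__(self):
        super().__init__()
        self.in_question = False
        self.questions = []

    def handle_starttag(self, tag, attrs):
        if tag == "li" and ("class", "question") in attrs:
            self.in_question = True
            self.questions.append("")

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_question = False

    def handle_data(self, data):
        if self.in_question:
            self.questions[-1] += data.strip()

def wiki_wrapper(query):
    # Fake community site: always returns the same page. A real wrapper
    # would fetch pages and filter them by the query terms.
    page = ('<ul><li class="question">What is velocity?</li>'
            '<li class="question">What is acceleration?</li></ul>')
    scraper = QuestionScraper()
    scraper.feed(page)
    return scraper.questions

def repo_wrapper(query):
    return [f"repository hit for [{query}]"]

def federate(query, targets):
    """Send the same query to every target and merge the results."""
    results = []
    for target in targets:
        results.extend(target(query))
    return results

q = build_query(["velocity", "acceleration"], "answer questions")
print(federate(q, [repo_wrapper, wiki_wrapper]))
```

The point of the wrapper layer is that repositories and scraped community sites answer the same federated query through one interface.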
3 LAMS Case Study

We integrated the assembly approach into the LAMS Learning Activity Management System [3], which combines environments for authoring, running and monitoring learning designs. The LAMS authoring environment enables authors to sequence different types of learning activities, such as discussion activities and web polls, as illustrated in Fig. 2. In the next step, learning resources can be added to the learning activities. We have extended the LAMS authoring environment to automate, at least partially, this process. An author can create a sequence of activities or reuse an existing learning design. Suppose she wants to teach the concepts of velocity and acceleration with an inquiry based learning strategy that sequences the activities "answer questions" (a1), "vote on a list" (a2), "discuss responses" (a3), "read expert view" (a4), "discuss expert view" (a5) and "personal reflection" (a6). For activities a1 and a4, which have associated learning resources, the assembly engine generates content suggestions based on domain concepts (velocity and acceleration), instructional concepts (answer questions and read expert view) and teacher attributes (in our current prototype: language, familiar measures and weights, and typical student age range). Learning resources are retrieved on-the-fly from learning object repositories and online Web sources and shown in the content suggestions area, as illustrated in Fig. 2. To obtain a first indication of the quality of the generated content suggestions, a small-scale experiment was conducted in April 2009 at Brigham Young University, during a post-doctoral stay of the first author of this paper. Six staff members of the Instructional Design and Technology department and six students in history and social sciences teaching were asked to reuse an inquiry based sequence and to rate the quality of the generated content suggestions. Two dimensions of quality were assessed: relevancy and accuracy.
Relevancy measures whether the content suggestions are applicable and helpful for the task at hand. Accuracy is defined as the extent to which the content is correct, reliable and free of error. The mean for both dimensions on a 7-point scale was 6.58 (SD = 0.51). Although these results are only preliminary, they indicate that participants found the generated content highly relevant and accurate.
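Returning to the suggestion-generation step described above, the way the three models feed into one repository query can be sketched as follows. The field names (`keyword`, `type`, `language`, `age_range`) and the mapping from instructional concepts to resource types are illustrative assumptions, not the actual repository query schema.

```python
def build_suggestion_query(domain_concepts, instructional_concept, teacher):
    """Combine domain model, instructional model and teacher model
    into a single (hypothetical) repository query."""
    query = {
        # domain concepts become keyword criteria
        "keyword": list(domain_concepts),
        # the instructional concept constrains the resource type
        "type": {
            "answer questions": "assessment",
            "read expert view": "expository",
        }.get(instructional_concept, "any"),
        # teacher attributes filter the candidate resources
        "language": teacher.get("language", "en"),
        "age_range": teacher.get("age_range", (6, 18)),
    }
    return query

# hypothetical teacher model for activity a1 of the example sequence
teacher_model = {"language": "en", "age_range": (14, 16)}
q = build_suggestion_query(["velocity", "acceleration"],
                           "answer questions", teacher_model)
print(q["type"])   # resource type suggested for "answer questions"
```

In the actual system, such a query would be serialized into the query language of the target repository (e.g. an SQI-based repository) rather than kept as a dictionary.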
760
K. Verbert, D. Wiley, and E. Duval
Fig. 2. LAMS plug-in
A Methodology and Framework for the Semi-automatic Assembly of Learning Objects
761
4 Related Work Reuse is considered to be an effective strategy for building high-quality learning sequences [9]. Whereas both basic and applied research have been conducted on decomposing content into reusable components, little research is available on the automated reuse and assembly of content. In contrast, numerous research efforts have been made to support the development of adaptive personalized courses based on content that has been designed specifically for the course at hand [2]. Typically, multiple models are employed to support adaptivity. Dagger et al. [4] identify an instructional model, a learner model, a teacher model and a domain model. The ADAPT project [10] identifies the context of use, content domain, instructional strategy, instructional view, learner model, adaptation model and detection mechanism. The GRAPPLE project [11] identifies a domain model, a user model, a context model, an instruction model and an adaptation model. In this paper, we shifted the focus from the learner to the teacher, as automated assembly of existing learning resources requires quality control by the teacher. A range of tools currently exists to author learning sequences. The Reload LD Editor [12], aLFanet LD Editor [13], CopperAuthor [14] and ASK-LDT [15] are examples of form-based editors for authoring learning designs. MOT+ [16], LD Suite [17], LAMS [3] and ACCT [4] are visual editors. Rather than developing yet another learning design environment, we incorporated our assembly strategy into the widely used LAMS authoring environment. LAMS was chosen because it provides a visual user interface designed to be usable by teachers. In contrast, many of the form-based editors require good knowledge of the IMS Learning Design specification. In addition, LAMS was released as open source software in 2005 and has a large user community, which can provide a solid basis for targeted validation.
5 Conclusion and Future Work In this paper, we have presented a methodology and framework to automate the assembly of learning resources. The framework retrieves learning resources from the Web and GLOBE repositories based on a teacher model that captures teacher characteristics, an instructional model that captures the pedagogical strategy, and a domain model. The approach enables the focused retrieval and aggregation of content fragments tailored to different pedagogical approaches, teacher preferences, etc. In addition, we presented a case study that integrates the approach into LAMS. Future work will focus on validating the approach in real-world settings. One of the major motivations for integrating the approach into LAMS is that LAMS is already used on a global scale. By automatically capturing how students actually use the generated content suggestions, we will obtain good indications of the quality of the generated learning sequences. Acknowledgements. The research leading to these results has received funding from the European Community Seventh Framework Programme (FP7/2007-2013) under grant agreement no 231396 (ROLE) and grant agreement no 231913 (STELLAR).
References 1. Bush, M.D., Mott, J.D.: The Transformation of Learning with Technology. LearnerCentricity, Content and Tool Malleability, and Network Effects. Educational Technology Magazine (March-April 2009) 2. Vercoustre, A., McLean, A.: Reusing Educational Material for Teaching and Learning: Current Approaches and Directions. International Journal on E-Learning 4(1), 57–68 (2005) 3. Dalziel, J.R.: Implementing Learning Design: The Learning Activity Management System (LAMS). In: Crisp, G., Thiele, D., Scholten, I., Barker, S., Baron, J. (eds.) Interact, Integrate, Impact: Proceedings of the 20th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education, December 7-10, Adelaide (2003) 4. Dagger, D., Wade, V., Conlan, O.: Personalisation for All: Making Adaptive Course Composition Easy. IFETS Journal of Educational Technology and Society, Special Issue on Authoring of Adaptable and Adaptive Educational Adaptive Hypermedia (2005) 5. Virvou, M., Moundridou, M.: Adding an Instructor Modelling Component to the Architecture of ITS Authoring Tools. International Journal of Artificial Intelligence in Education 12, 185–211 (2001) 6. Wiley, D.: Learning objects and the new CAI: So what do I do with a learning object (1999), http://opencontent.org/docs/instruct-arch.pdf 7. Ternier, S., Massart, D., Campi, A., Guinea, S., Ceri, S., Duval, E.: Interoperability for Searching Learning Object Repositories: The ProLearn Query Language. D-Lib Magazine 14(1/2) (2008) 8. Simon, B., Massart, D., van Assche, F., Ternier, S., Duval, E., Brantner, S., Olmedilla, D., Miklos, Z.: A Simple Query Interface for Interoperable Learning Repositories. In: Proceedings of the 1st Workshop on Interoperability of Web-based Educational Systems, pp. 11–18 (2005) 9. Schluep, S.: Modularization and structured markup for web-based learning content in an academic environment. Shaker Verlag, Aachen (2005) 10. 
Garzotto, F., Cristea, A.I.: ADAPT: Major design dimensions for educational adaptive hypermedia. In: Proc. of ED-MEDIA 2004, June 21-26, pp. 1334–1339 (2004) 11. De Bra, P., Pechenizkiy, M., van der Sluijs, K., Smits, D.: GRAPPLE: Integrating Adaptive Learning into Learning Management Systems. In: Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2008, pp. 5183–5188. AACE, Chesapeake (2008) 12. Reload, Project, http://www.reload.ac.uk/ (accessed April 20, 2009) 13. Van Rosmalen, P., Boticario, J.: Using Learning Design to Support Design and Runtime Adaptation. In: Koper, R., Tattersall, C. (eds.) Learning Design. A Handbook on Modelling and Delivering Networked Education and Training, The Netherlands, pp. 291–301. Springer, Heidelberg (2005) 14. Van der Vegt, W.: CopperAuthor. Heerlen: Open University of The Netherlands (2005), http://www.coppercore.org 15. Sampson, D.G., Karampiperis, P., Zervas, P.: ASK-LDT: A Web-based learning scenarios authoring environment based on IMS Learning Design. Advanced Technology for Learning 2(4) (2005) 16. Paquette, G., Lundgren-Cayrol, K., Léonard, M.: The MOT+ Visual Language for Knowledge-Based Instructional Design. In: Botturi, Stubs (eds.) Handbook on Virtual Instructional Design Languages (2008) 17. Elive Learning Design, http://www.elive-ld.de/content/index_eng.html
Search and Composition of Learning Objects in a Visual Environment Amel Bouzeghoub, Marie Buffat, Alda Lopes Gançarski, Claire Lecocq, Abir Benjemaa, Mouna Selmi, and Katherine Maillet Institut TELECOM, TELECOM & Management SudParis, CNRS Samovar 9 Rue Charles Fourier, 91011 Evry Cedex France {Amel.Bouzeghoub,Marie.Buffat,Alda.Gancarski,Claire.Lecocq, Abir.Benjemaa,Mouna.Selmi,Katherine.Maillet}@it-sudparis.eu
Abstract. This paper presents a complete visual environment which supports the search and composition of learning objects (LOs). It focuses on the end user, learner or teacher. Learners search for LOs in order to learn a new concept or to follow a lesson. Teachers search for LOs for direct use during their lessons or in order to reuse and assemble them with others, thus creating their own, novel LO. However, the inner complexity of an LO makes searching for and reusing composed LOs a complex task as well, and the end user has to be assisted during this task. The core of our environment is built on a navigational, iterative query language and a composition model. Such a query language is complex: the end user cannot express search queries directly in its textual form, and likewise the teacher cannot use a complex textual language to compose a new LO. Our environment is therefore a suite of visual interfaces, supporting interaction with the end user while hiding the inner complexity of the system. Finally, a validation module checks the consistency of a composed LO and provides for the dynamic annotation of metadata. Keywords: Learning Object Composition, Visual Search, Dynamic Annotation.
1 Introduction The Internet has facilitated the development of a large number of web-based educational systems. These systems manage pedagogical resources, also called Learning Objects (LOs), available on the Web. In [1], several repositories of LOs are cited, such as the ARIADNE knowledge pool [2]. The reuse of existing LOs is a major issue in organizations in which many LOs are created. In order to facilitate the search and reuse of LOs, several standards like LOM [3] and SCORM [4] were created to define sets of metadata describing existing LOs. These standards are used in web-based educational systems designed for sharing LOs, and have been quickly adopted by the general public. The first feedback from operational systems yielded two conclusions: (1) Describing LOs simply by using a set of metadata is insufficient; semantics have to be added to this description in order to enrich search and reuse (composition) processes and adaptation processes, and to improve application interoperability; (2) The efficient reuse of LOs requires the definition of rich composition operators, which remains a
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 763–768, 2009. © Springer-Verlag Berlin Heidelberg 2009
764
A. Bouzeghoub et al.
complex topic [5]. To remedy these two weaknesses, several models have been proposed which enrich the semantic dimension of the standards. For example, SIMBAD [6] includes semantic models of the learning domain, learners and composed LOs. The search for an LO may be done by a learner or by a teacher: learners search in order to learn a concept or a lesson; teachers search either to use an LO directly during their lessons or to reuse it with others, thus creating a new LO. Nevertheless, given the inner complexity of LOs, the search, reuse and annotation of complex LOs are also complex. Existing tools are inadequate: they do not provide any support to authors (no clear visualization of the components of an LO, no rich query language, no composition language using existing LOs [7]). To our knowledge, only a few works provide partial answers to these problems: [8], in which composition is not taken into account, and [9], a project which is still under study. We therefore propose an iterative approach to search for LOs: the end user browses a set of LOs and, within the inner structure of each one, chooses the ones he/she is interested in. The end user, in the case of a teacher, then composes his/her own LO. An iterative, navigational query language is complex, and end users cannot be expected to interact with it textually. Moreover, query results are complex: the number of LOs may be too great to be presented as a list, and the inner structure of an LO may be complex and recursive. To resolve these problems we propose a sequence of rich and intuitive visual interfaces. This paper is organized as follows: Section 2 describes the SIMBAD model we used to build our system. Section 3 presents the system architecture, describing the different user interfaces, the query engine, the composition validation and the dynamic annotation. Section 4 concludes and proposes perspectives for our work.
2 SIMBAD Model The SIMBAD model includes semantic models of the knowledge domain (domain ontology), of the learner (her/his knowledge, preferences) and of the LO (content, prerequisites, knowledge gained at the end of the learning). An LO may be atomic or complex. A complex LO is built by applying (if necessary, recursively) composition operators on LOs (atomic or complex). The composition of an LO is a graph. This graph can only have one entry (one LO) but may have several exits. We have chosen five operators, three simple operators (SEQ for the sequence, PAR for parallelism, ALT for alternative) and two more complex operators (AGG for aggregation of two LOs and PROJ to define an LO by projection of another). For example, let R10 be a complex LO; its composition graph is defined by: R10 = R1 SEQ (R5 ALT (R2 SEQ (R3 PAR R4))). R1 is atomic; R2 to R5 are complex.
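The composition algebra above can be illustrated with a small sketch that encodes the binary operators as a tree. This is an intentional simplification: SIMBAD compositions are graphs with possibly several exits, and the entry-point computation below assumes the entry is reached through the leftmost operand, which is not stated in the model itself.

```python
from dataclasses import dataclass

@dataclass
class Atomic:
    name: str

@dataclass
class Compose:
    op: str            # one of SEQ, PAR, ALT, AGG, PROJ
    left: object       # Atomic or Compose
    right: object

def entry(lo):
    """Return the single entry LO of a composition, assuming (for this
    sketch) that the entry is the leftmost operand of the expression."""
    while isinstance(lo, Compose):
        lo = lo.left
    return lo.name

# R10 = R1 SEQ (R5 ALT (R2 SEQ (R3 PAR R4)))
R10 = Compose("SEQ", Atomic("R1"),
              Compose("ALT", Atomic("R5"),
                      Compose("SEQ", Atomic("R2"),
                              Compose("PAR", Atomic("R3"), Atomic("R4")))))
print(entry(R10))   # R1
```

A check like `entry` is the kind of structural constraint the validation module enforces (one entry node); a faithful implementation would operate on the composition graph rather than on this tree encoding.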
3 System Architecture The system architecture is presented in Fig. 1. The following scenario illustrates the use of the different components. A teacher searches for an LO in order to compose a new course. She/He specifies her/his search criteria in the query interface (Fig. 1, n°1). This query is sent to the query engine, which interacts with the knowledge server and sends the answers back to the results visualization interface (Fig. 1,
n°2). At this stage, the teacher may explore the structure of the LO (Fig. 1, n°3). The teacher may copy/paste each LO, or a component of an LO that she/he wants to keep (Fig. 1, n°2 and 3), into the composition editor (Fig. 1, n°4). When she/he validates a composition, the validation module checks its validity and annotates it automatically. The result of these annotations is proposed in an annotation interface (Fig. 1, n°5) for final validation and storage in the knowledge server.
Fig. 1. System architecture: query interface (1), results visualization interface (2), LO visualization interface (3), composition editor (4) and adding-LO form (5), connected to the query engine, the validation module and the knowledge server
3.1 Search and Composition User Interfaces The user interacts with four interfaces for searching and composing. The query interface (n°1) is a form which allows the user to send requests by specifying the result criteria, in an iterative way (the user always has the possibility to refine his/her search). The result interface (n°2) presents all the LOs corresponding to the query results in a structured way. These results are not all visible on the screen simultaneously because there would be too much information; the user must therefore be able to navigate through them in an intuitive way. The LO visualization interface (n°3) must make it possible to explore the composition, potentially recursive, of an LO. Finally, the composition interface (n°4), which is offered to authors, is a graphic editor with which they can compose their own LOs by re-using existing LOs. A study of visualization techniques, confronted with the needs of our result interface (n°2), shows that the use of 2D is the most appropriate. Textual display, although very easy to use, cannot structure the results; 3D, although it offers intuitive visualization, imposes navigation features which quickly become disturbing (although we usually see in 3D, we move in 2D on the ground). 3D can be justified for displaying a very large amount of information, which is not necessary in our case. Among the 2D visualization tools, Grokker [10] (a search meta-engine) can be easily adapted to our needs. The results are dynamically generated, organized in a tree-like classification, and the user explores these results by navigating from the highest level. Grokker's visualization principles apply equally to our results interface (n°2, results organized with the concepts defined in the domain ontology) and to our LO visualization interface (n°3, recursive composition structure).
As with Grokker's tool, only one "step" of the tree can be visualized at a time, which helps the user build his/her mental model. In our application, a subset of the domain ontology is returned for each query.
Since this subset is voluminous, the user builds his/her mental model of the ontology step by step, with a reasonable cognitive load. Because the user perceives the ontology progressively throughout his/her search, he/she also learns progressively, driven by his/her interests. Fig. 2 illustrates the initial visualization of the query results and then the navigation (zoom) on this result. The LO composition interface (n°4) is a free editor. The author can search for LOs or parts of LOs by using the result and LO visualization interfaces (n°2 and 3). He/she can select them and drop them into his/her composition space. He/she can then define operators between the LOs and thus create a new LO which will be added to the system.
Fig. 2. Results windows, initial (left) and after a zoom (right)
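The one-step-at-a-time zooming just described can be sketched as follows; the nested-dict encoding of the ontology subset and the function signature are illustrative assumptions, not the system's actual data structures.

```python
def visible_step(tree, path):
    """Return only the node at `path` and its direct children:
    the single 'step' of the tree-like classification that is
    displayed at a time, as in the Grokker-style interface."""
    node = tree
    for label in path:          # follow the user's successive zooms
        node = node[label]
    return {"focus": path[-1] if path else "root",
            "children": sorted(node.keys())}

# toy stand-in for the ontology subset returned with a query
ontology = {"Physics": {"Kinematics": {"Velocity": {}, "Acceleration": {}},
                        "Dynamics": {}}}
print(visible_step(ontology, []))                        # initial view
print(visible_step(ontology, ["Physics", "Kinematics"]))  # after two zooms
```

Each zoom replaces the current view by the selected child's step, so the cognitive load stays bounded while the user's mental model of the ontology grows path by path.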
3.2 Query Processor The query processor takes queries from the user interface (n°1) and translates them to be sent to the Ontobroker knowledge server. Ontobroker accepts queries and commands to add facts to the knowledge base, specified in F-Logic. Facts correspond to semantic descriptions of LOs; queries contain the criteria specified by the user to search for LOs. When the user searches for an LO, each input of the user interface form is checked: if an input is filled, the associated criterion is used to generate an F-Logic query. For example, query R1 searches for LOs having a SIMBAD metadata description. In R1, O is the variable representing an LO; this variable is instantiated and returned. Suppose that, facing the results of R1, the user refines the search criteria by filling in the title input of the form. Query R2 is then obtained.

R1: FORALL O,X,N,M <- N#O[N#hasMetadata->N#X:N#SIMBAD]@M
R2: FORALL O,X,N,M <- N#O[N#hasMetadata->N#X:N#SIMBAD]@M
    AND EXISTS G1,T N#X[N#hasElement->N#G1:N#General[N#title->T]]@M
    AND contains(T, title)@M

where title is the string entered by the user. Ontobroker returns the result of a query to the query processor. Let O1 be an LO belonging to the result; O1 may be described in the following way:
Search and Composition of Learning Objects in a Visual Environment
767
[O1, S1, "http://www.owlontologies.com/lom.owl#", "http://www.owlontologies.com/"#'lom.owl']

The relevant information about each LO in the result, such as the title and subject, is taken from its SIMBAD description. This information is sent to the user results visualization interface (n°2) as an XML document.

3.3 Validation and Annotation of Composed LOs The composition of an LO is performed either by following a composition pattern (a generic graph or composition model) or on the fly. The first case is safer because a pattern proposed by an expert or a teacher is normally valid, while the second may entail problems of composition validity. We focused on this second problem, with a particular interest in free composition. An LO is valid if its composition graph complies with a set of constraints that ensure coherence from a structural, semantic and pedagogic point of view. The structural validation checks whether the topography of the graph is correct, by controlling, for example, that the graph has only one start node. As a result, using graph patterns implies structural validity. The more complex semantic validation examines the coherency of the sequencing of the LO. It is necessary, for example, to verify that the level of acquisition of the LO increases with the progression in the graph and not the opposite, or that a learner having the required prerequisites has access to at least one path of the graph. Semantic validity is never assured, whatever the composition type (pattern-based or free composition). The pedagogic validation is based on the accordance of the composition graph with a known learning theory; this last type of validation is not implemented yet. The validation phase is followed by a phase of annotation before storing the new LO in the knowledge server. The author must enter the whole set of metadata which describes it. We propose to facilitate this task by generating some metadata automatically.
We use the LOM model, which comprises complex and lengthy categories, and it is difficult to motivate authors to describe their productions with such a model. Hence, the system deduces metadata values of a composed LO from the metadata of its components. For example, in the 'Life cycle' category, "contribution" indicates the authors who contributed to the modification of the LO, the date of the contribution and their role (e.g., author, editor). The contribution of a composed LO is the aggregation of the contributions of each atomic LO. The semantic metadata (contents, prerequisites) can also be generated automatically. Indeed, composition operators have well-defined semantics which make it possible to automatically generate the semantics of a composed LO from the semantics of its atomic components.
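The aggregation of 'Life cycle / contribution' metadata described above might look as follows; the dictionary structure and the example contributions are an illustrative stand-in for the actual LOM binding.

```python
def derive_contributions(components):
    """Aggregate the LOM 'Life cycle / contribution' entries of the
    component LOs into the metadata of the composed LO, keeping each
    distinct (entity, role, date) contribution once."""
    merged, seen = [], set()
    for lo in components:
        for c in lo.get("contributions", []):
            key = (c["entity"], c["role"], c["date"])
            if key not in seen:          # avoid duplicating shared authors
                seen.add(key)
                merged.append(c)
    return merged

# hypothetical atomic components of a composed LO
r3 = {"contributions": [{"entity": "A. Smith", "role": "author",
                         "date": "2008-05-01"}]}
r4 = {"contributions": [{"entity": "B. Jones", "role": "editor",
                         "date": "2009-01-10"},
                        {"entity": "A. Smith", "role": "author",
                         "date": "2008-05-01"}]}
print(len(derive_contributions([r3, r4])))   # distinct contributions kept
```

The semantic metadata would be derived similarly, but per operator: for instance, the prerequisites of `A SEQ B` can be taken from `A` alone, whereas `A PAR B` requires merging the prerequisites of both operands.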
4 Conclusion Today LO reuse is a hot research topic at the representation level, but few studies have been devoted to user friendly interactive interfaces for LO search and composition. In this context, our system is innovative because, through a specific visual environment, it allows end users (learners and teachers) to be able to easily express their
queries and view the results. Moreover, the system offers an author the possibility (1) to create a new LO by composing several existing ones, (2) to verify its structural, semantic and pedagogical validity, and (3) to annotate it, by automatically generating part of the associated metadata. Our system is a complete tool for managing LOs: creation, search, validation, annotation and insertion into the knowledge base. As a next step we plan to test the system with real users in a real context.
References 1. Beck, R.: Learning Objects Collections (2007), http://www.uwm.edu/Dept/CIE/AOP/LO_collections.html 2. Duval, E.: The Ariadne Knowledge Pool System. Communications of the ACM 44(5), 72–78 (2001) 3. Learning Technology Standards Committee: IEEE Standard for Learning Object Metadata, IEEE Std 1484.12.1 4. Advanced Distributed Learning Initiative (ADLI): Sharable Content Object Reference Model. The SCORM Content Aggregation Model, v. 1.2. adlnet.org/ (2007) 5. Harris, M.C., Thom, J.A.: Challenges facing the retrieval and reuse of learning objects. In: Workshop on Learning Object Repositories as Digital Libraries: Current Challenges, 10th European Conference on Digital Libraries (ECDL) (2006) 6. Duitama, F., Defude, B., Bouzeghoub, A., Lecocq, C.: A framework for the generation of adaptive courses based on semantic metadata. Multimedia Tools and Applications 25(3), 377–390 (2005) 7. Lopes Gançarski, A., Bouzeghoub, A., Defude, B., Lecocq, C.: Iterative search of composite learning objects. In: IADIS International Conference WWW/Internet, Vila Real, Portugal (October 2007) 8. Ramzay, J., McAteer, E., Harris, R., Allan, M., Henderson, J.: Flexible, structured support for the reuse of online learning objects. In: Networked Learning Conference (2004) 9. Chaudhry, A.S., Khoo, C.S.G.: Issues in developing a repository of learning objects for LIS education in Asia (2006) 10. http://www.grokker.fr (2009)
A Framework to Author Educational Interactions for Geographical Web Applications The Nhan Luong, Thierry Nodenot, Philippe Lopistéguy, and Christophe Marquesuzaà IUT de Bayonne Pays Basque, LIUPPA-DESI, 2 Allée du Parc Montaury 64600 Anglet, France {thenhan.luong,thierry.nodenot,philippe.lopisteguy, christophe.marquesuzaa}@iutbayonne.univ-pau.fr
Abstract. This paper focuses on the production of authoring tools that teachers may use to prototype interactive geographical web applications. We present computational models and a toolset that we designed to address the needs of teachers trying to make use of particular localized documents called "travel stories". Our research challenge is to enable teachers to design interaction scenarios for such a domain without any programmer intervention. In the design process, the teacher typically faces three activities: (a) identification of candidate documents, (b) evaluation of the adequacy of a document and (c) production of the learning application making use of the selected document. In this paper, we mainly focus on the production activity (c). We highlight the necessity of an "agile" approach to shorten as much as possible the delay between the design and the evaluation of a prototype. To address the technological challenges raised by such an aim, we present the WIND framework and discuss its capabilities through examples of interactive scenarios generated with it. Keywords: geographic information, interaction design, interaction programming, agile approach, web application, authoring framework.
1 Introduction Educational scenarios are particular design artifacts that take advantage of current advances in the "Science of Design" [1]. Research dedicated to the design of educational scenarios proposes new paradigms, concepts, approaches, models and theories that provide stronger bases for the design of TEL environments [2]. These bases are foundations that make it possible to improve the processes of coding, evaluating and maintaining this type of application. Recent works focus on: – the definition and role of a pedagogical scenario [3, 4], – the definition of visual instructional languages [5] and executable [6] scenarios, – the definition and the evaluation of methodological principles allowing designers to produce and to re-use such scenarios [7]. U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 769–775, 2009. © Springer-Verlag Berlin Heidelberg 2009
770
T.N. Luong et al.
Nowadays, there is also a strong emphasis on the process of transforming an abstract scenario (that a teacher is able to understand and design by himself) into an executable scenario (that an execution infrastructure can process). Several works promote model-driven engineering techniques and tools to fully integrate the functions supported by the chosen infrastructure. This approach is interesting not only because of the underlying model-driven scientific challenges [8], but also because model-driven transformations must respect the educational constraints specified in the scenario produced by the teacher [9]. Most of the identified works are still in progress and it is thus difficult to know whether they will soon provide teachers with toolboxes fitted to the design and implementation of constructivist learning situations. Moreover, the complexity of the required technologies may sideline the teacher (when the educational scenario becomes very detailed, when the scenario is deployed on a target infrastructure…). In most works, the author is a pedagogic engineer. Such designer profiles can be found in e-learning firms but not in most classrooms and teaching institutes: though they are not computer scientists, teachers must fully handle the design process (from the scenarization step to the deployment step). This paper focuses on these particular teachers: we propose a framework that they may use to edit/prototype and to evaluate by themselves an educational scenario. This framework targets the scenarization of interactive resources to be used in inquiry learning activities [10] for a specific application domain: travel stories. In the second section, we present the background and the objectives of this research. This leads us to present, in the third section, the WIND framework, which facilitates an "agile" production of interactive scenarios. The conclusion proposes a synthesis about WIND and our future directions of investigation.
2 Background and Objectives The DESI1 group aims to propose software architectures and tools to re-vitalize localized documents that generally rest in the depths of archives, museums and libraries. In particular, travel stories offer very challenging revitalization objectives because tourists and teachers could benefit from e-services developed from such localized documents. Travel stories have intrinsic characteristics that make them good teaching-resource candidates. A travel story is a text whose author tells what he discovered while travelling through a territory or a country. On the one hand, the author tries to present very precisely the places he/she visited; on the other hand, he/she tries to tell the events that occurred during the trip, reporting on his/her activities and explorations. Indeed, the travel story aims at using words to describe the travel reality: the story is told day after day, and the duration of the trip is often explicitly written in the text in conjunction with the travelled locations. Moreover, travel stories provide an opportunity to ground the design and operation of systems with text, map and calendar components that require extensive human-machine interaction. Following the experiment available at http://erozate.iutbayonne.univ-pau.fr/forbes2007/exp/, we proposed three authoring steps to assist as much as possible the process of authoring educational applications making use of travel stories. The first 1
DESI is a French acronym that means Electronic Documents, Semantics and Interaction.
teacher's task consists in selecting, in a corpus of documents, the travel stories that deal with the targeted geographical areas [11, 12]. The second task consists in evaluating the adequacy of the document with regard to the teacher's pedagogical aims [13]. The third task consists in producing a highly interactive application based on the semantics of the validated travel story. To this end, the teacher needs an authoring environment that enables him/her to quickly evaluate and correct his/her conceptual choices. This paper focuses on this third task. Following A. Gibbons' works [14, 15], our approach aims at bridging this design gap for the particular case of educational applications making use of travel stories. Gibbons' instructional design layers are: content, strategy, control, message, representation, media-logic and data-management. The design of each layer can be considered separately from the other layers, providing an important modularization of the design effort. Applied to our application domain (the study and design of educational applications making use of geographical information embedded in travel stories), Gibbons' design theory leads us to design interactions at four levels of abstraction, as suggested in [16]: 1. The most elementary level deals with the data and geographical information embedded in or retrieved from a text (cf. the data-management, media-logic and representation layers) that may be associated with goals and activities (input and output parameters), participants, and artifacts provided to participants (e.g. map, text, calendar components). 2. The second level (cf. the message layer) focuses on the messages exchanged during an interaction, their structure and the way they are generated from the data and geographical information manipulated by a learner. It also makes it possible to condition the execution of the interaction model on the satisfaction of a conditional expression.
For example, an interaction may become mandatory depending on previous interactions with an icon representing a particular place mentioned in a travel story. 3. The third level (cf. the control layer) considers the possibility of introducing decisions and commands that change the behavior of interaction scenarios according to the aims of participants having specific rights for the considered educational unit. For example, a learner can decide to mask the calendar component because he/she wants to ignore the travel story's chronology. 4. The fourth level (cf. the strategy layer) considers the use of events to trigger changes: events can occur dynamically during execution, triggered by the evaluation of time conditions, goal achievement or activity reports. For example, if the learner never interacts with the map component, he/she probably needs some support to take advantage of the map component's capabilities. In the next section, we present a framework that addresses interaction design according to these four abstraction levels.
3 A Framework to Facilitate an “Agile” Production of Interactive Scenarios WIND favors empirical design approaches enabling a teacher to easily formalize and evaluate his/her educational ideas by using (as a learner) the automatically generated
772
T.N. Luong et al.
application. The evaluation step is therefore used to check/criticize his/her pedagogical choices. We thus define the concept of an “agile” design tool as a piece of software supporting such an approach. Indeed, we may define an “agile” method as a design approach that not only fully involves the end-user along the whole process but also rapidly integrates his/her requirements into a technical solution [17]. The final quality of the generated application is ensured by a continuous control throughout the production process. As a consequence, each of the teacher’s pedagogical choices must be fully and automatically translated into executable code. This constraint implies the use of an applicative model as a design framework. An applicative model is a generic application model that may be instantiated throughout the design step; each instance of this model is then automatically translated into executable code. Our proposed design approach is based on a model-driven approach [18, 19], which is also used for TEL engineering [8]. We distinguish three levels: 1. The generic applicative model, which describes the core concepts of the application classes. 2. The application model, created by the teacher during his/her design step. This model is an instance of the previous generic model and describes the characteristics of the application desired by the teacher. 3. The code generated from the application model designed by the teacher. The WIND generic applicative model [20] defines the core of the interactive web applications that a teacher will be able to produce. An interaction may be simply defined as a triple <area, event, reaction>. Interaction is the central mechanism that characterizes the applications we wish to develop. The expressiveness of interactions ranges from simple interactions to more complex ones. 
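The triple-based core of the applicative model can be sketched as follows. This is an illustrative Python rendering; the class and field names are our own, not the actual WIND model.

```python
from dataclasses import dataclass

# Sketch of the WIND applicative model's central concept: an interaction
# defined as a triple <area, event, reaction>. Names are illustrative.

@dataclass
class Interaction:
    area: str      # reactive area, e.g. a place name highlighted in the text
    event: str     # triggering event, e.g. "click"
    reaction: str  # reaction to perform, e.g. centering the map on the place

# An application model is then a set of such instances:
app_model = [
    Interaction("place:Bayonne", "click", "centerMapOn:Bayonne"),
    Interaction("date:1843-07-12", "click", "highlightCalendar:1843-07-12"),
]

for i in app_model:
    print(f"on {i.event} over {i.area} -> {i.reaction}")
```

Each application model designed by the teacher is a collection of such triples, which a generator can then translate into executable code.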
This WIND generic applicative model is described in a WIND-XSD schema (available at http://erozate.iutbayonne.univ-pau.fr/Nhan/WINDv2/schema.xsd). Each concept of the model is described as a specific element. This XSD schema helps to instantiate the WIND generic applicative model into an XML format that describes the interactions of a WIND application model. That is to say, it makes it possible to describe a web-based application embedding textual, map and calendar interactive components. Taking advantage of JavaScript, the WIND generic applicative model is supported by a WIND-API that implements the different classes as well as their associated methods. The WIND-API provides a homogeneous layer built on lower-level APIs specialized in the handling of text, maps and calendars. To avoid any programmer intervention in the teacher activity devoted to the application production, we have developed a JavaScriptCodeGenerator2. The JavaScriptCodeGenerator can parse any WIND-XSD compliant data file (e.g. a WIND application model description) in order to generate JavaScript code for interactions that the WIND-API can execute. These technologies enable us to shorten the delay between the design and the evaluation steps of a prototype. The implementation of WIND interactions may simply be done in four main steps3 : 2
For example, see the XML file at http://erozate.iutbayonne.univ-pau.fr/Nhan/WINDv2/ data.xml and the automatically produced web application that the end-user can exploit: http://erozate.iutbayonne.univ-pau.fr/Nhan/WINDv2/generator.php?file=data.xml 3 A complete example is available at http://erozate.iutbayonne.univ-pau.fr/Nhan/WINDv2/
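The generation step can be sketched as follows. The XML format and the emitted `bind()` call below are simplified stand-ins for the real WIND-XSD schema and WIND-API, used only to illustrate the parse-then-emit principle.

```python
import xml.etree.ElementTree as ET

# Sketch of the code-generation step: parse an XML application model
# (a made-up minimal format, not the real WIND-XSD schema) and emit
# executable code for each declared interaction.

xml_model = """
<application>
  <interaction area="place:Bayonne" event="click"
               reaction="centerMapOn('Bayonne')"/>
</application>
"""

def generate_js(xml_text):
    root = ET.fromstring(xml_text)
    lines = []
    for node in root.findall("interaction"):
        lines.append(
            "bind('%s', '%s', function(){ %s; });"
            % (node.get("area"), node.get("event"), node.get("reaction"))
        )
    return "\n".join(lines)

print(generate_js(xml_model))
```

Because the schema constrains the input, every valid application model can be turned into runnable interaction code without programmer intervention, which is what shortens the design/evaluation loop.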
A Framework to Author Educational Interactions
773
1. Defining the components of the application and their characteristics. 2. Defining reactive areas for each component defined in the previous step. 3. Defining possible reactions for the reactive areas. 4. Defining interactions upon previously defined reactive areas and reactions.
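These four steps can be sketched as calls on a hypothetical builder object; the method names are illustrative, not the actual WIND-API.

```python
# Sketch of the four implementation steps, as a hypothetical builder.

class WindAppBuilder:
    def __init__(self):
        self.components, self.areas, self.reactions, self.interactions = [], [], {}, []

    def add_component(self, name, kind):               # step 1: components
        self.components.append((name, kind))

    def add_reactive_area(self, component, area):      # step 2: reactive areas
        self.areas.append((component, area))

    def add_reaction(self, name, effect):              # step 3: reactions
        self.reactions[name] = effect

    def add_interaction(self, area, event, reaction):  # step 4: interactions
        self.interactions.append((area, event, reaction))

app = WindAppBuilder()
app.add_component("map1", "map")
app.add_component("text1", "text")
app.add_reactive_area("text1", "place:Bayonne")
app.add_reaction("center", "centerMapOn('Bayonne')")
app.add_interaction("place:Bayonne", "click", "center")
print(len(app.interactions))  # 1
```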
4 Discussion and Future Directions WIND is an operational framework that allows both describing and implementing the interactions of web applications mixing texts, maps and calendars. This framework promotes an agile process fitted to designers without computer-science skills: its characteristics ease the description of interactions. Yet, WIND still needs further development. We must extend our current framework because WIND does not completely address the design of the four interaction layers presented in Section 2. The current version of the WIND framework correctly addresses the data layer: it enables designers to manage sensitive parts corresponding to the main concepts (places, dates, movement verbs, etc.) automatically retrieved from travel stories. As a consequence, interaction design can take advantage of these specific sensitive parts. However, we still need to extend the WIND framework to manage the more complex concepts of an itinerary in the same way. The current version of the WIND framework enables a designer to specify the messages exchanged with a learner, the semantics of the messages and their appearance. Moreover, conditional messages are easy to describe with WIND functionality, thus satisfying the requirements of the second level/layer. The current version of the WIND framework enables a designer to specify who controls an interaction, how it is initiated, and what the learner’s degrees of freedom are. WIND provides the functionality needed to design mixed-initiative interactions, thus satisfying the requirements of the third level/layer. However, the current version of the WIND framework fails to completely address the strategy layer because WIND does not provide any functionality to assess cognitive processes. The event-reaction mechanism implemented by WIND provides the functionality required to design reactions to cognitive events, but we do not currently provide designers with any means to detect such cognitive events. 
Our first experiments have shown that WIND is really helpful to rapidly design and assess inquiry activities making use of the semantics of travel stories. The research directions discussed above will certainly enhance the instructional design added value of the WIND framework. We also need to propose an evaluation protocol to determine to what extent teachers can concretely exploit the current WIND framework and its corresponding authoring tools.
Acknowledgments This research is supported by the French Aquitaine Region (project n°20071104037) and the Pyrénées-Atlantiques Department (“Pyrénées : Itinéraires Educatifs” project).
References 1. NSF 2007, Science of Design: National Science Foundation 07-505, Program Solicitation (2007), http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf07505 2. Tchounikine, P.: Pour une ingénierie des Environnements Informatiques pour l’Apprentissage Humain. Information Interaction Intelligence 2(1), 59–93 (2002) 3. Pernin, J.-P., Emin, V., Guéraud, V.: ISiS: An Intention-Oriented Model to Help Teachers in Learning Scenarios Design. In: Second European Conference on Technology Enhanced Learning, pp. 338–343 (2008) 4. Dillenbourg, P., Tchounikine, P.: Flexibility in macro-scripts for computer-supported collaborative learning. Journal of Computer Assisted Learning 23(1), 1–13 (2007) 5. Nodenot, T.: Scénarisation pédagogique et modèles conceptuels d’un EIAH: Que peuvent apporter les langages visuels? International Journal of Technologies in Higher Education 4(2), 85–102 (2007) 6. Ferraris, C., Martel, C.: LDL for Collaborative Activities. In: Botturi, L., Stubbs, T. (eds.) Handbook of Visual Languages for Instructional Design: Theories and Practices, pp. 226–253. IDEA Group, Hershey (2007) 7. Villiot-Leclercq, E.: Modèle de soutien à l’élaboration et à la réutilisation de scénarios pédagogiques. Doctorat en Sciences Cognitives de l’Université Grenoble 1 (2007) 8. Laforcade, P., Nodenot, T., Choquet, C., Caron, P.A.: MDE and MDA applied to the Modeling and Deployment of Technology Enhanced Learning Systems: promises, challenges and issues. In: Architecture Solutions for E-Learning Systems (2007) 9. Caron, P.-A.: Web Services Plug-in to Implement “Dispositives” on Web 2.0 Applications. In: Duval, E., Klamma, R., Wolpers, M. (eds.) EC-TEL 2007. LNCS, vol. 4753, pp. 457–462. Springer, Heidelberg (2007) 10. Olson, S., Loucks-Horsley, S.: Inquiry in the National Science Education Standards: a guide for teaching and learning. National Academies Press, Washington, DC (2000) 11. 
Loustau, P., Nodenot, T., Gaio, M.: Design principles and first educational experiments of π R, a platform to infer geo-referenced itineraries from travel stories. International Journal of Interactive Technology and Smart Education, ITSE (2009) 12. Gaio, M., Sallaberry, C., Etcheverry, P., Marquesuzaà, C., Lesbegueries, J.: A Global Process to Access Documents’ Contents from a Geographical Point of View. Journal of Visual Languages and Computing 19, 3–23 (2008) 13. Loustau, P., Nodenot, T., Gaio, M.: Spatial decision support in the pedagogical area: Processing travel stories to discover itineraries hidden beneath the surface. In: 11th AGILE International Conference on Geographic Information Science, pp. 359–378 (2008) 14. Gibbons, A.S.: What and how do designers design? A theory of design structure. TechTrends 47(5), 22–27 (2003) 15. Gibbons, A., Stubbs, T.: Using Instructional Design layers to categorize design drawings. In: Botturi, L., Stubbs, T. (eds.) Handbook of Visual Languages for Instructional Design: Theories and Practices. IDEA Group, Hershey (2007) 16. Caeiro-Rodríguez, M., Llamas-Nistal, M., Anido-Rifón, L.: The PoEML Proposal to Model Services in Educational Modeling Languages. In: Dimitriadis, Y.A., Zigurs, I., Gómez-Sánchez, E. (eds.) CRIWG 2006. LNCS, vol. 4154, pp. 187–202. Springer, Heidelberg (2006)
17. Vickoff, J.P.: Systèmes d’information et processus agiles. Hermes Science (2003) 18. Seidewitz, E.: What models mean. IEEE Software 20(5), 26–32 (2003) 19. Bézivin, J., Blay, M., Bouzeghoub, M., Estublier, J., Favre, J.-M.: Rapport de synthèse de l’Action Spécifique CNRS sur l’Ingénierie Dirigée par les Modèles: Action Spécifique MDA du CNRS (2005) 20. Luong, T.N., Etcheverry, P., Nodenot, T., Marquesuzaà, C.: WIND: an Interaction Lightweight Programming Model for Geographical Web Applications. In: International Opensource Geospatial Research Symposium, OGRS (to appear, 2009)
Temporal Online Interactions Using Social Network Analysis Álvaro Figueira Universidade do Porto, Faculdade de Ciências, DCC - CRACS Rua do Campo Alegre, 1021/1055, 4169-007 Porto, Portugal [email protected]
Abstract. Current Learning Management Systems generically provide online forums for interactions between students and educators. In this article we propose a tool, the iGraph, that can be embedded in Learning Management Systems that feature hierarchical forums. The iGraph is capable of depicting and analyzing online interactions in an easy-to-understand graph. The positioning algorithm is based on social network analysis statistics, taken from the collected interactions, and is able to smoothly present temporal evolution in order to find communicational patterns and report them to the educator. Keywords: Visualization of online interaction, Web-based learning, Automatic graph drawing, Temporal analysis, Online discussion forums.
1 Introduction Characterizing the interactions of a group that usually communicates in an online context is frequently not a simple task. We recognize that written communication has nowadays assumed particularities (emoticons, capitalizations, exaggerated punctuation) that were not considered in the past. We propose a tool to help characterize online interactions by depicting them in the form of a graph that, in turn, is built with the help of “social network analysis” indicators. The proposed graph represents all interactions that have occurred up to the drawing moment. This characteristic allows building a “history” of interactions, drawing each network state in a single frame. A slideshow of all available frames can then provide additional insight for the teacher, as he may analyze the class according to different key moments and observe its progress over time, observing for example actors that maintain leadership roles during most of the time, or actors that shift between more or less active positions at different moments.
2 Online Social Network Analysis Social Network Analysis (SNA) consists of the “mapping” and analysis of the relations between people, groups or organizations, through both a visual representation and a mathematical analysis. The visual representation results in a network that includes a set of actors that interact among themselves, as well as information flows. U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 776–781, 2009. © Springer-Verlag Berlin Heidelberg 2009
Temporal Online Interactions Using Social Network Analysis
777
The illustration of this network is represented as a graph, with actors as vertices and interactions as a set of ties between two or more vertices, represented by lines. We use the event of “replying to” a previously posted message as an atomic interaction. The counting process of the answers that were received and sent begins at the first “reply to” in a discussion. The analysis process in SNA generally takes three perspectives. The first concerns the actors’ positions individually; the second, group action; and the third, the group or community as a whole. Centrality measures are seen as fundamental attributes of a social network. We adopted Freeman’s [1] procedures to calculate centrality. According to Hanneman & Riddle [2], many sociologists argue that one of the basic properties of social structures is power. In our study, for reasons of semantic proximity, we will also use this concept with the understanding that it could also be read as “influence”. Scott [3] also recalls the need to carefully choose indicators and how to apply them, keeping in mind their properties and whether they are adequate and relevant for the study being conducted. For the iGraph we used three indicators. The Centrality Degree is calculated by counting the vertices that are adjacent to vertex i. Actors who obtain higher results for this indicator may be characterized as more autonomous and having more influence in the network. Clique identification in a network [1],[2] allows us to locate groups of actors where all possible connections are present and to expand our comprehension of the group at a global level. Density is one of the most widely used indicators [3],[4]. This indicator reveals, as a percentage, the high or low connectivity of a network and is defined as the ratio between the existing and the possible connections. The Centralization Index is an indicator for analyzing the network as a whole and is expressed as a percentage. 
It is characterized by the existence of an actor that clearly plays a central role, while being connected to all the vertices in the network.
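These indicators can be sketched on a small star-shaped network, using the standard definitions of degree, density and Freeman degree centralization (a pure star yields the maximal centralization of 100%).

```python
# Sketch of the iGraph indicators on a small undirected star network:
# A is connected to B, C and D; B, C, D only to A.

net = {  # adjacency sets
    "A": {"B", "C", "D"},
    "B": {"A"},
    "C": {"A"},
    "D": {"A"},
}

def degree(v):
    """Centrality degree: number of vertices adjacent to v."""
    return len(net[v])

def density():
    """Ratio between existing and possible connections."""
    n = len(net)
    edges = sum(len(adj) for adj in net.values()) / 2
    return edges / (n * (n - 1) / 2)

def centralization():
    """Freeman degree centralization: sum of differences to the maximum
    degree, normalized by the maximum possible sum (a star network)."""
    n = len(net)
    degs = [degree(v) for v in net]
    dmax = max(degs)
    return sum(dmax - d for d in degs) / ((n - 1) * (n - 2))

print(degree("A"), density(), centralization())  # star: centralization = 1.0
```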
3 Building a History The motivation for adding a history is to understand how the online community evolved over time. Although a graphical representation of the current state of an online community is of much use, that representation lacks the temporal dimension, which may hide important aspects of interaction that occurred in the past. For example, it is possible that, at some time during the interaction, there were actors that played important roles during the development of the interactions whose leadership was later overtaken by two or three other actors, so that in the present their past importance is hardly visible. 3.1 Drawing the iGraph There is a vast literature and research area concerning automatic graph drawing [5]. A variety of layout algorithms based on graph-theoretical foundations have been developed in the last three decades [6]. In 1963 Bill Tutte wrote a paper, “How to draw a graph” [7], in which he suggested an algorithm that places each vertex at the center of its neighbors. Much research has been pursued since then. However, some basic criteria, supported by psychology studies [8][9], still hold: vertices displaying the objects should not overlap each other, nor should the lines representing the edges. Moreover, one wants to minimize the edge crossings.
778
Á. Figueira
Our algorithm evolves from a set of basic principles/premises to improve readability and ease of understanding: a) distribute vertices avoiding overlapping; b) information hubs should tend to be placed near the center; c) minimize the crossing of edges and of vertices; d) group cliques; e) a dense net tends to spread equally. According to these principles, we built a model of “orbits” in which we place the vertices equally spaced in a clockwise manner. The outer orbit is placed near the border of the drawing canvas. In Fig. 4 we depict this model.
Fig. 4. Vertex positioning model
The orbit of each vertex is computed according to its centrality degree (the greater the degree, the closer to the center the orbit will be). The centralization index is used to compute the radius of the orbit closest to the center (a centralization index of 100% means that the smallest orbit will have a radius of 0). The net density parameter is used to set the number of possible orbits (a dense net will have more orbits). We present the drawing algorithm in Listing 1, where we define a k-clique as a clique with k vertices, and an object as either a clique or a vertex.
1. Clique Detection: identify all cliques of size ≥ 3
2. Clique Reduction: pairs of cliques of size n sharing n-1 vertices are treated as a single n-clique plus the other vertex
3. Let the total number of objects be the number of cliques plus the number of remaining vertices (outside of a clique)
4. Orbit assignment: for each object compute its orbit:
4.1. if the object is a vertex, compute its normal centrality degree C_i^CD
4.2. if the object is a k-clique, compute (Σ_i C_i^CD) / k as the clique centrality degree, where C_i^CD is the CD of vertex i
5. Layout: dispose objects clockwise, equally spaced
6. Vertex Permutation: for each clique, find a permutation P of its vertices that minimizes the distance to the vertices outside the clique: min_P { Σ_i distances(P_i) }
Listing 1. Vertex positioning algorithm
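The orbit-assignment idea of steps 4–5 can be sketched as follows; the orbit count, the radii and the degree-to-orbit mapping are illustrative parameters, not the exact values used by the iGraph.

```python
import math

# Sketch of orbit placement: assign each vertex an orbit by its degree
# (higher degree -> closer to the center), then space the vertices of
# each orbit equally, clockwise.

def layout(degrees, n_orbits=3, canvas_r=100.0, inner_r=20.0):
    dmax = max(degrees.values())
    # orbit 0 is innermost; radii grow toward the canvas border
    radii = [inner_r + i * (canvas_r - inner_r) / max(n_orbits - 1, 1)
             for i in range(n_orbits)]
    per_orbit = {}
    for v, d in degrees.items():
        orbit = min(n_orbits - 1, int((1 - d / dmax) * n_orbits))
        per_orbit.setdefault(orbit, []).append(v)
    positions = {}
    for orbit, verts in per_orbit.items():
        step = 2 * math.pi / len(verts)
        for k, v in enumerate(verts):
            a = -k * step  # negative step -> clockwise
            positions[v] = (radii[orbit] * math.cos(a),
                            radii[orbit] * math.sin(a))
    return positions

pos = layout({"A": 3, "B": 1, "C": 1, "D": 1})
print(round(math.hypot(*pos["A"]), 1))  # 20.0: the hub sits on the inner orbit
```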
3.2 Creating Temporal Continuity Our system is based on a series of sequential graphs, each restricted to a temporal frame. Each frame is replaced by the next frame manually, or in automatic mode (in a
slideshow). Coherency of the graph layout between different frames is ensured by establishing two important premises: the minimum temporal slot for each time frame is the “reply to” interaction, and the algorithm for graph drawing must be deterministic, to create an illusion of movement and vertex-positioning continuity. Considering the “reply to” relation, the following may happen: a) the network density changes; b) the centralization index changes; c) the centrality degrees of two vertices change; d) a new clique is formed. In situations a) and c), although vertices may change their size, the illusion of continuity is preserved. Situation a) creates more orbits and therefore triggers a new assignment of vertices to orbits (preserving continuity). Situation d) may lead to the creation of a neighboring clique that may continuously increase its size, eventually “absorbing” the other clique. If this process culminates in the dissolution of the previous clique, favoring the new one, then a new permutation of vertices has to be found (as in step 6 of the listing). This situation hampers the illusion of continuity between frames and has to be solved by performing a local animation on the involved vertices. To better understand the algorithm we provide an example in Table 2:
Table 2. Example of interaction between three actors (cells give the posting actor; columns 0–2 give the thread level)
Posts 1–6 — level 0: A (post 1); remaining posts: B, C, A, B, B
Posts 7–12 — level 1: A, B; level 2: C, C, A, B
According to the interactions listed in Table 2, there are 11 interactions to consider: from post 2 to post 12. In Fig. 6 we show four time frames of the temporal evolution of the iGraph. For the sake of understanding and simplicity, we depict the frames corresponding to “moment zero” (T=0) and to interactions 5, 8, and 12 (the final iGraph).
Fig. 6. Illustration of four time frames for temporal interactions listed in Table 2
From the analysis of Fig. 6 the benefits of using a temporal evolution in the iGraph are clear: without the temporal evolution, one would look at time frame 12 and conclude that actor C has been away from the discussions and that actor B has a leader role in the forum. However, by observing the evolution of the iGraph it is possible to see that actor B, by interaction 5, had a much smaller
importance and is in fact the “outsider”. Only by interaction 8 do all actors have the same number of interactions with each other, and from then on actor B takes the lead.
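The frame-by-frame construction can be sketched as follows; the event list is illustrative and does not reproduce the exact data of Table 2.

```python
from collections import Counter

# Sketch of building frame-by-frame statistics: each "reply to" event is
# one time frame; after each event a snapshot of the interaction counts
# is stored. Events are (replier, replied_to) pairs.

events = [("B", "A"), ("C", "B"), ("A", "C"), ("B", "A"), ("B", "A")]

frames = []
counts = Counter()
for replier, replied_to in events:
    counts[replier] += 1         # replies sent
    counts[replied_to] += 1      # replies received
    frames.append(dict(counts))  # snapshot = one time frame

print(frames[0])   # state after the first interaction
print(frames[-1])  # final frame, as drawn at the last moment
```

Replaying the stored snapshots one by one yields the slideshow described above, with each frame differing from the previous one by exactly one "reply to" interaction.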
4 The Interface The iGraph system uses the LMS forums to mine for posted messages and presents to the teacher an interface embedded in a web page (as shown in Fig. 7), which is based on a proposal presented earlier [10]. The Centrality Degree is divided into input and output cases: the former is the number of actors that respond to an actor, while the latter is the number of actors to which an actor replies. The Centralization Index is also divided into input and output cases, and expressed as a percentage. The isolated-nodes option makes the graph include nodes that do not have any link to another node.
Fig. 7. The iGraph interface
Below the graph, it is possible to select any forum created in the scope of the current online course, as well as the mode for the iGraph. It is also possible to show cliques of n vertices. In the present version, each actor is assigned a letter, which is resolved to his/her actual name in the box at the lower right.
5 Conclusions We presented a system with an automatic process for characterizing online interactions in discussion forums. The system is capable of depicting the current state of interactions or a thorough frame-by-frame analysis since the beginning of the forum participation. Trying to obtain the illusion of continuity between frames led to the development of a positioning algorithm and a methodology for frame transition. Although we are conscious that it is not currently possible to find the optimal vertex positioning in reasonable time, we believe that our algorithm finds a sub-optimal layout, not easy to improve by hand, that is aesthetic and easily readable.
References [1] Freeman, L.C.: Centrality in Social Networks: Conceptual Clarification. Social Networks 1, 215–239 (1978), http://moreno.ss.uci.edu/27.pdf (accessed: March 2008) [2] Hanneman, R.A., Riddle, M.: Introduction to Social Network Methods [electronic version] (2005), http://faculty.ucr.edu/~hanneman/ (accessed: March 2008) [3] Scott, J.: Social Network Analysis: a Handbook. Sage, London (1997) [4] Borgatti, S.P., Everett, M.G.: Network Analysis of 2-Mode Data. Social Networks 19, 243–269 (1997), http://www.analytictech.com/borgatti/papers/borgatti%20-%20network%20analysis%20of%202-mode%20data.pdf (accessed: March 2008) [5] Jünger, M., Mutzel, P.: Graph Drawing Software. Springer, Heidelberg (2004) [6] Nishizeki, T., Rahman, S.: Planar Graph Drawing. Lecture Notes Series on Computing, vol. 12. World Scientific, Singapore (2004) [7] Tutte, W.T.: How to Draw a Graph. Proceedings of the London Mathematical Society 13, 743–768 (1963) [8] Purchase, H.: Which aesthetic has the greatest effect on human understanding? In: Di Battista, G. (ed.) GD 1997. LNCS, vol. 1353, pp. 248–261. Springer, Heidelberg (1997) [9] Purchase, H., Allder, J.-A., Carrington, D.: User preference of graph layout aesthetics: A UML study. In: Marks, J. (ed.) GD 2000. LNCS, vol. 1984, pp. 5–18. Springer, Heidelberg (2001) [10] Figueira, A., Laranjeiro, J.: Interaction Visualization in Web-Based Learning using iGraphs. In: Proceedings of Hypertext 2007, Manchester, UK (2007)
Context-Aware Combination of Adapted User Profiles for Interchange of Knowledge between Peers Sergio Gutierrez-Santos1, Mario Muñoz-Organero2, Abelardo Pardo2, and Carlos Delgado Kloos2 1
London Knowledge Lab, Birkbeck College, University of London, UK 2 University Carlos III of Madrid, Spain [email protected], {mario,abel,cdk}@it.uc3m.es
Abstract. This paper presents a system that connects students with complementary profiles, so they can interchange knowledge and help each other. The profile of the students is built by a modified intelligent tutoring system. Every time the user profile is updated, a gateway updates the profile stored in the user's personal terminal using a web-service based communication mechanism. The terminals (e.g. mobile phones) are able to find and communicate between themselves using Bluetooth. When they find two complementary user profiles, they help the users getting into contact, thus providing the benefits of social network tools but at short-range and with physical context awareness. Two students are complementary when one knows what the other wants to learn and viceversa, so they can be of mutual help. Keywords: mobile learning, bluetooth, profile matching.
1 Introduction Traditional learning environments are changing significantly. The introduction of pervasive technologies is enhancing the learning process, making it more ubiquitous and personalized. However, anytime-anywhere personalized learning also requires the deployment of an anytime-anywhere personal environment that helps and guides the learning process. This paper defines and provides an implementation of such a ubiquitous personalized tutoring environment by combining a modified intelligent tutoring system with a context-aware mobile profile matching service. We describe theoretical aspects as well as implementation issues of such a system. We aim at applying the system in our own university, where students with different profiles can help each other. In other words, the system is expected to work in a traditional learning environment where many students attend lectures and have to study later on their own. Students need more personalized attention, because the few teachers are not able to adapt their lectures to the specific characteristics of each and every student. The possibility of having a personal tutor would greatly increase the learning of the students. However, resources are scarce in a traditional education environment and it is not feasible to provide a personal human tutor for each and every student. Another possibility is to build intelligent tutoring systems U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 782–787, 2009. © Springer-Verlag Berlin Heidelberg 2009
Context-Aware Combination of Adapted User Profiles for Interchange of Knowledge
783
(henceforth ITS), which support the learning process of the students by providing feedback on their errors and gaps in knowledge for a specific domain. However, full ITS are very costly to build [4]. There is a third way. A student can ask for help from a peer student that has deeper knowledge. This “more able peer” is somewhat similar to having a personal tutor. However, students are not professional teachers and might not be interested in helping their peers more than occasionally, unless they have something to exchange. The work presented here is based on an economic view of this scenario. In our view, knowledge and expertise in different domains are the scarce resources. Students have a varying amount of knowledge about different subjects. If two students have complementary user profiles (e.g. the first one has a deep knowledge of operating systems, while the second has mastered the computer architecture part of the course), they might be interested in being put in touch to help each other. Therefore, they can interchange what they know and help each other. The use of short-range communication technologies to disseminate information about knowledge and learning needs leads to spontaneous collaboration [1]. Once the system has built a proper profile (i.e. learner model), this is submitted to the personal communication terminal of the student (e.g. a mobile phone with Bluetooth capabilities). The terminal operates autonomously from then on, looking for similar devices in the surroundings. Once two such terminals identify themselves, they interchange their user profiles (i.e. learner models). If two profiles are found to be complementary, a message is shown to the students along with additional information. This information aims at facilitating the contact between the two human students (i.e. breaking the ice) and encourages their knowledge interchange. Many systems have tried to benefit from the inherent context that exists in short-range technologies such as Bluetooth. 
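The complementarity test at the heart of this scenario can be sketched as follows; the profile format is illustrative, not the system's actual learner model.

```python
# Sketch of profile complementarity: two students match when each one's
# strengths cover part of the other's learning needs.

def complementary(p1, p2):
    return bool(p1["knows"] & p2["wants"]) and bool(p2["knows"] & p1["wants"])

alice = {"knows": {"operating-systems"}, "wants": {"computer-architecture"}}
bob   = {"knows": {"computer-architecture"}, "wants": {"operating-systems"}}
carol = {"knows": {"databases"}, "wants": {"operating-systems"}}

print(complementary(alice, bob))    # True: mutual help is possible
print(complementary(alice, carol))  # False: Carol offers nothing Alice wants
```

Note that the relation is deliberately mutual: a one-sided overlap (Carol wanting what Alice knows) is not enough, since without something to exchange the more able peer has little incentive to help.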
The work presented in [5] shows a Bluetooth-based ad hoc e-learning system that connects students and instructors so that the students can participate in a face-to-face lecture using their personal mobile devices and the instructors can receive instant feedback about the students. Although this work uses some of the concepts and technologies presented in this paper, its scope is limited to facilitating student-instructor interactions in a face-to-face class. The work presented here connects the concepts of ITS with context-aware mobile profile-matching applications. Another related work is the one presented in [2], which defines and implements a pervasive communication system from a central learning management system to mobile students based on both SMS and Bluetooth. The idea of synchronizing the status of a central learning management system with the mobile learners is similar to ours. However, we introduce a profile-based synchronization mechanism from which peer-to-peer relationships among students can be established.
2 Architecture The architecture of the system we have defined, combining a central e-learning server with the students’ mobile peer-to-peer part, is depicted in Figure 1.
784
S. Gutierrez-Santos et al.
Fig. 1. Architecture of the system
The architecture presents two main parts. The first one is the server, which contains an Adaptive Profiler (in our case, a modified intelligent tutoring system) and a synchronization gateway. As a consequence of the interactions between the students and the profiler, the students’ profiles (i.e. user models) stored in a database are populated. These profiles contain the information about the strengths and weaknesses in the learning process of each student. The second part is deployed on the mobile terminals of the students. It contains both the implementation of the synchronization interface used by the gateway to update the student’s profile, and the peer-to-peer profile matching application used to find other students with complementary profiles. It is important to note that the word "server" is used in the figure to express that the Adaptive Profiler and the Gateway are located in a central machine. The server does not actually export any service. The personal terminal, however, does export one synchronization service, shown in Figure 1 with the method setProfile(). In the server part, the two main components are the Adaptive Profiler and the Gateway. The first one is responsible for building the user profile, while the second takes care of propagating the user profiles to the mobile devices. The user interacts with the Adaptive Profiler through a web browser, either from a desktop computer or from the mobile phone itself. The relational database acts as the indirect communication means between the Profiler and the Gateway: the user profile is stored in a database that is accessible by both. The Gateway is responsible for retrieving the user profile (i.e. learner model) from the database and sending it to the mobile Personal Terminal. This communication is performed using web services. The web service at the mobile Personal Terminal implements a method setProfile() that is called by the Gateway to update the stored user model. 
The mobile Personal Terminal implements the modules that communicate with the server and with other peers. The module that takes care of communication with the server implements the setProfile() service. The module responsible for communication with peers looks for other terminals in the surroundings. Once another terminal is located, communication is established between them in order to exchange the user profiles they store. This communication mechanism is based on Bluetooth, providing a contextualized protocol for finding nearby complementary students.
3 Communication Server-Terminal

As presented in the previous sections, the different interactions between the e-learning users and the modified ITS define the properties of their user profiles.
Context-Aware Combination of Adapted User Profiles for Interchange of Knowledge
These profiles are periodically updated by the Adaptive Profiler and need to be synchronized with the context-aware personal user application running on the user's mobile device. Since mobile devices tend to implement only the consumer part of web-service-based communication, we have defined and implemented a complete environment for developing and executing web-service-based server applications on limited mobile devices. This part of the system is based on a simplification of the J2EE Servlet API, on top of which we define a SOAP-processing Servlet capable of exporting concurrent web services. One of these web services is the user profile synchronization web service. As described in [3], we have defined and implemented a simplified Servlet API for mobile devices that concentrates on providing the basic functionality required to process HTTP requests. On top of the implementation of this Servlet API we have created the WebServiceServlet, which implements the doGet() and doPost() methods to parse the SOAP part of a web service invocation. The main information contained in the web service invocation is the name of the operation to execute and the values of the parameters. The WebServiceServlet parses the XML content of the SOAP message, obtains the name of the operation, creates an array of arguments, instantiates the service class implementing the business logic of the web service and executes the associated method. The generated result is then encapsulated in a SOAP response message and sent back to the client. The UML sequence diagram of the invocation process is shown in Figure 2. We have included the implementation of the synchronization web service in order to show the entire invocation process.
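The parsing step can be sketched as follows. This is not the actual WebServiceServlet code (which the paper does not reproduce); it is a simplified desktop-Java illustration of extracting the operation name and positional arguments from a SOAP body with a standard DOM parser — a J2ME implementation would use a lighter parser.

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class SoapRequestParser {
    private static final String SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/";

    /** Returns the first child element of the SOAP Body; by convention its
     *  name is the operation to invoke and its children are the arguments. */
    private static Element operationElement(String soapXml) {
        try {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setNamespaceAware(true);
            Document doc = f.newDocumentBuilder()
                    .parse(new ByteArrayInputStream(soapXml.getBytes("UTF-8")));
            Element body = (Element) doc.getElementsByTagNameNS(SOAP_NS, "Body").item(0);
            for (Node n = body.getFirstChild(); n != null; n = n.getNextSibling()) {
                if (n.getNodeType() == Node.ELEMENT_NODE) return (Element) n;
            }
            throw new IllegalArgumentException("empty SOAP Body");
        } catch (Exception e) {
            throw new RuntimeException("malformed SOAP request", e);
        }
    }

    public static String operationName(String soapXml) {
        return operationElement(soapXml).getLocalName();
    }

    /** Collects the text content of the operation's child elements,
     *  which become the argument array for the service method. */
    public static List<String> arguments(String soapXml) {
        List<String> args = new ArrayList<String>();
        for (Node n = operationElement(soapXml).getFirstChild(); n != null; n = n.getNextSibling()) {
            if (n.getNodeType() == Node.ELEMENT_NODE) args.add(n.getTextContent());
        }
        return args;
    }
}
```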
Fig. 2. Synchronization process
The implementation of the synchronization web service contains the business logic for the communication between the Gateway and the mobile device. The class contains two main methods. The setProfile() method implements the synchronization protocol between the server and the mobile device. The call() method is needed to connect the synchronization web service class to the WebServiceServlet described above in systems that do not provide introspection mechanisms (e.g. the MIDP profile in J2ME).
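The role of call() can be illustrated with a short sketch (field names and return types are assumptions, since the paper does not show the class): on CLDC/MIDP there is no reflection, so the servlet cannot look up setProfile() by name at runtime, and the service class must route the invocation itself.

```java
public class ProfileSyncService {
    private String learnerModel = "";

    /** Business logic of the synchronization protocol: replace the locally
     *  cached learner model with the one pushed by the Gateway. */
    public String setProfile(String profile) {
        this.learnerModel = profile;
        return "OK";
    }

    public String getLearnerModel() {
        return learnerModel;
    }

    /** Manual dispatch used instead of reflection: the WebServiceServlet
     *  hands over the parsed operation name and argument array. */
    public Object call(String operation, Object[] args) {
        if ("setProfile".equals(operation)) {
            return setProfile((String) args[0]);
        }
        throw new IllegalArgumentException("unknown operation: " + operation);
    }
}
```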
S. Gutierrez-Santos et al.
4 Communication between the Terminals

After interacting with the server, mobile students have their personal profiles synchronized on their mobile devices. The personal profile describes the strengths and weaknesses of the student. When different students come near each other, either in class, in laboratories or even at the canteen, they may be interested in meeting other students with complementary profiles. We have implemented a Bluetooth-based "communication with peers" module for mobile devices in MIDP. This module detects mobile devices near the student, validates that the discovered devices implement the profile matching service and exchanges the student profiles. If there are any students with appropriately complementary profiles, the module shows details about them and their profiles in order to facilitate face-to-face interaction. Bluetooth technology provides both the appropriate distance for the communication (showing details only of students a few meters away) and the appropriate service discovery mechanism to find the surrounding mobile personal terminals. Our implementation uses the DiscoveryAgent of the LocalDevice to continuously find devices near the student (we are only interested in devices that implement the profile matching service).
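The notion of "complementary profiles" can be made concrete with a small sketch. The paper does not define the matching function, so the profile representation (topic mapped to a mastery level in [0,1]) and the scoring rule below are assumptions chosen for illustration: a peer scores highly where they are strong on exactly the topics where the current student is weak.

```java
import java.util.Map;

public class ProfileMatcher {
    /** Average, over the topics both profiles share, of how far the peer's
     *  mastery exceeds mine; 0 means no complement, 1 maximal complement. */
    public static double complementarity(Map<String, Double> mine,
                                         Map<String, Double> peer) {
        double sum = 0.0;
        int shared = 0;
        for (Map.Entry<String, Double> e : mine.entrySet()) {
            Double p = peer.get(e.getKey());
            if (p == null) continue;          // topic unknown to the peer
            shared++;
            sum += Math.max(0.0, p - e.getValue());
        }
        return shared == 0 ? 0.0 : sum / shared;
    }
}
```

A terminal would compute this score against every profile received over Bluetooth and show the highest-scoring students.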
5 Conclusions and Future Work

This paper presents a system that helps students find other students with complementary profiles. The search is performed at short range, making it context-dependent and especially suited for blended learning scenarios in which students interact in classes, at the library, etc. Using context-aware technologies makes it possible to create a sort of virtual market of knowledge, in which students exchange what they know, but without the high cost of advertising themselves. The paper has presented the architecture of the system. The most important parts are the Adaptive Profiler (a modified ITS that builds the user profile) and the Communication with Peers module at the personal terminal, which is responsible for locating other terminals and exchanging user profiles. Communication between the terminal and the server is also an important issue, which has made it necessary to create a web service infrastructure on the mobile terminal. The system assumes that students interact with the ITS mostly individually (e.g. from home), but have many opportunities to interact among themselves during the day (e.g. in the labs). We do not yet know how non-technical factors (e.g. personal issues or likings) influence the validity of our scenario. This demands further investigation.
Acknowledgements

The work presented in this paper has been partially funded by the Spanish "Programa Nacional de I+D+I" by means of the project TIN2008-05163/TSI "Learn3: Towards Learning of the Third Kind".
References

[1] Heinemann, A., Mühlhäuser, M.: Spontaneous Collaboration in Mobile Peer-to-Peer Networks. In: Steinmetz, R., Wehrle, K. (eds.) Peer-to-Peer Systems and Applications. LNCS, vol. 3485, pp. 419–433. Springer, Heidelberg (2005)
[2] Mitchell, K., Race, N.J.P., McCaffery, D., Cai, Z.: Unified and Personalized Messaging to Support E-Learning. In: Fourth IEEE International Workshop on Wireless, Mobile and Ubiquitous Technology in Education, pp. 164–168 (2006)
[3] Muñoz Organero, M., Delgado Kloos, C.: Web-Enabled Middleware for Mobile Devices. In: International Wireless Applications and Computing 2007 Conference, Lisbon, Portugal, July 6-8 (2007)
[4] Murray, T.: Authoring Intelligent Tutoring Systems: An Analysis of the State of the Art. International Journal of Artificial Intelligence in Education 10 (1999)
[5] Zhang, Y., Zhang, S., Vuong, S., Malik, K.: Mobile Learning with Bluetooth-Based E-Learning System. In: 2nd International Conference on Mobile Technology, Applications and Systems (2005)
ReMashed – Recommendations for Mash-Up Personal Learning Environments Hendrik Drachsler1, Dries Pecceu2, Tanja Arts2, Edwin Hutten2, Lloyd Rutledge2, Peter van Rosmalen1, Hans Hummel1, and Rob Koper1 Open University of the Netherlands, 1 Centre for Learning Sciences and Technologies & 2 Computer Science Department, PO-Box 2960, 6401 DL Heerlen, The Netherlands {hendrik.drachsler,lloyd.rutledge,peter.vanrosmalen, hans.hummel,rob.koper}@ou.nl, {pecceu,ekh.hutten,tg.arts}@studie.ou.nl
Abstract. The following article presents a Mash-Up Personal Learning Environment called ReMashed that recommends learning resources from the emerging information of a Learning Network. In ReMashed, learners can specify certain Web2.0 services and combine them in a Mash-Up Personal Learning Environment. Learners can rate items from the emerging pool of Web2.0 information in a Learning Network and thereby train a recommender system for their particular needs. ReMashed therefore has three main objectives: 1. to provide a recommender system for Mash-up Personal Learning Environments to learners, 2. to offer an environment for testing new recommendation approaches and methods for researchers, and 3. to create informal user-generated content data sets that are needed to evaluate new recommendation algorithms for learners in informal Learning Networks. Keywords: recommender system, mash-up, personalisation, personal learning environments, MUPPLE, informal learning, emergence, learning networks.
1 Introduction

Nowadays, Internet users take advantage of Personal Environments (PEs) like iGoogle or Netvibes to create a personal view on information they are interested in. The existence of PEs inspired researchers in Technology-Enhanced Learning (TEL) to explore this technology for learning purposes. As a consequence, Personal Learning Environments (PLEs) were invented for learners [1, 2]. Because of the combination of various Web2.0 sources in a PLE, they are also called Mash-Up Personal Learning Environments (MUPPLEs) [3]. MUPPLEs are an instance of the Learning Network concept [4] and therefore share several characteristics with it. Learning Networks consist of user-generated content from learners who are able to create, comment, tag, rate, share and study learning resources. Due to the large number of learning resources and learners, a Learning Network can show emerging patterns. Learning Networks are bottom-up driven because their content is not created by paid domain experts but rather by their
members. These networks explicitly address informal learning because no assessment or accreditation process is connected to them. MUPPLEs also support informal learning, as they require no institutional background and no fees. Instead, the focus is on the learner, independent of institutional needs like student management or assessments. Although they are most appropriate for informal learning, educational scenarios are imaginable in which MUPPLEs become integrated into formal courses as well. MUPPLEs are used to combine different information from the web that supports the individual learner's personal competence development. Most of the time, the sources are free to use and selected by the learner. A common problem for PEs and MUPPLEs is the amount of data that is gathered in a short time frame. Learners can be overwhelmed by the information they receive, or they might have problems selecting the most suitable learning resource for their personal competence development. Therefore, we developed a recommender system that offers advice to learners in finding suitable learning resources for their individual competence development. The main purpose of recommender systems on the Internet is to pre-select information a user might be interested in. The motivation for a recommender system for MUPPLEs is to improve the 'educational provision': to support better learning goal attainment and to reduce the time spent searching for suitable learning resources [5]. In the following, we first discuss related work (Section 2). After that we introduce the ReMashed system (Section 3) and finally discuss future research (Section 4).
2 Related Work

Nowadays, 'mashing' information has become a widely used activity on the Internet. Various tools (Yahoo Pipes, Dapper, Openkapow, etc.) provide the opportunity to combine information from other websites in new ways. Users do not need special programming skills to use these tools to combine different Internet sources. They can take advantage of the public APIs of Web2.0 services and standardized formats like JSON to mash data in a new way. In TEL, several European projects address these bottom-up approaches to creating and sharing knowledge. The TENCompetence project addresses learners in informal Learning Networks [6]. The iCamp project explicitly addresses research around MUPPLEs [3]. They created an easily programmable and flexible environment that allows learners to create their own MUPPLE for certain learning activities. However, these systems face the problem that the emerging behavior of these bottom-up approaches generates large amounts of data. With the ReMashed system we want to offer navigation support for such emerging bottom-up MUPPLEs to help learners find the data most suitable for their learning goals. In recommender system research, extensive studies are ongoing on taking advantage of tags for recommendations [7, 8]. Systems like Delicious or Flickr offer recommendations to their users based on their data, and researchers also take advantage of single Web2.0 services to create recommender systems [9]. However, the combination of different Web2.0 services to recommend information based on mashed tag and
H. Drachsler et al.
rating data has not been attempted so far, especially not for learners in MUPPLEs. Thus, ReMashed offers a new approach by mashing data of learners from various Web2.0 services to provide pedagogical recommendations.
3 The ReMashed System

A prominent example for ReMashed from a different domain is the MovieLens project created by the GroupLens research group. They offer a movie recommender service where people can rate movies and get recommendations for movies. Besides this attractive service, GroupLens created a frequently used data set for the development of recommender systems and related research [10]. Likewise, ReMashed has three main objectives: 1. to provide a recommender system for MUPPLEs to learners, 2. to offer an environment for testing new recommendation approaches and methods for researchers, and 3. to create informal user-generated-content data sets that are needed to evaluate new recommendation algorithms for learners in informal Learning Networks.
Fig. 1. The user interface of the ReMashed system. On the left side, the mashed information from Delicious and blogs is shown. On the right side, the rating-based recommendations for the current learner are presented.
In order to test our recommendation approach for MUPPLEs, we designed a Mash-Up that enables learners to integrate their Web2.0 sources (see Fig. 1). The system allows learners to personalise the emerging information of a community to their preferences. They can rate information from the Web2.0 sources in order to define which contributions of other members they like and do not like. ReMashed takes these preferences into account to offer tailored recommendations to the learner. ReMashed uses
collaborative filtering [11] to generate recommendations. It works by matching users with similar opinions about learning resources. Each member of the system has a 'neighborhood' of other like-minded users. Ratings and tags from these neighbors are used to create personalised recommendations for the current learner. The recommender system combines tag-based and rating-based collaborative filtering algorithms in a recommendation strategy. Such a recommendation strategy reacts to certain situations by using the most suitable recommendation technique. The recommendation strategy is triggered by certain pedagogical situations based on the profile of the learner or the available learning resources [12]. In the initial state of ReMashed, learners have signed up for the system but have not rated any learning resources. ReMashed identifies this cold-start situation of the recommender system [11] and recommends resources based on the tags of the Web2.0 sources of the current learner. It computes the similarity between the tag cloud of the current learner and those of other learners and learning resources. Once the learner has rated resources above a certain threshold, a rating-based Slope One algorithm provides additional recommendations to the learner. ReMashed is an Open Source project based on PHP5, the Zend Framework 1.7 with the Dojo Ajax framework, a MySQL database, an Apache server and the Duine recommendation engine. ReMashed follows the Model-View-Controller programming concept and is therefore fully object-oriented. It consists of five sub-systems (see Fig. 2): a user interface, a data collector, a user logger, a recommender system and the Duine prediction engine [13].
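The two mechanisms can be sketched as follows: a tag-cloud similarity for the cold start and a basic weighted Slope One predictor once ratings exist. This is an illustrative re-implementation (in Java, with Jaccard similarity assumed as the tag-cloud measure), not ReMashed's PHP code or the Duine engine's algorithms.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class MiniRecommender {
    /** Cold start: Jaccard similarity between two tag clouds. */
    public static double tagSimilarity(Set<String> a, Set<String> b) {
        if (a.isEmpty() && b.isEmpty()) return 0.0;
        Set<String> inter = new HashSet<String>(a);
        inter.retainAll(b);
        Set<String> union = new HashSet<String>(a);
        union.addAll(b);
        return (double) inter.size() / union.size();
    }

    /** Weighted Slope One: predicts a user's rating for an item from the
     *  average rating deviation between item pairs across all users. */
    public static double predict(Map<String, Map<String, Double>> ratings,
                                 String user, String item) {
        double weighted = 0.0;
        int total = 0;
        for (Map.Entry<String, Double> mine : ratings.get(user).entrySet()) {
            double devSum = 0.0;
            int count = 0;
            for (Map<String, Double> r : ratings.values()) {
                // only users who rated both the target item and this item
                if (r.containsKey(item) && r.containsKey(mine.getKey())) {
                    devSum += r.get(item) - r.get(mine.getKey());
                    count++;
                }
            }
            if (count == 0) continue;
            weighted += (mine.getValue() + devSum / count) * count;
            total += count;
        }
        return total == 0 ? Double.NaN : weighted / total;
    }
}
```

In a strategy like the one described above, tagSimilarity would drive recommendations until the learner's rating count passes the threshold, after which predict takes over.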
Fig. 2. Technical architecture of the ReMashed system
─ The User Interface is responsible for user interaction, authentication of users, registration of new users and updating of user data.
─ The Data Collector establishes the connection to the Web2.0 services and gathers new data into the ReMashed database via a CRON job that runs every hour.
─ The Logger offers logging methods to the other subsystems. It stores log messages and monitors user actions in the system.
─ The Recommender System composes the recommendations for every user and puts them into the database. It allows implementing new recommendation algorithms in PHP, but it also provides a connection to the Java-based Duine 4.0 prediction engine that can be used to compute recommendations for the learning resources.
─ The Duine Prediction Engine offers extensive options for configuring various recommender algorithms. It provides a sample of the most common recommendation algorithms that can be combined in algorithm strategies; thus it is possible to create new recommendation strategies that follow pedagogical rules.

We tested the system in a usability evaluation with a group of 49 users from 8 different countries [14]. The evaluation phase ran for one month and was concluded with an online recall questionnaire. In that timeframe, 4961 resources were collected, 420 resources were rated and 813 recommendations were offered. The overall satisfaction with the system was positive. Nevertheless, the participants suggested particular improvements that we will take into account for the future development of the system.
4 Conclusions and Future Research

This article presented the ReMashed system, an evaluation tool for recommender systems for learners in informal Learning Networks. The article showed the design and implementation of a recommender system for MUPPLEs. The future development of ReMashed relies on an end-user perspective and on a researcher perspective. Regarding the end-user perspective, ReMashed needs to integrate additional Web2.0 features (i.e. integrating social networks like Facebook). This may reduce the isolation of informal learners and support the organisation of learning communities. Information retrieved from social networks can be used to improve the recommendations and strengthen the communities; for instance, learners that have certain social relationships will likely want to share their learning resources with their community. The type of relationship between learners can affect which kinds of recommendations are given. In addition, ReMashed should provide a widget interface to enable learners to integrate recommendations from ReMashed into their MUPPLEs. Such a widget has to provide the recommendations and the possibility to rate learning resources in order to further personalise them to the needs of the learners. From a researcher perspective, ReMashed opens the possibility to provide user-generated-content data sets from various domains. Comparable to the famous MovieLens data set, a standard for the evaluation and development of recommender system algorithms in TEL can be created. Further, when considering different ReMashed communities in health, education or public affairs, data sets from these domains can be used to develop solutions for the cold-start problem of recommender systems by providing an already rated data set for a particular domain.
Acknowledgement

The authors' efforts were (partly) funded by the European Commission in TENCompetence (IST-2004-02787) (http://www.tencompetence.org).
References

1. Liber, O., Johnson, M.: Personal Learning Environments. Interactive Learning Environments 16, 1–2 (2008)
2. Wild, F., Kalz, M., Palmer, M. (eds.): Mash-Up Personal Learning Environments. CEUR Workshop Proceedings, Maastricht, The Netherlands, vol. 388 (2008)
3. Wild, F., Moedritscher, F., Sigurdarson, S.E.: Designing for Change: Mash-Up Personal Learning Environments. eLearning Papers 9 (2008)
4. Koper, R., Tattersall, C.: New directions for lifelong learning using network technologies. British Journal of Educational Technology 35, 689–700 (2004)
5. Drachsler, H., Hummel, H., Koper, R.: Identifying the Goal, User model and Conditions of Recommender Systems for Formal and Informal Learning. Journal of Digital Information 10, 4–24 (2009)
6. Wilson, S., Sharples, P., Griffith, D.: Distributing education services to personal and institutional systems using Widgets. In: Wild, F., Kalz, M., Palmer, M. (eds.) Mash-Up Personal Learning Environments, Proceedings of the 1st MUPPLE workshop. CEUR Workshop Proceedings, Maastricht, The Netherlands, vol. 388 (2008)
7. Shepitsen, A., Gemmell, J., Mobasher, B., Burke, R.: Personalized recommendation in social tagging systems using hierarchical clustering. In: Recommender Systems 2008, pp. 259–266. ACM, New York (2008)
8. Symeonidis, P., Nanopoulos, A., Manolopoulos, Y.: Tag recommendations based on tensor dimensionality reduction. In: Recommender Systems 2008, pp. 43–50. ACM, New York (2008)
9. Garg, N., Weber, I.: Personalized, interactive tag recommendation for flickr. In: Recommender Systems 2009, pp. 67–74. ACM, New York (2009)
10. Sarwar, B.M., Karypis, G., Konstan, J., Riedl, J.: Recommender systems for large-scale e-commerce: Scalable neighborhood formation using clustering. In: Fifth International Conference on Computer and Information Technology (2002)
11. Herlocker, J.L., Konstan, J.A., Riedl, J.: Explaining collaborative filtering recommendations. In: Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, pp. 241–250 (2000)
12. Drachsler, H., Hummel, H., Koper, R.: Personal recommender systems for learners in lifelong learning: requirements, techniques and model. International Journal of Learning Technology 3, 404–423 (2008)
13. Van Setten, M.: Supporting people in finding information. Hybrid recommender systems and goal-based structuring. Telematica Instituut Fundamental Research Series No. 016 (TI/FRS/016) (2005)
14. Drachsler, H., Pecceu, D., Arts, T., Hutten, E., Rutledge, L., Van Rosmalen, P., Hummel, H., Koper, R.: ReMashed – A Usability Study of a Recommender System for Mash-Ups for Learning. In: 1st Workshop on Mashups for Learning at the International Conference on Interactive Computer Aided Learning, Villach, Austria (submitted)
Hanse 1380 - A Learning Game for the German Maritime Museum Walter Jenner and Leonardo Moura de Araújo HS Bremerhaven, An der Karlstadt 8, 27568 Bremerhaven, Germany [email protected], [email protected]
Abstract. In a one-year project at the University of Applied Sciences in Bremerhaven, a digital learning game for the German Maritime Museum in Bremerhaven was developed. It is targeted at school pupils aged between 10 and 14 and should explain the importance of the cog for trading activities between Hanse cities in the 14th century. More detailed learning objectives were defined through a survey with history teachers from Bremen. The historical research was done in cooperation with the museum. Another key interest was the design and building of an easy-to-use and attractive computer terminal, including a special control interface for the game. The resulting game was evaluated in a user test with 29 school pupils. It shows that the game is fun and easy to understand. Approx. 50% of the pupils achieved all learning objectives.
1 Game–Based Learning in a Museum
One part of the duty of a museum is to provide and transport information to the visitor.¹ Traditional museum exhibits show parts and aspects of the topic the museum or the particular exhibition is dealing with. The visitor has a passive role and no possibility to "respond". Interactive exhibits, in contrast, enable the visitor to participate and actively explore the information provided by the museum. The learning effect can increase with interactive exhibits insofar as exhibitions can be more "entertaining" [1] as well as "inspire and provoke exploration ... and to tempt people to look more thoughtfully at traditional museum displays" [2]. Anne Fahy described it as follows [3, p. 89]: Interactive devices have an active and important role to play in the communication process. This is emphasized by research carried out by the British Audio Visual Society which showed that whilst we only remember 10 per cent of what we read, we remember 90 per cent of what we say and do (Bayard-White 1991).
¹ See the definition of a museum by ICOM: http://icom.museum/statutes.html#3
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 794–799, 2009. © Springer-Verlag Berlin Heidelberg 2009
1.1 Game–Based Learning
Game-based learning means that learning content is embedded within a game. In recent years, much research has shown that learning through games can have various advantages. Richard van Eck points out one advantage of games [4, p. 4]: Games embody well-established principles and models of learning. For instance, games are effective partly because the learning takes place within a meaningful (to the game) context. What you must learn is directly related to the environment in which you learn and demonstrate it; thus, the learning is not only relevant but applied and practiced within that context. Learning that occurs in meaningful and relevant contexts, then, is more effective than learning that occurs outside of those contexts, as is the case with most formal instruction. Van Eck stresses the advantage that within a game new knowledge is more meaningful, as it can be applied directly. The success of a certain action or strategy is usually shown immediately. Another strength of game-based learning is that learning is joyful, as it happens while playing. Traditional learning situations, like lectures in school or self-study from books, have the negative image of being boring, and pupils have to be "forced" to learn (e.g. to pass exams). The motivation for playing computer games is much higher, as playing is seen as pleasure and not as work. Malone and Lepper researched what can motivate people to learn, and they found that many features found in games (like challenge and performance feedback) positively influence motivation for learning [5]. They differentiate between intrinsic and extrinsic motivation, whereby they define intrinsically motivated learning as learning that occurs in a situation in which the most narrowly defined activity from which the learning occurs would be done without any external reward or punishment. [5, p. 229] They state the hypothesis that intrinsically motivated learning will lead to better learning results.

1.2 Putting the Exhibits in Context
Historic exhibits are dead objects; they are no longer in use nowadays. It is hard to imagine why certain objects were important in times which are completely different from the present. The conserved cog, which is the main attraction of the exhibition about medieval ships in the German Maritime Museum (GMM), is more than 500 years old and destroyed to a large extent. No doubt it has an enormous historic value, but without the context of how it was used in the past it cannot be fully understood. Within a game, the museum visitor can be enabled to experience the past and learn about the context in which the shown exhibits were used.
2 Restrictions

2.1 Target Group
As the target group for the game, pupils aged between 10 and 14 years were chosen.
2.2 Needs for a Terminal Game in a Museum
As the game should be played on a computer terminal within a museum, it must be easy to understand. A quantitative study by Fleck et al. [6] has shown that a typical museum visitor spends 1-2 minutes at a museum object. However, if the visitor is engaged within that time, the time at one exhibit can increase to 10-15 minutes. The same study has shown that labels and instructions for interactive exhibits are usually not read. Interactive exhibits are tried out directly, and people only refer to the instructions if they fail. For a learning game in a museum, that means it is necessary to motivate the visitor within 1-2 minutes to play the game. Long instructions should be avoided; instead, it should be possible to explore the game. To allow exploration of the game, it must be intuitive and easy to use (which also includes the computer terminal). Finally, the overall game time should not be longer than 10-15 minutes. To summarize, these three requirements were defined:
– The game should start immediately.
– A tutorial should make it possible to explore the game step by step.
– Intuitive hardware controls should make controlling the game as easy as possible.
3 Results
The final result is a simulation game. The player takes the role of a young captain of a cog, based in Lübeck, who has to sail and trade goods in the North and Baltic Seas. The game time is limited to 5-10 minutes, which corresponds to one sailing season within the game. Roughly, the game can be divided into two different parts. One part is a sailing simulation which considers the special way of sailing in medieval times. The player has to follow landmarks in order to find the next city, he² can be attacked by pirates, and he depends on wind from the back, as cogs had a yardarm sail which required exactly that. The second part of the game happens when the player has arrived in a city (Fig. 1). He has to show his skills as a trader by selling and buying goods. In order to show the devoutness of people in medieval times, it is also possible to donate money to the church. As the player donates more money, his influence in the city increases, which has a positive effect on his final score. Also, if he has donated enough money, the gods might help him when pirates attack. There is also a high score list of the ten best players, which should be a motivating reward.

3.1 Direct Start of the Game and the Tutorial
The game can be started very quickly: instead of presenting long instructions at the beginning, small chunks of information are presented step by step. After

² Although in this report the player (the user, etc.) is referred to in the male form, it is directed at both sexes.
Fig. 1. Trading part of the game in Lübeck. Important parts of the city, such as the church, are based on old drawings.
the player has successfully finished one step in the tutorial, the next step is shown. Therefore, the new knowledge is connected to the current situation in the game and thus should be remembered more easily.

3.2 Computer Terminal
To control the cog in the sailing simulation, the player uses a miniature model of a capstan and a rudder. The design of the controls matches the real look of those instruments. Firstly, this should support the mental mapping of each control to its corresponding function. Secondly, due to this similarity to the real instruments, the player also gets an impression of how these instruments looked on cogs. Also, the whole terminal design looks like a small cog, which creates a more interesting atmosphere and invites people to use the terminal. Additionally, the game uses a touchscreen for user input.

3.3 User Test
With an unfinished prototype of the game, a user test with 29 pupils fitting the target group was conducted. It tested whether the pupils are able to understand the game and control the cog, whether they like the game (and which parts of it) and whether they achieve the learning objectives. Additionally, it included questions about the general usage of computer games.
W. Jenner and L. Moura de Araújo
Attitude Towards Computer Games. Some pupils play computer games daily, and all of them play at least several times per week. Regarding genre, no clear preference can be found: the named games range from shooting games (in particular Counter-Strike) and strategy games to racing and simulation games (The Sims). Shooting games are more popular among boys (7 boys and 3 girls stated that they play shooting games), whereas in this test group The Sims is played only by girls. The majority of the tested pupils (21 of 29) had not played games in museums before.

Usability. In general the usability of the game was good. All of the pupils understood how to control the cog, and they rated its difficulty at 2.21³. 89.29% of the tested pupils understood what their task in the game is, 89.29% understood how the current time of the season is indicated, 72.41% understood how the damage of the cog is indicated, 96.55% understood how the wind is indicated, and 85.71% understood the landmarks. On the question of how much they liked the game and its individual parts (graphics, sound, dialogue, overall), an average of 2.16⁴ was achieved.

Learning Objectives. In general, not all children achieved the learning objectives, which were probed in the post-interview. 89.66% of the pupils remembered at least one Hanse city. Naming correct products was more difficult, but the trading feature was not fully implemented in the test version of the game. 44.83% of the pupils could name the correct duration of a trading season; again, the prototype was not finished regarding that aspect, so this result is not surprising. The century in which the game takes place was not remembered well: just 41% did so. The same percentage of pupils could name the trading alliance this game is dealing with. As this knowledge is not needed within the game, this supports the hypothesis that factual knowledge which is not applied in the game is not remembered very well.

Summary.
A generally positive result is that most pupils liked the game. An overall grade of 2.16 is promising: it shows that the game-play works and that the goal of making a good game was, in general, reached. In particular, the victory condition of the game is communicated well (89.29% understood it), which, by supporting the competitive element, is an important part of a game [7]. It is also very positive that the vast majority understood the game itself and the interface very well.
4 Conclusion
Learning objectives need to be integrated strongly within the game. Information which is merely provided but not needed to successfully finish the game will not
³ On a scale from 1 to 4, where 1 is too easy and 4 is too difficult.
⁴ On a scale from 1 to 5, where 1 is very good and 5 is very bad.
be remembered. Roughly two different ways to integrate learning content can be observed. Firstly, content can be conveyed via rules. For example, if the objective is that the player should know how long a trading season lasts, the corresponding game rule can stress that the player has to finish a task within one trading season. Another way to integrate a learning objective into a game is via a feature; an example used in this game are the pirates. The corresponding learning objective is to show the danger of pirates in medieval times. It is implemented in such a way that on certain routes the player's cog might be attacked by pirates. To survive a pirate attack, the player then has various options, which correspond to the options seamen had in medieval times. At the same time it became clear that information which is not directly integrated into the game is not remembered. Our tests have shown that few children could remember the name of the famous trading union ("Hanse"), although textual hints refer to it multiple times and the name of the game itself, "Hanse 1380", is placed very prominently.
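The rule-based style of integration described above can be illustrated with a small sketch. This is not from the game's actual code base; all names (GameState, season_rule, SEASON_LENGTH_DAYS) and the season length are invented for illustration.

```python
# Hypothetical sketch: a game rule that conveys a learning objective
# ("a trading season is limited in time") by enforcing it mechanically.
SEASON_LENGTH_DAYS = 180  # assumed in-game length of one trading season

class GameState:
    def __init__(self):
        self.day = 0
        self.task_completed = False

    def advance(self, days):
        self.day += days

def season_rule(state: GameState) -> str:
    """Rule-based integration: the player must finish within one season."""
    if state.task_completed:
        return "task completed"
    if state.day >= SEASON_LENGTH_DAYS:
        return "season over - task failed"
    return "season running"

state = GameState()
state.advance(100)
print(season_rule(state))   # season running
state.advance(100)
print(season_rule(state))   # season over - task failed
```

Because the deadline is enforced rather than merely mentioned, the player has to apply the fact to succeed, which is exactly the condition under which the paper found knowledge was retained.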
References

1. Witcomb, A.: Interactivity: Thinking beyond. In: Macdonald, S. (ed.) A Companion to Museum Studies, pp. 353–361 (2007)
2. Stevenson, J.: Getting to grips. Museums Journal, 30–32 (May 1994)
3. Fahy, A.: New technologies for museum communication. In: Hooper-Greenhill, E. (ed.) Museum, media, message, pp. 82–96. Routledge, London (2002)
4. Eck, R.V.: Digital Game-Based Learning: It's not just the digital natives who are restless. EDUCAUSE Review 41(2) (2006)
5. Malone, T.W., Lepper, M.R.: Making Learning Fun: A Taxonomic Model of Intrinsic Motivations for Learning. In: Conative and Affective Process Analyses. Aptitude, Learning, and Instruction, vol. 3 (1987)
6. Fleck, M., Frid, M., Kindberg, T., O'Brien-Strain, E., Rajani, R., Spasojevic, M.: From informing to remembering: ubiquitous systems in interactive museums. IEEE Pervasive Computing 1(2), 13–21 (2002)
7. Salen, K., Zimmerman, E.: Rules of Play: Game Design Fundamentals. MIT Press, Cambridge (2003)
A Linguistic Intelligent System for Technology Enhanced Learning in Vocational Training – The ILLU Project

Christoph Rösener

Fachrichtung 4.6 Angewandte Sprachwissenschaft sowie Übersetzen und Dolmetschen, Universität des Saarlandes, Bau A2 2, Postfach 15 11 59, D-66041 Saarbrücken
[email protected]
Abstract. In this paper I will describe a linguistic intelligent software system, using methods from computational linguistics, for the automatic evaluation of translations in online training of interpreters and translators. With this system the students gain an online interface offering them proper translation training. The main aim in developing such a system was to create an e-learning unit which allows the students to translate a given text in a special online environment and afterwards receive an automatic evaluation of the entered translation from the system. This is done on a computational linguistics basis using special analyzing software, model solutions and stored classifications of typical translation mistakes. Keywords: Vocational training, Language Learning, Natural Language Processing.
1 Introduction

The types of interactive e-learning units used in the vocational training of translators and interpreters are currently limited by the technical possibilities provided by various e-learning systems. On the one hand there are e-learning units where users can obtain an automatic evaluation performed by the system. On the other hand, the evaluation of the texts is done by tutors. In the latter case the given data is initially sent to the relevant tutor; after the evaluation of the texts by the tutor, the results are sent back to the students or stored in an online rating system. If the e-learning unit offers automatic evaluation by the system, the variety of units is very limited. In most cases the units are term and definition questions, multiple choice exercises, cloze units, exercises to reconstruct text or word order, etc. But all these exercises have one thing in common: it is not possible to automatically evaluate free text. Such texts can only be evaluated by a tutor. An automatic evaluation of free texts with regard to the quality of language and translation is not yet available¹.
¹ Approaches such as NIST [4], BLEU [5] or the Levenshtein distance are based on measuring character-string similarity. Thus they are not really a yardstick for translation quality.
U. Cress, V. Dimitrova, and M. Specht (Eds.): EC-TEL 2009, LNCS 5794, pp. 800–805, 2009. © Springer-Verlag Berlin Heidelberg 2009
2 Description

In this paper I describe an intelligent software system, using methods from computational linguistics, which is able to evaluate free-text translations automatically, record by record. In addition, the system is able to give qualified feedback for each mistake found. The process scheme is shown in Figure 1.
Fig. 1. Process-scheme
3 Requirements

For the successful implementation of such a system certain requirements had to be met. At their core were the linguistic resources. Furthermore, it was necessary to provide additional resources, including the source texts and possible model translations as well as examples of possible mistakes. A differentiated error code and corresponding feedback texts were also required, as well as material about specific translation problems. For an initial automatic evaluation of the posted translation, commercial spell and grammar checkers are used. For a more profound analysis, the posted text is analysed morphosyntactically and semantically. For this, too, various existing software packages are used, depending on the source language. Finally, special software for the comparison between the analysed translation posted by the students and the stored model solutions and examples of possible mistakes had to be developed within the project. In the process, both model solutions and possible mistakes are stored in the system. A consistent, differentiated error code, which describes the various mistake scenarios precisely, provides the basis for detailed feedback messages to the students. The system was initially intended to focus only on specific translation problems of a certain language pair. Therefore it was necessary to provide material for these specific problems together with corresponding examples.
4 Approach

For the prototypical system (covering specific translation problems E->D and F->D respectively), the existing software "Duden Korrektor Plus" from Duden Verlag Mannheim is used as spell and grammar checker. This software provides spell and grammar checking that takes the context into account. In detail, the system offers correction of typing errors, spelling of hyphenated words, upper and lower case, compound or separate spelling, abbreviations, punctuation, and mistakes in agreement, typography and government. This is done on the basis of the standard Duden dictionaries and reference books [1]. For the morphosyntactic and semantic analysis of the posted translation, the program MPRO is used in the ILLU system. MPRO is a software package for the morphosyntactic and semantic analysis of texts which was developed by the Institute of Applied Information Science (IAI) in Saarbrücken. The program assigns a bundle of linguistic information to every recognised character string of a text. Normally the basic form (citation form) and part of speech (noun, verb, adjective etc.) are generated. Furthermore, MPRO provides information about the inflection (case, number, gender, tense, person) as well as the structure of a word. For so-called "meaningful words" (nouns, adjectives, verbs, adverbs) the program also provides a semantic class. The assigned information is added to each string in the form of a feature bundle. For the analysis of a word MPRO uses a dictionary of morphemes. The dictionary for German presently contains about 90,000 entries [2].
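As a rough illustration of the feature bundles described above, the following sketch mimics the kind of per-token analysis MPRO produces. The attribute names, the `analyse` function and the tiny lexicon are assumptions made for illustration, not MPRO's real output format or API.

```python
# Illustrative sketch (not MPRO itself): assign a bundle of linguistic
# features to a recognised token, as the paper describes.
def analyse(token: str) -> dict:
    # A tiny hand-written "lexicon" standing in for MPRO's morpheme
    # dictionary (about 90,000 entries for German).
    lexicon = {
        "Schiffe": {"lemma": "Schiff", "pos": "noun", "case": "nom",
                    "number": "plural", "gender": "neuter",
                    "semclass": "vehicle"},
    }
    bundle = lexicon.get(token, {"lemma": token, "pos": "unknown"})
    # The original string is kept alongside the assigned features.
    return {"string": token, **bundle}

print(analyse("Schiffe")["lemma"])  # Schiff
```

The point of such a representation is that downstream modules can compare translations feature by feature instead of character by character.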
5 Comparison Module

Due to the morphosyntactic and semantic analysis, many features are available for the comparison between the posted translation and the stored model solutions and possible mistakes. At word level the most important are the original string and the basic form, case, number, gender, tense and part of speech. At sentence level there are some more, e.g. word occurrence, word order, and the marking of phrases or sentences, to name but a few. For the comparison operation it was necessary to define distinct parameters on the basis of which the comparison is made. On the one hand, the feature bundles which are used for the comparison had to be defined. After that it was essential to define a method to compute a measure of similarity between the posted text and the stored model solutions and possible mistakes. Initial tests led to the implementation of a prototypical comparison module. The program computes whether certain feature bundles of two structures are identical or not. Depending on the various linguistic features, this is done using different strategies to find the differences between the structures. Finally the various mistakes, if any, are determined and the result is sent to the next module. A differentiated definition of possible types of mistakes and their classification was one of the basic requirements of the system. Here the complexity of the error code corresponds directly with the quality of the system: the more differentiated the error code, the more powerful the system. It is, however, not necessary to redefine everything. In the past many research projects have dealt with typical translation mistakes, and some of the material acquired in these projects was used for the prototypical system².
² The material acquired e.g. in the MeLLange project [3].
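The core comparison step, checking whether selected feature bundles of two analysed tokens agree, can be sketched as follows. The feature names and the `compare_bundles` function are assumptions based on the paper's description, not the project's actual implementation.

```python
# Sketch of the comparison operation: check whether selected features of
# a posted token and a model-solution token agree, and report the
# features on which they differ.
FEATURES = ("lemma", "case", "number", "gender", "tense", "pos")

def compare_bundles(posted: dict, model: dict) -> list:
    """Return the features on which the posted token deviates from the model."""
    return [f for f in FEATURES
            if f in model and posted.get(f) != model[f]]

model  = {"lemma": "gehen", "tense": "past", "pos": "verb"}
posted = {"lemma": "gehen", "tense": "present", "pos": "verb"}
print(compare_bundles(posted, model))  # ['tense']
```

A list of deviating features like this is exactly what a differentiated error code can then be mapped onto.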
Fig. 2. Detailed process sequence (translation E->D or F->D respectively)
The implementation of rules for the determination of mistakes was very labour-intensive at the beginning of the project. But together with the aforementioned comparison operation, these rules are responsible for the quality of the system: the more differentiated the rules for a certain translation and the corresponding model solutions and possible mistakes, the higher the quality. Besides rules based on the morphological, syntactic and semantic level (e.g. wrong verb, wrong relative pronoun etc.), it is also possible to implement rules which are sentence-specific (e.g. changed constituents, word occurrence). If the topic of a certain unit is a particular translation problem, it is also possible to define specific rules for it. So far only rules on the morphological and syntactic level have been implemented. One of the ideas of the project is that, after initially collecting all rules as individual rules per text and translation, specific rules can perhaps later be generalised into more abstract rules. Additionally, this might be a chance to gain interesting results for translation studies. After the translation mistakes have been precisely determined by the comparison operation, the corresponding feedback messages are sent back to the students; the messages are returned once a sentence has been processed. So far a fixed set of possible feedback messages is implemented, but there is no restriction concerning the form of the feedback messages. It is, for example, possible to store not only detailed feedback messages for specific translation problems; in the future, whole e-learning units and links to special phenomena and further literature can be provided.
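The coupling of a differentiated error code to feedback messages can be sketched like this. The codes and message texts are invented for illustration; the real system's error code is far more differentiated.

```python
# Sketch: a (deliberately tiny) error code mapped to feedback texts.
# Real codes and messages would be defined per translation problem.
FEEDBACK = {
    "WRONG_VERB": "The verb you chose does not match the model solution.",
    "WRONG_RELPRON": "Check the relative pronoun: it must agree with its antecedent.",
    "WORD_ORDER": "The constituent order deviates from the expected word order.",
}

def feedback_for(error_codes):
    """Map detected error codes to student-facing feedback messages."""
    return [FEEDBACK.get(c, "Please check this passage again.")
            for c in error_codes]

print(feedback_for(["WRONG_VERB", "UNKNOWN"]))
```

The fallback message mirrors the system's behaviour when none of the stored model solutions or possible mistakes matches: the student is simply asked to check the translation again.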
6 Examples and Preliminary Results

In Figure 3 an example is given to show how the system works. The original text in the source language is shown in the text field "Originalsatz". The text field "Lösung" contains a possible model translation, which is shown to the students on demand. The text field "Lösungshinweis" contains advice for a possible solution and is also shown to the students on demand. Further down, the corresponding model solutions and
Fig. 3. User interface of the prototypical system (tutor interface; translation E->D)
possible mistakes in German are entered into the system. Thanks to the linguistic intelligence, this is possible on a phrase basis, which provides more possible combinations and therefore more variety in possible translations. The system's basic strategy is first to identify correct and incorrect solutions. If this process is finished and none of the stored model solutions or possible mistakes corresponds to the posted translation, the system gives feedback to check the translation again. At the same time the students can obtain, on demand, advice about a possible solution as well as a model solution for the current translation problem. Simultaneously, a separate evaluation of the posted translation by a tutor is possible within the frame of parallel translation lessons or via email.
7 Evaluation and Conclusion

The advantage of interactive e-learning units for translators and interpreters is, as for all e-learning systems, their availability. They are an additional e-learning possibility which the students can use independent of time and place. A further advantage is that they also reduce the workload of the lecturers: within the translation lessons, only specific translation problems need to be covered. No more time needs to be spent on spelling and grammar mistakes, as these have already been corrected automatically by the system. Furthermore, interactive e-learning units for translators and interpreters are particularly suitable for the consolidation of specific translation problems. Special translation phenomena can be explained by model sentences and texts. With the help of a detailed feedback system, additional material can be provided for the students. Here the system can be constructed in a modular way and used in addition to the translation lessons, where attendance for students is obligatory. In implementing
the present prototype, the implementation of the rules for the comparison module turned out to be difficult. This requires further analysis. Perhaps the use of certain existing methods, e.g. the fuzzy-match techniques of translation memory (TM) systems, is a solution to this problem. A further difficulty turned out to be the storing of model solutions and possible mistakes. During the implementation of the current system various interfaces were developed; both things are now possible with the help of a special tutor interface, which is easy to use and therefore also suitable for lecturers without any programming knowledge. Another disadvantage of the outlined system is that the automatic evaluation of translations is only possible record by record. Thus not all possible versions of a translation can be covered. Perhaps this problem can be solved in the near future by techniques already used in the alignment process of translation memory systems. However, it has been demonstrated that the development of linguistically intelligent interactive e-learning units for the vocational training of translators and interpreters is possible. Further tests with the prototype will need to demonstrate whether the students accept such systems. Certainly the potential effects of such a system on the e-learning community are obvious: when it is possible to evaluate free text in relation to certain stored model solutions or other requirements, the system represents a powerful software tool which can be used not only in the vocational training of translators and interpreters, but also in other areas where free text input is desirable.
References

[1] Duden Verlag Mannheim. Bücher und Software. Bibliographisches Institut & F. A. Brockhaus AG (2007), http://www.duden.de/produkte/
[2] Maas, H.-D.: Multilinguale Textverarbeitung mit MPRO. In: Lobin, G. (ed.) Europäische Kommunikationskybernetik heute und morgen. KoPäd, München (1998)
[3] MeLLange: Multilingual eLearning in LANGuage engineering (2007), http://mellange.eila.jussieu.fr/
[4] NIST: Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics, Automatic Evaluation of MT Quality, NIST (2005)
[5] Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: BLEU: A Method for Automatic Evaluation of Machine Translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (2002)
e³-Portfolio – Supporting and Assessing Project-Based Learning in Higher Education via E-Portfolios Philip Meyer, Thomas Sporer, and Johannes Metscher Institute for Media and Educational Technology Universitätsstr. 2, 86135 Augsburg, Germany [email protected]
Abstract. e³-portfolio is a software tool which supports learning and working in student project groups. Besides features for collaboration via social media, the software offers an electronic portfolio system. The e-portfolio helps to integrate informal project-based learning into the formal curriculum of higher education. This paper gives an overview of how the software tool is designed and relates the design to the underlying didactic concept. Keywords: Project-based learning, e-portfolio, e-collaboration, e-assessment.
1 Introduction

Practical experiences and key competencies are becoming increasingly important for students in today's working life. One way to attain those competencies is to take part in self-organized project groups at the periphery of their university. Here students learn to solve problems and become part of a community of practice [1]. At the University of Augsburg students can get such extra-curricular learning activities accredited through the study programme "Problem Solving Competencies" [2]. This study programme builds on the reflection of the students' experiences via e-portfolios and focuses the assessment on the articulation of the competencies that the students acquire [3]. The organisation of that study programme is facilitated by the software tool outlined below.
2 Description of the Software Tool

The technological basis of the software tool is the open-source platform and content management system Drupal (www.drupal.org). The various features of Drupal are utilised to foster collaboration among the users. The tool is structured into three parts: students organise their project groups in the community area; they create their journals and project reports in the portfolio area; and the assessment area structures the assessment of the students' learning achievements and their accreditation [4]. When visiting the website (www.begleitstudium-problemloesekompetenz.de), a welcome page informs the users about the aims of the study programme (e.g. press
releases, references to the project blog, interviews with participants). The three main areas, however, can only be accessed to their full extent after registration. In the following sections these areas are described, both in their functionality for unregistered and for registered users.

2.1 Community Area

For unregistered users the community area gives an overview of the project groups that take part in the study programme (e.g. campus magazine or campus radio). Each project has a public space where it can present itself, its project ideas (e.g. via video interviews with the project leaders) and descriptions of the activities participants can take on. Project groups can adapt this public area to their "corporate design" to ensure the identity of the project is maintained. News about the project can also be published to inform others about the initiative. After registration the internal community area provides access to all the groups of which the user is a member or owner. Registered users can create new project groups or join existing groups by request.
Fig. 1. Overview of the features in the community area
Additionally, the community area features various tools for project and knowledge management. There is a community blog where discussions within the group can take place and through which the group can organise its collaboration by announcing important dates and deadlines. Moreover, there is a wiki for each group, which offers the functionality to share knowledge between the group members. Finally, there is a document repository which allows groups to publish meeting protocols and to share files.
2.2 Portfolio Area

In its unregistered view the portfolio area shows exemplary profiles of participants of the study programme. In short video interviews participants describe what motivated them to join their project group and what is special about being part of it. Alongside, one can view some personal information about the participants and browse through their learning journals. After registration the participants can write their project diary in the form of a blog in the portfolio area. Here students periodically reflect on the experiences they have during their project activities. The reflection process is scaffolded by guiding questions such as "What happened since the last entry in my project diary?" or "What are my thoughts and feelings as to the current situation in the project?". At the end of each semester students can create a project report. This report summarises the salient events during the participation in the project and presents them in the form of a learning history.
Fig. 2. Overview of the features in the portfolio area
The portfolio area also helps the students to keep track of all their diary entries and project reports. Here they can collect all these items and prepare them for submission to the assessment area.

2.3 Assessment Area

In its public view the assessment area is rather unspectacular. It shows a description of what this area is supposed to offer, namely a space for registered users to submit project diaries and reports and to get feedback on their learning and working achievements. The registered view of the assessment area thus enables the organisation of all the achievements that have been attained in the context of the study programme and their accreditation in the formal curriculum.
Fig. 3. Overview of the features in the assessment area
After the participant has completed all building blocks of the study programme, she can obtain the certificate "Problem Solving Competencies". If the student wants the credit points that were gained during the project work to be accredited in the formal studies, the project report has to be handed in via the assessment area and is graded by the coordinator of the co-curricular study programme.
3 Underlying Didactical Concept

The platform was designed to support a didactical concept which focuses on the integration of informal learning activities into the formal university curriculum [4]. The three main areas described above therefore differ in the degree of formalisation of the learning setting (see Fig. 4). The community area is very close to the practice of the project group as an informal learning community. Students discuss, collaborate and share their experiences, but all this happens on an informal level with a low degree of formalisation. In the portfolio area the students begin to formalise their experiences by writing them down in a personal diary. But this still happens close to the context of what is actually going on in the project practice, and the involvement of theoretical assumptions is marginal. Finally, in the assessment area, the students decide which of the texts and artifacts they created during the project work are worth being submitted to the programme coordinators. The students choose entries where the reference to the goals of their formal studies is obvious. They also make assumptions in their project report on how their project participation and their formal studies relate to one another. Figure 4 summarises the portfolio-based assessment strategy: the students collect their working achievements and diary entries in the working portfolio. At the end of the semester they combine these artifacts into a coherent learning history in the story portfolio. Via the test portfolio they finally argue in a project report what competencies they acquired and show how their experiences relate to their formal studies.
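The three-stage flow from working portfolio through story portfolio to test portfolio can be modelled with a minimal sketch. The class and method names are illustrative, not the e³-portfolio software's actual API.

```python
# Minimal sketch of the blended assessment strategy: artifacts accumulate
# in a working portfolio, are combined into a learning history per
# semester (story portfolio), and a selection is submitted for
# assessment (test portfolio).
class Portfolio:
    def __init__(self):
        self.working = []   # diary entries and working achievements
        self.story = []     # per-semester learning histories
        self.test = []      # reports submitted for assessment

    def add_entry(self, entry):
        self.working.append(entry)

    def close_semester(self):
        # Combine the collected artifacts into one coherent learning history.
        report = " / ".join(self.working)
        self.story.append(report)
        return report

    def submit(self, report):
        # Hand the report in via the assessment area for grading.
        self.test.append(report)

p = Portfolio()
p.add_entry("diary: project kick-off")
p.add_entry("diary: first release")
report = p.close_semester()
p.submit(report)
print(len(p.test))  # 1
```

The increasing degree of formalisation in the didactic concept corresponds to the movement of artifacts from `working` to `story` to `test`.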
Fig. 4. Areas of e³-portfolio and blended assessment strategy
4 Conclusion and Future Work

This article described the features of a software tool which is currently in use at the University of Augsburg. The software tool supports the collaboration of students' project groups and offers a way to integrate informal learning activities into the formal curriculum of higher education via a blended assessment strategy based on e-portfolios. Recently, evaluation studies have shown that students want more interconnectedness between the different areas of the software tool. Especially with regard to the portfolio and the assessment area, the current state of implementation lacks the functionality to give feedback on the content provided by the participants. Due to the collaborative nature of the community area, a lot of interactive functionality is already present. However, we are planning to introduce even more features in the community area that can support group collaboration.
References

1. Dürnberger, H., Sporer, T.: Selbstorganisierte Projektgruppen von Studierenden: Neue Wege bei der Kompetenzentwicklung an Hochschulen. Erscheint im Tagungsband der 14. Europäischen Jahrestagung der Gesellschaft für Medien in der Wissenschaft. Waxmann, Münster (in press)
2. Sporer, T., Reinmann, G., Jenert, T., Hofhues, S.: Begleitstudium Problemlösekompetenz (Version 2.0): Infrastruktur für studentische Projekte an Hochschulen. In: Merkt, M., Mayrberger, K., Schulmeister, R., Sommer, A., Berk, I.v.d. (eds.) Studieren neu erfinden – Hochschule neu denken, pp. 85–94. Waxmann, Münster (2007)
3. Reinmann, G., Sporer, T., Vohle, F.: Bologna und Web 2.0: Wie zusammenbringen, was nicht zusammenpasst? In: Keil, R., Kerres, M., Schulmeister, R. (eds.) eUniversity - Update Bologna. Education Quality Forum. Bd. 3, pp. 263–278. Waxmann, Münster (2007)
4. Sporer, T., Jenert, T., Meyer, P., Metscher, J.: Entwicklung einer Plattform zur Integration informeller Projektaktivitäten in das formale Hochschulcurriculum. In: Seehusen, S., Lucke, U., Fischer, S. (Hrsg.) DeLFI 2008. Die 6. e-Learning Fachtagung Informatik der Gesellschaft für Informatik e.V. Gesellschaft für Informatik, Bonn (2008)
Author Index
Abel, Fabian 154 Abel, Marie-Hélène 682 Adam, Jean-Michel 602 Aehnelt, Mario 639 Ala-Mutka, Kirsti 350 Alario-Hoyos, Carlos 621 Alavi, Hamed S. 211 Allmendinger, Katrin 344 Arrebola, Miguel 127 Arts, Tanja 788 Asensio-Pérez, Juan I. 621 Avouris, Nikolaos 267 Barnes, Sally-Anne 700 Beekman, Niels 160 Beham, Günter 73 Belgiorno, Furio 712 Benjemaa, Abir 763 Benz, Bastian F. 521 Berkani, Lamia 664 Betbeder, Marie-Laure 196 Bevan, Jon 7 Bieliková, Mária 99, 492 Bimrose, Jenny 700 Bitter-Rijpkema, Marlies 732 Böhnstedt, Doreen 521 Borek, Alexander 391 Borthwick, Kate 127 Bote-Lorenzo, Miguel L. 621 Boticario, Jesus G. 596 Bouchon-Meunier, Bernadette 633 Bourguin, Grégory 405 Bouzeghoub, Amel 763 Boytchev, Pavel 549 Breuer, Ruth 166 Brown, Alan 700 Brusilovsky, Peter 88 Budd, Jim 37 Buffat, Marie 763 Cao, Yiwei 166 Cerioli, Maura 651 Charlier, Bernadette 298, 304 Chatti, Mohamed Amine 310 Chen, Hsiu-Ling 706
Chikh, Azeddine 664 Chou, C. Candace 751 Chounta, Irene-Angelica 267 Condamines, Thierry 273 Corness, Greg 37 Courtin, Christophe 572 Cress, Ulrike 254, 338 Cristea, Alexandra I. 7 Daele, Amaury 298, 304 de Hoog, Robert 639 de la Fuente Valentín, Luis 744 Delgado Kloos, Carlos 744, 782 Demetriadis, Stavros N. 535 Derntl, Michael 447 De Troyer, Olga 627 Dietrich, Michael 688 Dillenbourg, Pierre 211 Divéky, Marko 492 Drachsler, Hendrik 788 Dubois, Michel 602 Duval, Erik 757 Emin, Valérie 462 Esnault, Liliane 304 Ewais, Ahmed 627 Fernández-Manjón, Baltasar 725 Ferrari, Anusca 350 Ferraris, Christine 379 Figueira, Álvaro 776 Friedrich, Martin 507
Gašević, Dragan 140, 441 Gegenfurtner, Andreas 676 Giretti, Alberto 112 Glahn, Christian 52 Goguadze, Georgi 688 Gómez-Albarrán, Mercedes 645 Gómez-Sánchez, Eduardo 621 Gribaudo, Marco 719 Guéraud, Viviane 462, 602 Gutierrez-Santos, Sergio 556, 782 Hamann, Karin 344 Hatala, Marek 37, 140, 441
Heintz, Matthias 584 Held, Christoph 254 Hendrix, Maurice 7 Herder, Eelco 240 Hesse, Friedrich W. 5 Hoppe, H. Ulrich 365 Howard, Yvonne 127 Hsiao, I-Han 88 Hummel, Hans 788 Hutten, Edwin 788 Indriasari, Theresia Devi 310 Ivanović, Mirjana 657
Jahn, Marco 507 Jarke, Matthias 310 Jenner, Walter 794 Jeremić, Zoran 441 Jiménez-Díaz, Guillermo 645 Jovanović, Jelena 140, 441 Kahrimanis, Georgios 267 Kalz, Marco 160 Kaplan, Frederic 211 Karabinos, Michael 391 Karsten, Anton 160 Kawase, Ricardo 240 Kempf, Fabian 344 Kennedy-Clark, Shannon 609 Klamma, Ralf 166 Kleinermann, Frederic 627 Koper, Rob 160, 477, 788 Kovatcheva, Eugenia 549 Krauß, Matthias 226 Kravcik, Milos 52 Krogstie, Birgit R. 418 Kump, Barbara 73 Law, Effie Lai-Chong 181 Leblanc, Adeline 682 Leclet, Dominique 405 Lecocq, Claire 763 Lehtinen, Erno 676 Lejeune, Anne 602 Lewandowski, Arnaud 405 Ley, Tobias 73, 700 Lindstaedt, Stefanie N. 73, 639, 700 Lopes Gançarski, Alda 763 Lopistéguy, Philippe 769 Loughin, Tom 37
Lu, Tianxiang 67 Lucas, Margarida 325 Luong, The Nhan 769
Magoulas, George D. 106
Maillet, Katherine 763
Malandrino, Delfina 712
Malzahn, Nils 365
Mandran, Nadine 602
Manno, Ilaria 712
Marenzi, Ivana 154
Markus, Thomas 385
Marquesuzaà, Christophe 769
Marsala, Christophe 633
Martínez-Ortiz, Iván 725
Martel, Christian 379
Mavrikis, Manolis 556
Mazarakis, Athanasios 615
McLaren, Bruce M. 391, 688
McSweeney, Patrick 127
Meier, Anne 267
Melis, Erica 67, 688
Memmel, Martin 112
Metscher, Johannes 806
Meyer, Ann-Kristin 688
Meyer, Philip 806
Millard, David E. 127
Mohabbati, Bardia 37
Monachesi, Paola 385
Moreira, António 325
Mossel, Eelco 385
Moura de Araújo, Leonardo 794
Muise, Kevin 37
Muñoz-Organero, Mario 782
Nejdl, Wolfgang 154, 240
Neumann, Susanne 447, 477
Nguyen-Ngoc, Anh Vu 181
Niemann, Katja 507
Nikolova, Nikolina 549
Nivala, Markus 676
Nodenot, Thierry 769
Oberhuemer, Petra 447, 477
Ouari, Salim 379
Oudshoorn, Diederik 160
Palmieri, Giuseppina 712
Papadopoulos, Pantelis M. 535
Pardo, Abelardo 744, 782
Pearce, Darren 22
Pecceu, Dries 788
Pellens, Bram 627
Pemberton, Lyn 226
Pernin, Jean-Philippe 462
Pirolli, Peter 1
Poulovassilis, Alexandra 22, 106
Pozzi, Francesca 670
Punie, Yves 350
Putnik, Zoran 657
Putois, Georges-Marie 633
Quenu-Joiron, Celine 273
Reffay, Christophe 196
Rensing, Christoph 521
Ribaudo, Marina 651
Riege, Kai 226
Rösener, Christoph 800
Ruiz-Iniesta, Almudena 645
Rummel, Nikol 267
Rutledge, Lloyd 788
Säljö, Roger 676
Santos, Olga C. 596
Sauvain, Romain 283
Savin-Baden, Maggi 433
Scarano, Vittorio 712
Scheffel, Maren 507
Schmitz, Bernhard 521
Schmitz, Hans-Christian 507
Schoefegger, Karin 700
Scholl, Philipp 521
Schröder, Svenja 365
Schwämmlein, Eva 338
Selmi, Mouna 763
Sendova, Evgenia 549
Sharples, Mike 3
Siadaty, Melody 140
Sie, Rory L.L. 732
Sierra, José-Luis 725
Šimko, Marián 99
Sloep, Peter B. 732
Smits, David 7
Sosnovsky, Sergey 88
Spada, Hans 267
Specht, Marcus 52, 310
Sporer, Thomas 806
Stamelos, Ioannis G. 535
Stefanova, Eliza 549
Steinmetz, Ralf 521
Szilas, Nicolas 283
Talbot, Stéphane 572
Talon, Bénédicte 405
Tanenbaum, Karen 37
Ternier, Stefaan 52
Torniai, Carlo 140
Tosatto, Claudio 719
Tran, Tri Duc 633
Tsovaltzi, Dimitra 688
Ullrich, Carsten 67
Van Bruggen, Jan 160
Van Labeke, Nicolas 106
Van Rosmalen, Peter 160, 788
Varella, Stavroula 127
Vatrapu, Ravi K. 694
Vega-Gorgojo, Guillermo 621
Verbert, Katrien 757
Verpoorten, Dominique 52
Vignollet, Laurence 379
Villiot-Leclercq, Emmanuelle 379
Voyiatzaki, Eleni 267
Vuorikari, Riina 166
Wakkary, Ron 37
Weber, Nicolas 700
Wiley, David 757
Winter, Marcus 226
Wodzicki, Katrin 338
Wolpers, Martin 112, 507
Yaron, David 391
Zdravkova, Katerina 657
Zeiliger, Romain 304
Zendagui, Boubekeur 738
Zerr, Sergej 154
Ziebarth, Sabrina 365